Reverse Model Request refers to the plugin's ability to make reverse requests to model capabilities within Dify, covering all model types and features on the platform, such as LLM, TTS, Rerank, and so on.
Note that requesting a model requires passing a ModelConfig-type parameter. Its structure is described in Common Specification Definitions, and it differs slightly between model types.
For example, LLM-type models must additionally include the completion_params and mode fields. You can build this structure manually, as sketched below, or obtain it from model-selector type parameters or configuration.
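As a minimal sketch of building the structure by hand, assuming the LLMModelConfig fields described in Common Specification Definitions; the provider, model, and completion_params values here are placeholders, so substitute whatever is configured in your Dify workspace:

```python
from dify_plugin.entities.model.llm import LLMModelConfig

# A hand-built config for an LLM-type model. Note the extra
# completion_params and mode fields required for LLM models.
model_config = LLMModelConfig(
    provider='openai',       # placeholder provider; use one configured in your workspace
    model='gpt-4o-mini',     # placeholder model name
    mode='chat',
    completion_params={'temperature': 0.5}  # placeholder sampling parameters
)
```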
That said, manually building LLMModelConfig is not recommended. Instead, let users select their desired model in the UI. To do so, add a model parameter to the tool's parameter list with the following configuration:
```yaml
identity:
  name: llm
  author: Dify
  label:
    en_US: LLM
    zh_Hans: LLM
    pt_BR: LLM
description:
  human:
    en_US: A tool for invoking a large language model
    zh_Hans: 用于调用大型语言模型的工具
    pt_BR: A tool for invoking a large language model
  llm: A tool for invoking a large language model
parameters:
  - name: prompt
    type: string
    required: true
    label:
      en_US: Prompt string
      zh_Hans: 提示字符串
      pt_BR: Prompt string
    human_description:
      en_US: used for searching
      zh_Hans: 用于搜索网页内容
      pt_BR: used for searching
    llm_description: key words for searching
    form: llm
  - name: model
    type: model-selector
    scope: llm
    required: true
    label:
      en_US: Model
      zh_Hans: 使用的模型
      pt_BR: Model
    human_description:
      en_US: Model
      zh_Hans: 使用的模型
      pt_BR: Model
    llm_description: which Model to invoke
    form: form
extra:
  python:
    source: tools/llm.py
```
Note that in this example the model's scope is specified as llm, so users can only select LLM-type models. The tool implementation can then be written as follows:
```python
from collections.abc import Generator
from typing import Any

from dify_plugin import Tool
from dify_plugin.entities.tool import ToolInvokeMessage
from dify_plugin.entities.model.message import SystemPromptMessage, UserPromptMessage


class LLMTool(Tool):
    def _invoke(self, tool_parameters: dict[str, Any]) -> Generator[ToolInvokeMessage]:
        # The model-selector parameter arrives as a ready-made model config,
        # so it can be passed straight through as model_config.
        response = self.session.model.llm.invoke(
            model_config=tool_parameters.get('model'),
            prompt_messages=[
                SystemPromptMessage(
                    content='you are a helpful assistant'
                ),
                # 'prompt' matches the parameter name declared in the YAML above
                UserPromptMessage(
                    content=tool_parameters.get('prompt')
                )
            ],
            stream=True
        )

        # Stream the model output back to the user chunk by chunk.
        for chunk in response:
            if chunk.delta.message:
                assert isinstance(chunk.delta.message.content, str)
                yield self.create_text_message(text=chunk.delta.message.content)
```
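If streaming is not needed, a blocking variant is sketched below. This assumes that passing stream=False makes invoke return a single complete LLMResult whose message.content holds the full text; verify this behavior against your dify_plugin version.

```python
# Sketch: blocking invocation (assumes stream=False returns one LLMResult).
result = self.session.model.llm.invoke(
    model_config=tool_parameters.get('model'),
    prompt_messages=[
        UserPromptMessage(content=tool_parameters.get('prompt'))
    ],
    stream=False
)
# With streaming off, the full response text is available at once.
yield self.create_text_message(text=result.message.content)
```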
Note: the byte stream returned by the TTS endpoint is an MP3 audio byte stream, and each iteration yields one complete audio segment. If you want to perform more in-depth processing, choose an appropriate audio library.
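As a rough sketch of consuming that stream inside a tool's _invoke, assuming a TTSModelConfig entity whose fields mirror Common Specification Definitions; the provider, model, and voice values are placeholders:

```python
from dify_plugin.entities.model.tts import TTSModelConfig

# Sketch: invoke TTS and forward each complete MP3 segment as a blob message.
audio_stream = self.session.model.tts.invoke(
    model_config=TTSModelConfig(
        provider='openai',   # placeholder provider
        model='tts-1',       # placeholder model
        voice='alloy',       # placeholder voice
    ),
    content_text=tool_parameters.get('prompt'),
)
for mp3_bytes in audio_stream:
    # Each iteration yields one complete MP3 audio, as noted above.
    yield self.create_blob_message(blob=mp3_bytes, meta={'mime_type': 'audio/mpeg'})
```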