An Agent Strategy Plugin helps an LLM carry out reasoning and decision-making tasks, including choosing and calling tools and handling their results, so the system can solve problems more autonomously.
Below, you’ll see how to develop a plugin that supports Function Calling to automatically fetch the current time.
Run the following command to create a development template for your Agent plugin:
dify plugin init
Follow the on-screen prompts and refer to the sample comments for guidance.
```
➜ ./dify-plugin-darwin-arm64 plugin init
Edit profile of the plugin
Plugin name (press Enter to next step): # Enter the plugin name
Author (press Enter to next step): # Enter the plugin author
Description (press Enter to next step): # Enter the plugin description
---
Select the language you want to use for plugin development, and press Enter to continue,
BTW, you need Python 3.12+ to develop the Plugin if you choose Python.
-> python # Select Python environment
   go (not supported yet)
---
Based on the ability you want to extend, we have divided the Plugin into four types: Tool, Model, Extension, and Agent Strategy.

- Tool: It's a tool provider, but not only limited to tools, you can implement an endpoint there, for example, you need both Sending Message and Receiving Message if you are
- Model: Just a model provider, extending others is not allowed.
- Extension: Other times, you may only need a simple http service to extend the functionalities, Extension is the right choice for you.
- Agent Strategy: Implement your own logics here, just by focusing on Agent itself

What's more, we have provided the template for you, you can choose one of them below:
   tool
-> agent-strategy # Select Agent strategy template
   llm
   text-embedding
---
Configure the permissions of the plugin, use up and down to navigate, tab to select, after selection, press enter to finish
Backwards Invocation:
Tools:
  Enabled: [✔] You can invoke tools inside Dify if it's enabled # Enabled by default
Models:
  Enabled: [✔] You can invoke models inside Dify if it's enabled # Enabled by default
  LLM: [✔] You can invoke LLM models inside Dify if it's enabled # Enabled by default
→ Text Embedding: [✘] You can invoke text embedding models inside Dify if it's enabled
  Rerank: [✘] You can invoke rerank models inside Dify if it's enabled
  TTS: [✘] You can invoke TTS models inside Dify if it's enabled
  Speech2Text: [✘] You can invoke speech2text models inside Dify if it's enabled
  Moderation: [✘] You can invoke moderation models inside Dify if it's enabled
Apps:
  Enabled: [✘] Ability to invoke apps like BasicChat/ChatFlow/Agent/Workflow etc.
Resources:
Storage:
  Enabled: [✘] Persistence storage for the plugin
  Size: N/A The maximum size of the storage
Endpoints:
  Enabled: [✘] Ability to register endpoints
```
After initialization, you’ll get a folder containing all the resources needed for plugin development. Familiarizing yourself with the overall structure of an Agent Strategy Plugin will streamline the development process:
To build an Agent plugin, start by specifying the necessary parameters in strategies/basic_agent.yaml. These parameters define the plugin’s core features, such as calling an LLM or using tools.
We recommend including the following four parameters first:
model: The large language model to call (e.g., GPT-4, GPT-4o-mini).
tools: A list of tools that enhance your plugin’s functionality.
query: The user input or prompt content sent to the model.
maximum_iterations: The maximum iteration count to prevent excessive computation.
Example Code:
```yaml
identity:
  name: basic_agent # the name of the agent_strategy
  author: novice # the author of the agent_strategy
  label:
    en_US: BasicAgent # the English label of the agent_strategy
description:
  en_US: BasicAgent # the English description of the agent_strategy
parameters:
  - name: model # the name of the model parameter
    type: model-selector # model-type
    scope: tool-call&llm # the scope of the parameter
    required: true
    label:
      en_US: Model
      zh_Hans: 模型
      pt_BR: Model
  - name: tools # the name of the tools parameter
    type: array[tools] # the type of the tools parameter
    required: true
    label:
      en_US: Tools list
      zh_Hans: 工具列表
      pt_BR: Tools list
  - name: query # the name of the query parameter
    type: string # the type of the query parameter
    required: true
    label:
      en_US: Query
      zh_Hans: 查询
      pt_BR: Query
  - name: maximum_iterations
    type: number
    required: false
    default: 5
    label:
      en_US: Maximum Iterations
      zh_Hans: 最大迭代次数
      pt_BR: Maximum Iterations
    max: 50 # if you set the max and min value, the display of the parameter will be a slider
    min: 1
extra:
  python:
    source: strategies/basic_agent.py
```
Once you’ve configured these parameters, the plugin will automatically generate a user-friendly interface so you can easily manage them:
After users fill out these basic fields, your plugin needs to process the submitted parameters. In strategies/basic_agent.py, define a parameter class for the Agent, then retrieve and apply these parameters in your logic.
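A minimal sketch of what this can look like is shown below. The class and import names (`AgentStrategy`, `AgentModelConfig`, `ToolEntity`) follow the scaffold the template generates; treat the exact paths and signatures as assumptions and check them against the strategies/basic_agent.py your SDK version produced:

```python
from collections.abc import Generator
from typing import Any

from pydantic import BaseModel

# Assumption: these classes are exposed here in current dify_plugin SDK versions;
# verify against the generated template.
from dify_plugin.interfaces.agent import AgentModelConfig, AgentStrategy, ToolEntity


class BasicAgentParams(BaseModel):
    """Mirrors the parameters declared in strategies/basic_agent.yaml."""

    model: AgentModelConfig          # the model picked in the model-selector field
    tools: list[ToolEntity] | None   # the tools the user attached
    query: str                       # the user input
    maximum_iterations: int = 5      # same default as the YAML file


class BasicAgentStrategy(AgentStrategy):
    def _invoke(self, parameters: dict[str, Any]) -> Generator:
        # Validate and type the raw parameter dict submitted from the UI.
        params = BasicAgentParams(**parameters)
        # ... model calls, tool calls, and result handling go here ...
        yield from ()
```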
In an Agent Strategy Plugin, invoking the model is central to the workflow. You can invoke an LLM efficiently using session.model.llm.invoke() from the SDK, handling text generation, dialogue, and so forth.
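For instance, a single non-streaming call can look roughly like the sketch below. The keyword arguments and import paths mirror the template's Function Calling example and should be treated as assumptions to verify against your dify_plugin version:

```python
from dify_plugin.entities.model.llm import LLMModelConfig
from dify_plugin.entities.model.message import SystemPromptMessage, UserPromptMessage

# Inside BasicAgentStrategy._invoke(), after params has been parsed.
response = self.session.model.llm.invoke(
    # Reuse whatever model the user selected in the "model" parameter.
    model_config=LLMModelConfig(**params.model.model_dump(mode="json")),
    prompt_messages=[
        SystemPromptMessage(content="You are a helpful assistant."),
        UserPromptMessage(content=params.query),
    ],
    stream=False,  # set True to consume the answer chunk by chunk
)
```

The complete example additionally passes the attached tools' schemas to the invocation so the model knows which tools it may call.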
If you want the LLM to handle tools, make sure it outputs structured parameters that match a tool's interface. In other words, based on the user's instructions, the LLM must produce input arguments that the tool can accept.
To view the complete functionality implementation, please refer to the Example Code for model invocation.
This code works as follows: after the user enters a command, the Agent Strategy Plugin calls the LLM, builds the parameters needed for tool invocation from the model's output, and lets the model dispatch the integrated tools to complete complex tasks.
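The shape of that hand-off can be sketched as follows; the attribute names assume the SDK's OpenAI-style tool_call structure, so verify them against the complete example:

```python
import json

# Collect (id, name, arguments) triples from the model's reply. With stream=False,
# any requested tool calls are carried on the result's message; the arguments
# arrive as a JSON string that must be parsed before a tool can accept them.
tool_calls = [
    (
        tool_call.id,
        tool_call.function.name,
        json.loads(tool_call.function.arguments or "{}"),
    )
    for tool_call in (response.message.tool_calls or [])
]
```

These triples are exactly what the tool-calling loop shown further below consumes.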
Adding Memory to your Agent plugin allows the model to remember previous conversations, making interactions more natural and effective. With memory enabled, the model can maintain context and provide more relevant responses.
Steps:
Configure Memory Functionality
Add the history-messages feature to the Agent plugin's YAML configuration file strategies/basic_agent.yaml:
```yaml
identity:
  name: basic_agent # Agent strategy name
  author: novice # Author
  label:
    en_US: BasicAgent # English label
description:
  en_US: BasicAgent # English description
features:
  - history-messages # Enable history messages feature
...
```
Enable Memory Settings
After modifying the plugin configuration and restarting, you will see the Memory toggle. Click the toggle button on the right to enable memory.
Once enabled, you can adjust the memory window size using the slider, which determines how many previous conversation turns the model can “remember”.
Debug History Messages
Add the following code to check the history messages:
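One way to do this, assuming the history is exposed on the model configuration as history_prompt_messages (check the attribute name in your generated template):

```python
# Log what the Memory feature handed to the strategy for this run.
history_messages = params.model.history_prompt_messages or []
print(f"history_messages: {history_messages}")
```

With Memory switched off and then on, the output looks roughly like this: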
```
history_messages: []
history_messages: [UserPromptMessage(role=<PromptMessageRole.USER: 'user'>, content='hello, my name is novice', name=None), AssistantPromptMessage(role=<PromptMessageRole.ASSISTANT: 'assistant'>, content='Hello, Novice! How can I assist you today?', name=None, tool_calls=[])]
```
Integrate History Messages into Model Calls
Update the model call to incorporate conversation history with the current query:
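A hedged sketch, reusing the imports and attribute names from the earlier model-call and debugging snippets:

```python
from dify_plugin.entities.model.llm import LLMModelConfig
from dify_plugin.entities.model.message import SystemPromptMessage, UserPromptMessage

# Put the remembered turns between the system prompt and the new query so the
# model sees the prior conversation before answering.
prompt_messages = [
    SystemPromptMessage(content="You are a helpful assistant."),
    *(params.model.history_prompt_messages or []),  # turns kept by the Memory window
    UserPromptMessage(content=params.query),        # the current user input
]

response = self.session.model.llm.invoke(
    model_config=LLMModelConfig(**params.model.model_dump(mode="json")),
    prompt_messages=prompt_messages,
    stream=False,
)
```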
After implementing Memory, the model can respond based on the conversation history. In the example below, the model successfully remembers the user's name mentioned earlier in the conversation.
If you’d like the LLM itself to generate the parameters needed for tool calls, you can do so by combining the model’s output with your tool-calling code.
```python
# Map tool names to their instances so the model's tool_call names can be resolved.
tool_instances = (
    {tool.identity.name: tool for tool in params.tools} if params.tools else {}
)

for tool_call_id, tool_call_name, tool_call_args in tool_calls:
    tool_instance = tool_instances[tool_call_name]
    # Invoke the tool with the arguments the model generated, merged over the
    # tool's preconfigured runtime parameters.
    self.session.tool.invoke(
        provider_type=ToolProviderType.BUILT_IN,
        provider=tool_instance.identity.provider,
        tool_name=tool_instance.identity.name,
        parameters={**tool_instance.runtime_parameters, **tool_call_args},
    )
```
With this in place, your Agent Strategy Plugin can automatically perform Function Calling—for instance, retrieving the current time.
Completing a complex task in an Agent Strategy Plugin often takes multiple steps, so it's important to track each step's results, analyze the decision process, and optimize the strategy. Using create_log_message and finish_log_message from the SDK, you can log the state before and after each call in real time, which helps you diagnose problems quickly.
For example:
Log a “starting model call” message before calling the model, clarifying the task’s execution progress.
Log a “call succeeded” message once the model responds, ensuring the model’s output can be traced end to end.
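A sketch of that pattern is shown below. create_log_message and finish_log_message come from the SDK as noted above, but the exact keyword arguments (label, data, log) are assumptions to check against the complete Function Calling example:

```python
import time

started_at = time.perf_counter()

# "Starting model call": opens a log entry that the workflow run view can display.
model_log = self.create_log_message(
    label=f"{params.model.model} Thought",
    data={},
)
yield model_log

response = self.session.model.llm.invoke(
    model_config=LLMModelConfig(**params.model.model_dump(mode="json")),
    prompt_messages=prompt_messages,
    stream=False,
)

# "Call succeeded": closes the entry with the model output and timing information.
yield self.finish_log_message(
    log=model_log,
    data={
        "output": response.message.content,
        "elapsed_time": time.perf_counter() - started_at,
    },
)
```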
After finalizing the plugin’s declaration file and implementation code, run python -m main in the plugin directory to restart it. Next, confirm the plugin runs correctly. Dify offers remote debugging—go to “Plugin Management” to obtain your debug key and remote server address.
Back in your plugin project, copy .env.example to .env and insert the relevant remote server and debug key info.
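The exact variable names are listed in the template's .env.example; in recent templates they look roughly like this (values are placeholders):

```
INSTALL_METHOD=remote
REMOTE_INSTALL_HOST=<remote server address from Plugin Management>
REMOTE_INSTALL_PORT=<port shown in Plugin Management>
REMOTE_INSTALL_KEY=<your debug key>
```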
Complex tasks often need multiple rounds of thinking and tool calls, typically repeating model invoke → tool use until the task ends or a maximum iteration limit is reached. Managing prompts effectively is crucial in this process. Check out the complete Function Calling implementation for a standardized approach to letting models call external tools and handle their outputs.
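As an illustration of the control flow only, here is a self-contained sketch of that loop; call_llm and call_tool are hypothetical placeholders standing in for the session.model.llm.invoke() and session.tool.invoke() calls shown earlier:

```python
from typing import Any, Callable

# Hypothetical placeholders: in the real plugin these wrap
# self.session.model.llm.invoke() and self.session.tool.invoke().
LLMCall = Callable[[list[dict]], dict]
ToolCall = Callable[[str, dict], Any]


def run_agent_loop(
    query: str,
    call_llm: LLMCall,
    call_tool: ToolCall,
    maximum_iterations: int = 5,
) -> str:
    """Repeat model invoke -> tool use until no tool is requested or the limit is hit."""
    messages: list[dict] = [{"role": "user", "content": query}]
    final_answer = ""
    for _ in range(maximum_iterations):
        response = call_llm(messages)            # the model answers or requests tools
        final_answer = response.get("content", "")
        tool_calls = response.get("tool_calls", [])
        if not tool_calls:                       # no tool requested: the task is finished
            break
        for call in tool_calls:
            result = call_tool(call["name"], call["arguments"])
            # Feed the tool result back so the next iteration can reason over it.
            messages.append({"role": "tool", "name": call["name"], "content": str(result)})
    return final_answer
```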