Let’s look back at the upgrades we’ve made to our email assistant.
  • Learned to Read: It can search a Knowledge Base
  • Learned to Choose: It uses Conditions to make decisions
  • Learned to Multitask: It handles multiple questions via Iteration
  • Learned to Use Tools: It can access the Internet via Google Search
You might have noticed that our workflow is no longer just a straight line (Step 1 → Step 2 → Step 3). It’s becoming a system that analyzes, judges, and calls upon different abilities to solve problems. This advanced pattern is what we call an Agentic Workflow.

Agentic Workflow

An Agentic Workflow isn’t just Input > Process > Output. It involves thinking, planning, using tools, and adjusting based on results. It transforms the AI from a simple Executor (who just follows orders) into an intelligent Agent (who solves problems autonomously).

Agent Strategies

To make Agents work smarter, researchers designed Strategies—think of these as different modes of thinking that guide the Agent.
  • ReAct (Reason + Act) The Think, then Do approach. The Agent thinks (What should I do?), acts (calls a tool), observes the result, and then thinks again. It loops until the job is done.
  • Plan-and-Execute Make a full plan first, then do it step-by-step.
  • Chain of Thought (CoT) Writing out the reasoning steps before giving an answer to improve accuracy.
  • Self-Correction Checking its own work and fixing mistakes.
  • Memory Equipping the Agent with short-term or long-term memory allows it to recall previous conversations or key details, enabling more coherent and personalized responses.
In Lesson 7, we manually built a Brain using Knowledge Retrieval → LLM to Decide → If/Else → Search. It worked, but it was complicated to build. Is there a simpler way? Yes, and here it is.

Agent Node

The Agent Node is a highly encapsulated intelligent unit. You just need to set a goal for it through instructions and provide the tools it might need. It can then autonomously think, plan, select, and call tools internally (using the selected Agent Strategy, such as ReAct, and the model’s Function Calling capability) until it completes your goal. In Dify, this greatly simplifies building complex Agentic Workflows.

Hands-on 1: Build with Agent Node

Our goal is to replace that complex manual logic inside our Iteration loop with a single, smart Agent Node.
1

Clean up the Iteration

Go to the sub-process of the Iteration. Keep the Knowledge Retrieval node and delete the other nodes inside it.
Iteration
2

Add the Agent Node

Add an Agent node right after the Knowledge Retrieval node.
Add Agent Node
3

Install Agent Strategy

Since we haven’t used this before, we need to install a strategy from the Marketplace. Click the Agent node. In the right panel, look for Agent Strategy. Click Find more in Marketplace.
Search Agent Strategy
4

Pick an Agent Strategy

In the Marketplace, find Dify Agent Strategy and install it.
Choose Agent Strategy
5

Select ReAct

Back in your workflow (refresh if needed), select ReAct under Agent Strategy.
Select ReAct
Why ReAct here? ReAct (Reason + Act) is a strategy that mimics human problem-solving using a Think → Do → Check loop.
  1. Reason: The Agent thinks, What should I do next? (e.g., Check the Knowledge Base).
  2. Act: It performs the action.
  3. Observe: It checks the result. If the answer isn’t found, it repeats the cycle (e.g., Okay, I need to search Google).
This thinking-while-doing approach is perfect for complex tasks where the next step depends on the previous result.
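The Reason → Act → Observe loop above can be sketched in a few lines of Python. This is an illustrative toy, not Dify’s internal implementation: `search_kb` and `search_google` are hypothetical stub tools standing in for the Knowledge Retrieval node and the Google Search tool.

```python
def search_kb(question):
    # Stub: pretend the knowledge base only knows about pricing.
    if "price" in question.lower():
        return "Dify offers a free Sandbox plan."
    return None  # Nothing relevant found

def search_google(question):
    # Stub standing in for the real Google Search tool.
    return f"Top web result for: {question}"

def react_answer(question):
    """ReAct-style loop: Reason (pick a tool), Act (call it), Observe (check)."""
    for tool in (search_kb, search_google):  # Reason: prefer the KB first
        observation = tool(question)         # Act
        if observation:                      # Observe: stop once we have an answer
            return observation
    return "I could not find an answer."

print(react_answer("What is the price of Dify?"))
# -> Dify offers a free Sandbox plan.
```

The key property is that the second tool is only called when observing the first result shows it was insufficient — exactly the fallback logic we wired by hand in Lesson 7.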
6

Choose a Model

ReAct is a thinking strategy, but to actually pull off the action part, the AI needs the right “physical” skill, which is called Function Calling. Select a model that supports Function Calling. Here, we choose gpt-5.
Why Function Calling? One of the core capabilities of an Agent Node is autonomously calling tools. Function Calling is the key technology that lets the model understand when and how to use the tools you provide (like Google Search). If the model doesn’t support this feature, the Agent cannot interact with tools effectively and loses most of its autonomous decision-making capability.
Choose a Model
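Under the hood, Function Calling works by describing each tool to the model as a structured schema; the model then decides when to emit a call to it. The exact wire format depends on the provider — the OpenAI-style JSON schema below is one common shape, shown purely for illustration (the `google_search` name and parameters are assumptions, not Dify’s actual tool definition).

```python
# An OpenAI-style function/tool definition: name, description, and a JSON
# Schema for the arguments. The model uses the description to decide *when*
# to call the tool and the schema to decide *how* to fill in its arguments.
google_search_tool = {
    "type": "function",
    "function": {
        "name": "google_search",
        "description": "Search the web for up-to-date information.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query to send to Google.",
                }
            },
            "required": ["query"],
        },
    },
}
```

A model without Function Calling support never learns this schema exists, which is why the Agent Node loses its tool-using abilities with such models.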
7

Add Tool

Click the Agent node. Click the plus (+) icon in the tool list and select Google Search.
Add Tool
8

Add Instructions

We need to tell the Agent specifically what to do with the tools and context we are giving it. Copy and paste the following instructions into the Instruction field:
Goal: Answer user questions about Dify products.

Steps:
1. I have provided a relevant internal knowledge base retrieval result. First, judge if this result can fully answer the user's questions.
2. If the context clearly answers it, generate the final answer based on the context.
3. If the answer is insufficient or irrelevant, use the Google Search tool to find the latest information and generate the answer based on search results.

Requirement: Keep the final answer concise and accurate.
Add Instructions
9

Context and Query

Your configuration here is crucial for the Agent to see the data.
  • Context: Select Knowledge Retrieval / (x) result Array[Object] from the Knowledge Retrieval node (This passes the knowledge base content to the Agent).
  • Query: Select Iteration/{x} item from the Iteration node.
Why item instead of the original email_content? We used the Parameter Extractor to extract a list of questions (question_list) from the email_content. The Iteration node processes this list one by one, and item represents the specific question currently being handled. Using item as the query input lets the Agent focus on the current task, improving the accuracy of its decisions and actions.
Context and Query
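In plain Python terms, the Iteration node behaves roughly like the loop below. The variable names (`question_list`, `item`) mirror Dify’s, but the loop body is a placeholder for what the Agent actually does with each question.

```python
# Sketch of the Iteration node: question_list came from the Parameter
# Extractor; `item` is the single question the Agent sees on each pass.
question_list = [
    "What is Dify's pricing?",
    "Does Dify support self-hosting?",
]

answers = []
for item in question_list:
    # Inside the loop, the Agent receives only `item` as its Query,
    # plus the Knowledge Retrieval result as Context.
    answers.append(f"Answer to: {item}")

print(answers)
```

This is why `item`, not `email_content`, is the right Query: each pass of the loop should be a single, focused question.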
10

Set Iteration Output

Select Agent/{x} text String as the output variable.
Set Iteration Output
🎉 The Iteration node is now upgraded.
Since the Iteration node generates a list of answers, we need to stitch them back together into one email.
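Conceptually, the “stitching” step is just joining the per-question answers into one message — though in the workflow we delegate it to an LLM node so the reply reads naturally. A rough sketch (greeting, signature, and answers here are invented placeholders):

```python
# Naive version of the final-editor step: concatenate the Iteration node's
# output list into a single email body. The real workflow lets an LLM do
# this so the reply flows as one coherent message.
answers = [
    "Dify offers a free Sandbox plan.",
    "Yes, Dify can be self-hosted via Docker.",
]

email_reply = (
    "Hi there,\n\n"
    + "\n\n".join(answers)
    + "\n\nBest regards,\nAnne"
)
print(email_reply)
```

The LLM node improves on this by smoothing transitions and matching the customer’s tone, which simple string joining cannot do.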

Hands-on 2: Final Assembly

1

The Final Editor (LLM)

  1. Add an LLM node after the Iteration node.
  2. Click on it and add a prompt to the system prompt. Feel free to use the prompt below, or edit it yourself.
    Combine all answers for the original email.
    Write a complete, clear, and friendly reply to the customer.
    Signature: Anne
    
  3. Add a user message, replacing the answers, email content, and customer name with variables. Here’s what the LLM node looks like now.
    Final LLM
2

Add Output Node

Set the output variable to the LLM’s text and name it email_reply.
Add Output Node
Here comes the final workflow.
Final Workflow
Click Test Run. Ask a mix of questions. Watch how the Agent Node autonomously decides when to use the context and when to use Google search.

Mini Challenge

  1. Could we use an Agent Node to replace the entire Iteration loop? How would you design the prompt to handle a list of questions all at once?
  2. What other information could you feed into the Agent’s Context field to help it make better decisions?