Technical Spec

For those already familiar with LLM application tech stacks, this document serves as a shortcut to understanding Dify's unique advantages.

We adopt transparent policies around product specifications to ensure decisions are made based on complete understanding. Such transparency not only benefits your technical selection, but also promotes deeper comprehension within the community for active contributions.

Project Basics

Established

March 2023

Open Source License

Official R&D Team

Over 10 full-time employees

Community Contributors

Over 120 people

Backend Technology

Python/Flask/PostgreSQL

Frontend Technology

Next.js

Codebase Size

Over 130,000 lines

Release Frequency

Average once per week

Technical Features

LLM Inference Engines

Dify Runtime (LangChain removed as of v0.4)

Commercial Models Supported

10+, including OpenAI and Anthropic. New mainstream models are onboarded within 48 hours.

MaaS Vendor Supported

7: Hugging Face, Replicate, AWS Bedrock, NVIDIA, GroqCloud, together.ai, OpenRouter

Local Model Inference Runtimes Supported

6: Xorbits Inference (recommended), OpenLLM, LocalAI, ChatGLM, Ollama, NVIDIA TIS

OpenAI Interface Standard Model Integration Supported
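For illustration, the sketch below shows the kind of request an OpenAI-compatible endpoint accepts; the base URL, API key, and model name are placeholders, not Dify configuration.

```python
# Minimal sketch of an OpenAI-compatible chat completion request.
# The base URL, API key, and model name are placeholders; any server
# implementing this interface standard can be integrated the same way.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",  # hypothetical compatible endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "your-model-name",
        "messages": [{"role": "user", "content": "Hello!"}],
        "temperature": 0.7,
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```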

Multimodal Capabilities

  • ASR models

  • Rich-text models, up to GPT-4V specs

Built-in App Types

Text generation, Conversational, Agent, Workflow, Group (Q2 2024)

Prompt-as-a-Service Orchestration

Widely praised visual orchestration interface: modify prompts and preview the results in one place.

Orchestration Modes

  • Simple orchestration

  • Assistant orchestration

  • Flow orchestration

  • Multi-Agent orchestration (Q2 2024)

Prompt Variable Types

  • String

  • Radio enum

  • External API

  • File (Q2 2024)

Agentic Workflow Features

Industry-leading visual workflow orchestration interface with live node editing and debugging, a modular DSL, and a native code runtime, designed for building more complex, reliable, and stable LLM applications.

Supported Nodes

  • LLM

  • Knowledge Retrieval

  • Question Classifier

  • IF/ELSE

  • CODE

  • Template

  • HTTP Request

  • Tool

RAG Features

Industry-first visual knowledge base management interface, supporting snippet previews and recall testing.

Indexing Methods

  • Keywords

  • Text vectors

  • LLM-assisted question-segment mode

Retrieval Methods

  • Keywords

  • Text similarity matching

  • Hybrid Search (see the sketch after this list)

  • N choose 1

  • Multi-path recall
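To make the hybrid search entry above concrete, here is a minimal, generic sketch of fusing a keyword-based score with a vector-similarity score. It illustrates the idea only; the weighting scheme and the example scores are assumptions, not Dify's implementation.

```python
# Generic sketch of hybrid search: blend a keyword score (e.g. BM25) with a
# vector-similarity score via a weighted sum. Weights and scores are
# illustrative assumptions, not Dify internals.
def hybrid_score(keyword_score: float, vector_score: float, alpha: float = 0.5) -> float:
    """Blend two normalized relevance scores; alpha weights the vector side."""
    return alpha * vector_score + (1 - alpha) * keyword_score

# Candidate chunks with pre-computed, normalized scores from both retrievers.
candidates = {
    "chunk-a": {"keyword": 0.82, "vector": 0.61},
    "chunk-b": {"keyword": 0.40, "vector": 0.93},
}

ranked = sorted(
    candidates.items(),
    key=lambda item: hybrid_score(item[1]["keyword"], item[1]["vector"]),
    reverse=True,
)
print(ranked)  # highest fused score first; a re-rank model could refine this order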

Recall Optimization

  • Re-rank models

ETL Capabilities

Automated cleaning of TXT, Markdown, PDF, HTML, DOC, and CSV formats. The Unstructured service enables maximum format support (see the sketch below).

Sync Notion docs as knowledge bases.
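As a sketch of the cleaning step referenced above, the open-source `unstructured` library can partition a document into plain-text elements; the file path is a placeholder, and this is an illustration of the technique rather than Dify's internal pipeline code.

```python
# Sketch: extract clean text elements from a document with the open-source
# `unstructured` library. The file path is a placeholder; this illustrates the
# kind of automated cleaning described above, not Dify's internal pipeline.
from unstructured.partition.auto import partition

elements = partition(filename="example.pdf")  # auto-detects the file type
text_chunks = [str(el) for el in elements if str(el).strip()]
print(f"Extracted {len(text_chunks)} text elements")
```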

Vector Databases Supported

Qdrant (recommended), Weaviate, Zilliz
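For context, a minimal sketch of storing and querying embeddings with the `qdrant-client` SDK; the collection name, vector size, and vectors are placeholders, and how Dify wires this up internally is not shown here.

```python
# Minimal sketch of indexing and querying embeddings with qdrant-client.
# Collection name, vector size, and the vectors themselves are placeholders.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(":memory:")  # local in-memory instance for illustration
client.recreate_collection(
    collection_name="knowledge_chunks",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)
client.upsert(
    collection_name="knowledge_chunks",
    points=[PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"text": "hello"})],
)
hits = client.search(collection_name="knowledge_chunks", query_vector=[0.1, 0.2, 0.3, 0.4], limit=3)
print(hits)
```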

Agent Technologies

ReAct, Function Call.
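To illustrate the Function Call approach, here is a tool definition in the OpenAI function-calling style; the tool name and parameters are invented for the example.

```python
# Sketch of a tool definition in the OpenAI function-calling style.
# The tool name and parameters are invented for illustration; the model
# returns a structured call to such a tool instead of free-form text.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}
```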

Tooling Support

  • Invoke OpenAI Plugin standard tools

  • Directly load OpenAPI Specification APIs as tools
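As an illustration of the OpenAPI-as-tool path, the snippet below is a minimal OpenAPI 3 definition (expressed here as a Python dict) with a single operation; the server URL, path, and operationId are invented for the example.

```python
# Minimal OpenAPI 3 definition, expressed as a Python dict, of the kind that
# can be loaded as a tool. Server URL, path, and operationId are invented.
weather_spec = {
    "openapi": "3.0.0",
    "info": {"title": "Weather API", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getWeather",
                "summary": "Get current weather for a city.",
                "parameters": [
                    {
                        "name": "city",
                        "in": "query",
                        "required": True,
                        "schema": {"type": "string"},
                    }
                ],
                "responses": {"200": {"description": "Current weather"}},
            }
        }
    },
}
```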

Built-in Tools

  • 30+ tools (as of Q1 2024)

Logging

Supported; annotations can be created from logs.

Annotation Reply

Based on human-annotated Q&A pairs, used for similarity-based replies. Exportable in a format suitable for model fine-tuning.
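The sketch below illustrates the general idea of similarity-based annotation replies: compare the embedded incoming question to stored annotated questions and return the human-written answer when similarity clears a threshold. The vectors, threshold, and embedding step (assumed to happen elsewhere) are placeholders, not Dify's actual implementation.

```python
# Generic sketch of annotation reply: if the incoming question is similar
# enough to a human-annotated question, return the annotated answer directly.
# Vectors and the threshold value are placeholders, not Dify internals.
import numpy as np

def cosine_similarity(a, b) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def annotation_reply(query_vec, annotations, threshold=0.9):
    """annotations: list of (question_vector, answer) pairs."""
    best = max(annotations, key=lambda pair: cosine_similarity(query_vec, pair[0]))
    score = cosine_similarity(query_vec, best[0])
    return best[1] if score >= threshold else None  # fall back to the LLM otherwise

annotations = [(np.array([0.1, 0.9]), "Annotated answer A")]
print(annotation_reply(np.array([0.12, 0.88]), annotations))
```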

Content Moderation

OpenAI Moderation or external APIs
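For reference, a minimal sketch of calling the OpenAI Moderation endpoint with the official Python SDK; whether and how Dify wraps this is configuration-dependent.

```python
# Sketch: check user input against the OpenAI Moderation endpoint before it
# reaches the model. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
result = client.moderations.create(input="some user input to check")
if result.results[0].flagged:
    print("Input flagged by moderation; block or rewrite before sending to the LLM.")
```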

Team Collaboration

Workspaces, multi-member management

API Specs

RESTful; covers most features.
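As an illustration of the API surface, the sketch below shows a typical chat request in the style of Dify's application API; verify the exact base URL, path, and fields against the API reference for your version rather than relying on this example.

```python
# Illustrative call against a Dify application API in its REST style.
# Base URL, path, and field names are shown from memory of the public API
# reference; confirm them in the API docs before relying on this.
import requests

resp = requests.post(
    "https://api.dify.ai/v1/chat-messages",  # or your self-hosted base URL
    headers={"Authorization": "Bearer YOUR_APP_API_KEY"},
    json={
        "inputs": {},
        "query": "What can you do?",
        "response_mode": "blocking",
        "user": "user-123",
    },
    timeout=30,
)
print(resp.json())
```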

Deployment Methods

Docker, Helm
