.env file.
Common Variables
These URL variables configure the addresses of Dify’s various services. For single-domain deployments behind Nginx (the default Docker Compose setup), these can be left empty—the system auto-detects them from the incoming request. Configure them when using custom domains, split-domain deployments, or a reverse proxy.
CONSOLE_API_URL
Default: (empty)
The public URL of Dify’s backend API. Set this if you use OAuth login (GitHub, Google), Notion integration, or any plugin that requires OAuth—these features need an absolute callback URL to redirect users back after authorization. Also determines whether secure (HTTPS-only) cookies are used.
Example: https://api.console.dify.ai
CONSOLE_WEB_URL
Default: (empty)
The public URL of Dify’s console frontend. Used to build links in all system emails (invitations, password resets, notifications) and to redirect users back to the console after OAuth login. Also serves as the default CORS allowed origin if CONSOLE_CORS_ALLOW_ORIGINS is not set.
If empty, links in system emails will be broken. Set this even in single-domain setups if you use email features.
Example: https://console.dify.ai
SERVICE_API_URL
Default: (empty)
The API Base URL shown to developers in the Dify console—the URL they copy into their code to call the Dify API. If empty, it is auto-detected from the current request (e.g., http://localhost/v1). Set this to ensure a consistent URL when your server is accessible via multiple addresses.
Example: https://api.dify.ai
APP_API_URL
Default: (empty)
The backend API URL for the WebApp frontend (published apps). This variable is only used by the web frontend container, not the Python backend. If empty, the Docker image defaults to http://127.0.0.1:5001.
Example: https://api.app.dify.ai
APP_WEB_URL
Default: (empty)
The public URL where published WebApps are accessible. Required for the Human Input node in workflows—form links in email notifications are built as {APP_WEB_URL}/form/{token}. If empty, Human Input emails will not include valid form links.
Example: https://app.dify.ai
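As a concrete sketch of a split-domain deployment, the five URL variables above might be set like this (all hostnames are illustrative assumptions, not defaults):

```shell
# .env sketch for a split-domain deployment (hostnames are illustrative)
CONSOLE_API_URL=https://api.example.com
CONSOLE_WEB_URL=https://console.example.com
SERVICE_API_URL=https://api.example.com
APP_API_URL=https://api.example.com
APP_WEB_URL=https://app.example.com
```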
TRIGGER_URL
Default: http://localhost
The publicly accessible URL for webhook and plugin trigger endpoints. External systems use this address to invoke your workflows. Dify builds trigger callback URLs like {TRIGGER_URL}/triggers/webhook/{id} and displays them in the console.
For triggers to work from external systems, this must point to a public domain or IP address they can reach.
FILES_URL
Default: (empty; falls back to CONSOLE_API_URL)
The base URL for file preview and download links. Dify generates signed, time-limited URLs for all files (uploaded documents, tool outputs, workspace logos) and serves them to the frontend and multi-modal models.
Set this if you use file processing plugins, or if you want file URLs on a dedicated domain. If both FILES_URL and CONSOLE_API_URL are empty, file previews will not work.
Example: https://upload.example.com or http://<your-ip>:5001
INTERNAL_FILES_URL
Default: (empty; falls back to FILES_URL)
The file access URL used for communication between services inside the Docker network (e.g., plugin daemon, PDF/Word extractors). These internal services may not be able to reach the external FILES_URL if it routes through Nginx or a public domain.
If empty, internal services use FILES_URL. Set this when internal services can’t reach the external URL.
Example: http://api:5001
FILES_ACCESS_TIMEOUT
Default: 300 (5 minutes)
How long signed file URLs remain valid, in seconds. After this time, the URL is rejected and the file must be re-requested. Increase for long-running processes; decrease for tighter security.
System Encoding
| Variable | Default | Description |
|---|---|---|
LANG | C.UTF-8 | System locale setting. Ensures UTF-8 encoding. |
LC_ALL | C.UTF-8 | Locale override for all categories. |
PYTHONIOENCODING | utf-8 | Python I/O encoding. |
UV_CACHE_DIR | /tmp/.uv-cache | UV package manager cache directory. Avoids permission issues with non-existent home directories. |
Server Configuration
Logging
| Variable | Default | Description |
|---|---|---|
LOG_LEVEL | INFO | Minimum log severity. Controls what gets logged across all handlers (file + console). Levels from least to most severe: DEBUG, INFO, WARNING, ERROR, CRITICAL. |
LOG_OUTPUT_FORMAT | text | text produces human-readable lines with timestamp, level, thread, and trace ID. json produces structured JSON for log aggregation tools (ELK, Datadog, etc.). |
LOG_FILE | /app/logs/server.log | Log file path. When set, enables file-based logging with automatic rotation. The directory is created automatically. When empty, logs only go to console. |
LOG_FILE_MAX_SIZE | 20 | Maximum log file size in MB before rotation. When exceeded, the active file is renamed to .1 and a new file is started. |
LOG_FILE_BACKUP_COUNT | 5 | Number of rotated log files to keep. With defaults, at most 6 files exist: the active file plus 5 backups. |
LOG_DATEFORMAT | %Y-%m-%d %H:%M:%S | Timestamp format for text-format logs (strftime codes). Ignored by JSON format. |
LOG_TZ | UTC | Timezone for log timestamps (pytz format, e.g., Asia/Shanghai). Only applies to text format—JSON always uses UTC. Also sets Celery’s task scheduling timezone. |
General
| Variable | Default | Description |
|---|---|---|
DEBUG | false | Enables verbose logging: workflow node inputs/outputs, tool execution details, full LLM prompts and responses, and app startup timing. Useful for local development; not recommended for production as it may expose sensitive data in logs. |
FLASK_DEBUG | false | Standard Flask debug mode flag. Not actively used by Dify—DEBUG is the primary control. |
ENABLE_REQUEST_LOGGING | false | Logs a compact access line (METHOD PATH STATUS DURATION TRACE_ID) for every HTTP request. When LOG_LEVEL is also set to DEBUG, additionally logs full request and response bodies as JSON. |
DEPLOY_ENV | PRODUCTION | Tags monitoring data in Sentry and OpenTelemetry so you can filter errors and traces by environment. Also sent as the X-Env response header. Does not change application behavior. |
MIGRATION_ENABLED | true | When true, runs database schema migrations (flask upgrade-db) automatically on container startup. Docker only. Set to false if you run migrations separately. For source code launches, run flask db upgrade manually. |
CHECK_UPDATE_URL | https://updates.dify.ai | The console checks this URL for newer Dify versions. Set to empty to disable—useful for air-gapped environments or to prevent external HTTP calls. |
OPENAI_API_BASE | https://api.openai.com/v1 | Legacy variable. Not actively used by Dify’s own code. May be picked up by the OpenAI Python SDK if present in the environment. |
SECRET_KEY
Default: (pre-filled in .env.example; must be replaced for production)
Used for session cookie signing, JWT authentication tokens, file URL signatures (HMAC-SHA256), and encrypting third-party OAuth credentials (AES-256). Generate a strong key before first launch:
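One common way is with OpenSSL:

```shell
# Generate a 42-byte random key, base64-encoded
openssl rand -base64 42
```

Copy the output into SECRET_KEY in your .env file.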
INIT_PASSWORD
Default: (empty)
Optional security gate for first-time setup. When set, the /install page requires this password before the admin account can be created—preventing unauthorized setup if your server is exposed. Once setup is complete, this variable has no further effect. Maximum length: 30 characters.
Token & Request Limits
| Variable | Default | Description |
|---|---|---|
ACCESS_TOKEN_EXPIRE_MINUTES | 60 | How long a login session’s access token stays valid (in minutes). When it expires, the browser silently refreshes it using the refresh token—users are not logged out. |
REFRESH_TOKEN_EXPIRE_DAYS | 30 | How long a user can stay logged in without re-entering credentials (in days). If the user doesn’t visit within this period, they must log in again. |
APP_MAX_EXECUTION_TIME | 1200 | Maximum time (in seconds) an app execution can run before being terminated. Works alongside WORKFLOW_MAX_EXECUTION_TIME—both enforce the same default of 20 minutes, but this one applies at the app queue level while the other applies at the workflow engine level. Increase both if your workflows need more time. |
APP_DEFAULT_ACTIVE_REQUESTS | 0 | Default concurrent request limit per app, used when an app doesn’t have a custom limit set in the UI. 0 means unlimited. The effective limit is the smaller of this and APP_MAX_ACTIVE_REQUESTS. |
APP_MAX_ACTIVE_REQUESTS | 0 | Global ceiling for concurrent requests per app. Overrides per-app settings if they exceed this value. 0 means unlimited. |
Container Startup Configuration
Only effective when starting with the Docker image or Docker Compose.
| Variable | Default | Description |
|---|---|---|
DIFY_BIND_ADDRESS | 0.0.0.0 | Network interface the API server binds to. 0.0.0.0 listens on all interfaces; set to 127.0.0.1 to restrict to localhost only. |
DIFY_PORT | 5001 | Port the API server listens on. |
SERVER_WORKER_AMOUNT | 1 | Number of Gunicorn worker processes. With gevent (default), each worker handles multiple concurrent connections via greenlets, so 1 is usually sufficient. For sync workers, use (2 x CPU cores) + 1. Reference. |
SERVER_WORKER_CLASS | gevent | Gunicorn worker type. Gevent provides lightweight async concurrency. Changing this breaks psycopg2 and gRPC patching—it is strongly discouraged. |
SERVER_WORKER_CONNECTIONS | 10 | Maximum concurrent connections per worker. Only applies to async workers (gevent). If you experience connection rejections or slow responses under load, try increasing this value. |
GUNICORN_TIMEOUT | 360 | If a worker doesn’t respond within this many seconds, Gunicorn kills and restarts it. The default of 360 (6 minutes) supports the long-lived SSE connections used for streaming LLM responses. |
CELERY_WORKER_CLASS | (empty; defaults to gevent) | Celery worker type. Same gevent patching requirements as SERVER_WORKER_CLASS—it is strongly discouraged to change. |
CELERY_WORKER_AMOUNT | (empty; defaults to 1) | Number of Celery worker processes. Only used when autoscaling is disabled. |
CELERY_AUTO_SCALE | false | Enable dynamic autoscaling. When enabled, Celery monitors queue depth and spawns/kills workers between CELERY_MIN_WORKERS and CELERY_MAX_WORKERS. |
CELERY_MAX_WORKERS | (empty; defaults to CPU count) | Maximum workers when autoscaling is enabled. |
CELERY_MIN_WORKERS | (empty; defaults to 1) | Minimum workers when autoscaling is enabled. |
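If you do run sync workers despite the caveats above, the (2 x CPU cores) + 1 rule of thumb can be computed on the host. This is a sketch assuming a Linux host where nproc is available:

```shell
# Gunicorn rule of thumb for sync workers: (2 x cores) + 1
cores=$(nproc)
workers=$((2 * cores + 1))
echo "SERVER_WORKER_AMOUNT=$workers"
```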
API Tool Configuration
| Variable | Default | Description |
|---|---|---|
API_TOOL_DEFAULT_CONNECT_TIMEOUT | 10 | Maximum time (in seconds) to wait for establishing a TCP connection when API Tool nodes call external APIs. |
API_TOOL_DEFAULT_READ_TIMEOUT | 60 | Maximum time (in seconds) to wait for receiving response data from external APIs called by API Tool nodes. |
Database Configuration
The database uses PostgreSQL by default. OceanBase, MySQL, and seekdb are also supported.
| Variable | Default | Description |
|---|---|---|
DB_TYPE | postgresql | Database type. Supported values: postgresql, mysql, oceanbase, seekdb. MySQL-compatible databases like TiDB can use mysql. |
DB_USERNAME | postgres | Database username. URL-encoded in the connection string, so special characters are safe to use. |
DB_PASSWORD | difyai123456 | Database password. URL-encoded in the connection string, so characters like @, :, % are safe to use. |
DB_HOST | db_postgres | Database server hostname. |
DB_PORT | 5432 | Database server port. If using MySQL, set this to 3306. |
DB_DATABASE | dify | Database name. |
Connection Pool
These control how Dify manages its pool of database connections. The defaults work well for most deployments.
| Variable | Default | Description |
|---|---|---|
SQLALCHEMY_POOL_SIZE | 30 | Number of persistent connections kept in the pool. |
SQLALCHEMY_MAX_OVERFLOW | 10 | Additional temporary connections allowed when the pool is full. With default settings, up to 40 connections (30 + 10) can exist simultaneously. |
SQLALCHEMY_POOL_RECYCLE | 3600 | Recycle connections after this many seconds to prevent stale connections. |
SQLALCHEMY_POOL_TIMEOUT | 30 | How long to wait for a connection when the pool is exhausted. Requests fail with a timeout error if no connection frees up in time. |
SQLALCHEMY_POOL_PRE_PING | false | Test each connection with a lightweight query before using it. Prevents “connection lost” errors but adds slight latency. Recommended for production with unreliable networks. |
SQLALCHEMY_POOL_USE_LIFO | false | Reuse the most recently returned connection (LIFO) instead of rotating evenly (FIFO). LIFO keeps fewer connections “warm” and can reduce overhead. |
SQLALCHEMY_ECHO | false | Print all SQL statements to logs. Useful for debugging query issues. |
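Each API or worker process maintains its own pool, so the database must accept up to (pool size + max overflow) connections per process. A quick sanity check with the defaults, assuming two processes purely for illustration:

```shell
# Per-process ceiling is pool_size + max_overflow; multiply by process count
pool_size=30
max_overflow=10
processes=2   # assumption: one API server + one Celery worker
echo "connection ceiling: $(( (pool_size + max_overflow) * processes ))"
# prints: connection ceiling: 80
```

Keep this total below the database server’s own limit (POSTGRES_MAX_CONNECTIONS, default 100).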
PostgreSQL Performance Tuning
These are passed as startup arguments to the PostgreSQL container—they configure the database server, not the Dify application.
| Variable | Default | Description |
|---|---|---|
POSTGRES_MAX_CONNECTIONS | 100 | Maximum number of database connections. Reference |
POSTGRES_SHARED_BUFFERS | 128MB | Shared memory for buffers. Recommended: 25% of available memory. Reference |
POSTGRES_WORK_MEM | 4MB | Memory per database worker for working space. Reference |
POSTGRES_MAINTENANCE_WORK_MEM | 64MB | Memory reserved for maintenance activities. Reference |
POSTGRES_EFFECTIVE_CACHE_SIZE | 4096MB | Planner’s assumption about effective cache size. Reference |
POSTGRES_STATEMENT_TIMEOUT | 0 | Max statement duration before termination (ms). 0 means no timeout. Reference |
POSTGRES_IDLE_IN_TRANSACTION_SESSION_TIMEOUT | 0 | Max idle-in-transaction session duration (ms). 0 means no timeout. Reference |
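The 25% guideline for POSTGRES_SHARED_BUFFERS can be derived from the host’s memory. A sketch assuming a Linux host (reads /proc/meminfo, where MemTotal is in kB):

```shell
# Suggest shared_buffers as roughly 25% of total RAM (Linux only)
awk '/MemTotal/ { printf "POSTGRES_SHARED_BUFFERS=%dMB\n", $2 / 4 / 1024 }' /proc/meminfo
```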
MySQL Performance Tuning
These are passed as startup arguments to the MySQL container—they configure the database server, not the Dify application.
| Variable | Default | Description |
|---|---|---|
MYSQL_MAX_CONNECTIONS | 1000 | Maximum number of MySQL connections. |
MYSQL_INNODB_BUFFER_POOL_SIZE | 512M | InnoDB buffer pool size. Recommended: 70-80% of available memory for dedicated MySQL server. Reference |
MYSQL_INNODB_LOG_FILE_SIZE | 128M | InnoDB log file size. Reference |
MYSQL_INNODB_FLUSH_LOG_AT_TRX_COMMIT | 2 | InnoDB flush log at transaction commit. Options: 0 (no flush), 1 (flush and sync), 2 (flush to OS cache). Reference |
Redis Configuration
Configure these to connect Dify to your Redis instance. Dify supports three deployment modes: standalone (default), Sentinel, and Cluster.
| Variable | Default | Description |
|---|---|---|
REDIS_HOST | redis | Redis server hostname. Only used in standalone mode; ignored when Sentinel or Cluster mode is enabled. |
REDIS_PORT | 6379 | Redis server port. Only used in standalone mode. |
REDIS_USERNAME | (empty) | Redis 6.0+ ACL username. Applies to all modes (standalone, Sentinel, Cluster). |
REDIS_PASSWORD | difyai123456 | Redis authentication password. For Cluster mode, use REDIS_CLUSTERS_PASSWORD instead. |
REDIS_DB | 0 | Redis database number (0–15). Only applies to standalone and Sentinel modes. Make sure this doesn’t collide with Celery’s database (configured in CELERY_BROKER_URL; default is DB 1). |
REDIS_USE_SSL | false | Enable SSL/TLS for the Redis connection. Does not automatically apply to Sentinel protocol. |
REDIS_MAX_CONNECTIONS | (empty) | Maximum connections in the Redis pool. Leave unset for the library default. Set this to match your Redis server’s maxclients if needed. |
Redis SSL Configuration
Only applies when REDIS_USE_SSL=true. These same settings are also used by the Celery broker when its URL uses the rediss:// scheme.
| Variable | Default | Description |
|---|---|---|
REDIS_SSL_CERT_REQS | CERT_NONE | Certificate verification level: CERT_NONE (no verification), CERT_OPTIONAL, or CERT_REQUIRED (full verification). |
REDIS_SSL_CA_CERTS | (empty) | Path to CA certificate file for verifying the Redis server. |
REDIS_SSL_CERTFILE | (empty) | Path to client certificate for mutual TLS authentication. |
REDIS_SSL_KEYFILE | (empty) | Path to client private key for mutual TLS authentication. |
Redis Sentinel Mode
Sentinel provides automatic master discovery and failover for high availability. Mutually exclusive with Cluster mode.
| Variable | Default | Description |
|---|---|---|
REDIS_USE_SENTINEL | false | Enable Redis Sentinel mode. When enabled, REDIS_HOST/REDIS_PORT are ignored; Dify connects to Sentinel nodes instead and asks for the current master. |
REDIS_SENTINELS | (empty) | Sentinel node addresses. Format: <ip1>:<port1>,<ip2>:<port2>,<ip3>:<port3>. These are the Sentinel instances, not the Redis servers. |
REDIS_SENTINEL_SERVICE_NAME | (empty) | The logical service name Sentinel monitors (configured in sentinel.conf). Dify calls master_for(service_name) to discover the current master. |
REDIS_SENTINEL_USERNAME | (empty) | Username for authenticating with Sentinel nodes. Separate from REDIS_USERNAME, which authenticates with the Redis master/replicas. |
REDIS_SENTINEL_PASSWORD | (empty) | Password for authenticating with Sentinel nodes. Separate from REDIS_PASSWORD. |
REDIS_SENTINEL_SOCKET_TIMEOUT | 0.1 | Socket timeout (in seconds) for communicating with Sentinel nodes. Default 0.1s assumes fast local network. For cloud/WAN deployments, increase to 1.0–5.0s to prevent intermittent timeouts. |
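A minimal Sentinel sketch, with addresses, service name, and passwords as illustrative placeholders:

```shell
# Three Sentinel nodes watching a master named "mymaster" (all values illustrative)
REDIS_USE_SENTINEL=true
REDIS_SENTINELS=10.0.0.1:26379,10.0.0.2:26379,10.0.0.3:26379
REDIS_SENTINEL_SERVICE_NAME=mymaster
REDIS_PASSWORD=your-redis-password
REDIS_SENTINEL_PASSWORD=your-sentinel-password
```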
Redis Cluster Mode
Cluster mode provides automatic sharding across multiple Redis nodes. Mutually exclusive with Sentinel mode.
| Variable | Default | Description |
|---|---|---|
REDIS_USE_CLUSTERS | false | Enable Redis Cluster mode. |
REDIS_CLUSTERS | (empty) | Cluster nodes. Format: <ip1>:<port1>,<ip2>:<port2>,<ip3>:<port3> |
REDIS_CLUSTERS_PASSWORD | (empty) | Password for the Redis Cluster. |
Celery Configuration
Configure the background task queue used for dataset indexing, email sending, and scheduled jobs.
CELERY_BROKER_URL
Default: redis://:difyai123456@redis:6379/1
Redis connection URL for the Celery message broker.
Direct connection format:
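A sketch of the URL shape, inferred from the default value:

```shell
# redis://:<password>@<host>:<port>/<database-number>
CELERY_BROKER_URL=redis://:difyai123456@redis:6379/1
```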
| Variable | Default | Description |
|---|---|---|
CELERY_BACKEND | redis | Where Celery stores task results. Options: redis (fast, in-memory) or database (stores in your main database). |
BROKER_USE_SSL | false | Auto-enabled when CELERY_BROKER_URL uses rediss:// scheme. Applies the Redis SSL certificate settings to the broker connection. |
CELERY_USE_SENTINEL | false | Enable Redis Sentinel mode for the Celery broker. |
CELERY_SENTINEL_MASTER_NAME | (empty) | Sentinel service name (Master Name). |
CELERY_SENTINEL_PASSWORD | (empty) | Password for Sentinel authentication. Separate from REDIS_SENTINEL_PASSWORD—they can differ if you use different Sentinel clusters for caching vs task queuing. |
CELERY_SENTINEL_SOCKET_TIMEOUT | 0.1 | Timeout for connecting to Sentinel in seconds. |
CELERY_TASK_ANNOTATIONS | null | Apply runtime settings to specific tasks (e.g., rate limits). Format: JSON dictionary. Example: {"tasks.add": {"rate_limit": "10/s"}}. Most users don’t need this. |
CORS Configuration
Controls cross-domain access policies for the frontend.
| Variable | Default | Description |
|---|---|---|
WEB_API_CORS_ALLOW_ORIGINS | * | Allowed origins for cross-origin requests to the Web API. Example: https://dify.app |
CONSOLE_CORS_ALLOW_ORIGINS | * | Allowed origins for cross-origin requests to the console API. If not set, falls back to CONSOLE_WEB_URL. |
COOKIE_DOMAIN | (empty) | Set to the shared top-level domain (e.g., example.com) when frontend and backend run on different subdomains. This allows authentication cookies to be shared across subdomains. When empty, cookies use the most secure __Host- prefix and are locked to a single domain. |
NEXT_PUBLIC_COOKIE_DOMAIN | (empty) | Frontend flag for cross-subdomain cookies. Set to 1 (or any non-empty value) to enable—the actual domain is read from COOKIE_DOMAIN on the backend. |
NEXT_PUBLIC_BATCH_CONCURRENCY | 5 | Frontend-only. Controls how many concurrent API calls the UI makes during batch operations. |
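For example, with a console on console.example.com and an API on api.example.com (hostnames are illustrative), cookies can be shared like this:

```shell
# Share auth cookies across subdomains of example.com (illustrative values)
COOKIE_DOMAIN=example.com
NEXT_PUBLIC_COOKIE_DOMAIN=1
CONSOLE_CORS_ALLOW_ORIGINS=https://console.example.com
```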
File Storage Configuration
Configure where Dify stores uploaded files, dataset documents, and encryption keys. Each storage type has its own credential variables—configure only the one you’re using.
STORAGE_TYPE
Default: opendal
Selects the file storage backend. Supported values: opendal, s3, azure-blob, aliyun-oss, google-storage, huawei-obs, volcengine-tos, tencent-cos, baidu-obs, oci-storage, supabase, clickzetta-volume, local (deprecated; internally uses OpenDAL with filesystem scheme).
OpenDAL (Default)
Dify collects all environment variables matching OPENDAL_<SCHEME>_* and passes them to OpenDAL. For example, with OPENDAL_SCHEME=s3, set OPENDAL_S3_ACCESS_KEY_ID, OPENDAL_S3_SECRET_ACCESS_KEY, etc.
| Variable | Default | Description |
|---|---|---|
OPENDAL_SCHEME | fs | Storage service to use. Examples: fs (local filesystem), s3, gcs, azblob. |
Additional variables for the fs scheme:
| Variable | Default | Description |
|---|---|---|
OPENDAL_FS_ROOT | storage | Root directory for local filesystem storage. Created automatically if it doesn’t exist. |
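Following the pass-through pattern described above, pointing OpenDAL at S3 might look like this. The bucket and region key names follow OpenDAL’s s3 configuration and are assumptions here; the values are placeholders:

```shell
# OpenDAL pass-through to S3 (values are placeholders)
STORAGE_TYPE=opendal
OPENDAL_SCHEME=s3
OPENDAL_S3_BUCKET=my-dify-files
OPENDAL_S3_REGION=us-east-1
OPENDAL_S3_ACCESS_KEY_ID=replace-me
OPENDAL_S3_SECRET_ACCESS_KEY=replace-me
```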
S3
| Variable | Default | Description |
|---|---|---|
S3_ENDPOINT | (empty) | S3 endpoint address. Required for non-AWS S3-compatible services (MinIO, etc.). |
S3_REGION | us-east-1 | S3 region. |
S3_BUCKET_NAME | difyai | S3 bucket name. |
S3_ACCESS_KEY | (empty) | S3 Access Key. Not needed when using IAM roles. |
S3_SECRET_KEY | (empty) | S3 Secret Key. Not needed when using IAM roles. |
S3_USE_AWS_MANAGED_IAM | false | Use AWS IAM roles (EC2 instance profile, ECS task role) instead of explicit access key/secret key. When enabled, boto3 auto-discovers credentials from the instance metadata. |
Azure Blob
| Variable | Default | Description |
|---|---|---|
AZURE_BLOB_ACCOUNT_NAME | difyai | Azure storage account name. |
AZURE_BLOB_ACCOUNT_KEY | difyai | Azure storage account key. |
AZURE_BLOB_CONTAINER_NAME | difyai-container | Azure Blob container name. |
AZURE_BLOB_ACCOUNT_URL | https://<your_account_name>.blob.core.windows.net | Azure Blob account URL. |
Google Cloud Storage
| Variable | Default | Description |
|---|---|---|
GOOGLE_STORAGE_BUCKET_NAME | (empty) | Google Cloud Storage bucket name. |
GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64 | (empty) | Base64-encoded service account JSON key. |
Alibaba Cloud OSS
| Variable | Default | Description |
|---|---|---|
ALIYUN_OSS_BUCKET_NAME | (empty) | OSS bucket name. |
ALIYUN_OSS_ACCESS_KEY | (empty) | OSS access key. |
ALIYUN_OSS_SECRET_KEY | (empty) | OSS secret key. |
ALIYUN_OSS_ENDPOINT | https://oss-ap-southeast-1-internal.aliyuncs.com | OSS endpoint. Regions and endpoints reference. |
ALIYUN_OSS_REGION | ap-southeast-1 | OSS region. |
ALIYUN_OSS_AUTH_VERSION | v4 | OSS authentication version. |
ALIYUN_OSS_PATH | (empty) | Object path prefix. Don’t start with /. Reference. |
ALIYUN_CLOUDBOX_ID | (empty) | CloudBox ID for CloudBox-based OSS deployments. |
Tencent Cloud COS
| Variable | Default | Description |
|---|---|---|
TENCENT_COS_BUCKET_NAME | (empty) | COS bucket name. |
TENCENT_COS_SECRET_KEY | (empty) | COS secret key. |
TENCENT_COS_SECRET_ID | (empty) | COS secret ID. |
TENCENT_COS_REGION | (empty) | COS region, e.g., ap-guangzhou. Reference. |
TENCENT_COS_SCHEME | (empty) | Protocol to access COS (http or https). |
TENCENT_COS_CUSTOM_DOMAIN | (empty) | Custom domain for COS access. |
OCI Object Storage
| Variable | Default | Description |
|---|---|---|
OCI_ENDPOINT | (empty) | OCI endpoint URL. |
OCI_BUCKET_NAME | (empty) | OCI bucket name. |
OCI_ACCESS_KEY | (empty) | OCI access key. |
OCI_SECRET_KEY | (empty) | OCI secret key. |
OCI_REGION | us-ashburn-1 | OCI region. |
Huawei OBS
| Variable | Default | Description |
|---|---|---|
HUAWEI_OBS_BUCKET_NAME | (empty) | OBS bucket name. |
HUAWEI_OBS_ACCESS_KEY | (empty) | OBS access key. |
HUAWEI_OBS_SECRET_KEY | (empty) | OBS secret key. |
HUAWEI_OBS_SERVER | (empty) | OBS server URL. Reference. |
HUAWEI_OBS_PATH_STYLE | false | Use path-style URLs instead of virtual-hosted-style. |
Volcengine TOS
| Variable | Default | Description |
|---|---|---|
VOLCENGINE_TOS_BUCKET_NAME | (empty) | TOS bucket name. |
VOLCENGINE_TOS_ACCESS_KEY | (empty) | TOS access key. |
VOLCENGINE_TOS_SECRET_KEY | (empty) | TOS secret key. |
VOLCENGINE_TOS_ENDPOINT | (empty) | TOS endpoint URL. Reference. |
VOLCENGINE_TOS_REGION | (empty) | TOS region, e.g., cn-guangzhou. |
Baidu OBS
| Variable | Default | Description |
|---|---|---|
BAIDU_OBS_BUCKET_NAME | (empty) | Baidu OBS bucket name. |
BAIDU_OBS_ACCESS_KEY | (empty) | Baidu OBS access key. |
BAIDU_OBS_SECRET_KEY | (empty) | Baidu OBS secret key. |
BAIDU_OBS_ENDPOINT | (empty) | Baidu OBS server URL. |
Supabase
| Variable | Default | Description |
|---|---|---|
SUPABASE_BUCKET_NAME | (empty) | Supabase storage bucket name. |
SUPABASE_API_KEY | (empty) | Supabase API key. |
SUPABASE_URL | (empty) | Supabase server URL. |
ClickZetta Volume
| Variable | Default | Description |
|---|---|---|
CLICKZETTA_VOLUME_TYPE | user | Volume type. Options: user (personal/small team), table (enterprise multi-tenant), external (data lake integration). |
CLICKZETTA_VOLUME_NAME | (empty) | External volume name (required only when TYPE=external). |
CLICKZETTA_VOLUME_TABLE_PREFIX | dataset_ | Table volume table prefix (used only when TYPE=table). |
CLICKZETTA_VOLUME_DIFY_PREFIX | dify_km | Dify file directory prefix for isolation from other apps. |
ClickZetta Volume reuses the CLICKZETTA_* connection parameters configured in the Vector Database section.
Archive Storage
Separate S3-compatible storage for archiving workflow run logs. Used by the paid plan retention system to archive workflow runs older than the retention period to JSONL format. Requires BILLING_ENABLED=true.
| Variable | Default | Description |
|---|---|---|
ARCHIVE_STORAGE_ENABLED | false | Enable archive storage for workflow log archival. |
ARCHIVE_STORAGE_ENDPOINT | (empty) | S3-compatible endpoint URL. |
ARCHIVE_STORAGE_ARCHIVE_BUCKET | (empty) | Bucket for archived workflow run logs. |
ARCHIVE_STORAGE_EXPORT_BUCKET | (empty) | Bucket for workflow run exports. |
ARCHIVE_STORAGE_ACCESS_KEY | (empty) | Access key. |
ARCHIVE_STORAGE_SECRET_KEY | (empty) | Secret key. |
ARCHIVE_STORAGE_REGION | auto | Storage region. |
Vector Database Configuration
Configure the vector database used for knowledge base embedding storage and similarity search. Each provider has its own set of credential variables—configure only the one you’re using.
VECTOR_STORE
Default: weaviate
Selects the vector database backend. If a dataset already has an index, the dataset’s stored type takes precedence over this setting. When switching providers in Docker Compose, COMPOSE_PROFILES automatically starts the matching container based on this value.
Supported values: weaviate, oceanbase, seekdb, qdrant, milvus, myscale, relyt, pgvector, pgvecto-rs, chroma, opensearch, oracle, tencent, elasticsearch, elasticsearch-ja, analyticdb, couchbase, vikingdb, opengauss, tablestore, vastbase, tidb, tidb_on_qdrant, baidu, lindorm, huawei_cloud, upstash, matrixone, clickzetta, alibabacloud_mysql, iris, hologres.
| Variable | Default | Description |
|---|---|---|
VECTOR_INDEX_NAME_PREFIX | Vector_index | Prefix added to collection names in the vector database. Change this if you share a vector database instance across multiple Dify deployments. |
Weaviate
| Variable | Default | Description |
|---|---|---|
WEAVIATE_ENDPOINT | http://weaviate:8080 | Weaviate REST API endpoint. |
WEAVIATE_API_KEY | (empty) | API key for Weaviate authentication. |
WEAVIATE_GRPC_ENDPOINT | grpc://weaviate:50051 | Separate gRPC endpoint for high-performance binary protocol. Significantly faster for batch operations. Falls back to inferring from HTTP endpoint if not set. |
WEAVIATE_TOKENIZATION | word | Tokenization method for text fields. Options: word (splits on whitespace and punctuation), whitespace (splits on whitespace only), character (character-level, better for CJK languages). |
OceanBase / seekdb
| Variable | Default | Description |
|---|---|---|
OCEANBASE_VECTOR_HOST | oceanbase | Hostname or IP address. |
OCEANBASE_VECTOR_PORT | 2881 | Port number. |
OCEANBASE_VECTOR_USER | root@test | Database username. |
OCEANBASE_VECTOR_PASSWORD | difyai123456 | Database password. |
OCEANBASE_VECTOR_DATABASE | test | Database name. |
OCEANBASE_CLUSTER_NAME | difyai | Cluster name (Docker deployment only). |
OCEANBASE_MEMORY_LIMIT | 6G | Memory limit for OceanBase (Docker deployment only). |
SEEKDB_MEMORY_LIMIT | 2G | Memory limit for seekdb (Docker deployment only). |
OCEANBASE_ENABLE_HYBRID_SEARCH | false | Enable fulltext index for BM25 queries alongside vector search. Requires OceanBase >= 4.3.5.1. Collections must be recreated after enabling. |
OCEANBASE_FULLTEXT_PARSER | ik | Fulltext parser. Built-in: ngram, beng, space, ngram2, ik. External (require plugin): japanese_ftparser, thai_ftparser. |
Qdrant
| Variable | Default | Description |
|---|---|---|
QDRANT_URL | http://qdrant:6333 | Qdrant endpoint address. |
QDRANT_API_KEY | difyai123456 | API key for Qdrant. |
QDRANT_CLIENT_TIMEOUT | 20 | Client timeout in seconds. |
QDRANT_GRPC_ENABLED | false | Enable gRPC communication. |
QDRANT_GRPC_PORT | 6334 | gRPC port. |
QDRANT_REPLICATION_FACTOR | 1 | Number of replicas per shard. |
Milvus
| Variable | Default | Description |
|---|---|---|
MILVUS_URI | http://host.docker.internal:19530 | Milvus URI. For Zilliz Cloud, use the Public Endpoint. |
MILVUS_DATABASE | (empty) | Database name. |
MILVUS_TOKEN | (empty) | Authentication token. For Zilliz Cloud, use the API Key. |
MILVUS_USER | (empty) | Username. |
MILVUS_PASSWORD | (empty) | Password. |
MILVUS_ENABLE_HYBRID_SEARCH | false | Enable BM25 sparse index for full-text search alongside vector similarity. Requires Milvus >= 2.5.0. If the collection was created without this enabled, it must be recreated. |
MILVUS_ANALYZER_PARAMS | (empty) | Analyzer parameters for text fields. |
MyScale
| Variable | Default | Description |
|---|---|---|
MYSCALE_HOST | myscale | MyScale host. |
MYSCALE_PORT | 8123 | MyScale port. |
MYSCALE_USER | default | Username. |
MYSCALE_PASSWORD | (empty) | Password. |
MYSCALE_DATABASE | dify | Database name. |
MYSCALE_FTS_PARAMS | (empty) | Full-text search params. Multi-language support reference. |
Couchbase
| Variable | Default | Description |
|---|---|---|
COUCHBASE_CONNECTION_STRING | couchbase://couchbase-server | Connection string for the Couchbase cluster. |
COUCHBASE_USER | Administrator | Username. |
COUCHBASE_PASSWORD | password | Password. |
COUCHBASE_BUCKET_NAME | Embeddings | Bucket name. |
COUCHBASE_SCOPE_NAME | _default | Scope name. |
Hologres
| Variable | Default | Description |
|---|---|---|
HOLOGRES_HOST | (empty) | Hostname. |
HOLOGRES_PORT | 80 | Port number. |
HOLOGRES_DATABASE | (empty) | Database name. |
HOLOGRES_ACCESS_KEY_ID | (empty) | Access key ID (used as PG username). |
HOLOGRES_ACCESS_KEY_SECRET | (empty) | Access key secret (used as PG password). |
HOLOGRES_SCHEMA | public | Schema name. |
HOLOGRES_TOKENIZER | jieba | Tokenizer for text fields. |
HOLOGRES_DISTANCE_METHOD | Cosine | Distance method. |
HOLOGRES_BASE_QUANTIZATION_TYPE | rabitq | Quantization type. |
HOLOGRES_MAX_DEGREE | 64 | HNSW max degree. |
HOLOGRES_EF_CONSTRUCTION | 400 | HNSW ef_construction parameter. |
PGVector
| Variable | Default | Description |
|---|---|---|
PGVECTOR_HOST | pgvector | Hostname. |
PGVECTOR_PORT | 5432 | Port number. |
PGVECTOR_USER | postgres | Username. |
PGVECTOR_PASSWORD | difyai123456 | Password. |
PGVECTOR_DATABASE | dify | Database name. |
PGVECTOR_MIN_CONNECTION | 1 | Minimum pool connections. |
PGVECTOR_MAX_CONNECTION | 5 | Maximum pool connections. |
PGVECTOR_PG_BIGM | false | Enable pg_bigm extension for full-text search. |
Vastbase
| Variable | Default | Description |
|---|---|---|
VASTBASE_HOST | vastbase | Hostname. |
VASTBASE_PORT | 5432 | Port number. |
VASTBASE_USER | dify | Username. |
VASTBASE_PASSWORD | Difyai123456 | Password. |
VASTBASE_DATABASE | dify | Database name. |
VASTBASE_MIN_CONNECTION | 1 | Minimum pool connections. |
VASTBASE_MAX_CONNECTION | 5 | Maximum pool connections. |
PGVecto.RS
| Variable | Default | Description |
|---|---|---|
PGVECTO_RS_HOST | pgvecto-rs | Hostname. |
PGVECTO_RS_PORT | 5432 | Port number. |
PGVECTO_RS_USER | postgres | Username. |
PGVECTO_RS_PASSWORD | difyai123456 | Password. |
PGVECTO_RS_DATABASE | dify | Database name. |
AnalyticDB
| Variable | Default | Description |
|---|---|---|
ANALYTICDB_KEY_ID | (empty) | Aliyun access key ID. Create AccessKey. |
ANALYTICDB_KEY_SECRET | (empty) | Aliyun access key secret. |
ANALYTICDB_REGION_ID | cn-hangzhou | Region identifier. |
ANALYTICDB_INSTANCE_ID | (empty) | Instance ID, e.g., gp-xxxxxx. Create instance. |
ANALYTICDB_ACCOUNT | (empty) | Account name. Create account. |
ANALYTICDB_PASSWORD | (empty) | Account password. |
ANALYTICDB_NAMESPACE | dify | Namespace (schema). Created automatically if not exists. |
ANALYTICDB_NAMESPACE_PASSWORD | (empty) | Namespace password. Used when creating a new namespace. |
ANALYTICDB_HOST | (empty) | Direct connection host (alternative to API-based access). |
ANALYTICDB_PORT | 5432 | Direct connection port. |
ANALYTICDB_MIN_CONNECTION | 1 | Minimum pool connections. |
ANALYTICDB_MAX_CONNECTION | 5 | Maximum pool connections. |
TiDB Vector
| Variable | Default | Description |
|---|---|---|
TIDB_VECTOR_HOST | tidb | Hostname. |
TIDB_VECTOR_PORT | 4000 | Port number. |
TIDB_VECTOR_USER | (empty) | Username. |
TIDB_VECTOR_PASSWORD | (empty) | Password. |
TIDB_VECTOR_DATABASE | dify | Database name. |
MatrixOne
| Variable | Default | Description |
|---|---|---|
MATRIXONE_HOST | matrixone | Hostname. |
MATRIXONE_PORT | 6001 | Port number. |
MATRIXONE_USER | dump | Username. |
MATRIXONE_PASSWORD | 111 | Password. |
MATRIXONE_DATABASE | dify | Database name. |
Chroma
| Variable | Default | Description |
|---|---|---|
CHROMA_HOST | 127.0.0.1 | Chroma server host. |
CHROMA_PORT | 8000 | Chroma server port. |
CHROMA_TENANT | default_tenant | Tenant name. |
CHROMA_DATABASE | default_database | Database name. |
CHROMA_AUTH_PROVIDER | chromadb.auth.token_authn.TokenAuthClientProvider | Auth provider class. |
CHROMA_AUTH_CREDENTIALS | (empty) | Auth credentials. |
Oracle
| Variable | Default | Description |
|---|---|---|
ORACLE_USER | dify | Oracle username. |
ORACLE_PASSWORD | dify | Oracle password. |
ORACLE_DSN | oracle:1521/FREEPDB1 | Data source name. |
ORACLE_CONFIG_DIR | /app/api/storage/wallet | Oracle configuration directory. |
ORACLE_WALLET_LOCATION | /app/api/storage/wallet | Wallet location for Autonomous DB. |
ORACLE_WALLET_PASSWORD | dify | Wallet password. |
ORACLE_IS_AUTONOMOUS | false | Whether using Oracle Autonomous Database. |
AlibabaCloud MySQL
| Variable | Default | Description |
|---|---|---|
ALIBABACLOUD_MYSQL_HOST | 127.0.0.1 | Hostname. |
ALIBABACLOUD_MYSQL_PORT | 3306 | Port number. |
ALIBABACLOUD_MYSQL_USER | root | Username. |
ALIBABACLOUD_MYSQL_PASSWORD | difyai123456 | Password. |
ALIBABACLOUD_MYSQL_DATABASE | dify | Database name. |
ALIBABACLOUD_MYSQL_MAX_CONNECTION | 5 | Maximum pool connections. |
ALIBABACLOUD_MYSQL_HNSW_M | 6 | HNSW M parameter. |
Relyt
| Variable | Default | Description |
|---|---|---|
RELYT_HOST | db | Hostname. |
RELYT_PORT | 5432 | Port number. |
RELYT_USER | postgres | Username. |
RELYT_PASSWORD | difyai123456 | Password. |
RELYT_DATABASE | postgres | Database name. |
OpenSearch
| Variable | Default | Description |
|---|---|---|
OPENSEARCH_HOST | opensearch | Hostname. |
OPENSEARCH_PORT | 9200 | Port number. |
OPENSEARCH_SECURE | true | Use HTTPS. |
OPENSEARCH_VERIFY_CERTS | true | Verify SSL certificates. |
OPENSEARCH_AUTH_METHOD | basic | basic uses username/password. aws_managed_iam uses AWS SigV4 request signing via Boto3 credentials (for AWS Managed OpenSearch or Serverless). |
OPENSEARCH_USER | admin | Username. Only used with basic auth. |
OPENSEARCH_PASSWORD | admin | Password. Only used with basic auth. |
OPENSEARCH_AWS_REGION | ap-southeast-1 | AWS region. Only used with aws_managed_iam auth. |
OPENSEARCH_AWS_SERVICE | aoss | AWS service type: es (Managed Cluster) or aoss (OpenSearch Serverless). Only used with aws_managed_iam auth. |
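For instance, pointing Dify at an Amazon OpenSearch Serverless collection with SigV4 signing could look like the following `.env` fragment (the endpoint, region, and collection ID are placeholders, not values from this documentation):

```shell
# .env — sketch: AWS OpenSearch Serverless with IAM auth (placeholder values)
VECTOR_STORE=opensearch
OPENSEARCH_HOST=your-collection-id.ap-southeast-1.aoss.amazonaws.com
OPENSEARCH_PORT=443
OPENSEARCH_SECURE=true
OPENSEARCH_VERIFY_CERTS=true
OPENSEARCH_AUTH_METHOD=aws_managed_iam
OPENSEARCH_AWS_REGION=ap-southeast-1
OPENSEARCH_AWS_SERVICE=aoss
# OPENSEARCH_USER / OPENSEARCH_PASSWORD are only read with basic auth
```

Credentials themselves come from the usual Boto3 chain (environment variables, instance profile, etc.), not from `.env`.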
Tencent Cloud VectorDB
| Variable | Default | Description |
|---|---|---|
TENCENT_VECTOR_DB_URL | http://127.0.0.1 | Access address. Console. |
TENCENT_VECTOR_DB_API_KEY | dify | API key. Key Management. |
TENCENT_VECTOR_DB_TIMEOUT | 30 | Request timeout in seconds. |
TENCENT_VECTOR_DB_USERNAME | dify | Account name. Account Management. |
TENCENT_VECTOR_DB_DATABASE | dify | Database name. Create Database. |
TENCENT_VECTOR_DB_SHARD | 1 | Number of shards. |
TENCENT_VECTOR_DB_REPLICAS | 2 | Number of replicas. |
TENCENT_VECTOR_DB_ENABLE_HYBRID_SEARCH | false | Enable hybrid search. Sparse Vector docs. |
Elasticsearch
| Variable | Default | Description |
|---|---|---|
ELASTICSEARCH_HOST | 0.0.0.0 | Hostname. |
ELASTICSEARCH_PORT | 9200 | Port number. |
ELASTICSEARCH_USERNAME | elastic | Username. |
ELASTICSEARCH_PASSWORD | elastic | Password. |
ELASTICSEARCH_USE_CLOUD | false | Switch to Elastic Cloud mode. When true, uses ELASTICSEARCH_CLOUD_URL and ELASTICSEARCH_API_KEY instead of host/port/username/password. |
ELASTICSEARCH_CLOUD_URL | (empty) | Elastic Cloud endpoint URL. Required when ELASTICSEARCH_USE_CLOUD=true. |
ELASTICSEARCH_API_KEY | (empty) | Elastic Cloud API key. Required when ELASTICSEARCH_USE_CLOUD=true. |
ELASTICSEARCH_VERIFY_CERTS | false | Verify SSL certificates. |
ELASTICSEARCH_CA_CERTS | (empty) | Path to CA certificates. |
ELASTICSEARCH_REQUEST_TIMEOUT | 100000 | Request timeout in milliseconds. |
ELASTICSEARCH_RETRY_ON_TIMEOUT | true | Retry on timeout. |
ELASTICSEARCH_MAX_RETRIES | 10 | Maximum retry attempts. |
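Switching to Elastic Cloud mode swaps the host/port/credential variables for a URL and API key. A minimal sketch, with placeholder deployment URL and key:

```shell
# .env — sketch: Elastic Cloud mode (placeholder values)
VECTOR_STORE=elasticsearch
ELASTICSEARCH_USE_CLOUD=true
ELASTICSEARCH_CLOUD_URL=https://my-deployment.es.us-east-1.aws.elastic.cloud:443
ELASTICSEARCH_API_KEY=your-base64-encoded-api-key
# ELASTICSEARCH_HOST/PORT/USERNAME/PASSWORD are ignored in cloud mode
```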
Baidu Vector DB
| Variable | Default | Description |
|---|---|---|
BAIDU_VECTOR_DB_ENDPOINT | http://127.0.0.1:5287 | Endpoint URL. |
BAIDU_VECTOR_DB_CONNECTION_TIMEOUT_MS | 30000 | Connection timeout in milliseconds. |
BAIDU_VECTOR_DB_ACCOUNT | root | Account name. |
BAIDU_VECTOR_DB_API_KEY | dify | API key. |
BAIDU_VECTOR_DB_DATABASE | dify | Database name. |
BAIDU_VECTOR_DB_SHARD | 1 | Number of shards. |
BAIDU_VECTOR_DB_REPLICAS | 3 | Number of replicas. |
BAIDU_VECTOR_DB_INVERTED_INDEX_ANALYZER | DEFAULT_ANALYZER | Inverted index analyzer. |
BAIDU_VECTOR_DB_INVERTED_INDEX_PARSER_MODE | COARSE_MODE | Inverted index parser mode. |
VikingDB
| Variable | Default | Description |
|---|---|---|
VIKINGDB_ACCESS_KEY | (empty) | Access key. |
VIKINGDB_SECRET_KEY | (empty) | Secret key. |
VIKINGDB_REGION | cn-shanghai | Region. |
VIKINGDB_HOST | api-vikingdb.xxx.volces.com | API host. Replace with your region-specific endpoint. |
VIKINGDB_SCHEMA | http | Protocol scheme (http or https). |
VIKINGDB_CONNECTION_TIMEOUT | 30 | Connection timeout in seconds. |
VIKINGDB_SOCKET_TIMEOUT | 30 | Socket timeout in seconds. |
Lindorm
| Variable | Default | Description |
|---|---|---|
LINDORM_URL | http://localhost:30070 | Lindorm search engine URL. Console. |
LINDORM_USERNAME | admin | Username. |
LINDORM_PASSWORD | admin | Password. |
LINDORM_USING_UGC | true | Use UGC mode. |
LINDORM_QUERY_TIMEOUT | 1 | Query timeout in seconds. |
OpenGauss
| Variable | Default | Description |
|---|---|---|
OPENGAUSS_HOST | opengauss | Hostname. |
OPENGAUSS_PORT | 6600 | Port number. |
OPENGAUSS_USER | postgres | Username. |
OPENGAUSS_PASSWORD | Dify@123 | Password. |
OPENGAUSS_DATABASE | dify | Database name. |
OPENGAUSS_MIN_CONNECTION | 1 | Minimum pool connections. |
OPENGAUSS_MAX_CONNECTION | 5 | Maximum pool connections. |
OPENGAUSS_ENABLE_PQ | false | Enable PQ acceleration. |
Huawei Cloud Search
| Variable | Default | Description |
|---|---|---|
HUAWEI_CLOUD_HOSTS | https://127.0.0.1:9200 | Cluster endpoint URL. |
HUAWEI_CLOUD_USER | admin | Username. |
HUAWEI_CLOUD_PASSWORD | admin | Password. |
Upstash Vector
| Variable | Default | Description |
|---|---|---|
UPSTASH_VECTOR_URL | (empty) | Upstash Vector endpoint URL. |
UPSTASH_VECTOR_TOKEN | (empty) | Upstash Vector API token. |
TableStore
| Variable | Default | Description |
|---|---|---|
TABLESTORE_ENDPOINT | https://instance-name.cn-hangzhou.ots.aliyuncs.com | Endpoint address. Replace instance-name with your instance. |
TABLESTORE_INSTANCE_NAME | (empty) | Instance name. |
TABLESTORE_ACCESS_KEY_ID | (empty) | Access key ID. |
TABLESTORE_ACCESS_KEY_SECRET | (empty) | Access key secret. |
TABLESTORE_NORMALIZE_FULLTEXT_BM25_SCORE | false | Normalize fulltext BM25 scores. |
ClickZetta
| Variable | Default | Description |
|---|---|---|
CLICKZETTA_USERNAME | (empty) | Username. |
CLICKZETTA_PASSWORD | (empty) | Password. |
CLICKZETTA_INSTANCE | (empty) | Instance name. |
CLICKZETTA_SERVICE | api.clickzetta.com | Service endpoint. |
CLICKZETTA_WORKSPACE | quick_start | Workspace name. |
CLICKZETTA_VCLUSTER | default_ap | Virtual cluster. |
CLICKZETTA_SCHEMA | dify | Schema name. |
CLICKZETTA_BATCH_SIZE | 100 | Batch size for operations. |
CLICKZETTA_ENABLE_INVERTED_INDEX | true | Enable inverted index. |
CLICKZETTA_ANALYZER_TYPE | chinese | Analyzer type. |
CLICKZETTA_ANALYZER_MODE | smart | Analyzer mode. |
CLICKZETTA_VECTOR_DISTANCE_FUNCTION | cosine_distance | Distance function. |
InterSystems IRIS
| Variable | Default | Description |
|---|---|---|
IRIS_HOST | iris | Hostname. |
IRIS_SUPER_SERVER_PORT | 1972 | Super server port. |
IRIS_USER | _SYSTEM | Username. |
IRIS_PASSWORD | Dify@1234 | Password. |
IRIS_DATABASE | USER | Database name. |
IRIS_SCHEMA | dify | Schema name. |
IRIS_CONNECTION_URL | (empty) | Full connection URL (overrides individual settings). |
IRIS_MIN_CONNECTION | 1 | Minimum pool connections. |
IRIS_MAX_CONNECTION | 3 | Maximum pool connections. |
IRIS_TEXT_INDEX | true | Enable text indexing. |
IRIS_TEXT_INDEX_LANGUAGE | en | Text index language. |
Knowledge Configuration
| Variable | Default | Description |
|---|---|---|
UPLOAD_FILE_SIZE_LIMIT | 15 | Maximum file size in MB for document uploads (PDFs, Word docs, etc.). Users see a “file too large” error when exceeded. Does not apply to images, videos, or audio—they have separate limits below. |
UPLOAD_FILE_BATCH_LIMIT | 5 | Maximum number of files the frontend allows per upload batch. |
UPLOAD_FILE_EXTENSION_BLACKLIST | (empty) | Security blocklist of file extensions that cannot be uploaded. Comma-separated, lowercase, no dots. Example: exe,bat,cmd,com,scr,vbs,ps1,msi,dll. Empty allows all types. |
SINGLE_CHUNK_ATTACHMENT_LIMIT | 10 | Maximum number of images that can be embedded in a single knowledge base segment (chunk). |
IMAGE_FILE_BATCH_LIMIT | 10 | Maximum number of image files per upload batch. |
ATTACHMENT_IMAGE_FILE_SIZE_LIMIT | 2 | Maximum size in MB for images fetched from external URLs during knowledge base indexing. Images larger than this are skipped. Different from UPLOAD_IMAGE_FILE_SIZE_LIMIT which applies to direct uploads. |
ATTACHMENT_IMAGE_DOWNLOAD_TIMEOUT | 60 | Timeout in seconds when downloading images from external URLs during knowledge base indexing. Slow or unresponsive image servers are abandoned after this timeout. |
ETL_TYPE | dify | Document extraction library. dify supports txt, md, pdf, html, xlsx, docx, csv. Unstructured adds support for doc, msg, eml, ppt, pptx, xml, epub (requires UNSTRUCTURED_API_URL). |
UNSTRUCTURED_API_URL | (empty) | Unstructured.io API endpoint. Required when ETL_TYPE is Unstructured. Also needed for .ppt file support. Example: http://unstructured:8000/general/v0/general. |
UNSTRUCTURED_API_KEY | (empty) | API key for Unstructured.io authentication. |
SCARF_NO_ANALYTICS | true | Disable Unstructured library’s telemetry/analytics collection. |
TOP_K_MAX_VALUE | 10 | Maximum value users can set for the top_k parameter in knowledge base retrieval (how many results to return per search). |
DATASET_MAX_SEGMENTS_PER_REQUEST | 0 | Maximum number of segments per dataset API request. 0 means unlimited. |
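As an example, switching extraction to Unstructured (to pick up doc, msg, eml, ppt, pptx, xml, and epub support) touches three of the variables above; the endpoint and key shown are placeholders:

```shell
# .env — sketch: Unstructured-based document extraction (placeholder values)
ETL_TYPE=Unstructured
UNSTRUCTURED_API_URL=http://unstructured:8000/general/v0/general
UNSTRUCTURED_API_KEY=your-unstructured-api-key
# Optionally raise the per-document ceiling to 50 MB at the same time
UPLOAD_FILE_SIZE_LIMIT=50
```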
Annotation Import
| Variable | Default | Description |
|---|---|---|
ANNOTATION_IMPORT_FILE_SIZE_LIMIT | 2 | Maximum CSV file size in MB for annotation import. Returns HTTP 413 when exceeded. |
ANNOTATION_IMPORT_MAX_RECORDS | 10000 | Maximum number of records per annotation import. Files with more records must be split into batches. |
ANNOTATION_IMPORT_MIN_RECORDS | 1 | Minimum number of valid records required per annotation import. |
ANNOTATION_IMPORT_RATE_LIMIT_PER_MINUTE | 5 | Maximum annotation import requests per minute per workspace. Returns HTTP 429 when exceeded. |
ANNOTATION_IMPORT_RATE_LIMIT_PER_HOUR | 20 | Maximum annotation import requests per hour per workspace. |
ANNOTATION_IMPORT_MAX_CONCURRENT | 5 | Maximum concurrent annotation import tasks per workspace. Stale tasks are auto-cleaned after 2 minutes. |
Model Configuration
| Variable | Default | Description |
|---|---|---|
PROMPT_GENERATION_MAX_TOKENS | 512 | Maximum tokens when the system auto-generates a prompt using an LLM. Prevents runaway generations that waste API quota. |
CODE_GENERATION_MAX_TOKENS | 1024 | Maximum tokens when the system auto-generates code using an LLM. |
PLUGIN_BASED_TOKEN_COUNTING_ENABLED | false | Use plugin-based token counting for accurate usage tracking. When disabled, token counting returns 0 (faster but cost tracking is less accurate). |
Multi-modal Configuration
| Variable | Default | Description |
|---|---|---|
MULTIMODAL_SEND_FORMAT | base64 | How files are sent to multi-modal LLMs. base64 embeds file data in the request (more compatible, works offline, larger payloads). url sends a signed URL for the model to fetch (faster, smaller requests, but the model must be able to reach FILES_URL). |
UPLOAD_IMAGE_FILE_SIZE_LIMIT | 10 | Maximum image file size in MB for direct uploads (jpg, png, webp, gif, svg). |
UPLOAD_VIDEO_FILE_SIZE_LIMIT | 100 | Maximum video file size in MB for direct uploads (mp4, mov, mpeg, webm). |
UPLOAD_AUDIO_FILE_SIZE_LIMIT | 50 | Maximum audio file size in MB for direct uploads (mp3, m4a, wav, amr, mpga). |
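Since uploads also pass through Nginx, raising any of these limits past the proxy's default body size requires changing both values together. A sketch (500 MB is an arbitrary example):

```shell
# .env — sketch: accept 500 MB video uploads
UPLOAD_VIDEO_FILE_SIZE_LIMIT=500
# Nginx must accept bodies at least this large, or uploads fail with HTTP 413
NGINX_CLIENT_MAX_BODY_SIZE=500M
```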
Upload limits are also enforced at the proxy level by NGINX_CLIENT_MAX_BODY_SIZE (default 100M). If you increase any upload limit above 100 MB, also increase NGINX_CLIENT_MAX_BODY_SIZE to match; otherwise Nginx rejects the upload with a 413 error.
Sentry Configuration
Sentry provides error tracking and performance monitoring. Each service has its own DSN to separate error reporting.
| Variable | Default | Description |
|---|---|---|
SENTRY_DSN | (empty) | Sentry DSN shared across services. |
API_SENTRY_DSN | (empty) | Sentry DSN for the API service. Overrides SENTRY_DSN if set. Empty disables Sentry for the backend. |
API_SENTRY_TRACES_SAMPLE_RATE | 1.0 | Fraction of requests to include in performance tracing (0.01 = 1%, 1.0 = 100%). Traces track request flow across services. |
API_SENTRY_PROFILES_SAMPLE_RATE | 1.0 | Fraction of requests to include in CPU/memory profiling (0.01 = 1%). Profiles show where time is spent in code. |
WEB_SENTRY_DSN | (empty) | Sentry DSN for the web frontend (Next.js). Frontend-only. |
PLUGIN_SENTRY_ENABLED | false | Enable Sentry for the plugin daemon service. |
PLUGIN_SENTRY_DSN | (empty) | Sentry DSN for the plugin daemon. |
Notion Integration Configuration
Connect Dify to Notion as a knowledge base data source. Get integration credentials at https://www.notion.so/my-integrations.
| Variable | Default | Description |
|---|---|---|
NOTION_INTEGRATION_TYPE | public | public uses standard OAuth 2.0 (requires HTTPS redirect URL, needs CLIENT_ID + CLIENT_SECRET). internal uses a direct integration token (works with HTTP). Use internal for local deployments. |
NOTION_CLIENT_SECRET | (empty) | OAuth client secret. Required for public integration. |
NOTION_CLIENT_ID | (empty) | OAuth client ID. Required for public integration. |
NOTION_INTERNAL_SECRET | (empty) | Direct integration token from Notion. Required for internal integration. |
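For a local deployment served over plain HTTP, the internal integration type avoids the OAuth HTTPS requirement. A minimal sketch (the token is a placeholder):

```shell
# .env — sketch: internal Notion integration for a local HTTP deployment
NOTION_INTEGRATION_TYPE=internal
# Direct integration token from https://www.notion.so/my-integrations (placeholder)
NOTION_INTERNAL_SECRET=secret_xxxxxxxxxxxxxxxx
```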
Mail Configuration
Dify sends emails for account invitations, password resets, login codes, and Human Input node notifications. Configure one of the three supported providers. Email links require CONSOLE_WEB_URL to be set (see Common Variables).
| Variable | Default | Description |
|---|---|---|
MAIL_TYPE | resend | Mail provider: resend, smtp, or sendgrid. |
MAIL_DEFAULT_SEND_FROM | (empty) | Default “From” address for all outgoing emails. Required. |
Resend
| Variable | Default | Description |
|---|---|---|
RESEND_API_URL | https://api.resend.com | Resend API endpoint. Override for self-hosted Resend or proxy. |
RESEND_API_KEY | (empty) | Resend API key. Required when MAIL_TYPE=resend. |
SMTP
SMTP supports three connection modes: implicit TLS (SMTP_USE_TLS=true, SMTP_OPPORTUNISTIC_TLS=false, port 465), STARTTLS (SMTP_USE_TLS=true, SMTP_OPPORTUNISTIC_TLS=true, port 587), or plain (SMTP_USE_TLS=false, port 25).
| Variable | Default | Description |
|---|---|---|
SMTP_SERVER | (empty) | SMTP server address. |
SMTP_PORT | 465 | SMTP server port. Use 587 for STARTTLS mode. |
SMTP_USERNAME | (empty) | SMTP username. Can be empty for IP-whitelisted servers. |
SMTP_PASSWORD | (empty) | SMTP password. Can be empty for IP-whitelisted servers. |
SMTP_USE_TLS | true | Enable TLS. When true with SMTP_OPPORTUNISTIC_TLS=false, uses implicit TLS (SMTP_SSL). |
SMTP_OPPORTUNISTIC_TLS | false | Use STARTTLS (explicit TLS) instead of implicit TLS. Must be used with SMTP_USE_TLS=true. |
SMTP_LOCAL_HOSTNAME | (empty) | Override the hostname sent in SMTP HELO/EHLO. Required in Docker when your SMTP server rejects container hostnames (common with Google Workspace, Microsoft 365). Set to your domain, e.g., mail.yourdomain.com. |
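Putting the pieces together, a STARTTLS setup on port 587 (the mode most hosted providers such as Google Workspace or Microsoft 365 expect) could look like this; every domain, username, and password below is a placeholder:

```shell
# .env — sketch: SMTP over STARTTLS on port 587 (placeholder values)
MAIL_TYPE=smtp
MAIL_DEFAULT_SEND_FROM=no-reply@yourdomain.com
SMTP_SERVER=smtp.yourdomain.com
SMTP_PORT=587
SMTP_USERNAME=no-reply@yourdomain.com
SMTP_PASSWORD=your-smtp-password
SMTP_USE_TLS=true
SMTP_OPPORTUNISTIC_TLS=true
# Helps when the server rejects Docker container hostnames in HELO/EHLO
SMTP_LOCAL_HOSTNAME=mail.yourdomain.com
```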
SendGrid
| Variable | Default | Description |
|---|---|---|
SENDGRID_API_KEY | (empty) | SendGrid API key. Required when MAIL_TYPE=sendgrid. |
Others Configuration
Indexing
| Variable | Default | Description |
|---|---|---|
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH | 4000 | Maximum token length per text segment when chunking documents for the knowledge base. Larger values retain more context per chunk; smaller values provide finer granularity. |
Token & Invitation
All token expiry variables control how long a one-time-use token stored in Redis remains valid. After expiry, the user must request a new token.
| Variable | Default | Description |
|---|---|---|
INVITE_EXPIRY_HOURS | 72 | How long a workspace invitation link stays valid (in hours). |
RESET_PASSWORD_TOKEN_EXPIRY_MINUTES | 5 | Password reset token validity in minutes. |
EMAIL_REGISTER_TOKEN_EXPIRY_MINUTES | 5 | Email registration token validity in minutes. |
CHANGE_EMAIL_TOKEN_EXPIRY_MINUTES | 5 | Change email token validity in minutes. |
OWNER_TRANSFER_TOKEN_EXPIRY_MINUTES | 5 | Workspace owner transfer token validity in minutes. |
Code Execution Sandbox
The sandbox is a separate service that runs Python, JavaScript, and Jinja2 code nodes in isolation.
| Variable | Default | Description |
|---|---|---|
CODE_EXECUTION_ENDPOINT | http://sandbox:8194 | Sandbox service endpoint. |
CODE_EXECUTION_API_KEY | dify-sandbox | API key for sandbox authentication. Must match SANDBOX_API_KEY in the sandbox service. |
CODE_EXECUTION_SSL_VERIFY | true | Verify SSL for sandbox connections. Disable for development with self-signed certificates. |
CODE_EXECUTION_CONNECT_TIMEOUT | 10 | Connection timeout in seconds. |
CODE_EXECUTION_READ_TIMEOUT | 60 | Read timeout in seconds. |
CODE_EXECUTION_WRITE_TIMEOUT | 10 | Write timeout in seconds. |
CODE_EXECUTION_POOL_MAX_CONNECTIONS | 100 | Maximum concurrent HTTP connections to the sandbox service. |
CODE_EXECUTION_POOL_MAX_KEEPALIVE_CONNECTIONS | 20 | Maximum idle connections kept alive in the sandbox connection pool. |
CODE_EXECUTION_POOL_KEEPALIVE_EXPIRY | 5.0 | Seconds before idle sandbox connections are closed. |
CODE_MAX_NUMBER | 9223372036854775807 | Maximum numeric value allowed in code node output (max 64-bit signed integer). |
CODE_MIN_NUMBER | -9223372036854775808 | Minimum numeric value allowed in code node output (min 64-bit signed integer). |
CODE_MAX_STRING_LENGTH | 400000 | Maximum string length in code node output. Prevents memory exhaustion from unbounded string generation. |
CODE_MAX_DEPTH | 5 | Maximum nesting depth for output data structures. |
CODE_MAX_PRECISION | 20 | Maximum decimal places for floating-point numbers in output. |
CODE_MAX_STRING_ARRAY_LENGTH | 30 | Maximum number of elements in a string array output. |
CODE_MAX_OBJECT_ARRAY_LENGTH | 30 | Maximum number of elements in an object array output. |
CODE_MAX_NUMBER_ARRAY_LENGTH | 1000 | Maximum number of elements in a number array output. |
TEMPLATE_TRANSFORM_MAX_LENGTH | 400000 | Maximum character length for Template Transform node output. |
Workflow Runtime
| Variable | Default | Description |
|---|---|---|
WORKFLOW_MAX_EXECUTION_STEPS | 500 | Maximum number of node executions per workflow run. Exceeding this terminates the workflow. |
WORKFLOW_MAX_EXECUTION_TIME | 1200 | Maximum wall-clock time in seconds per workflow run. Exceeding this terminates the workflow. |
WORKFLOW_CALL_MAX_DEPTH | 5 | Maximum depth for nested workflow-calls-workflow. Prevents infinite recursion. |
MAX_VARIABLE_SIZE | 204800 | Maximum size in bytes (200 KB) for a single workflow variable. |
WORKFLOW_FILE_UPLOAD_LIMIT | 10 | Maximum number of files that can be uploaded in a single workflow execution. |
WORKFLOW_NODE_EXECUTION_STORAGE | rdbms | Where workflow node execution records are stored. rdbms stores everything in the database. hybrid stores new data in object storage and reads from both. |
DSL_EXPORT_ENCRYPT_DATASET_ID | true | Encrypt dataset IDs when exporting DSL files. Set to false to export plain IDs for easier cross-environment import. |
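Long-running or data-heavy workflows typically hit the step, time, or variable-size ceilings first. A sketch that loosens all three (the values are illustrative, not recommendations):

```shell
# .env — sketch: loosen workflow runtime ceilings (illustrative values)
WORKFLOW_MAX_EXECUTION_STEPS=1000
WORKFLOW_MAX_EXECUTION_TIME=3600   # one hour of wall-clock time
MAX_VARIABLE_SIZE=1048576          # 1 MB per workflow variable
```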
Workflow Storage Repository
These select which backend implementation handles workflow execution data. The default SQLAlchemy repositories store everything in the database. Alternative implementations (e.g., Celery, Logstore) can be used for different storage strategies.
| Variable | Default | Description |
|---|---|---|
CORE_WORKFLOW_EXECUTION_REPOSITORY | core.repositories.sqlalchemy_workflow_execution_repository.SQLAlchemyWorkflowExecutionRepository | Repository implementation for workflow execution records. |
CORE_WORKFLOW_NODE_EXECUTION_REPOSITORY | core.repositories.sqlalchemy_workflow_node_execution_repository.SQLAlchemyWorkflowNodeExecutionRepository | Repository implementation for workflow node execution records. |
API_WORKFLOW_RUN_REPOSITORY | repositories.sqlalchemy_api_workflow_run_repository.DifyAPISQLAlchemyWorkflowRunRepository | Service-layer repository for workflow run API operations. |
API_WORKFLOW_NODE_EXECUTION_REPOSITORY | repositories.sqlalchemy_api_workflow_node_execution_repository.DifyAPISQLAlchemyWorkflowNodeExecutionRepository | Service-layer repository for workflow node execution API operations. |
LOOP_NODE_MAX_COUNT | 100 | Maximum iterations for Loop nodes. Prevents infinite loops. |
MAX_PARALLEL_LIMIT | 10 | Maximum number of parallel branches in a workflow. |
GraphEngine Worker Pool
| Variable | Default | Description |
|---|---|---|
GRAPH_ENGINE_MIN_WORKERS | 1 | Minimum workers per GraphEngine instance. |
GRAPH_ENGINE_MAX_WORKERS | 10 | Maximum workers per GraphEngine instance. |
GRAPH_ENGINE_SCALE_UP_THRESHOLD | 3 | Queue depth that triggers spawning additional workers. |
GRAPH_ENGINE_SCALE_DOWN_IDLE_TIME | 5.0 | Seconds of idle time before excess workers are removed. |
Workflow Log Cleanup
| Variable | Default | Description |
|---|---|---|
WORKFLOW_LOG_CLEANUP_ENABLED | false | Enable automatic cleanup of workflow execution logs at 2:00 AM daily. |
WORKFLOW_LOG_RETENTION_DAYS | 30 | Number of days to retain workflow logs before cleanup. |
WORKFLOW_LOG_CLEANUP_BATCH_SIZE | 100 | Number of log entries processed per cleanup batch. Adjust based on system performance. |
WORKFLOW_LOG_CLEANUP_SPECIFIC_WORKFLOW_IDS | (empty) | Comma-separated list of workflow IDs to limit cleanup to. When empty, all workflow logs are cleaned. |
HTTP Request Node
These configure the HTTP Request node used in workflows to call external APIs.
| Variable | Default | Description |
|---|---|---|
HTTP_REQUEST_NODE_MAX_TEXT_SIZE | 1048576 | Maximum text response size in bytes (1 MB). Responses larger than this are truncated. |
HTTP_REQUEST_NODE_MAX_BINARY_SIZE | 10485760 | Maximum binary response size in bytes (10 MB). |
HTTP_REQUEST_NODE_SSL_VERIFY | true | Verify SSL certificates. Disable for testing with self-signed certificates. |
HTTP_REQUEST_MAX_CONNECT_TIMEOUT | 10 | Maximum connect timeout users can set in the workflow editor (in seconds). Per-node timeouts cannot exceed this. |
HTTP_REQUEST_MAX_READ_TIMEOUT | 600 | Maximum read timeout ceiling (in seconds). |
HTTP_REQUEST_MAX_WRITE_TIMEOUT | 600 | Maximum write timeout ceiling (in seconds). |
Webhook
| Variable | Default | Description |
|---|---|---|
WEBHOOK_REQUEST_BODY_MAX_SIZE | 10485760 | Maximum webhook payload size in bytes (10 MB). Larger payloads are rejected with a 413 error. |
SSRF Protection
All outbound HTTP requests from Dify (HTTP nodes, image downloads, etc.) are routed through a proxy that blocks requests to internal/private IP ranges, preventing Server-Side Request Forgery (SSRF) attacks.
| Variable | Default | Description |
|---|---|---|
SSRF_PROXY_HTTP_URL | http://ssrf_proxy:3128 | SSRF proxy URL for HTTP requests. |
SSRF_PROXY_HTTPS_URL | http://ssrf_proxy:3128 | SSRF proxy URL for HTTPS requests. |
SSRF_POOL_MAX_CONNECTIONS | 100 | Maximum concurrent connections in the SSRF HTTP client pool. |
SSRF_POOL_MAX_KEEPALIVE_CONNECTIONS | 20 | Maximum idle connections kept alive in the SSRF pool. |
SSRF_POOL_KEEPALIVE_EXPIRY | 5.0 | Seconds before idle SSRF connections are closed. |
RESPECT_XFORWARD_HEADERS_ENABLED | false | Trust X-Forwarded-For/Proto/Port headers from reverse proxies. Only enable behind a single trusted reverse proxy—otherwise allows IP spoofing. |
Agent Configuration
| Variable | Default | Description |
|---|---|---|
MAX_TOOLS_NUM | 10 | Maximum number of tools an agent can use simultaneously. |
MAX_ITERATIONS_NUM | 99 | Maximum reasoning iterations per agent execution. Prevents infinite agent loops. |
Web Frontend Service
These variables are used by the Next.js web frontend container only—they do not affect the Python backend.
| Variable | Default | Description |
|---|---|---|
TEXT_GENERATION_TIMEOUT_MS | 60000 | Frontend timeout for streaming text generation UI. If a stream stalls for longer than this, the UI pauses rendering. |
ALLOW_UNSAFE_DATA_SCHEME | false | Allow rendering URLs with the data: scheme. Disabled by default for security. |
MAX_TREE_DEPTH | 50 | Maximum node tree depth in the workflow editor UI. |
Database Service
These configure the database containers directly in Docker Compose.
| Variable | Default | Description |
|---|---|---|
PGDATA | /var/lib/postgresql/data/pgdata | PostgreSQL data directory inside the container. |
MYSQL_HOST_VOLUME | ./volumes/mysql/data | Host path mounted as MySQL data volume. |
Sandbox Service
The sandbox is an isolated service for executing code nodes (Python, JavaScript, Jinja2). Network access can be disabled for security.
| Variable | Default | Description |
|---|---|---|
SANDBOX_API_KEY | dify-sandbox | API key for sandbox authentication. Must match CODE_EXECUTION_API_KEY in the API service. |
SANDBOX_GIN_MODE | release | Sandbox service mode: release or debug. |
SANDBOX_WORKER_TIMEOUT | 15 | Maximum execution time in seconds for a single code run. |
SANDBOX_ENABLE_NETWORK | true | Allow code to make outbound HTTP requests. Disable to prevent code nodes from accessing external services. |
SANDBOX_HTTP_PROXY | http://ssrf_proxy:3128 | HTTP proxy for SSRF protection when network is enabled. |
SANDBOX_HTTPS_PROXY | http://ssrf_proxy:3128 | HTTPS proxy for SSRF protection. |
SANDBOX_PORT | 8194 | Sandbox service port. |
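Hardening the sandbox mainly means replacing the default shared key and, if code nodes never need the network, disabling outbound access. A sketch (the key is a placeholder; generate your own random value):

```shell
# .env — sketch: harden the code sandbox (placeholder key)
# Both variables must hold the same value, or code execution fails to authenticate
CODE_EXECUTION_API_KEY=replace-with-a-long-random-string
SANDBOX_API_KEY=replace-with-a-long-random-string
# Block all outbound HTTP from code nodes
SANDBOX_ENABLE_NETWORK=false
```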
Nginx Reverse Proxy
| Variable | Default | Description |
|---|---|---|
NGINX_SERVER_NAME | _ | Nginx server name. _ matches any hostname. |
NGINX_HTTPS_ENABLED | false | Enable HTTPS. When true, place your SSL certificate and key in ./nginx/ssl/. |
NGINX_PORT | 80 | HTTP port. |
NGINX_SSL_PORT | 443 | HTTPS port (only used when NGINX_HTTPS_ENABLED=true). |
NGINX_SSL_CERT_FILENAME | dify.crt | SSL certificate filename in ./nginx/ssl/. |
NGINX_SSL_CERT_KEY_FILENAME | dify.key | SSL private key filename in ./nginx/ssl/. |
NGINX_SSL_PROTOCOLS | TLSv1.2 TLSv1.3 | Allowed TLS protocol versions. |
NGINX_WORKER_PROCESSES | auto | Number of Nginx worker processes. auto matches CPU core count. |
NGINX_CLIENT_MAX_BODY_SIZE | 100M | Maximum request body size. Affects file upload limits at the proxy level. |
NGINX_KEEPALIVE_TIMEOUT | 65 | Keepalive timeout in seconds. |
NGINX_PROXY_READ_TIMEOUT | 3600s | Proxy read timeout. Set high (1 hour) to support long-running SSE streams. |
NGINX_PROXY_SEND_TIMEOUT | 3600s | Proxy send timeout. |
NGINX_ENABLE_CERTBOT_CHALLENGE | false | Accept Let’s Encrypt ACME challenge requests at /.well-known/acme-challenge/. Enable for automated certificate renewal. |
When enabling HTTPS, also update the public URL variables (CONSOLE_API_URL, CONSOLE_WEB_URL) to use https://.
Certbot Configuration
| Variable | Default | Description |
|---|---|---|
CERTBOT_EMAIL | (empty) | Email address required by Let’s Encrypt for certificate notifications. |
CERTBOT_DOMAIN | (empty) | Domain name for the SSL certificate. |
CERTBOT_OPTIONS | (empty) | Additional certbot CLI options (e.g., --force-renewal, --dry-run). |
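A combined sketch for HTTPS with automated Let's Encrypt issuance; the domain and email are placeholders:

```shell
# .env — sketch: HTTPS via Certbot (placeholder domain and email)
NGINX_HTTPS_ENABLED=true
NGINX_ENABLE_CERTBOT_CHALLENGE=true
CERTBOT_EMAIL=admin@yourdomain.com
CERTBOT_DOMAIN=dify.yourdomain.com
# Switch the public URLs to https:// as well
CONSOLE_API_URL=https://dify.yourdomain.com
CONSOLE_WEB_URL=https://dify.yourdomain.com
```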
SSRF Proxy
These configure the Squid-based SSRF proxy container that blocks requests to internal/private networks.
| Variable | Default | Description |
|---|---|---|
SSRF_HTTP_PORT | 3128 | Proxy listening port. |
SSRF_COREDUMP_DIR | /var/spool/squid | Core dump directory. |
SSRF_REVERSE_PROXY_PORT | 8194 | Reverse proxy port forwarded to the sandbox service. |
SSRF_SANDBOX_HOST | sandbox | Hostname of the sandbox service. |
SSRF_DEFAULT_TIME_OUT | 5 | Default overall timeout in seconds for proxied requests. |
SSRF_DEFAULT_CONNECT_TIME_OUT | 5 | Default connection timeout in seconds. |
SSRF_DEFAULT_READ_TIME_OUT | 5 | Default read timeout in seconds. |
SSRF_DEFAULT_WRITE_TIME_OUT | 5 | Default write timeout in seconds. |
Docker Compose
| Variable | Default | Description |
|---|---|---|
COMPOSE_PROFILES | ${VECTOR_STORE:-weaviate},${DB_TYPE:-postgresql} | Automatically selects which service containers to start based on your database and vector store choices. For example, setting DB_TYPE=mysql starts MySQL instead of PostgreSQL. |
EXPOSE_NGINX_PORT | 80 | Host port mapped to Nginx HTTP. |
EXPOSE_NGINX_SSL_PORT | 443 | Host port mapped to Nginx HTTPS. |
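Because COMPOSE_PROFILES interpolates VECTOR_STORE and DB_TYPE, swapping backends is usually just two lines. A sketch, assuming Milvus and MySQL profiles exist in your docker-compose.yaml:

```shell
# .env — sketch: run MySQL and Milvus instead of the PostgreSQL/Weaviate defaults
DB_TYPE=mysql
VECTOR_STORE=milvus
# COMPOSE_PROFILES then expands to "milvus,mysql", so only those containers start
```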
ModelProvider & Tool Position Configuration
Customize which tools and model providers are available in the app interface and their display order. Use comma-separated values with no spaces between items.
| Variable | Default | Description |
|---|---|---|
POSITION_TOOL_PINS | (empty) | Pin specific tools to the top of the list. Example: bing,google. |
POSITION_TOOL_INCLUDES | (empty) | Only show listed tools. If unset, all tools are available. |
POSITION_TOOL_EXCLUDES | (empty) | Hide specific tools (pinned tools are not affected). |
POSITION_PROVIDER_PINS | (empty) | Pin specific model providers to the top. Example: openai,anthropic. |
POSITION_PROVIDER_INCLUDES | (empty) | Only show listed providers. If unset, all providers are available. |
POSITION_PROVIDER_EXCLUDES | (empty) | Hide specific providers (pinned providers are not affected). |
Plugin Daemon Configuration
The plugin daemon is a separate service that manages plugin lifecycle (installation, execution, upgrades). The API communicates with it via HTTP.
| Variable | Default | Description |
|---|---|---|
PLUGIN_DAEMON_URL | http://plugin_daemon:5002 | Plugin daemon service URL. |
PLUGIN_DAEMON_KEY | (auto-generated) | Authentication key for the plugin daemon. |
PLUGIN_DAEMON_PORT | 5002 | Plugin daemon listening port. |
PLUGIN_DAEMON_TIMEOUT | 600.0 | Timeout in seconds for all plugin daemon requests (installation, execution, listing). |
PLUGIN_MAX_PACKAGE_SIZE | 52428800 | Maximum plugin package size in bytes (50 MB). Validated during marketplace downloads. |
PLUGIN_MODEL_SCHEMA_CACHE_TTL | 3600 | How long to cache plugin model schemas in seconds. Reduces repeated lookups. |
PLUGIN_DIFY_INNER_API_KEY | (auto-generated) | API key the plugin daemon uses to call back to the Dify API. Must match DIFY_INNER_API_KEY in the plugin daemon service config. |
PLUGIN_DIFY_INNER_API_URL | http://api:5001 | Internal API URL the plugin daemon calls back to. |
PLUGIN_DEBUGGING_HOST | 0.0.0.0 | Host for plugin remote debugging connections. |
PLUGIN_DEBUGGING_PORT | 5003 | Port for plugin remote debugging connections. |
MARKETPLACE_ENABLED | true | Enable the plugin marketplace. When disabled, only locally installed plugins are available—browsing and auto-upgrades are unavailable. |
MARKETPLACE_API_URL | https://marketplace.dify.ai | Marketplace API endpoint for plugin browsing, downloading, and upgrade checking. |
FORCE_VERIFYING_SIGNATURE | true | Require valid signatures before installing plugins. Prevents installing tampered or unsigned packages. |
PLUGIN_MAX_EXECUTION_TIMEOUT | 600 | Plugin execution timeout in seconds (plugin daemon side). Should match PLUGIN_DAEMON_TIMEOUT on the API side. |
PIP_MIRROR_URL | (empty) | Custom PyPI mirror URL used by the plugin daemon when installing plugin dependencies. Useful for faster installs or air-gapped environments. |
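For a hardened or restricted-network deployment, the daemon variables above are typically set together. A hedged sketch (key values and the mirror URL are placeholders; substitute your own):

```bash
# The daemon key and the inner API key are each shared secrets:
# they must match the corresponding values in the plugin daemon service config.
PLUGIN_DAEMON_KEY=change-me-daemon-key
PLUGIN_DIFY_INNER_API_KEY=change-me-inner-api-key

# Route plugin dependency installs through an internal PyPI mirror.
PIP_MIRROR_URL=https://pypi.internal.example.com/simple

# Keep the two execution timeouts aligned across API and daemon.
PLUGIN_DAEMON_TIMEOUT=600
PLUGIN_MAX_EXECUTION_TIMEOUT=600
FORCE_VERIFYING_SIGNATURE=true
```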
OTLP / OpenTelemetry Configuration
OpenTelemetry provides distributed tracing and metrics collection. When enabled, Dify instruments Flask and exports telemetry data to an OTLP collector.
| Variable | Default | Description |
|---|---|---|
ENABLE_OTEL | false | Master switch for OpenTelemetry instrumentation. |
OTLP_TRACE_ENDPOINT | (empty) | Dedicated trace endpoint URL. If unset, falls back to {OTLP_BASE_ENDPOINT}/v1/traces. |
OTLP_METRIC_ENDPOINT | (empty) | Dedicated metric endpoint URL. If unset, falls back to {OTLP_BASE_ENDPOINT}/v1/metrics. |
OTLP_BASE_ENDPOINT | http://localhost:4318 | Base OTLP collector URL. Used as fallback when specific trace/metric endpoints are not set. |
OTLP_API_KEY | (empty) | API key for OTLP authentication. Sent as Authorization: Bearer header. |
OTEL_EXPORTER_TYPE | otlp | Exporter type. otlp exports to a collector; other values use a console exporter (for debugging). |
OTEL_EXPORTER_OTLP_PROTOCOL | (empty) | Protocol for OTLP export. grpc uses gRPC exporters; anything else uses HTTP. |
OTEL_SAMPLING_RATE | 0.1 | Fraction of requests to trace (0.1 = 10%). Lower values reduce overhead in high-traffic production environments. |
OTEL_BATCH_EXPORT_SCHEDULE_DELAY | 5000 | Delay in milliseconds between batch exports. |
OTEL_MAX_QUEUE_SIZE | 2048 | Maximum number of spans queued before dropping. |
OTEL_MAX_EXPORT_BATCH_SIZE | 512 | Maximum spans per export batch. |
OTEL_METRIC_EXPORT_INTERVAL | 60000 | Metric export interval in milliseconds. |
OTEL_BATCH_EXPORT_TIMEOUT | 10000 | Batch span export timeout in milliseconds. |
OTEL_METRIC_EXPORT_TIMEOUT | 30000 | Metric export timeout in milliseconds. |
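A minimal sketch enabling OTLP export over HTTP to a collector, assuming a collector reachable at `otel-collector:4318` (hostname and token are placeholders):

```bash
ENABLE_OTEL=true
# HTTP/OTLP base URL; traces fall back to {base}/v1/traces,
# metrics to {base}/v1/metrics, since no dedicated endpoints are set.
OTLP_BASE_ENDPOINT=http://otel-collector:4318
OTLP_API_KEY=your-collector-token   # sent as "Authorization: Bearer ..."
OTEL_SAMPLING_RATE=0.05             # trace 5% of requests
```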
Miscellaneous
| Variable | Default | Description |
|---|---|---|
CSP_WHITELIST | (empty) | Additional domains to allow in Content Security Policy headers. |
ALLOW_EMBED | false | Allow Dify pages to be embedded in iframes. When false, sets X-Frame-Options: DENY to prevent clickjacking. |
SWAGGER_UI_ENABLED | false | Expose Swagger UI at SWAGGER_UI_PATH for browsing API documentation. Swagger endpoints bypass authentication. |
SWAGGER_UI_PATH | /swagger-ui.html | URL path for Swagger UI. |
MAX_SUBMIT_COUNT | 100 | Maximum concurrent task submissions in the thread pool used for parallel workflow node execution. |
TENANT_ISOLATED_TASK_CONCURRENCY | 1 | Number of document indexing or RAG pipeline tasks processed simultaneously per tenant. Increase for faster indexing at the cost of higher database load. |
Scheduled Tasks Configuration
Dify uses Celery Beat to run background maintenance tasks on configurable schedules.
| Variable | Default | Description |
|---|---|---|
ENABLE_CLEAN_EMBEDDING_CACHE_TASK | false | Delete expired embedding cache records from the database at 2:00 AM daily. Manages database size. |
ENABLE_CLEAN_UNUSED_DATASETS_TASK | false | Disable documents in knowledge bases that haven’t had activity within the retention period. Runs at 3:00 AM daily. |
ENABLE_CLEAN_MESSAGES | false | Delete conversation messages older than the retention period at 4:00 AM daily. |
ENABLE_MAIL_CLEAN_DOCUMENT_NOTIFY_TASK | false | Email workspace owners a list of knowledge bases that had documents auto-disabled by the cleanup task. Runs every Monday at 10:00 AM. |
ENABLE_DATASETS_QUEUE_MONITOR | false | Monitor the dataset processing queue backlog in Redis. Sends email alerts when the queue exceeds the threshold. |
QUEUE_MONITOR_INTERVAL | 30 | How often to check the queue (in minutes). |
QUEUE_MONITOR_THRESHOLD | 200 | Queue size that triggers an alert email. |
QUEUE_MONITOR_ALERT_EMAILS | (empty) | Email addresses to receive queue alerts (comma-separated). |
ENABLE_CHECK_UPGRADABLE_PLUGIN_TASK | true | Check the marketplace for newer plugin versions every 15 minutes. Dispatches upgrade tasks based on each tenant’s auto-upgrade schedule. |
ENABLE_WORKFLOW_SCHEDULE_POLLER_TASK | true | Enable the workflow schedule poller that checks for and triggers scheduled workflow runs. |
WORKFLOW_SCHEDULE_POLLER_INTERVAL | 1 | How often to check for due scheduled workflows (in minutes). |
WORKFLOW_SCHEDULE_POLLER_BATCH_SIZE | 100 | Maximum number of due schedules fetched per poll cycle. |
WORKFLOW_SCHEDULE_MAX_DISPATCH_PER_TICK | 0 | Circuit breaker: maximum schedules dispatched per tick. 0 means unlimited. |
ENABLE_WORKFLOW_RUN_CLEANUP_TASK | false | Enable automatic cleanup of workflow run records. |
ENABLE_CREATE_TIDB_SERVERLESS_TASK | false | Pre-create TiDB Serverless clusters for vector database pooling. |
ENABLE_UPDATE_TIDB_SERVERLESS_STATUS_TASK | false | Update TiDB Serverless cluster status periodically. |
ENABLE_HUMAN_INPUT_TIMEOUT_TASK | true | Check for expired Human Input forms and resume or stop timed-out workflows. |
HUMAN_INPUT_TIMEOUT_TASK_INTERVAL | 1 | How often to check for expired Human Input forms (in minutes). |
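Several of these toggles work as a group. For example, a sketch enabling the dataset queue monitor with email alerts (addresses and thresholds are placeholders):

```bash
ENABLE_DATASETS_QUEUE_MONITOR=true
QUEUE_MONITOR_INTERVAL=10       # check the Redis queue every 10 minutes
QUEUE_MONITOR_THRESHOLD=500     # alert once the backlog exceeds 500 tasks
QUEUE_MONITOR_ALERT_EMAILS=ops@example.com,oncall@example.com
```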
Record Retention & Cleanup
These control how old records are cleaned up. When BILLING_ENABLED is active, cleanup targets sandbox-tier tenants with a grace period. When billing is disabled (self-hosted), cleanup applies to all records within the retention window.
| Variable | Default | Description |
|---|---|---|
SANDBOX_EXPIRED_RECORDS_RETENTION_DAYS | 30 | Records older than this many days are eligible for deletion. |
SANDBOX_EXPIRED_RECORDS_CLEAN_GRACEFUL_PERIOD | 21 | Grace period in days after subscription expiration before records are deleted (billing-enabled only). |
SANDBOX_EXPIRED_RECORDS_CLEAN_BATCH_SIZE | 1000 | Number of records processed per cleanup batch. |
SANDBOX_EXPIRED_RECORDS_CLEAN_BATCH_MAX_INTERVAL | 200 | Maximum random delay in milliseconds between cleanup batches to reduce database load. |
SANDBOX_EXPIRED_RECORDS_CLEAN_TASK_LOCK_TTL | 90000 | Redis lock TTL in seconds (~25 hours) to prevent concurrent cleanup task execution. |
Aliyun SLS Logstore Configuration
Optional integration with Aliyun Simple Log Service for storing workflow execution logs externally instead of in the database. Enable by setting the repository configuration variables to use logstore implementations.
| Variable | Default | Description |
|---|---|---|
ALIYUN_SLS_ACCESS_KEY_ID | (empty) | Aliyun access key ID for SLS authentication. |
ALIYUN_SLS_ACCESS_KEY_SECRET | (empty) | Aliyun access key secret for SLS authentication. |
ALIYUN_SLS_ENDPOINT | (empty) | SLS service endpoint URL (e.g., cn-hangzhou.log.aliyuncs.com). |
ALIYUN_SLS_REGION | (empty) | Aliyun region (e.g., cn-hangzhou). |
ALIYUN_SLS_PROJECT_NAME | (empty) | SLS project name for storing workflow logs. |
ALIYUN_SLS_LOGSTORE_TTL | 365 | Data retention in days for SLS logstores. Use 3650 for permanent storage. |
LOGSTORE_DUAL_WRITE_ENABLED | false | Write workflow data to both SLS and PostgreSQL simultaneously. Useful during migration to SLS. |
LOGSTORE_DUAL_READ_ENABLED | true | Fall back to PostgreSQL when SLS returns no results. Useful during migration when historical data exists only in the database. |
LOGSTORE_ENABLE_PUT_GRAPH_FIELD | true | Include the full workflow graph definition in SLS logs. Set to false to reduce storage by omitting large graph data. |
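The dual-write/dual-read flags are designed for staged migration. A sketch of the migration phase, assuming a project in the cn-hangzhou region (project name and credentials are placeholders):

```bash
ALIYUN_SLS_ACCESS_KEY_ID=your-access-key-id
ALIYUN_SLS_ACCESS_KEY_SECRET=your-access-key-secret
ALIYUN_SLS_ENDPOINT=cn-hangzhou.log.aliyuncs.com
ALIYUN_SLS_REGION=cn-hangzhou
ALIYUN_SLS_PROJECT_NAME=dify-workflow-logs

# Migration phase: write new runs to both SLS and PostgreSQL,
# and fall back to PostgreSQL for historical runs not yet in SLS.
LOGSTORE_DUAL_WRITE_ENABLED=true
LOGSTORE_DUAL_READ_ENABLED=true
```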
Event Bus Configuration
Redis-based event transport between API and Celery workers.
| Variable | Default | Description |
|---|---|---|
EVENT_BUS_REDIS_URL | (empty) | Redis connection URL for event streaming. When empty, uses the main Redis connection settings. |
EVENT_BUS_REDIS_CHANNEL_TYPE | pubsub | Transport type: pubsub (Pub/Sub, at-most-once delivery), sharded (sharded Pub/Sub), or streams (Redis Streams, at-least-once delivery). |
EVENT_BUS_REDIS_USE_CLUSTERS | false | Enable Redis Cluster mode for event bus. Recommended for large deployments. |
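A sketch moving the event bus onto a dedicated Redis instance with at-least-once delivery (the connection URL is a placeholder):

```bash
# Keep event traffic off the main Redis used for caching/Celery.
EVENT_BUS_REDIS_URL=redis://:password@event-redis:6379/0
# Redis Streams: at-least-once delivery, unlike fire-and-forget Pub/Sub.
EVENT_BUS_REDIS_CHANNEL_TYPE=streams
```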
Vector Database Service Configuration
These configure the vector database containers themselves (not the Dify client connection). Only the variables for your chosen VECTOR_STORE are relevant.
Weaviate Service
| Variable | Default | Description |
|---|---|---|
WEAVIATE_PERSISTENCE_DATA_PATH | /var/lib/weaviate | Data persistence directory inside the container. |
WEAVIATE_QUERY_DEFAULTS_LIMIT | 25 | Default query result limit. |
WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED | true | Allow anonymous access. |
WEAVIATE_DEFAULT_VECTORIZER_MODULE | none | Default vectorizer module. |
WEAVIATE_CLUSTER_HOSTNAME | node1 | Cluster node hostname. |
WEAVIATE_AUTHENTICATION_APIKEY_ENABLED | true | Enable API key authentication. |
WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS | (auto-generated) | Allowed API keys. Must match WEAVIATE_API_KEY in the client config. |
WEAVIATE_AUTHENTICATION_APIKEY_USERS | hello@dify.ai | Users associated with API keys. |
WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED | true | Enable admin list authorization. |
WEAVIATE_AUTHORIZATION_ADMINLIST_USERS | hello@dify.ai | Admin users. |
WEAVIATE_DISABLE_TELEMETRY | false | Disable Weaviate telemetry. |
WEAVIATE_ENABLE_TOKENIZER_GSE | false | Enable GSE tokenizer (Chinese). |
WEAVIATE_ENABLE_TOKENIZER_KAGOME_JA | false | Enable Kagome tokenizer (Japanese). |
WEAVIATE_ENABLE_TOKENIZER_KAGOME_KR | false | Enable Kagome tokenizer (Korean). |
Milvus Service (ETCD + MinIO)
| Variable | Default | Description |
|---|---|---|
ETCD_AUTO_COMPACTION_MODE | revision | ETCD auto compaction mode. |
ETCD_AUTO_COMPACTION_RETENTION | 1000 | Auto compaction retention in number of revisions. |
ETCD_QUOTA_BACKEND_BYTES | 4294967296 | Backend quota in bytes (4 GB). |
ETCD_SNAPSHOT_COUNT | 50000 | Number of changes before triggering a snapshot. |
ETCD_ENDPOINTS | etcd:2379 | ETCD service endpoints. |
MINIO_ACCESS_KEY | minioadmin | MinIO access key. |
MINIO_SECRET_KEY | minioadmin | MinIO secret key. |
MINIO_ADDRESS | minio:9000 | MinIO service address. |
MILVUS_AUTHORIZATION_ENABLED | true | Enable Milvus security authorization. |
OpenSearch Service
| Variable | Default | Description |
|---|---|---|
OPENSEARCH_DISCOVERY_TYPE | single-node | Discovery type for cluster formation. |
OPENSEARCH_BOOTSTRAP_MEMORY_LOCK | true | Lock memory on startup to prevent swapping. |
OPENSEARCH_JAVA_OPTS_MIN | 512m | Minimum JVM heap size. |
OPENSEARCH_JAVA_OPTS_MAX | 1024m | Maximum JVM heap size. |
OPENSEARCH_INITIAL_ADMIN_PASSWORD | Qazwsxedc!@#123 | Initial admin password for the OpenSearch service. |
OPENSEARCH_MEMLOCK_SOFT | -1 | Soft memory lock limit (-1 = unlimited). |
OPENSEARCH_MEMLOCK_HARD | -1 | Hard memory lock limit (-1 = unlimited). |
OPENSEARCH_NOFILE_SOFT | 65536 | Soft file descriptor limit. |
OPENSEARCH_NOFILE_HARD | 65536 | Hard file descriptor limit. |
PGVector / PGVecto.RS Service
| Variable | Default | Description |
|---|---|---|
PGVECTOR_PGUSER | postgres | PostgreSQL user for the PGVector container. |
PGVECTOR_POSTGRES_PASSWORD | (auto-generated) | PostgreSQL password for the PGVector container. |
PGVECTOR_POSTGRES_DB | dify | Database name in the PGVector container. |
PGVECTOR_PGDATA | /var/lib/postgresql/data/pgdata | Data directory inside the container. |
PGVECTOR_PG_BIGM_VERSION | 1.2-20240606 | Version of the pg_bigm extension. |
Oracle / Chroma / Elasticsearch Services
| Variable | Default | Description |
|---|---|---|
ORACLE_PWD | Dify123456 | Oracle database password for the container. |
ORACLE_CHARACTERSET | AL32UTF8 | Oracle character set. |
CHROMA_SERVER_AUTHN_CREDENTIALS | (auto-generated) | Authentication credentials for the Chroma server container. |
CHROMA_SERVER_AUTHN_PROVIDER | chromadb.auth.token_authn.TokenAuthenticationServerProvider | Authentication provider for the Chroma server. |
CHROMA_IS_PERSISTENT | TRUE | Enable persistent storage for Chroma. |
KIBANA_PORT | 5601 | Kibana port (Elasticsearch UI). |
IRIS / Other Services
| Variable | Default | Description |
|---|---|---|
IRIS_WEB_SERVER_PORT | 52773 | IRIS web server management port. |
IRIS_TIMEZONE | UTC | Timezone for the IRIS container. |
DB_PLUGIN_DATABASE | dify_plugin | Separate database name for plugin data. |
Plugin Daemon Storage Configuration
The plugin daemon can store plugin packages in different storage backends. Configure only the provider matching PLUGIN_STORAGE_TYPE.
| Variable | Default | Description |
|---|---|---|
PLUGIN_STORAGE_TYPE | local | Plugin storage backend: local, aws_s3, tencent_cos, azure_blob, aliyun_oss, volcengine_tos. |
PLUGIN_STORAGE_LOCAL_ROOT | /app/storage | Root directory for local plugin storage. |
PLUGIN_WORKING_PATH | /app/storage/cwd | Working directory for plugin execution. |
PLUGIN_INSTALLED_PATH | plugin | Subdirectory for installed plugins. |
PLUGIN_PACKAGE_CACHE_PATH | plugin_packages | Subdirectory for cached plugin packages. |
PLUGIN_MEDIA_CACHE_PATH | assets | Subdirectory for cached media assets. |
PLUGIN_STORAGE_OSS_BUCKET | (empty) | Object storage bucket name (shared across S3/COS/OSS/TOS providers). |
PLUGIN_PPROF_ENABLED | false | Enable Go pprof profiling for the plugin daemon. |
PLUGIN_PYTHON_ENV_INIT_TIMEOUT | 120 | Timeout in seconds for initializing Python environments for plugins. |
PLUGIN_STDIO_BUFFER_SIZE | 1024 | Buffer size in bytes for plugin stdio communication. |
PLUGIN_STDIO_MAX_BUFFER_SIZE | 5242880 | Maximum buffer size in bytes (5 MB) for plugin stdio communication. |
ENFORCE_LANGGENIUS_PLUGIN_SIGNATURES | true | Enforce signature verification for LangGenius official plugins. |
ENDPOINT_URL_TEMPLATE | http://localhost/e/{hook_id} | URL template for plugin endpoints. {hook_id} is replaced with the actual hook ID. |
EXPOSE_PLUGIN_DAEMON_PORT | 5002 | Host port mapped to the plugin daemon. |
EXPOSE_PLUGIN_DEBUGGING_HOST | localhost | Host for plugin remote debugging. |
EXPOSE_PLUGIN_DEBUGGING_PORT | 5003 | Host port for plugin remote debugging. |
Plugin S3 Storage
| Variable | Default | Description |
|---|---|---|
PLUGIN_S3_USE_AWS | false | Use AWS S3 (vs S3-compatible services). |
PLUGIN_S3_USE_AWS_MANAGED_IAM | false | Use IAM roles instead of explicit credentials. |
PLUGIN_S3_ENDPOINT | (empty) | S3 endpoint URL. |
PLUGIN_S3_USE_PATH_STYLE | false | Use path-style URLs instead of virtual-hosted-style URLs. |
PLUGIN_AWS_ACCESS_KEY | (empty) | AWS access key. |
PLUGIN_AWS_SECRET_KEY | (empty) | AWS secret key. |
PLUGIN_AWS_REGION | (empty) | AWS region. |
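Combining these with PLUGIN_STORAGE_TYPE from the previous table, a sketch pointing plugin storage at an S3-compatible service such as a self-hosted MinIO (endpoint, bucket, and credentials are placeholders):

```bash
PLUGIN_STORAGE_TYPE=aws_s3
PLUGIN_STORAGE_OSS_BUCKET=dify-plugins
PLUGIN_S3_USE_AWS=false               # S3-compatible service, not AWS itself
PLUGIN_S3_ENDPOINT=http://minio:9000
PLUGIN_S3_USE_PATH_STYLE=true         # path-style addressing for MinIO-style endpoints
PLUGIN_AWS_ACCESS_KEY=your-access-key
PLUGIN_AWS_SECRET_KEY=your-secret-key
```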
Plugin Azure Blob Storage
| Variable | Default | Description |
|---|---|---|
PLUGIN_AZURE_BLOB_STORAGE_CONTAINER_NAME | (empty) | Azure Blob container name. |
PLUGIN_AZURE_BLOB_STORAGE_CONNECTION_STRING | (empty) | Azure Blob connection string. |
Plugin Tencent COS Storage
| Variable | Default | Description |
|---|---|---|
PLUGIN_TENCENT_COS_SECRET_KEY | (empty) | Tencent COS secret key. |
PLUGIN_TENCENT_COS_SECRET_ID | (empty) | Tencent COS secret ID. |
PLUGIN_TENCENT_COS_REGION | (empty) | Tencent COS region. |
Plugin Aliyun OSS Storage
| Variable | Default | Description |
|---|---|---|
PLUGIN_ALIYUN_OSS_REGION | (empty) | Aliyun OSS region. |
PLUGIN_ALIYUN_OSS_ENDPOINT | (empty) | Aliyun OSS endpoint. |
PLUGIN_ALIYUN_OSS_ACCESS_KEY_ID | (empty) | Aliyun OSS access key ID. |
PLUGIN_ALIYUN_OSS_ACCESS_KEY_SECRET | (empty) | Aliyun OSS access key secret. |
PLUGIN_ALIYUN_OSS_AUTH_VERSION | v4 | Aliyun OSS authentication version. |
PLUGIN_ALIYUN_OSS_PATH | (empty) | Aliyun OSS path prefix. |
Plugin Volcengine TOS Storage
| Variable | Default | Description |
|---|---|---|
PLUGIN_VOLCENGINE_TOS_ENDPOINT | (empty) | Volcengine TOS endpoint. |
PLUGIN_VOLCENGINE_TOS_ACCESS_KEY | (empty) | Volcengine TOS access key. |
PLUGIN_VOLCENGINE_TOS_SECRET_KEY | (empty) | Volcengine TOS secret key. |
PLUGIN_VOLCENGINE_TOS_REGION | (empty) | Volcengine TOS region. |