Dify works out of the box with default settings. You can customize your deployment by modifying the environment variables in the .env file.
After upgrading Dify, run diff .env .env.example in the docker directory to check for newly added or changed variables, then update your .env file accordingly.

Common Variables

These URL variables configure the addresses of Dify’s various services. For single-domain deployments behind Nginx (the default Docker Compose setup), these can be left empty—the system auto-detects from the incoming request. Configure them when using custom domains, split-domain deployments, or a reverse proxy.

CONSOLE_API_URL

Default: (empty) The public URL of Dify’s backend API. Set this if you use OAuth login (GitHub, Google), Notion integration, or any plugin that requires OAuth—these features need an absolute callback URL to redirect users back after authorization. Also determines whether secure (HTTPS-only) cookies are used. Example: https://api.console.dify.ai

CONSOLE_WEB_URL

Default: (empty) The public URL of Dify’s console frontend. Used to build links in all system emails (invitations, password resets, notifications) and to redirect users back to the console after OAuth login. Also serves as the default CORS allowed origin if CONSOLE_CORS_ALLOW_ORIGINS is not set. If empty, links in system emails will be broken, so set this even in single-domain setups if you use email features. Example: https://console.dify.ai

SERVICE_API_URL

Default: (empty) The API Base URL shown to developers in the Dify console—the URL they copy into their code to call the Dify API. If empty, auto-detects from the current request (e.g., http://localhost/v1). Set this to ensure a consistent URL when your server is accessible via multiple addresses. Example: https://api.dify.ai

APP_API_URL

Default: (empty) The backend API URL for the WebApp frontend (published apps). This variable is only used by the web frontend container, not the Python backend. If empty, the Docker image defaults to http://127.0.0.1:5001. Example: https://api.app.dify.ai

APP_WEB_URL

Default: (empty) The public URL where published WebApps are accessible. Required for the Human Input node in workflows—form links in email notifications are built as {APP_WEB_URL}/form/{token}. If empty, Human Input email delivery will not include valid form links. Example: https://app.dify.ai

TRIGGER_URL

Default: http://localhost The publicly accessible URL for webhook and plugin trigger endpoints. External systems use this address to invoke your workflows. Dify builds trigger callback URLs like {TRIGGER_URL}/triggers/webhook/{id} and displays them in the console. For triggers to work from external systems, this must point to a public domain or IP address they can reach.

FILES_URL

Default: (empty; falls back to CONSOLE_API_URL) The base URL for file preview and download links. Dify generates signed, time-limited URLs for all files (uploaded documents, tool outputs, workspace logos) and serves them to the frontend and multi-modal models. Set this if you use file processing plugins, or if you want file URLs on a dedicated domain. If both FILES_URL and CONSOLE_API_URL are empty, file previews will not work. Example: https://upload.example.com or http://<your-ip>:5001

INTERNAL_FILES_URL

Default: (empty; falls back to FILES_URL) The file access URL used for communication between services inside the Docker network (e.g., plugin daemon, PDF/Word extractors). These internal services may not be able to reach the external FILES_URL if it routes through Nginx or a public domain. If empty, internal services use FILES_URL. Set this when internal services can’t reach the external URL. Example: http://api:5001

FILES_ACCESS_TIMEOUT

Default: 300 (5 minutes) How long signed file URLs remain valid, in seconds. After this time, the URL is rejected and the file must be re-requested. Increase for long-running processes; decrease for tighter security.
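Putting the URL variables above together, a split-domain deployment might look like the following sketch. All domains are illustrative placeholders, not defaults:

```shell
# Illustrative .env fragment for a split-domain deployment (placeholder domains)
CONSOLE_API_URL=https://api.example.com
CONSOLE_WEB_URL=https://console.example.com
SERVICE_API_URL=https://api.example.com
APP_API_URL=https://api.example.com
APP_WEB_URL=https://app.example.com
TRIGGER_URL=https://api.example.com
FILES_URL=https://api.example.com
# Signed file URLs stay valid for 10 minutes instead of the default 5
FILES_ACCESS_TIMEOUT=600
```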

System Encoding

| Variable | Default | Description |
| --- | --- | --- |
| LANG | C.UTF-8 | System locale setting. Ensures UTF-8 encoding. |
| LC_ALL | C.UTF-8 | Locale override for all categories. |
| PYTHONIOENCODING | utf-8 | Python I/O encoding. |
| UV_CACHE_DIR | /tmp/.uv-cache | UV package manager cache directory. Avoids permission issues with non-existent home directories. |

Server Configuration

Logging

| Variable | Default | Description |
| --- | --- | --- |
| LOG_LEVEL | INFO | Minimum log severity. Controls what gets logged across all handlers (file + console). Levels from least to most severe: DEBUG, INFO, WARNING, ERROR, CRITICAL. |
| LOG_OUTPUT_FORMAT | text | text produces human-readable lines with timestamp, level, thread, and trace ID. json produces structured JSON for log aggregation tools (ELK, Datadog, etc.). |
| LOG_FILE | /app/logs/server.log | Log file path. When set, enables file-based logging with automatic rotation. The directory is created automatically. When empty, logs only go to console. |
| LOG_FILE_MAX_SIZE | 20 | Maximum log file size in MB before rotation. When exceeded, the active file is renamed to .1 and a new file is started. |
| LOG_FILE_BACKUP_COUNT | 5 | Number of rotated log files to keep. With defaults, at most 6 files exist: the active file plus 5 backups. |
| LOG_DATEFORMAT | %Y-%m-%d %H:%M:%S | Timestamp format for text-format logs (strftime codes). Ignored by JSON format. |
| LOG_TZ | UTC | Timezone for log timestamps (pytz format, e.g., Asia/Shanghai). Only applies to text format—JSON always uses UTC. Also sets Celery’s task scheduling timezone. |
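For example, a production-leaning logging setup based on the variables above might look like this (values are illustrative, not recommendations for every deployment):

```shell
# Illustrative logging configuration for production
LOG_LEVEL=WARNING
LOG_OUTPUT_FORMAT=json        # structured logs for ELK/Datadog ingestion
LOG_FILE=/app/logs/server.log
LOG_FILE_MAX_SIZE=50          # rotate at 50 MB
LOG_FILE_BACKUP_COUNT=10      # keep 10 rotated files
LOG_TZ=Asia/Shanghai          # text-format timestamps in local time
```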

General

| Variable | Default | Description |
| --- | --- | --- |
| DEBUG | false | Enables verbose logging: workflow node inputs/outputs, tool execution details, full LLM prompts and responses, and app startup timing. Useful for local development; not recommended for production as it may expose sensitive data in logs. |
| FLASK_DEBUG | false | Standard Flask debug mode flag. Not actively used by Dify—DEBUG is the primary control. |
| ENABLE_REQUEST_LOGGING | false | Logs a compact access line (METHOD PATH STATUS DURATION TRACE_ID) for every HTTP request. When LOG_LEVEL is also set to DEBUG, additionally logs full request and response bodies as JSON. |
| DEPLOY_ENV | PRODUCTION | Tags monitoring data in Sentry and OpenTelemetry so you can filter errors and traces by environment. Also sent as the X-Env response header. Does not change application behavior. |
| MIGRATION_ENABLED | true | When true, runs database schema migrations (flask upgrade-db) automatically on container startup. Docker only. Set to false if you run migrations separately. For source code launches, run flask db upgrade manually. |
| CHECK_UPDATE_URL | https://updates.dify.ai | The console checks this URL for newer Dify versions. Set to empty to disable—useful for air-gapped environments or to prevent external HTTP calls. |
| OPENAI_API_BASE | https://api.openai.com/v1 | Legacy variable. Not actively used by Dify’s own code. May be picked up by the OpenAI Python SDK if present in the environment. |

SECRET_KEY

Default: (pre-filled in .env.example; must be replaced for production) Used for session cookie signing, JWT authentication tokens, file URL signatures (HMAC-SHA256), and encrypting third-party OAuth credentials (AES-256). Generate a strong key before first launch:

```shell
openssl rand -base64 42
```

Changing this key after deployment will immediately log out all users, invalidate all file URLs, and break any plugin integrations that use OAuth—their encrypted credentials become unrecoverable.

INIT_PASSWORD

Default: (empty) Optional security gate for first-time setup. When set, the /install page requires this password before the admin account can be created—preventing unauthorized setup if your server is exposed. Once setup is complete, this variable has no further effect. Maximum length: 30 characters.

Token & Request Limits

| Variable | Default | Description |
| --- | --- | --- |
| ACCESS_TOKEN_EXPIRE_MINUTES | 60 | How long a login session’s access token stays valid (in minutes). When it expires, the browser silently refreshes it using the refresh token—users are not logged out. |
| REFRESH_TOKEN_EXPIRE_DAYS | 30 | How long a user can stay logged in without re-entering credentials (in days). If the user doesn’t visit within this period, they must log in again. |
| APP_MAX_EXECUTION_TIME | 1200 | Maximum time (in seconds) an app execution can run before being terminated. Works alongside WORKFLOW_MAX_EXECUTION_TIME—both enforce the same default of 20 minutes, but this one applies at the app queue level while the other applies at the workflow engine level. Increase both if your workflows need more time. |
| APP_DEFAULT_ACTIVE_REQUESTS | 0 | Default concurrent request limit per app, used when an app doesn’t have a custom limit set in the UI. 0 means unlimited. The effective limit is the smaller of this and APP_MAX_ACTIVE_REQUESTS. |
| APP_MAX_ACTIVE_REQUESTS | 0 | Global ceiling for concurrent requests per app. Overrides per-app settings if they exceed this value. 0 means unlimited. |
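A sketch of tightening sessions while allowing longer workflows, using the variables above (illustrative values):

```shell
# Shorter login sessions, longer workflow runs (illustrative)
ACCESS_TOKEN_EXPIRE_MINUTES=30
REFRESH_TOKEN_EXPIRE_DAYS=7
APP_MAX_EXECUTION_TIME=3600         # 1 hour at the app queue level
WORKFLOW_MAX_EXECUTION_TIME=3600    # raise both limits together
APP_MAX_ACTIVE_REQUESTS=50          # cap concurrent requests per app
```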

Container Startup Configuration

Only effective when starting with the Docker image or Docker Compose.
| Variable | Default | Description |
| --- | --- | --- |
| DIFY_BIND_ADDRESS | 0.0.0.0 | Network interface the API server binds to. 0.0.0.0 listens on all interfaces; set to 127.0.0.1 to restrict to localhost only. |
| DIFY_PORT | 5001 | Port the API server listens on. |
| SERVER_WORKER_AMOUNT | 1 | Number of Gunicorn worker processes. With gevent (default), each worker handles multiple concurrent connections via greenlets, so 1 is usually sufficient. For sync workers, use (2 x CPU cores) + 1. Reference. |
| SERVER_WORKER_CLASS | gevent | Gunicorn worker type. Gevent provides lightweight async concurrency. Changing this breaks psycopg2 and gRPC patching—it is strongly discouraged. |
| SERVER_WORKER_CONNECTIONS | 10 | Maximum concurrent connections per worker. Only applies to async workers (gevent). If you experience connection rejections or slow responses under load, try increasing this value. |
| GUNICORN_TIMEOUT | 360 | If a worker doesn’t respond within this many seconds, Gunicorn kills and restarts it. Set to 360 (6 minutes) to support long-lived SSE connections used for streaming LLM responses. |
| CELERY_WORKER_CLASS | (empty; defaults to gevent) | Celery worker type. Same gevent patching requirements as SERVER_WORKER_CLASS—changing it is strongly discouraged. |
| CELERY_WORKER_AMOUNT | (empty; defaults to 1) | Number of Celery worker processes. Only used when autoscaling is disabled. |
| CELERY_AUTO_SCALE | false | Enable dynamic autoscaling. When enabled, Celery monitors queue depth and spawns/kills workers between CELERY_MIN_WORKERS and CELERY_MAX_WORKERS. |
| CELERY_MAX_WORKERS | (empty; defaults to CPU count) | Maximum workers when autoscaling is enabled. |
| CELERY_MIN_WORKERS | (empty; defaults to 1) | Minimum workers when autoscaling is enabled. |
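For a busier single-node deployment, the startup settings above might be tuned like this (a sketch with illustrative values, not a universal recommendation):

```shell
# Illustrative sizing for a busier single-node deployment
SERVER_WORKER_AMOUNT=2
SERVER_WORKER_CONNECTIONS=100   # more concurrent gevent connections per worker
CELERY_AUTO_SCALE=true
CELERY_MIN_WORKERS=1
CELERY_MAX_WORKERS=4
```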

API Tool Configuration

| Variable | Default | Description |
| --- | --- | --- |
| API_TOOL_DEFAULT_CONNECT_TIMEOUT | 10 | Maximum time (in seconds) to wait for establishing a TCP connection when API Tool nodes call external APIs. |
| API_TOOL_DEFAULT_READ_TIMEOUT | 60 | Maximum time (in seconds) to wait for receiving response data from external APIs called by API Tool nodes. |

Database Configuration

The database uses PostgreSQL by default. OceanBase, MySQL, and seekdb are also supported.
| Variable | Default | Description |
| --- | --- | --- |
| DB_TYPE | postgresql | Database type. Supported values: postgresql, mysql, oceanbase, seekdb. MySQL-compatible databases like TiDB can use mysql. |
| DB_USERNAME | postgres | Database username. URL-encoded in the connection string, so special characters are safe to use. |
| DB_PASSWORD | difyai123456 | Database password. URL-encoded in the connection string, so characters like @, :, % are safe to use. |
| DB_HOST | db_postgres | Database server hostname. |
| DB_PORT | 5432 | Database server port. If using MySQL, set this to 3306. |
| DB_DATABASE | dify | Database name. |

Connection Pool

These control how Dify manages its pool of database connections. The defaults work well for most deployments.
| Variable | Default | Description |
| --- | --- | --- |
| SQLALCHEMY_POOL_SIZE | 30 | Number of persistent connections kept in the pool. |
| SQLALCHEMY_MAX_OVERFLOW | 10 | Additional temporary connections allowed when the pool is full. With default settings, up to 40 connections (30 + 10) can exist simultaneously. |
| SQLALCHEMY_POOL_RECYCLE | 3600 | Recycle connections after this many seconds to prevent stale connections. |
| SQLALCHEMY_POOL_TIMEOUT | 30 | How long to wait for a connection when the pool is exhausted. Requests fail with a timeout error if no connection frees up in time. |
| SQLALCHEMY_POOL_PRE_PING | false | Test each connection with a lightweight query before using it. Prevents “connection lost” errors but adds slight latency. Recommended for production with unreliable networks. |
| SQLALCHEMY_POOL_USE_LIFO | false | Reuse the most recently returned connection (LIFO) instead of rotating evenly (FIFO). LIFO keeps fewer connections “warm” and can reduce overhead. |
| SQLALCHEMY_ECHO | false | Print all SQL statements to logs. Useful for debugging query issues. |
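A minimal tuning sketch; keep pool size plus overflow (per container) below the database server’s connection limit:

```shell
# Illustrative pool tuning: at most 20 + 10 = 30 connections per container
SQLALCHEMY_POOL_SIZE=20
SQLALCHEMY_MAX_OVERFLOW=10
SQLALCHEMY_POOL_PRE_PING=true   # tolerate flaky networks at slight latency cost
```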

PostgreSQL Performance Tuning

These are passed as startup arguments to the PostgreSQL container—they configure the database server, not the Dify application.
| Variable | Default | Description |
| --- | --- | --- |
| POSTGRES_MAX_CONNECTIONS | 100 | Maximum number of database connections. Reference |
| POSTGRES_SHARED_BUFFERS | 128MB | Shared memory for buffers. Recommended: 25% of available memory. Reference |
| POSTGRES_WORK_MEM | 4MB | Memory per database worker for working space. Reference |
| POSTGRES_MAINTENANCE_WORK_MEM | 64MB | Memory reserved for maintenance activities. Reference |
| POSTGRES_EFFECTIVE_CACHE_SIZE | 4096MB | Planner’s assumption about effective cache size. Reference |
| POSTGRES_STATEMENT_TIMEOUT | 0 | Max statement duration before termination (ms). 0 means no timeout. Reference |
| POSTGRES_IDLE_IN_TRANSACTION_SESSION_TIMEOUT | 0 | Max idle-in-transaction session duration (ms). 0 means no timeout. Reference |
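Applying the guidance above to a host with roughly 16 GB of RAM dedicated to PostgreSQL might look like this (a sketch; tune for your own workload):

```shell
# Illustrative settings for ~16 GB RAM dedicated to PostgreSQL
POSTGRES_SHARED_BUFFERS=4096MB          # ~25% of RAM, per the guidance above
POSTGRES_EFFECTIVE_CACHE_SIZE=12288MB   # planner hint: most remaining RAM is cache
POSTGRES_WORK_MEM=16MB
POSTGRES_MAINTENANCE_WORK_MEM=512MB
```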

MySQL Performance Tuning

These are passed as startup arguments to the MySQL container—they configure the database server, not the Dify application.
| Variable | Default | Description |
| --- | --- | --- |
| MYSQL_MAX_CONNECTIONS | 1000 | Maximum number of MySQL connections. |
| MYSQL_INNODB_BUFFER_POOL_SIZE | 512M | InnoDB buffer pool size. Recommended: 70-80% of available memory for a dedicated MySQL server. Reference |
| MYSQL_INNODB_LOG_FILE_SIZE | 128M | InnoDB log file size. Reference |
| MYSQL_INNODB_FLUSH_LOG_AT_TRX_COMMIT | 2 | InnoDB flush-log-at-transaction-commit policy. Options: 0 (no flush), 1 (flush and sync), 2 (flush to OS cache). Reference |

Redis Configuration

Configure these to connect Dify to your Redis instance. Dify supports three deployment modes: standalone (default), Sentinel, and Cluster.
| Variable | Default | Description |
| --- | --- | --- |
| REDIS_HOST | redis | Redis server hostname. Only used in standalone mode; ignored when Sentinel or Cluster mode is enabled. |
| REDIS_PORT | 6379 | Redis server port. Only used in standalone mode. |
| REDIS_USERNAME | (empty) | Redis 6.0+ ACL username. Applies to all modes (standalone, Sentinel, Cluster). |
| REDIS_PASSWORD | difyai123456 | Redis authentication password. For Cluster mode, use REDIS_CLUSTERS_PASSWORD instead. |
| REDIS_DB | 0 | Redis database number (0–15). Only applies to standalone and Sentinel modes. Make sure this doesn’t collide with Celery’s database (configured in CELERY_BROKER_URL; default is DB 1). |
| REDIS_USE_SSL | false | Enable SSL/TLS for the Redis connection. Does not automatically apply to the Sentinel protocol. |
| REDIS_MAX_CONNECTIONS | (empty) | Maximum connections in the Redis pool. Leave unset for the library default. Set this to match your Redis server’s maxclients if needed. |

Redis SSL Configuration

Only applies when REDIS_USE_SSL=true. These same settings are also used by the Celery broker when its URL uses the rediss:// scheme.
| Variable | Default | Description |
| --- | --- | --- |
| REDIS_SSL_CERT_REQS | CERT_NONE | Certificate verification level: CERT_NONE (no verification), CERT_OPTIONAL, or CERT_REQUIRED (full verification). |
| REDIS_SSL_CA_CERTS | (empty) | Path to CA certificate file for verifying the Redis server. |
| REDIS_SSL_CERTFILE | (empty) | Path to client certificate for mutual TLS authentication. |
| REDIS_SSL_KEYFILE | (empty) | Path to client private key for mutual TLS authentication. |

Redis Sentinel Mode

Sentinel provides automatic master discovery and failover for high availability. Mutually exclusive with Cluster mode.
| Variable | Default | Description |
| --- | --- | --- |
| REDIS_USE_SENTINEL | false | Enable Redis Sentinel mode. When enabled, REDIS_HOST/REDIS_PORT are ignored; Dify connects to Sentinel nodes instead and asks for the current master. |
| REDIS_SENTINELS | (empty) | Sentinel node addresses. Format: <ip1>:<port1>,<ip2>:<port2>,<ip3>:<port3>. These are the Sentinel instances, not the Redis servers. |
| REDIS_SENTINEL_SERVICE_NAME | (empty) | The logical service name Sentinel monitors (configured in sentinel.conf). Dify calls master_for(service_name) to discover the current master. |
| REDIS_SENTINEL_USERNAME | (empty) | Username for authenticating with Sentinel nodes. Separate from REDIS_USERNAME, which authenticates with the Redis master/replicas. |
| REDIS_SENTINEL_PASSWORD | (empty) | Password for authenticating with Sentinel nodes. Separate from REDIS_PASSWORD. |
| REDIS_SENTINEL_SOCKET_TIMEOUT | 0.1 | Socket timeout (in seconds) for communicating with Sentinel nodes. Default 0.1s assumes a fast local network. For cloud/WAN deployments, increase to 1.0–5.0s to prevent intermittent timeouts. |
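A Sentinel configuration using the variables above might look like this (hostnames, service name, and passwords are placeholders):

```shell
# Illustrative Sentinel setup — all values are placeholders
REDIS_USE_SENTINEL=true
REDIS_SENTINELS=sentinel-1:26379,sentinel-2:26379,sentinel-3:26379
REDIS_SENTINEL_SERVICE_NAME=mymaster
REDIS_SENTINEL_PASSWORD=your-sentinel-password   # auth for Sentinel nodes
REDIS_PASSWORD=your-redis-password               # auth for the Redis master/replicas
REDIS_SENTINEL_SOCKET_TIMEOUT=1.0                # more forgiving over cloud networks
```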

Redis Cluster Mode

Cluster mode provides automatic sharding across multiple Redis nodes. Mutually exclusive with Sentinel mode.
| Variable | Default | Description |
| --- | --- | --- |
| REDIS_USE_CLUSTERS | false | Enable Redis Cluster mode. |
| REDIS_CLUSTERS | (empty) | Cluster nodes. Format: <ip1>:<port1>,<ip2>:<port2>,<ip3>:<port3> |
| REDIS_CLUSTERS_PASSWORD | (empty) | Password for the Redis Cluster. |

Celery Configuration

Configure the background task queue used for dataset indexing, email sending, and scheduled jobs.

CELERY_BROKER_URL

Default: redis://:difyai123456@redis:6379/1 Redis connection URL for the Celery message broker. Direct connection format:

```shell
redis://<redis_username>:<redis_password>@<redis_host>:<redis_port>/<redis_database>
```

Sentinel mode format (separate multiple nodes with semicolons):

```shell
sentinel://<redis_username>:<redis_password>@<sentinel_host>:<sentinel_port>/<redis_database>
```
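One plausible reading of the semicolon-separated Sentinel format, paired with the Sentinel flags from the table below (hosts, password, and master name are placeholders):

```shell
# Illustrative Sentinel broker configuration (placeholder values)
CELERY_BROKER_URL=sentinel://:password@sentinel-1:26379/1;sentinel://:password@sentinel-2:26379/1
CELERY_USE_SENTINEL=true
CELERY_SENTINEL_MASTER_NAME=mymaster
```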
| Variable | Default | Description |
| --- | --- | --- |
| CELERY_BACKEND | redis | Where Celery stores task results. Options: redis (fast, in-memory) or database (stores in your main database). |
| BROKER_USE_SSL | false | Auto-enabled when CELERY_BROKER_URL uses the rediss:// scheme. Applies the Redis SSL certificate settings to the broker connection. |
| CELERY_USE_SENTINEL | false | Enable Redis Sentinel mode for the Celery broker. |
| CELERY_SENTINEL_MASTER_NAME | (empty) | Sentinel service name (Master Name). |
| CELERY_SENTINEL_PASSWORD | (empty) | Password for Sentinel authentication. Separate from REDIS_SENTINEL_PASSWORD—they can differ if you use different Sentinel clusters for caching vs task queuing. |
| CELERY_SENTINEL_SOCKET_TIMEOUT | 0.1 | Timeout for connecting to Sentinel, in seconds. |
| CELERY_TASK_ANNOTATIONS | null | Apply runtime settings to specific tasks (e.g., rate limits). Format: JSON dictionary. Example: {"tasks.add": {"rate_limit": "10/s"}}. Most users don’t need this. |

CORS Configuration

Controls cross-domain access policies for the frontend.
| Variable | Default | Description |
| --- | --- | --- |
| WEB_API_CORS_ALLOW_ORIGINS | `*` | Allowed origins for cross-origin requests to the Web API. Example: https://dify.app |
| CONSOLE_CORS_ALLOW_ORIGINS | `*` | Allowed origins for cross-origin requests to the console API. If not set, falls back to CONSOLE_WEB_URL. |
| COOKIE_DOMAIN | (empty) | Set to the shared top-level domain (e.g., example.com) when frontend and backend run on different subdomains. This allows authentication cookies to be shared across subdomains. When empty, cookies use the most secure __Host- prefix and are locked to a single domain. |
| NEXT_PUBLIC_COOKIE_DOMAIN | (empty) | Frontend flag for cross-subdomain cookies. Set to 1 (or any non-empty value) to enable—the actual domain is read from COOKIE_DOMAIN on the backend. |
| NEXT_PUBLIC_BATCH_CONCURRENCY | 5 | Frontend-only. Controls how many concurrent API calls the UI makes during batch operations. |
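For a console and API on sibling subdomains, the cookie and CORS variables above might be combined like this (all domains are placeholders):

```shell
# Console at console.example.com, API at api.example.com (placeholders)
COOKIE_DOMAIN=example.com
NEXT_PUBLIC_COOKIE_DOMAIN=1   # frontend flag; actual domain comes from COOKIE_DOMAIN
CONSOLE_CORS_ALLOW_ORIGINS=https://console.example.com
WEB_API_CORS_ALLOW_ORIGINS=https://app.example.com
```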

File Storage Configuration

Configure where Dify stores uploaded files, dataset documents, and encryption keys. Each storage type has its own credential variables—configure only the one you’re using.

STORAGE_TYPE

Default: opendal Selects the file storage backend. Supported values: opendal, s3, azure-blob, aliyun-oss, google-storage, huawei-obs, volcengine-tos, tencent-cos, baidu-obs, oci-storage, supabase, clickzetta-volume, local (deprecated; internally uses OpenDAL with the filesystem scheme).
The default backend, opendal, uses Apache OpenDAL, a unified interface supporting many storage services. Dify automatically scans environment variables matching OPENDAL_<SCHEME>_* and passes them to OpenDAL. For example, with OPENDAL_SCHEME=s3, set OPENDAL_S3_ACCESS_KEY_ID, OPENDAL_S3_SECRET_ACCESS_KEY, etc.
| Variable | Default | Description |
| --- | --- | --- |
| OPENDAL_SCHEME | fs | Storage service to use. Examples: fs (local filesystem), s3, gcs, azblob. |
For the default fs scheme:
| Variable | Default | Description |
| --- | --- | --- |
| OPENDAL_FS_ROOT | storage | Root directory for local filesystem storage. Created automatically if it doesn’t exist. |
For all available schemes and their configuration options, see the OpenDAL services documentation.
S3:
| Variable | Default | Description |
| --- | --- | --- |
| S3_ENDPOINT | (empty) | S3 endpoint address. Required for non-AWS S3-compatible services (MinIO, etc.). |
| S3_REGION | us-east-1 | S3 region. |
| S3_BUCKET_NAME | difyai | S3 bucket name. |
| S3_ACCESS_KEY | (empty) | S3 Access Key. Not needed when using IAM roles. |
| S3_SECRET_KEY | (empty) | S3 Secret Key. Not needed when using IAM roles. |
| S3_USE_AWS_MANAGED_IAM | false | Use AWS IAM roles (EC2 instance profile, ECS task role) instead of explicit access key/secret key. When enabled, boto3 auto-discovers credentials from the instance metadata. |
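For an S3-compatible service such as MinIO, the variables above might be combined like this (endpoint, bucket, and credentials are placeholders):

```shell
# Illustrative MinIO (S3-compatible) configuration — placeholder values
STORAGE_TYPE=s3
S3_ENDPOINT=http://minio:9000   # required for non-AWS S3-compatible services
S3_REGION=us-east-1
S3_BUCKET_NAME=dify
S3_ACCESS_KEY=minio-access-key
S3_SECRET_KEY=minio-secret-key
```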
Azure Blob:
| Variable | Default | Description |
| --- | --- | --- |
| AZURE_BLOB_ACCOUNT_NAME | difyai | Azure storage account name. |
| AZURE_BLOB_ACCOUNT_KEY | difyai | Azure storage account key. |
| AZURE_BLOB_CONTAINER_NAME | difyai-container | Azure Blob container name. |
| AZURE_BLOB_ACCOUNT_URL | https://<your_account_name>.blob.core.windows.net | Azure Blob account URL. |
Google Cloud Storage:
| Variable | Default | Description |
| --- | --- | --- |
| GOOGLE_STORAGE_BUCKET_NAME | (empty) | Google Cloud Storage bucket name. |
| GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64 | (empty) | Base64-encoded service account JSON key. |
Aliyun OSS:
| Variable | Default | Description |
| --- | --- | --- |
| ALIYUN_OSS_BUCKET_NAME | (empty) | OSS bucket name. |
| ALIYUN_OSS_ACCESS_KEY | (empty) | OSS access key. |
| ALIYUN_OSS_SECRET_KEY | (empty) | OSS secret key. |
| ALIYUN_OSS_ENDPOINT | https://oss-ap-southeast-1-internal.aliyuncs.com | OSS endpoint. Regions and endpoints reference. |
| ALIYUN_OSS_REGION | ap-southeast-1 | OSS region. |
| ALIYUN_OSS_AUTH_VERSION | v4 | OSS authentication version. |
| ALIYUN_OSS_PATH | (empty) | Object path prefix. Don’t start with /. Reference. |
| ALIYUN_CLOUDBOX_ID | (empty) | CloudBox ID for CloudBox-based OSS deployments. |
Tencent COS:
| Variable | Default | Description |
| --- | --- | --- |
| TENCENT_COS_BUCKET_NAME | (empty) | COS bucket name. |
| TENCENT_COS_SECRET_KEY | (empty) | COS secret key. |
| TENCENT_COS_SECRET_ID | (empty) | COS secret ID. |
| TENCENT_COS_REGION | (empty) | COS region, e.g., ap-guangzhou. Reference. |
| TENCENT_COS_SCHEME | (empty) | Protocol to access COS (http or https). |
| TENCENT_COS_CUSTOM_DOMAIN | (empty) | Custom domain for COS access. |
OCI Object Storage:
| Variable | Default | Description |
| --- | --- | --- |
| OCI_ENDPOINT | (empty) | OCI endpoint URL. |
| OCI_BUCKET_NAME | (empty) | OCI bucket name. |
| OCI_ACCESS_KEY | (empty) | OCI access key. |
| OCI_SECRET_KEY | (empty) | OCI secret key. |
| OCI_REGION | us-ashburn-1 | OCI region. |
Huawei OBS:
| Variable | Default | Description |
| --- | --- | --- |
| HUAWEI_OBS_BUCKET_NAME | (empty) | OBS bucket name. |
| HUAWEI_OBS_ACCESS_KEY | (empty) | OBS access key. |
| HUAWEI_OBS_SECRET_KEY | (empty) | OBS secret key. |
| HUAWEI_OBS_SERVER | (empty) | OBS server URL. Reference. |
| HUAWEI_OBS_PATH_STYLE | false | Use path-style URLs instead of virtual-hosted-style. |
Volcengine TOS:
| Variable | Default | Description |
| --- | --- | --- |
| VOLCENGINE_TOS_BUCKET_NAME | (empty) | TOS bucket name. |
| VOLCENGINE_TOS_ACCESS_KEY | (empty) | TOS access key. |
| VOLCENGINE_TOS_SECRET_KEY | (empty) | TOS secret key. |
| VOLCENGINE_TOS_ENDPOINT | (empty) | TOS endpoint URL. Reference. |
| VOLCENGINE_TOS_REGION | (empty) | TOS region, e.g., cn-guangzhou. |
Baidu OBS:
| Variable | Default | Description |
| --- | --- | --- |
| BAIDU_OBS_BUCKET_NAME | (empty) | Baidu OBS bucket name. |
| BAIDU_OBS_ACCESS_KEY | (empty) | Baidu OBS access key. |
| BAIDU_OBS_SECRET_KEY | (empty) | Baidu OBS secret key. |
| BAIDU_OBS_ENDPOINT | (empty) | Baidu OBS server URL. |
Supabase:
| Variable | Default | Description |
| --- | --- | --- |
| SUPABASE_BUCKET_NAME | (empty) | Supabase storage bucket name. |
| SUPABASE_API_KEY | (empty) | Supabase API key. |
| SUPABASE_URL | (empty) | Supabase server URL. |
ClickZetta Volume:
| Variable | Default | Description |
| --- | --- | --- |
| CLICKZETTA_VOLUME_TYPE | user | Volume type. Options: user (personal/small team), table (enterprise multi-tenant), external (data lake integration). |
| CLICKZETTA_VOLUME_NAME | (empty) | External volume name (required only when TYPE=external). |
| CLICKZETTA_VOLUME_TABLE_PREFIX | dataset_ | Table volume table prefix (used only when TYPE=table). |
| CLICKZETTA_VOLUME_DIFY_PREFIX | dify_km | Dify file directory prefix for isolation from other apps. |
ClickZetta Volume reuses the CLICKZETTA_* connection parameters configured in the Vector Database section.

Archive Storage

Separate S3-compatible storage for archiving workflow run logs. Used by the paid plan retention system to archive workflow runs older than the retention period to JSONL format. Requires BILLING_ENABLED=true.
| Variable | Default | Description |
| --- | --- | --- |
| ARCHIVE_STORAGE_ENABLED | false | Enable archive storage for workflow log archival. |
| ARCHIVE_STORAGE_ENDPOINT | (empty) | S3-compatible endpoint URL. |
| ARCHIVE_STORAGE_ARCHIVE_BUCKET | (empty) | Bucket for archived workflow run logs. |
| ARCHIVE_STORAGE_EXPORT_BUCKET | (empty) | Bucket for workflow run exports. |
| ARCHIVE_STORAGE_ACCESS_KEY | (empty) | Access key. |
| ARCHIVE_STORAGE_SECRET_KEY | (empty) | Secret key. |
| ARCHIVE_STORAGE_REGION | auto | Storage region. |

Vector Database Configuration

Configure the vector database used for knowledge base embedding storage and similarity search. Each provider has its own set of credential variables—configure only the one you’re using.

VECTOR_STORE

Default: weaviate Selects the vector database backend. If a dataset already has an index, the dataset’s stored type takes precedence over this setting. When switching providers in Docker Compose, COMPOSE_PROFILES automatically starts the matching container based on this value. Supported values: weaviate, oceanbase, seekdb, qdrant, milvus, myscale, relyt, pgvector, pgvecto-rs, chroma, opensearch, oracle, tencent, elasticsearch, elasticsearch-ja, analyticdb, couchbase, vikingdb, opengauss, tablestore, vastbase, tidb, tidb_on_qdrant, baidu, lindorm, huawei_cloud, upstash, matrixone, clickzetta, alibabacloud_mysql, iris, hologres.
| Variable | Default | Description |
| --- | --- | --- |
| VECTOR_INDEX_NAME_PREFIX | Vector_index | Prefix added to collection names in the vector database. Change this if you share a vector database instance across multiple Dify deployments. |
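For example, switching a deployment to Qdrant could look like this sketch (the prefix value is illustrative; existing datasets keep their original index type, as noted above):

```shell
# Illustrative switch to Qdrant; COMPOSE_PROFILES starts the matching container
VECTOR_STORE=qdrant
QDRANT_URL=http://qdrant:6333
QDRANT_API_KEY=difyai123456
VECTOR_INDEX_NAME_PREFIX=dify_prod   # avoid collisions when sharing one instance
```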
Weaviate:
| Variable | Default | Description |
| --- | --- | --- |
| WEAVIATE_ENDPOINT | http://weaviate:8080 | Weaviate REST API endpoint. |
| WEAVIATE_API_KEY | (empty) | API key for Weaviate authentication. |
| WEAVIATE_GRPC_ENDPOINT | grpc://weaviate:50051 | Separate gRPC endpoint for the high-performance binary protocol. Significantly faster for batch operations. Falls back to inferring from the HTTP endpoint if not set. |
| WEAVIATE_TOKENIZATION | word | Tokenization method for text fields. Options: word (splits on whitespace and punctuation), whitespace (splits on whitespace only), character (character-level, better for CJK languages). |
seekdb is the lite version of OceanBase and shares the same connection configuration.
| Variable | Default | Description |
| --- | --- | --- |
| OCEANBASE_VECTOR_HOST | oceanbase | Hostname or IP address. |
| OCEANBASE_VECTOR_PORT | 2881 | Port number. |
| OCEANBASE_VECTOR_USER | root@test | Database username. |
| OCEANBASE_VECTOR_PASSWORD | difyai123456 | Database password. |
| OCEANBASE_VECTOR_DATABASE | test | Database name. |
| OCEANBASE_CLUSTER_NAME | difyai | Cluster name (Docker deployment only). |
| OCEANBASE_MEMORY_LIMIT | 6G | Memory limit for OceanBase (Docker deployment only). |
| SEEKDB_MEMORY_LIMIT | 2G | Memory limit for seekdb (Docker deployment only). |
| OCEANBASE_ENABLE_HYBRID_SEARCH | false | Enable fulltext index for BM25 queries alongside vector search. Requires OceanBase >= 4.3.5.1. Collections must be recreated after enabling. |
| OCEANBASE_FULLTEXT_PARSER | ik | Fulltext parser. Built-in: ngram, beng, space, ngram2, ik. External (require plugin): japanese_ftparser, thai_ftparser. |
Qdrant:
| Variable | Default | Description |
| --- | --- | --- |
| QDRANT_URL | http://qdrant:6333 | Qdrant endpoint address. |
| QDRANT_API_KEY | difyai123456 | API key for Qdrant. |
| QDRANT_CLIENT_TIMEOUT | 20 | Client timeout in seconds. |
| QDRANT_GRPC_ENABLED | false | Enable gRPC communication. |
| QDRANT_GRPC_PORT | 6334 | gRPC port. |
| QDRANT_REPLICATION_FACTOR | 1 | Number of replicas per shard. |
Milvus:
| Variable | Default | Description |
| --- | --- | --- |
| MILVUS_URI | http://host.docker.internal:19530 | Milvus URI. For Zilliz Cloud, use the Public Endpoint. |
| MILVUS_DATABASE | (empty) | Database name. |
| MILVUS_TOKEN | (empty) | Authentication token. For Zilliz Cloud, use the API Key. |
| MILVUS_USER | (empty) | Username. |
| MILVUS_PASSWORD | (empty) | Password. |
| MILVUS_ENABLE_HYBRID_SEARCH | false | Enable BM25 sparse index for full-text search alongside vector similarity. Requires Milvus >= 2.5.0. If the collection was created without this enabled, it must be recreated. |
| MILVUS_ANALYZER_PARAMS | (empty) | Analyzer parameters for text fields. |
MyScale:
| Variable | Default | Description |
| --- | --- | --- |
| MYSCALE_HOST | myscale | MyScale host. |
| MYSCALE_PORT | 8123 | MyScale port. |
| MYSCALE_USER | default | Username. |
| MYSCALE_PASSWORD | (empty) | Password. |
| MYSCALE_DATABASE | dify | Database name. |
| MYSCALE_FTS_PARAMS | (empty) | Full-text search params. Multi-language support reference. |
Couchbase:
| Variable | Default | Description |
| --- | --- | --- |
| COUCHBASE_CONNECTION_STRING | couchbase://couchbase-server | Connection string for the Couchbase cluster. |
| COUCHBASE_USER | Administrator | Username. |
| COUCHBASE_PASSWORD | password | Password. |
| COUCHBASE_BUCKET_NAME | Embeddings | Bucket name. |
| COUCHBASE_SCOPE_NAME | _default | Scope name. |
Hologres:
| Variable | Default | Description |
| --- | --- | --- |
| HOLOGRES_HOST | (empty) | Hostname. |
| HOLOGRES_PORT | 80 | Port number. |
| HOLOGRES_DATABASE | (empty) | Database name. |
| HOLOGRES_ACCESS_KEY_ID | (empty) | Access key ID (used as PG username). |
| HOLOGRES_ACCESS_KEY_SECRET | (empty) | Access key secret (used as PG password). |
| HOLOGRES_SCHEMA | public | Schema name. |
| HOLOGRES_TOKENIZER | jieba | Tokenizer for text fields. |
| HOLOGRES_DISTANCE_METHOD | Cosine | Distance method. |
| HOLOGRES_BASE_QUANTIZATION_TYPE | rabitq | Quantization type. |
| HOLOGRES_MAX_DEGREE | 64 | HNSW max degree. |
| HOLOGRES_EF_CONSTRUCTION | 400 | HNSW ef_construction parameter. |
PGVector:
| Variable | Default | Description |
| --- | --- | --- |
| PGVECTOR_HOST | pgvector | Hostname. |
| PGVECTOR_PORT | 5432 | Port number. |
| PGVECTOR_USER | postgres | Username. |
| PGVECTOR_PASSWORD | difyai123456 | Password. |
| PGVECTOR_DATABASE | dify | Database name. |
| PGVECTOR_MIN_CONNECTION | 1 | Minimum pool connections. |
| PGVECTOR_MAX_CONNECTION | 5 | Maximum pool connections. |
| PGVECTOR_PG_BIGM | false | Enable pg_bigm extension for full-text search. |
Vastbase:
| Variable | Default | Description |
| --- | --- | --- |
| VASTBASE_HOST | vastbase | Hostname. |
| VASTBASE_PORT | 5432 | Port number. |
| VASTBASE_USER | dify | Username. |
| VASTBASE_PASSWORD | Difyai123456 | Password. |
| VASTBASE_DATABASE | dify | Database name. |
| VASTBASE_MIN_CONNECTION | 1 | Minimum pool connections. |
| VASTBASE_MAX_CONNECTION | 5 | Maximum pool connections. |
pgvecto.rs:
| Variable | Default | Description |
| --- | --- | --- |
| PGVECTO_RS_HOST | pgvecto-rs | Hostname. |
| PGVECTO_RS_PORT | 5432 | Port number. |
| PGVECTO_RS_USER | postgres | Username. |
| PGVECTO_RS_PASSWORD | difyai123456 | Password. |
| PGVECTO_RS_DATABASE | dify | Database name. |
AnalyticDB:
| Variable | Default | Description |
| --- | --- | --- |
| ANALYTICDB_KEY_ID | (empty) | Aliyun access key ID. Create AccessKey. |
| ANALYTICDB_KEY_SECRET | (empty) | Aliyun access key secret. |
| ANALYTICDB_REGION_ID | cn-hangzhou | Region identifier. |
| ANALYTICDB_INSTANCE_ID | (empty) | Instance ID, e.g., gp-xxxxxx. Create instance. |
| ANALYTICDB_ACCOUNT | (empty) | Account name. Create account. |
| ANALYTICDB_PASSWORD | (empty) | Account password. |
| ANALYTICDB_NAMESPACE | dify | Namespace (schema). Created automatically if it does not exist. |
| ANALYTICDB_NAMESPACE_PASSWORD | (empty) | Namespace password. Used when creating a new namespace. |
| ANALYTICDB_HOST | (empty) | Direct connection host (alternative to API-based access). |
| ANALYTICDB_PORT | 5432 | Direct connection port. |
| ANALYTICDB_MIN_CONNECTION | 1 | Minimum pool connections. |
| ANALYTICDB_MAX_CONNECTION | 5 | Maximum pool connections. |
TiDB Vector:
| Variable | Default | Description |
| --- | --- | --- |
| TIDB_VECTOR_HOST | tidb | Hostname. |
| TIDB_VECTOR_PORT | 4000 | Port number. |
| TIDB_VECTOR_USER | (empty) | Username. |
| TIDB_VECTOR_PASSWORD | (empty) | Password. |
| TIDB_VECTOR_DATABASE | dify | Database name. |
Matrixone:
| Variable | Default | Description |
| --- | --- | --- |
| MATRIXONE_HOST | matrixone | Hostname. |
| MATRIXONE_PORT | 6001 | Port number. |
| MATRIXONE_USER | dump | Username. |
| MATRIXONE_PASSWORD | 111 | Password. |
| MATRIXONE_DATABASE | dify | Database name. |
Chroma:
| Variable | Default | Description |
| --- | --- | --- |
| CHROMA_HOST | 127.0.0.1 | Chroma server host. |
| CHROMA_PORT | 8000 | Chroma server port. |
| CHROMA_TENANT | default_tenant | Tenant name. |
| CHROMA_DATABASE | default_database | Database name. |
| CHROMA_AUTH_PROVIDER | chromadb.auth.token_authn.TokenAuthClientProvider | Auth provider class. |
| CHROMA_AUTH_CREDENTIALS | (empty) | Auth credentials. |
Oracle:
| Variable | Default | Description |
| --- | --- | --- |
| ORACLE_USER | dify | Oracle username. |
| ORACLE_PASSWORD | dify | Oracle password. |
| ORACLE_DSN | oracle:1521/FREEPDB1 | Data source name. |
| ORACLE_CONFIG_DIR | /app/api/storage/wallet | Oracle configuration directory. |
| ORACLE_WALLET_LOCATION | /app/api/storage/wallet | Wallet location for Autonomous DB. |
| ORACLE_WALLET_PASSWORD | dify | Wallet password. |
| ORACLE_IS_AUTONOMOUS | false | Whether using Oracle Autonomous Database. |

**Alibaba Cloud MySQL**

| Variable | Default | Description |
| --- | --- | --- |
| ALIBABACLOUD_MYSQL_HOST | 127.0.0.1 | Hostname. |
| ALIBABACLOUD_MYSQL_PORT | 3306 | Port number. |
| ALIBABACLOUD_MYSQL_USER | root | Username. |
| ALIBABACLOUD_MYSQL_PASSWORD | difyai123456 | Password. |
| ALIBABACLOUD_MYSQL_DATABASE | dify | Database name. |
| ALIBABACLOUD_MYSQL_MAX_CONNECTION | 5 | Maximum pool connections. |
| ALIBABACLOUD_MYSQL_HNSW_M | 6 | HNSW M parameter. |

**Relyt**

| Variable | Default | Description |
| --- | --- | --- |
| RELYT_HOST | db | Hostname. |
| RELYT_PORT | 5432 | Port number. |
| RELYT_USER | postgres | Username. |
| RELYT_PASSWORD | difyai123456 | Password. |
| RELYT_DATABASE | postgres | Database name. |

**OpenSearch**

| Variable | Default | Description |
| --- | --- | --- |
| OPENSEARCH_HOST | opensearch | Hostname. |
| OPENSEARCH_PORT | 9200 | Port number. |
| OPENSEARCH_SECURE | true | Use HTTPS. |
| OPENSEARCH_VERIFY_CERTS | true | Verify SSL certificates. |
| OPENSEARCH_AUTH_METHOD | basic | basic uses username/password. aws_managed_iam uses AWS SigV4 request signing via Boto3 credentials (for AWS Managed OpenSearch or Serverless). |
| OPENSEARCH_USER | admin | Username. Only used with basic auth. |
| OPENSEARCH_PASSWORD | admin | Password. Only used with basic auth. |
| OPENSEARCH_AWS_REGION | ap-southeast-1 | AWS region. Only used with aws_managed_iam auth. |
| OPENSEARCH_AWS_SERVICE | aoss | AWS service type: es (Managed Cluster) or aoss (OpenSearch Serverless). Only used with aws_managed_iam auth. |
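A minimal `.env` sketch for the IAM-signed mode described above, assuming an AWS Managed Cluster; the hostname is a placeholder, and `VECTOR_STORE=opensearch` is the assumed selector for this backend:

```shell
# Illustrative only — replace the endpoint with your own domain
VECTOR_STORE=opensearch
OPENSEARCH_HOST=search-mydomain.ap-southeast-1.es.amazonaws.com
OPENSEARCH_PORT=443
OPENSEARCH_SECURE=true
OPENSEARCH_AUTH_METHOD=aws_managed_iam
OPENSEARCH_AWS_REGION=ap-southeast-1
OPENSEARCH_AWS_SERVICE=es
```

With `aws_managed_iam`, credentials come from the usual Boto3 chain (environment, instance profile, etc.), so no username/password is set.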

**Tencent Cloud VectorDB**

| Variable | Default | Description |
| --- | --- | --- |
| TENCENT_VECTOR_DB_URL | http://127.0.0.1 | Access address. Console. |
| TENCENT_VECTOR_DB_API_KEY | dify | API key. Key Management. |
| TENCENT_VECTOR_DB_TIMEOUT | 30 | Request timeout in seconds. |
| TENCENT_VECTOR_DB_USERNAME | dify | Account name. Account Management. |
| TENCENT_VECTOR_DB_DATABASE | dify | Database name. Create Database. |
| TENCENT_VECTOR_DB_SHARD | 1 | Number of shards. |
| TENCENT_VECTOR_DB_REPLICAS | 2 | Number of replicas. |
| TENCENT_VECTOR_DB_ENABLE_HYBRID_SEARCH | false | Enable hybrid search. Sparse Vector docs. |

**Elasticsearch**

| Variable | Default | Description |
| --- | --- | --- |
| ELASTICSEARCH_HOST | 0.0.0.0 | Hostname. |
| ELASTICSEARCH_PORT | 9200 | Port number. |
| ELASTICSEARCH_USERNAME | elastic | Username. |
| ELASTICSEARCH_PASSWORD | elastic | Password. |
| ELASTICSEARCH_USE_CLOUD | false | Switch to Elastic Cloud mode. When true, uses ELASTICSEARCH_CLOUD_URL and ELASTICSEARCH_API_KEY instead of host/port/username/password. |
| ELASTICSEARCH_CLOUD_URL | (empty) | Elastic Cloud endpoint URL. Required when ELASTICSEARCH_USE_CLOUD=true. |
| ELASTICSEARCH_API_KEY | (empty) | Elastic Cloud API key. Required when ELASTICSEARCH_USE_CLOUD=true. |
| ELASTICSEARCH_VERIFY_CERTS | false | Verify SSL certificates. |
| ELASTICSEARCH_CA_CERTS | (empty) | Path to CA certificates. |
| ELASTICSEARCH_REQUEST_TIMEOUT | 100000 | Request timeout in milliseconds. |
| ELASTICSEARCH_RETRY_ON_TIMEOUT | true | Retry on timeout. |
| ELASTICSEARCH_MAX_RETRIES | 10 | Maximum retry attempts. |
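A minimal `.env` sketch for the Elastic Cloud mode described above; the endpoint URL and API key are placeholders, and `VECTOR_STORE=elasticsearch` is the assumed selector for this backend:

```shell
# Illustrative only — use your own deployment endpoint and API key
VECTOR_STORE=elasticsearch
ELASTICSEARCH_USE_CLOUD=true
ELASTICSEARCH_CLOUD_URL=https://my-deployment.es.us-east-1.aws.found.io:443
ELASTICSEARCH_API_KEY=your-api-key
ELASTICSEARCH_VERIFY_CERTS=true
```

When `ELASTICSEARCH_USE_CLOUD=true`, the host/port/username/password variables are ignored.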

**Baidu VectorDB**

| Variable | Default | Description |
| --- | --- | --- |
| BAIDU_VECTOR_DB_ENDPOINT | http://127.0.0.1:5287 | Endpoint URL. |
| BAIDU_VECTOR_DB_CONNECTION_TIMEOUT_MS | 30000 | Connection timeout in milliseconds. |
| BAIDU_VECTOR_DB_ACCOUNT | root | Account name. |
| BAIDU_VECTOR_DB_API_KEY | dify | API key. |
| BAIDU_VECTOR_DB_DATABASE | dify | Database name. |
| BAIDU_VECTOR_DB_SHARD | 1 | Number of shards. |
| BAIDU_VECTOR_DB_REPLICAS | 3 | Number of replicas. |
| BAIDU_VECTOR_DB_INVERTED_INDEX_ANALYZER | DEFAULT_ANALYZER | Inverted index analyzer. |
| BAIDU_VECTOR_DB_INVERTED_INDEX_PARSER_MODE | COARSE_MODE | Inverted index parser mode. |

**VikingDB**

| Variable | Default | Description |
| --- | --- | --- |
| VIKINGDB_ACCESS_KEY | (empty) | Access key. |
| VIKINGDB_SECRET_KEY | (empty) | Secret key. |
| VIKINGDB_REGION | cn-shanghai | Region. |
| VIKINGDB_HOST | api-vikingdb.xxx.volces.com | API host. Replace with your region-specific endpoint. |
| VIKINGDB_SCHEMA | http | Protocol scheme (http or https). |
| VIKINGDB_CONNECTION_TIMEOUT | 30 | Connection timeout in seconds. |
| VIKINGDB_SOCKET_TIMEOUT | 30 | Socket timeout in seconds. |

**Lindorm**

| Variable | Default | Description |
| --- | --- | --- |
| LINDORM_URL | http://localhost:30070 | Lindorm search engine URL. Console. |
| LINDORM_USERNAME | admin | Username. |
| LINDORM_PASSWORD | admin | Password. |
| LINDORM_USING_UGC | true | Use UGC mode. |
| LINDORM_QUERY_TIMEOUT | 1 | Query timeout in seconds. |

**openGauss**

| Variable | Default | Description |
| --- | --- | --- |
| OPENGAUSS_HOST | opengauss | Hostname. |
| OPENGAUSS_PORT | 6600 | Port number. |
| OPENGAUSS_USER | postgres | Username. |
| OPENGAUSS_PASSWORD | Dify@123 | Password. |
| OPENGAUSS_DATABASE | dify | Database name. |
| OPENGAUSS_MIN_CONNECTION | 1 | Minimum pool connections. |
| OPENGAUSS_MAX_CONNECTION | 5 | Maximum pool connections. |
| OPENGAUSS_ENABLE_PQ | false | Enable PQ acceleration. |

**Upstash Vector**

| Variable | Default | Description |
| --- | --- | --- |
| UPSTASH_VECTOR_URL | (empty) | Upstash Vector endpoint URL. |
| UPSTASH_VECTOR_TOKEN | (empty) | Upstash Vector API token. |

**Tablestore**

| Variable | Default | Description |
| --- | --- | --- |
| TABLESTORE_ENDPOINT | https://instance-name.cn-hangzhou.ots.aliyuncs.com | Endpoint address. Replace instance-name with your instance. |
| TABLESTORE_INSTANCE_NAME | (empty) | Instance name. |
| TABLESTORE_ACCESS_KEY_ID | (empty) | Access key ID. |
| TABLESTORE_ACCESS_KEY_SECRET | (empty) | Access key secret. |
| TABLESTORE_NORMALIZE_FULLTEXT_BM25_SCORE | false | Normalize fulltext BM25 scores. |

**Clickzetta**

| Variable | Default | Description |
| --- | --- | --- |
| CLICKZETTA_USERNAME | (empty) | Username. |
| CLICKZETTA_PASSWORD | (empty) | Password. |
| CLICKZETTA_INSTANCE | (empty) | Instance name. |
| CLICKZETTA_SERVICE | api.clickzetta.com | Service endpoint. |
| CLICKZETTA_WORKSPACE | quick_start | Workspace name. |
| CLICKZETTA_VCLUSTER | default_ap | Virtual cluster. |
| CLICKZETTA_SCHEMA | dify | Schema name. |
| CLICKZETTA_BATCH_SIZE | 100 | Batch size for operations. |
| CLICKZETTA_ENABLE_INVERTED_INDEX | true | Enable inverted index. |
| CLICKZETTA_ANALYZER_TYPE | chinese | Analyzer type. |
| CLICKZETTA_ANALYZER_MODE | smart | Analyzer mode. |
| CLICKZETTA_VECTOR_DISTANCE_FUNCTION | cosine_distance | Distance function. |

**InterSystems IRIS**

| Variable | Default | Description |
| --- | --- | --- |
| IRIS_HOST | iris | Hostname. |
| IRIS_SUPER_SERVER_PORT | 1972 | Super server port. |
| IRIS_USER | _SYSTEM | Username. |
| IRIS_PASSWORD | Dify@1234 | Password. |
| IRIS_DATABASE | USER | Database name. |
| IRIS_SCHEMA | dify | Schema name. |
| IRIS_CONNECTION_URL | (empty) | Full connection URL (overrides individual settings). |
| IRIS_MIN_CONNECTION | 1 | Minimum pool connections. |
| IRIS_MAX_CONNECTION | 3 | Maximum pool connections. |
| IRIS_TEXT_INDEX | true | Enable text indexing. |
| IRIS_TEXT_INDEX_LANGUAGE | en | Text index language. |

Knowledge Configuration

| Variable | Default | Description |
| --- | --- | --- |
| UPLOAD_FILE_SIZE_LIMIT | 15 | Maximum file size in MB for document uploads (PDFs, Word docs, etc.). Users see a “file too large” error when exceeded. Does not apply to images, videos, or audio—they have separate limits below. |
| UPLOAD_FILE_BATCH_LIMIT | 5 | Maximum number of files the frontend allows per upload batch. |
| UPLOAD_FILE_EXTENSION_BLACKLIST | (empty) | Security blocklist of file extensions that cannot be uploaded. Comma-separated, lowercase, no dots. Example: exe,bat,cmd,com,scr,vbs,ps1,msi,dll. Empty allows all types. |
| SINGLE_CHUNK_ATTACHMENT_LIMIT | 10 | Maximum number of images that can be embedded in a single knowledge base segment (chunk). |
| IMAGE_FILE_BATCH_LIMIT | 10 | Maximum number of image files per upload batch. |
| ATTACHMENT_IMAGE_FILE_SIZE_LIMIT | 2 | Maximum size in MB for images fetched from external URLs during knowledge base indexing. Images larger than this are skipped. Different from UPLOAD_IMAGE_FILE_SIZE_LIMIT, which applies to direct uploads. |
| ATTACHMENT_IMAGE_DOWNLOAD_TIMEOUT | 60 | Timeout in seconds when downloading images from external URLs during knowledge base indexing. Slow or unresponsive image servers are abandoned after this timeout. |
| ETL_TYPE | dify | Document extraction library. dify supports txt, md, pdf, html, xlsx, docx, csv. Unstructured adds support for doc, msg, eml, ppt, pptx, xml, epub (requires UNSTRUCTURED_API_URL). |
| UNSTRUCTURED_API_URL | (empty) | Unstructured.io API endpoint. Required when ETL_TYPE is Unstructured. Also needed for .ppt file support. Example: http://unstructured:8000/general/v0/general. |
| UNSTRUCTURED_API_KEY | (empty) | API key for Unstructured.io authentication. |
| SCARF_NO_ANALYTICS | true | Disable Unstructured library’s telemetry/analytics collection. |
| TOP_K_MAX_VALUE | 10 | Maximum value users can set for the top_k parameter in knowledge base retrieval (how many results to return per search). |
| DATASET_MAX_SEGMENTS_PER_REQUEST | 0 | Maximum number of segments per dataset API request. 0 means unlimited. |
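A minimal `.env` sketch for switching extraction to Unstructured, assuming a self-hosted Unstructured container reachable at the example endpoint above; the API key is a placeholder:

```shell
# Illustrative only — endpoint assumes a local Unstructured container
ETL_TYPE=Unstructured
UNSTRUCTURED_API_URL=http://unstructured:8000/general/v0/general
UNSTRUCTURED_API_KEY=your-api-key
SCARF_NO_ANALYTICS=true
```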

Annotation Import

| Variable | Default | Description |
| --- | --- | --- |
| ANNOTATION_IMPORT_FILE_SIZE_LIMIT | 2 | Maximum CSV file size in MB for annotation import. Returns HTTP 413 when exceeded. |
| ANNOTATION_IMPORT_MAX_RECORDS | 10000 | Maximum number of records per annotation import. Files with more records must be split into batches. |
| ANNOTATION_IMPORT_MIN_RECORDS | 1 | Minimum number of valid records required per annotation import. |
| ANNOTATION_IMPORT_RATE_LIMIT_PER_MINUTE | 5 | Maximum annotation import requests per minute per workspace. Returns HTTP 429 when exceeded. |
| ANNOTATION_IMPORT_RATE_LIMIT_PER_HOUR | 20 | Maximum annotation import requests per hour per workspace. |
| ANNOTATION_IMPORT_MAX_CONCURRENT | 5 | Maximum concurrent annotation import tasks per workspace. Stale tasks are auto-cleaned after 2 minutes. |

Model Configuration

| Variable | Default | Description |
| --- | --- | --- |
| PROMPT_GENERATION_MAX_TOKENS | 512 | Maximum tokens when the system auto-generates a prompt using an LLM. Prevents runaway generations that waste API quota. |
| CODE_GENERATION_MAX_TOKENS | 1024 | Maximum tokens when the system auto-generates code using an LLM. |
| PLUGIN_BASED_TOKEN_COUNTING_ENABLED | false | Use plugin-based token counting for accurate usage tracking. When disabled, token counting returns 0 (faster, but cost tracking is less accurate). |

Multi-modal Configuration

| Variable | Default | Description |
| --- | --- | --- |
| MULTIMODAL_SEND_FORMAT | base64 | How files are sent to multi-modal LLMs. base64 embeds file data in the request (more compatible, works offline, larger payloads). url sends a signed URL for the model to fetch (faster, smaller requests, but the model must be able to reach FILES_URL). |
| UPLOAD_IMAGE_FILE_SIZE_LIMIT | 10 | Maximum image file size in MB for direct uploads (jpg, png, webp, gif, svg). |
| UPLOAD_VIDEO_FILE_SIZE_LIMIT | 100 | Maximum video file size in MB for direct uploads (mp4, mov, mpeg, webm). |
| UPLOAD_AUDIO_FILE_SIZE_LIMIT | 50 | Maximum audio file size in MB for direct uploads (mp3, m4a, wav, amr, mpga). |
All upload size limits are also gated by NGINX_CLIENT_MAX_BODY_SIZE (default 100M). If you increase any upload limit above 100 MB, also increase NGINX_CLIENT_MAX_BODY_SIZE to match—otherwise Nginx rejects the upload with a 413 error.
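For example, raising the video limit to 500 MB requires bumping both variables together, since Nginx enforces its cap before the request ever reaches the API:

```shell
# Both limits must move in lockstep, or Nginx returns 413 first
UPLOAD_VIDEO_FILE_SIZE_LIMIT=500
NGINX_CLIENT_MAX_BODY_SIZE=500M
```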

Sentry Configuration

Sentry provides error tracking and performance monitoring. Each service has its own DSN to separate error reporting.
| Variable | Default | Description |
| --- | --- | --- |
| SENTRY_DSN | (empty) | Sentry DSN shared across services. |
| API_SENTRY_DSN | (empty) | Sentry DSN for the API service. Overrides SENTRY_DSN if set. Empty disables Sentry for the backend. |
| API_SENTRY_TRACES_SAMPLE_RATE | 1.0 | Fraction of requests to include in performance tracing (0.01 = 1%, 1.0 = 100%). Traces track request flow across services. |
| API_SENTRY_PROFILES_SAMPLE_RATE | 1.0 | Fraction of requests to include in CPU/memory profiling (0.01 = 1%). Profiles show where time is spent in code. |
| WEB_SENTRY_DSN | (empty) | Sentry DSN for the web frontend (Next.js). Frontend-only. |
| PLUGIN_SENTRY_ENABLED | false | Enable Sentry for the plugin daemon service. |
| PLUGIN_SENTRY_DSN | (empty) | Sentry DSN for the plugin daemon. |

Notion Integration Configuration

Connect Dify to Notion as a knowledge base data source. Get integration credentials at https://www.notion.so/my-integrations.
| Variable | Default | Description |
| --- | --- | --- |
| NOTION_INTEGRATION_TYPE | public | public uses standard OAuth 2.0 (requires an HTTPS redirect URL, needs CLIENT_ID + CLIENT_SECRET). internal uses a direct integration token (works with HTTP). Use internal for local deployments. |
| NOTION_CLIENT_SECRET | (empty) | OAuth client secret. Required for public integration. |
| NOTION_CLIENT_ID | (empty) | OAuth client ID. Required for public integration. |
| NOTION_INTERNAL_SECRET | (empty) | Direct integration token from Notion. Required for internal integration. |
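A minimal `.env` sketch for the internal mode, which avoids the HTTPS redirect requirement on local deployments; the token value is a placeholder:

```shell
# Illustrative only — paste the token from your Notion integration settings
NOTION_INTEGRATION_TYPE=internal
NOTION_INTERNAL_SECRET=your-integration-token
```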

Mail Configuration

Dify sends emails for account invitations, password resets, login codes, and Human Input node notifications. Configure one of the three supported providers. Email links require CONSOLE_WEB_URL to be set—see Common Variables.
| Variable | Default | Description |
| --- | --- | --- |
| MAIL_TYPE | resend | Mail provider: resend, smtp, or sendgrid. |
| MAIL_DEFAULT_SEND_FROM | (empty) | Default “From” address for all outgoing emails. Required. |

| Variable | Default | Description |
| --- | --- | --- |
| RESEND_API_URL | https://api.resend.com | Resend API endpoint. Override for self-hosted Resend or a proxy. |
| RESEND_API_KEY | (empty) | Resend API key. Required when MAIL_TYPE=resend. |
SMTP supports three TLS modes: implicit TLS (SMTP_USE_TLS=true, SMTP_OPPORTUNISTIC_TLS=false, port 465), STARTTLS (SMTP_USE_TLS=true, SMTP_OPPORTUNISTIC_TLS=true, port 587), or plain (SMTP_USE_TLS=false, port 25).
| Variable | Default | Description |
| --- | --- | --- |
| SMTP_SERVER | (empty) | SMTP server address. |
| SMTP_PORT | 465 | SMTP server port. Use 587 for STARTTLS mode. |
| SMTP_USERNAME | (empty) | SMTP username. Can be empty for IP-whitelisted servers. |
| SMTP_PASSWORD | (empty) | SMTP password. Can be empty for IP-whitelisted servers. |
| SMTP_USE_TLS | true | Enable TLS. When true with SMTP_OPPORTUNISTIC_TLS=false, uses implicit TLS (SMTP_SSL). |
| SMTP_OPPORTUNISTIC_TLS | false | Use STARTTLS (explicit TLS) instead of implicit TLS. Must be used with SMTP_USE_TLS=true. |
| SMTP_LOCAL_HOSTNAME | (empty) | Override the hostname sent in SMTP HELO/EHLO. Required in Docker when your SMTP server rejects container hostnames (common with Google Workspace, Microsoft 365). Set to your domain, e.g., mail.yourdomain.com. |
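A minimal `.env` sketch for the STARTTLS mode on port 587; the server, address, and password values are placeholders:

```shell
# Illustrative only — substitute your own SMTP server and credentials
MAIL_TYPE=smtp
MAIL_DEFAULT_SEND_FROM=no-reply@yourdomain.com
SMTP_SERVER=smtp.yourdomain.com
SMTP_PORT=587
SMTP_USERNAME=no-reply@yourdomain.com
SMTP_PASSWORD=your-password
SMTP_USE_TLS=true
SMTP_OPPORTUNISTIC_TLS=true
```

For implicit TLS instead, switch to port 465 and set SMTP_OPPORTUNISTIC_TLS=false.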
| Variable | Default | Description |
| --- | --- | --- |
| SENDGRID_API_KEY | (empty) | SendGrid API key. Required when MAIL_TYPE=sendgrid. |
For more details, see the SendGrid documentation.

Others Configuration

Indexing

| Variable | Default | Description |
| --- | --- | --- |
| INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH | 4000 | Maximum token length per text segment when chunking documents for the knowledge base. Larger values retain more context per chunk; smaller values provide finer granularity. |

Token & Invitation

All token expiry variables control how long a one-time-use token stored in Redis remains valid. After expiry, the user must request a new token.
| Variable | Default | Description |
| --- | --- | --- |
| INVITE_EXPIRY_HOURS | 72 | How long a workspace invitation link stays valid (in hours). |
| RESET_PASSWORD_TOKEN_EXPIRY_MINUTES | 5 | Password reset token validity in minutes. |
| EMAIL_REGISTER_TOKEN_EXPIRY_MINUTES | 5 | Email registration token validity in minutes. |
| CHANGE_EMAIL_TOKEN_EXPIRY_MINUTES | 5 | Change email token validity in minutes. |
| OWNER_TRANSFER_TOKEN_EXPIRY_MINUTES | 5 | Workspace owner transfer token validity in minutes. |

Code Execution Sandbox

The sandbox is a separate service that runs Python, JavaScript, and Jinja2 code nodes in isolation.
| Variable | Default | Description |
| --- | --- | --- |
| CODE_EXECUTION_ENDPOINT | http://sandbox:8194 | Sandbox service endpoint. |
| CODE_EXECUTION_API_KEY | dify-sandbox | API key for sandbox authentication. Must match SANDBOX_API_KEY in the sandbox service. |
| CODE_EXECUTION_SSL_VERIFY | true | Verify SSL for sandbox connections. Disable for development with self-signed certificates. |
| CODE_EXECUTION_CONNECT_TIMEOUT | 10 | Connection timeout in seconds. |
| CODE_EXECUTION_READ_TIMEOUT | 60 | Read timeout in seconds. |
| CODE_EXECUTION_WRITE_TIMEOUT | 10 | Write timeout in seconds. |
| CODE_EXECUTION_POOL_MAX_CONNECTIONS | 100 | Maximum concurrent HTTP connections to the sandbox service. |
| CODE_EXECUTION_POOL_MAX_KEEPALIVE_CONNECTIONS | 20 | Maximum idle connections kept alive in the sandbox connection pool. |
| CODE_EXECUTION_POOL_KEEPALIVE_EXPIRY | 5.0 | Seconds before idle sandbox connections are closed. |
| CODE_MAX_NUMBER | 9223372036854775807 | Maximum numeric value allowed in code node output (max 64-bit signed integer). |
| CODE_MIN_NUMBER | -9223372036854775808 | Minimum numeric value allowed in code node output (min 64-bit signed integer). |
| CODE_MAX_STRING_LENGTH | 400000 | Maximum string length in code node output. Prevents memory exhaustion from unbounded string generation. |
| CODE_MAX_DEPTH | 5 | Maximum nesting depth for output data structures. |
| CODE_MAX_PRECISION | 20 | Maximum decimal places for floating-point numbers in output. |
| CODE_MAX_STRING_ARRAY_LENGTH | 30 | Maximum number of elements in a string array output. |
| CODE_MAX_OBJECT_ARRAY_LENGTH | 30 | Maximum number of elements in an object array output. |
| CODE_MAX_NUMBER_ARRAY_LENGTH | 1000 | Maximum number of elements in a number array output. |
| TEMPLATE_TRANSFORM_MAX_LENGTH | 400000 | Maximum character length for Template Transform node output. |
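If you rotate the default sandbox key, the same value must appear on both the API side and the sandbox side, or code nodes fail with authentication errors. A minimal `.env` sketch (the key value is a placeholder):

```shell
# The two keys are one shared secret — keep them identical
CODE_EXECUTION_API_KEY=my-shared-sandbox-key
SANDBOX_API_KEY=my-shared-sandbox-key
```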

Workflow Runtime

| Variable | Default | Description |
| --- | --- | --- |
| WORKFLOW_MAX_EXECUTION_STEPS | 500 | Maximum number of node executions per workflow run. Exceeding this terminates the workflow. |
| WORKFLOW_MAX_EXECUTION_TIME | 1200 | Maximum wall-clock time in seconds per workflow run. Exceeding this terminates the workflow. |
| WORKFLOW_CALL_MAX_DEPTH | 5 | Maximum depth for nested workflow-calls-workflow. Prevents infinite recursion. |
| MAX_VARIABLE_SIZE | 204800 | Maximum size in bytes (200 KB) for a single workflow variable. |
| WORKFLOW_FILE_UPLOAD_LIMIT | 10 | Maximum number of files that can be uploaded in a single workflow execution. |
| WORKFLOW_NODE_EXECUTION_STORAGE | rdbms | Where workflow node execution records are stored. rdbms stores everything in the database. hybrid stores new data in object storage and reads from both. |
| DSL_EXPORT_ENCRYPT_DATASET_ID | true | Encrypt dataset IDs when exporting DSL files. Set to false to export plain IDs for easier cross-environment import. |

Workflow Storage Repository

These select which backend implementation handles workflow execution data. The default SQLAlchemy repositories store everything in the database. Alternative implementations (e.g., Celery, Logstore) can be used for different storage strategies.
| Variable | Default | Description |
| --- | --- | --- |
| CORE_WORKFLOW_EXECUTION_REPOSITORY | core.repositories.sqlalchemy_workflow_execution_repository.SQLAlchemyWorkflowExecutionRepository | Repository implementation for workflow execution records. |
| CORE_WORKFLOW_NODE_EXECUTION_REPOSITORY | core.repositories.sqlalchemy_workflow_node_execution_repository.SQLAlchemyWorkflowNodeExecutionRepository | Repository implementation for workflow node execution records. |
| API_WORKFLOW_RUN_REPOSITORY | repositories.sqlalchemy_api_workflow_run_repository.DifyAPISQLAlchemyWorkflowRunRepository | Service-layer repository for workflow run API operations. |
| API_WORKFLOW_NODE_EXECUTION_REPOSITORY | repositories.sqlalchemy_api_workflow_node_execution_repository.DifyAPISQLAlchemyWorkflowNodeExecutionRepository | Service-layer repository for workflow node execution API operations. |
| LOOP_NODE_MAX_COUNT | 100 | Maximum iterations for Loop nodes. Prevents infinite loops. |
| MAX_PARALLEL_LIMIT | 10 | Maximum number of parallel branches in a workflow. |

GraphEngine Worker Pool

| Variable | Default | Description |
| --- | --- | --- |
| GRAPH_ENGINE_MIN_WORKERS | 1 | Minimum workers per GraphEngine instance. |
| GRAPH_ENGINE_MAX_WORKERS | 10 | Maximum workers per GraphEngine instance. |
| GRAPH_ENGINE_SCALE_UP_THRESHOLD | 3 | Queue depth that triggers spawning additional workers. |
| GRAPH_ENGINE_SCALE_DOWN_IDLE_TIME | 5.0 | Seconds of idle time before excess workers are removed. |

Workflow Log Cleanup

| Variable | Default | Description |
| --- | --- | --- |
| WORKFLOW_LOG_CLEANUP_ENABLED | false | Enable automatic cleanup of workflow execution logs at 2:00 AM daily. |
| WORKFLOW_LOG_RETENTION_DAYS | 30 | Number of days to retain workflow logs before cleanup. |
| WORKFLOW_LOG_CLEANUP_BATCH_SIZE | 100 | Number of log entries processed per cleanup batch. Adjust based on system performance. |
| WORKFLOW_LOG_CLEANUP_SPECIFIC_WORKFLOW_IDS | (empty) | Comma-separated list of workflow IDs to limit cleanup to. When empty, all workflow logs are cleaned. |

HTTP Request Node

These configure the HTTP Request node used in workflows to call external APIs.
| Variable | Default | Description |
| --- | --- | --- |
| HTTP_REQUEST_NODE_MAX_TEXT_SIZE | 1048576 | Maximum text response size in bytes (1 MB). Responses larger than this are truncated. |
| HTTP_REQUEST_NODE_MAX_BINARY_SIZE | 10485760 | Maximum binary response size in bytes (10 MB). |
| HTTP_REQUEST_NODE_SSL_VERIFY | true | Verify SSL certificates. Disable for testing with self-signed certificates. |
| HTTP_REQUEST_MAX_CONNECT_TIMEOUT | 10 | Maximum connect timeout users can set in the workflow editor (in seconds). Per-node timeouts cannot exceed this. |
| HTTP_REQUEST_MAX_READ_TIMEOUT | 600 | Maximum read timeout ceiling (in seconds). |
| HTTP_REQUEST_MAX_WRITE_TIMEOUT | 600 | Maximum write timeout ceiling (in seconds). |

Webhook

| Variable | Default | Description |
| --- | --- | --- |
| WEBHOOK_REQUEST_BODY_MAX_SIZE | 10485760 | Maximum webhook payload size in bytes (10 MB). Larger payloads are rejected with a 413 error. |

SSRF Protection

All outbound HTTP requests from Dify (HTTP nodes, image downloads, etc.) are routed through a proxy that blocks requests to internal/private IP ranges, preventing Server-Side Request Forgery (SSRF) attacks.
| Variable | Default | Description |
| --- | --- | --- |
| SSRF_PROXY_HTTP_URL | http://ssrf_proxy:3128 | SSRF proxy URL for HTTP requests. |
| SSRF_PROXY_HTTPS_URL | http://ssrf_proxy:3128 | SSRF proxy URL for HTTPS requests. |
| SSRF_POOL_MAX_CONNECTIONS | 100 | Maximum concurrent connections in the SSRF HTTP client pool. |
| SSRF_POOL_MAX_KEEPALIVE_CONNECTIONS | 20 | Maximum idle connections kept alive in the SSRF pool. |
| SSRF_POOL_KEEPALIVE_EXPIRY | 5.0 | Seconds before idle SSRF connections are closed. |
| RESPECT_XFORWARD_HEADERS_ENABLED | false | Trust X-Forwarded-For/Proto/Port headers from reverse proxies. Only enable behind a single trusted reverse proxy—otherwise this allows IP spoofing. |

Agent Configuration

| Variable | Default | Description |
| --- | --- | --- |
| MAX_TOOLS_NUM | 10 | Maximum number of tools an agent can use simultaneously. |
| MAX_ITERATIONS_NUM | 99 | Maximum reasoning iterations per agent execution. Prevents infinite agent loops. |

Web Frontend Service

These variables are used by the Next.js web frontend container only—they do not affect the Python backend.
| Variable | Default | Description |
| --- | --- | --- |
| TEXT_GENERATION_TIMEOUT_MS | 60000 | Frontend timeout for the streaming text generation UI. If a stream stalls for longer than this, the UI pauses rendering. |
| ALLOW_UNSAFE_DATA_SCHEME | false | Allow rendering URLs with the data: scheme. Disabled by default for security. |
| MAX_TREE_DEPTH | 50 | Maximum node tree depth in the workflow editor UI. |

Database Service

These configure the database containers directly in Docker Compose.
| Variable | Default | Description |
| --- | --- | --- |
| PGDATA | /var/lib/postgresql/data/pgdata | PostgreSQL data directory inside the container. |
| MYSQL_HOST_VOLUME | ./volumes/mysql/data | Host path mounted as the MySQL data volume. |

Sandbox Service

The sandbox is an isolated service for executing code nodes (Python, JavaScript, Jinja2). Network access can be disabled for security.
| Variable | Default | Description |
| --- | --- | --- |
| SANDBOX_API_KEY | dify-sandbox | API key for sandbox authentication. Must match CODE_EXECUTION_API_KEY in the API service. |
| SANDBOX_GIN_MODE | release | Sandbox service mode: release or debug. |
| SANDBOX_WORKER_TIMEOUT | 15 | Maximum execution time in seconds for a single code run. |
| SANDBOX_ENABLE_NETWORK | true | Allow code to make outbound HTTP requests. Disable to prevent code nodes from accessing external services. |
| SANDBOX_HTTP_PROXY | http://ssrf_proxy:3128 | HTTP proxy for SSRF protection when network is enabled. |
| SANDBOX_HTTPS_PROXY | http://ssrf_proxy:3128 | HTTPS proxy for SSRF protection. |
| SANDBOX_PORT | 8194 | Sandbox service port. |

Nginx Reverse Proxy

| Variable | Default | Description |
| --- | --- | --- |
| NGINX_SERVER_NAME | _ | Nginx server name. _ matches any hostname. |
| NGINX_HTTPS_ENABLED | false | Enable HTTPS. When true, place your SSL certificate and key in ./nginx/ssl/. |
| NGINX_PORT | 80 | HTTP port. |
| NGINX_SSL_PORT | 443 | HTTPS port (only used when NGINX_HTTPS_ENABLED=true). |
| NGINX_SSL_CERT_FILENAME | dify.crt | SSL certificate filename in ./nginx/ssl/. |
| NGINX_SSL_CERT_KEY_FILENAME | dify.key | SSL private key filename in ./nginx/ssl/. |
| NGINX_SSL_PROTOCOLS | TLSv1.2 TLSv1.3 | Allowed TLS protocol versions. |
| NGINX_WORKER_PROCESSES | auto | Number of Nginx worker processes. auto matches the CPU core count. |
| NGINX_CLIENT_MAX_BODY_SIZE | 100M | Maximum request body size. Affects file upload limits at the proxy level. |
| NGINX_KEEPALIVE_TIMEOUT | 65 | Keepalive timeout in seconds. |
| NGINX_PROXY_READ_TIMEOUT | 3600s | Proxy read timeout. Set high (1 hour) to support long-running SSE streams. |
| NGINX_PROXY_SEND_TIMEOUT | 3600s | Proxy send timeout. |
| NGINX_ENABLE_CERTBOT_CHALLENGE | false | Accept Let’s Encrypt ACME challenge requests at /.well-known/acme-challenge/. Enable for automated certificate renewal. |
After enabling HTTPS, also update the URL variables in Common Variables (e.g., CONSOLE_API_URL, CONSOLE_WEB_URL) to use https://.
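A minimal `.env` sketch for enabling HTTPS, assuming you have placed dify.crt and dify.key in ./nginx/ssl/; the domain is a placeholder:

```shell
# Illustrative only — substitute your own domain
NGINX_HTTPS_ENABLED=true
NGINX_SSL_CERT_FILENAME=dify.crt
NGINX_SSL_CERT_KEY_FILENAME=dify.key
CONSOLE_API_URL=https://dify.yourdomain.com
CONSOLE_WEB_URL=https://dify.yourdomain.com
```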

Certbot Configuration

| Variable | Default | Description |
| --- | --- | --- |
| CERTBOT_EMAIL | (empty) | Email address required by Let’s Encrypt for certificate notifications. |
| CERTBOT_DOMAIN | (empty) | Domain name for the SSL certificate. |
| CERTBOT_OPTIONS | (empty) | Additional certbot CLI options (e.g., --force-renewal, --dry-run). |

SSRF Proxy

These configure the Squid-based SSRF proxy container that blocks requests to internal/private networks.
| Variable | Default | Description |
| --- | --- | --- |
| SSRF_HTTP_PORT | 3128 | Proxy listening port. |
| SSRF_COREDUMP_DIR | /var/spool/squid | Core dump directory. |
| SSRF_REVERSE_PROXY_PORT | 8194 | Reverse proxy port forwarded to the sandbox service. |
| SSRF_SANDBOX_HOST | sandbox | Hostname of the sandbox service. |
| SSRF_DEFAULT_TIME_OUT | 5 | Default overall timeout in seconds for proxied requests. |
| SSRF_DEFAULT_CONNECT_TIME_OUT | 5 | Default connection timeout in seconds. |
| SSRF_DEFAULT_READ_TIME_OUT | 5 | Default read timeout in seconds. |
| SSRF_DEFAULT_WRITE_TIME_OUT | 5 | Default write timeout in seconds. |

Docker Compose

| Variable | Default | Description |
| --- | --- | --- |
| COMPOSE_PROFILES | ${VECTOR_STORE:-weaviate},${DB_TYPE:-postgresql} | Automatically selects which service containers to start based on your database and vector store choices. For example, setting DB_TYPE=mysql starts MySQL instead of PostgreSQL. |
| EXPOSE_NGINX_PORT | 80 | Host port mapped to Nginx HTTP. |
| EXPOSE_NGINX_SSL_PORT | 443 | Host port mapped to Nginx HTTPS. |
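Because COMPOSE_PROFILES interpolates VECTOR_STORE and DB_TYPE, switching the vector store is a one-line change; a sketch, assuming qdrant is among the profile names defined in your docker-compose.yaml:

```shell
# Illustrative only — Compose starts the qdrant container instead of weaviate
VECTOR_STORE=qdrant
```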

ModelProvider & Tool Position Configuration

Customize which tools and model providers are available in the app interface and their display order. Use comma-separated values with no spaces between items.
| Variable | Default | Description |
| --- | --- | --- |
| POSITION_TOOL_PINS | (empty) | Pin specific tools to the top of the list. Example: bing,google. |
| POSITION_TOOL_INCLUDES | (empty) | Only show listed tools. If unset, all tools are available. |
| POSITION_TOOL_EXCLUDES | (empty) | Hide specific tools (pinned tools are not affected). |
| POSITION_PROVIDER_PINS | (empty) | Pin specific model providers to the top. Example: openai,anthropic. |
| POSITION_PROVIDER_INCLUDES | (empty) | Only show listed providers. If unset, all providers are available. |
| POSITION_PROVIDER_EXCLUDES | (empty) | Hide specific providers (pinned providers are not affected). |

Plugin Daemon Configuration

The plugin daemon is a separate service that manages plugin lifecycle (installation, execution, upgrades). The API communicates with it via HTTP.
| Variable | Default | Description |
| --- | --- | --- |
| PLUGIN_DAEMON_URL | http://plugin_daemon:5002 | Plugin daemon service URL. |
| PLUGIN_DAEMON_KEY | (auto-generated) | Authentication key for the plugin daemon. |
| PLUGIN_DAEMON_PORT | 5002 | Plugin daemon listening port. |
| PLUGIN_DAEMON_TIMEOUT | 600.0 | Timeout in seconds for all plugin daemon requests (installation, execution, listing). |
| PLUGIN_MAX_PACKAGE_SIZE | 52428800 | Maximum plugin package size in bytes (50 MB). Validated during marketplace downloads. |
| PLUGIN_MODEL_SCHEMA_CACHE_TTL | 3600 | How long to cache plugin model schemas, in seconds. Reduces repeated lookups. |
| PLUGIN_DIFY_INNER_API_KEY | (auto-generated) | API key the plugin daemon uses to call back to the Dify API. Must match DIFY_INNER_API_KEY in the plugin daemon service config. |
| PLUGIN_DIFY_INNER_API_URL | http://api:5001 | Internal API URL the plugin daemon calls back to. |
| PLUGIN_DEBUGGING_HOST | 0.0.0.0 | Host for plugin remote debugging connections. |
| PLUGIN_DEBUGGING_PORT | 5003 | Port for plugin remote debugging connections. |
| MARKETPLACE_ENABLED | true | Enable the plugin marketplace. When disabled, only locally installed plugins are available—browsing and auto-upgrades are unavailable. |
| MARKETPLACE_API_URL | https://marketplace.dify.ai | Marketplace API endpoint for plugin browsing, downloading, and upgrade checking. |
| FORCE_VERIFYING_SIGNATURE | true | Require valid signatures before installing plugins. Prevents installing tampered or unsigned packages. |
| PLUGIN_MAX_EXECUTION_TIMEOUT | 600 | Plugin execution timeout in seconds (plugin daemon side). Should match PLUGIN_DAEMON_TIMEOUT on the API side. |
| PIP_MIRROR_URL | (empty) | Custom PyPI mirror URL used by the plugin daemon when installing plugin dependencies. Useful for faster installs or air-gapped environments. |

OTLP / OpenTelemetry Configuration

OpenTelemetry provides distributed tracing and metrics collection. When enabled, Dify instruments Flask and exports telemetry data to an OTLP collector.
| Variable | Default | Description |
| --- | --- | --- |
| ENABLE_OTEL | false | Master switch for OpenTelemetry instrumentation. |
| OTLP_TRACE_ENDPOINT | (empty) | Dedicated trace endpoint URL. If unset, falls back to {OTLP_BASE_ENDPOINT}/v1/traces. |
| OTLP_METRIC_ENDPOINT | (empty) | Dedicated metric endpoint URL. If unset, falls back to {OTLP_BASE_ENDPOINT}/v1/metrics. |
| OTLP_BASE_ENDPOINT | http://localhost:4318 | Base OTLP collector URL. Used as a fallback when the specific trace/metric endpoints are not set. |
| OTLP_API_KEY | (empty) | API key for OTLP authentication. Sent as an Authorization: Bearer header. |
| OTEL_EXPORTER_TYPE | otlp | Exporter type. otlp exports to a collector; other values use a console exporter (for debugging). |
| OTEL_EXPORTER_OTLP_PROTOCOL | (empty) | Protocol for OTLP export. grpc uses gRPC exporters; anything else uses HTTP. |
| OTEL_SAMPLING_RATE | 0.1 | Fraction of requests to trace (0.1 = 10%). Lower values reduce overhead in high-traffic production environments. |
| OTEL_BATCH_EXPORT_SCHEDULE_DELAY | 5000 | Delay in milliseconds between batch exports. |
| OTEL_MAX_QUEUE_SIZE | 2048 | Maximum number of spans queued before dropping. |
| OTEL_MAX_EXPORT_BATCH_SIZE | 512 | Maximum spans per export batch. |
| OTEL_METRIC_EXPORT_INTERVAL | 60000 | Metric export interval in milliseconds. |
| OTEL_BATCH_EXPORT_TIMEOUT | 10000 | Batch span export timeout in milliseconds. |
| OTEL_METRIC_EXPORT_TIMEOUT | 30000 | Metric export timeout in milliseconds. |
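A minimal `.env` sketch for sending telemetry to an OTLP collector over HTTP; the collector hostname is a placeholder:

```shell
# Illustrative only — point at your own collector's HTTP endpoint (port 4318)
ENABLE_OTEL=true
OTLP_BASE_ENDPOINT=http://otel-collector:4318
OTEL_SAMPLING_RATE=0.1
```

Traces and metrics then go to the /v1/traces and /v1/metrics paths under the base endpoint unless the dedicated endpoint variables override them.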

Miscellaneous

| Variable | Default | Description |
| --- | --- | --- |
| CSP_WHITELIST | (empty) | Additional domains to allow in Content Security Policy headers. |
| ALLOW_EMBED | false | Allow Dify pages to be embedded in iframes. When false, sets X-Frame-Options: DENY to prevent clickjacking. |
| SWAGGER_UI_ENABLED | false | Expose Swagger UI at SWAGGER_UI_PATH for browsing API documentation. Swagger endpoints bypass authentication. |
| SWAGGER_UI_PATH | /swagger-ui.html | URL path for Swagger UI. |
| MAX_SUBMIT_COUNT | 100 | Maximum concurrent task submissions in the thread pool used for parallel workflow node execution. |
| TENANT_ISOLATED_TASK_CONCURRENCY | 1 | Number of document indexing or RAG pipeline tasks processed simultaneously per tenant. Increase for faster indexing at the cost of more database load. |

Scheduled Tasks Configuration

Dify uses Celery Beat to run background maintenance tasks on configurable schedules.
| Variable | Default | Description |
| --- | --- | --- |
| ENABLE_CLEAN_EMBEDDING_CACHE_TASK | false | Delete expired embedding cache records from the database at 2:00 AM daily. Manages database size. |
| ENABLE_CLEAN_UNUSED_DATASETS_TASK | false | Disable documents in knowledge bases that haven’t had activity within the retention period. Runs at 3:00 AM daily. |
| ENABLE_CLEAN_MESSAGES | false | Delete conversation messages older than the retention period at 4:00 AM daily. |
| ENABLE_MAIL_CLEAN_DOCUMENT_NOTIFY_TASK | false | Email workspace owners a list of knowledge bases that had documents auto-disabled by the cleanup task. Runs every Monday at 10:00 AM. |
| ENABLE_DATASETS_QUEUE_MONITOR | false | Monitor the dataset processing queue backlog in Redis. Sends email alerts when the queue exceeds the threshold. |
| QUEUE_MONITOR_INTERVAL | 30 | How often to check the queue (in minutes). |
| QUEUE_MONITOR_THRESHOLD | 200 | Queue size that triggers an alert email. |
| QUEUE_MONITOR_ALERT_EMAILS | (empty) | Email addresses to receive queue alerts (comma-separated). |
| ENABLE_CHECK_UPGRADABLE_PLUGIN_TASK | true | Check the marketplace for newer plugin versions every 15 minutes. Dispatches upgrade tasks based on each tenant’s auto-upgrade schedule. |
| ENABLE_WORKFLOW_SCHEDULE_POLLER_TASK | true | Enable the workflow schedule poller that checks for and triggers scheduled workflow runs. |
| WORKFLOW_SCHEDULE_POLLER_INTERVAL | 1 | How often to check for due scheduled workflows (in minutes). |
| WORKFLOW_SCHEDULE_POLLER_BATCH_SIZE | 100 | Maximum number of due schedules fetched per poll cycle. |
| WORKFLOW_SCHEDULE_MAX_DISPATCH_PER_TICK | 0 | Circuit breaker: maximum schedules dispatched per tick. 0 means unlimited. |
| ENABLE_WORKFLOW_RUN_CLEANUP_TASK | false | Enable automatic cleanup of workflow run records. |
| ENABLE_CREATE_TIDB_SERVERLESS_TASK | false | Pre-create TiDB Serverless clusters for vector database pooling. |
| ENABLE_UPDATE_TIDB_SERVERLESS_STATUS_TASK | false | Update TiDB Serverless cluster status periodically. |
| ENABLE_HUMAN_INPUT_TIMEOUT_TASK | true | Check for expired Human Input forms and resume or stop timed-out workflows. |
| HUMAN_INPUT_TIMEOUT_TASK_INTERVAL | 1 | How often to check for expired Human Input forms (in minutes). |
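A minimal `.env` sketch for the queue monitor described above; the alert addresses are placeholders:

```shell
# Illustrative only — check the Redis backlog every 30 minutes, alert past 200 entries
ENABLE_DATASETS_QUEUE_MONITOR=true
QUEUE_MONITOR_INTERVAL=30
QUEUE_MONITOR_THRESHOLD=200
QUEUE_MONITOR_ALERT_EMAILS=ops@yourdomain.com,admin@yourdomain.com
```

Alert delivery uses the mail provider configured in Mail Configuration, so MAIL_TYPE must be set up for the emails to go out.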

Record Retention & Cleanup

These control how old records are cleaned up. When BILLING_ENABLED is active, cleanup targets sandbox-tier tenants with a grace period. When billing is disabled (self-hosted), cleanup applies to all records within the retention window.
| Variable | Default | Description |
| --- | --- | --- |
| SANDBOX_EXPIRED_RECORDS_RETENTION_DAYS | 30 | Records older than this many days are eligible for deletion. |
| SANDBOX_EXPIRED_RECORDS_CLEAN_GRACEFUL_PERIOD | 21 | Grace period in days after subscription expiration before records are deleted (billing-enabled only). |
| SANDBOX_EXPIRED_RECORDS_CLEAN_BATCH_SIZE | 1000 | Number of records processed per cleanup batch. |
| SANDBOX_EXPIRED_RECORDS_CLEAN_BATCH_MAX_INTERVAL | 200 | Maximum random delay in milliseconds between cleanup batches to reduce database load. |
| SANDBOX_EXPIRED_RECORDS_CLEAN_TASK_LOCK_TTL | 90000 | Redis lock TTL in seconds (~25 hours) to prevent concurrent cleanup task execution. |
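As a sketch, a self-hosted deployment that wants to keep records for 90 days and clean them in smaller, more spread-out batches might set (all values here are illustrative, not recommendations):

```shell
# .env — example retention/cleanup tuning for a self-hosted instance
ENABLE_CLEAN_MESSAGES=true                          # enable the 4:00 AM message cleanup task
SANDBOX_EXPIRED_RECORDS_RETENTION_DAYS=90           # keep records for 90 days instead of 30
SANDBOX_EXPIRED_RECORDS_CLEAN_BATCH_SIZE=500        # smaller batches, gentler on the database
SANDBOX_EXPIRED_RECORDS_CLEAN_BATCH_MAX_INTERVAL=500  # up to 500 ms random pause between batches
```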

Aliyun SLS Logstore Configuration

Optional integration with Aliyun Simple Log Service for storing workflow execution logs externally instead of in the database. Enable by setting the repository configuration variables to use logstore implementations.
| Variable | Default | Description |
| --- | --- | --- |
| ALIYUN_SLS_ACCESS_KEY_ID | (empty) | Aliyun access key ID for SLS authentication. |
| ALIYUN_SLS_ACCESS_KEY_SECRET | (empty) | Aliyun access key secret for SLS authentication. |
| ALIYUN_SLS_ENDPOINT | (empty) | SLS service endpoint URL (e.g., cn-hangzhou.log.aliyuncs.com). |
| ALIYUN_SLS_REGION | (empty) | Aliyun region (e.g., cn-hangzhou). |
| ALIYUN_SLS_PROJECT_NAME | (empty) | SLS project name for storing workflow logs. |
| ALIYUN_SLS_LOGSTORE_TTL | 365 | Data retention in days for SLS logstores. Use 3650 for permanent storage. |
| LOGSTORE_DUAL_WRITE_ENABLED | false | Write workflow data to both SLS and PostgreSQL simultaneously. Useful during migration to SLS. |
| LOGSTORE_DUAL_READ_ENABLED | true | Fall back to PostgreSQL when SLS returns no results. Useful during migration when historical data exists only in the database. |
| LOGSTORE_ENABLE_PUT_GRAPH_FIELD | true | Include the full workflow graph definition in SLS logs. Set to false to reduce storage by omitting large graph data. |
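During a migration to SLS, a typical pattern is to dual-write new workflow data while keeping reads falling back to PostgreSQL for historical records. A sketch, assuming an existing Aliyun SLS project — every value below is a placeholder:

```shell
# .env — example SLS migration phase: write to both stores, read with fallback
ALIYUN_SLS_ACCESS_KEY_ID=your-access-key-id
ALIYUN_SLS_ACCESS_KEY_SECRET=your-access-key-secret
ALIYUN_SLS_ENDPOINT=cn-hangzhou.log.aliyuncs.com
ALIYUN_SLS_REGION=cn-hangzhou
ALIYUN_SLS_PROJECT_NAME=dify-workflow-logs
LOGSTORE_DUAL_WRITE_ENABLED=true    # new runs land in both SLS and PostgreSQL
LOGSTORE_DUAL_READ_ENABLED=true     # old runs still readable from PostgreSQL
```

Once all traffic you care about exists in SLS, you would turn off dual write and rely on SLS alone.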

Event Bus Configuration

Redis-based event transport between API and Celery workers.
| Variable | Default | Description |
| --- | --- | --- |
| EVENT_BUS_REDIS_URL | (empty) | Redis connection URL for event streaming. When empty, uses the main Redis connection settings. |
| EVENT_BUS_REDIS_CHANNEL_TYPE | pubsub | Transport type: pubsub (Pub/Sub, at-most-once delivery), sharded (sharded Pub/Sub), or streams (Redis Streams, at-least-once delivery). |
| EVENT_BUS_REDIS_USE_CLUSTERS | false | Enable Redis Cluster mode for event bus. Recommended for large deployments. |
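If you need at-least-once delivery and want to isolate event traffic from the main Redis instance, you could point the event bus at a dedicated Redis using the streams transport (hostname and database number below are placeholders):

```shell
# .env — example event bus on a separate Redis with Redis Streams
EVENT_BUS_REDIS_URL=redis://events-redis:6379/1
EVENT_BUS_REDIS_CHANNEL_TYPE=streams   # at-least-once delivery semantics
```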

Vector Database Service Configuration

These configure the vector database containers themselves (not the Dify client connection). Only the variables for your chosen VECTOR_STORE are relevant.
| Variable | Default | Description |
| --- | --- | --- |
| WEAVIATE_PERSISTENCE_DATA_PATH | /var/lib/weaviate | Data persistence directory inside the container. |
| WEAVIATE_QUERY_DEFAULTS_LIMIT | 25 | Default query result limit. |
| WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED | true | Allow anonymous access. |
| WEAVIATE_DEFAULT_VECTORIZER_MODULE | none | Default vectorizer module. |
| WEAVIATE_CLUSTER_HOSTNAME | node1 | Cluster node hostname. |
| WEAVIATE_AUTHENTICATION_APIKEY_ENABLED | true | Enable API key authentication. |
| WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS | (auto-generated) | Allowed API keys. Must match WEAVIATE_API_KEY in the client config. |
| WEAVIATE_AUTHENTICATION_APIKEY_USERS | hello@dify.ai | Users associated with API keys. |
| WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED | true | Enable admin list authorization. |
| WEAVIATE_AUTHORIZATION_ADMINLIST_USERS | hello@dify.ai | Admin users. |
| WEAVIATE_DISABLE_TELEMETRY | false | Disable Weaviate telemetry. |
| WEAVIATE_ENABLE_TOKENIZER_GSE | false | Enable GSE tokenizer (Chinese). |
| WEAVIATE_ENABLE_TOKENIZER_KAGOME_JA | false | Enable Kagome tokenizer (Japanese). |
| WEAVIATE_ENABLE_TOKENIZER_KAGOME_KR | false | Enable Kagome tokenizer (Korean). |
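Because WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS (server side) must match the WEAVIATE_API_KEY used by the Dify client connection, a hardened setup could look like the following sketch — the key value is a placeholder you should replace with your own secret:

```shell
# .env — example: disable anonymous access, use one shared API key
WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=false
WEAVIATE_AUTHENTICATION_APIKEY_ENABLED=true
WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS=replace-with-a-strong-secret

# Dify client connection (covered in the vector DB client section) — same key
WEAVIATE_API_KEY=replace-with-a-strong-secret
```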
| Variable | Default | Description |
| --- | --- | --- |
| ETCD_AUTO_COMPACTION_MODE | revision | ETCD auto compaction mode. |
| ETCD_AUTO_COMPACTION_RETENTION | 1000 | Auto compaction retention in number of revisions. |
| ETCD_QUOTA_BACKEND_BYTES | 4294967296 | Backend quota in bytes (4 GB). |
| ETCD_SNAPSHOT_COUNT | 50000 | Number of changes before triggering a snapshot. |
| ETCD_ENDPOINTS | etcd:2379 | ETCD service endpoints. |
| MINIO_ACCESS_KEY | minioadmin | MinIO access key. |
| MINIO_SECRET_KEY | minioadmin | MinIO secret key. |
| MINIO_ADDRESS | minio:9000 | MinIO service address. |
| MILVUS_AUTHORIZATION_ENABLED | true | Enable Milvus security authorization. |
| Variable | Default | Description |
| --- | --- | --- |
| OPENSEARCH_DISCOVERY_TYPE | single-node | Discovery type for cluster formation. |
| OPENSEARCH_BOOTSTRAP_MEMORY_LOCK | true | Lock memory on startup to prevent swapping. |
| OPENSEARCH_JAVA_OPTS_MIN | 512m | Minimum JVM heap size. |
| OPENSEARCH_JAVA_OPTS_MAX | 1024m | Maximum JVM heap size. |
| OPENSEARCH_INITIAL_ADMIN_PASSWORD | Qazwsxedc!@#123 | Initial admin password for the OpenSearch service. |
| OPENSEARCH_MEMLOCK_SOFT | -1 | Soft memory lock limit (-1 = unlimited). |
| OPENSEARCH_MEMLOCK_HARD | -1 | Hard memory lock limit (-1 = unlimited). |
| OPENSEARCH_NOFILE_SOFT | 65536 | Soft file descriptor limit. |
| OPENSEARCH_NOFILE_HARD | 65536 | Hard file descriptor limit. |
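A common JVM rule of thumb is to pin minimum and maximum heap to the same value, no more than about half the memory available to the container. For a host giving OpenSearch 8 GB, that might look like this sketch (heap size and password are illustrative — always replace the shipped default password):

```shell
# .env — example OpenSearch sizing for ~8 GB of dedicated RAM
OPENSEARCH_JAVA_OPTS_MIN=4096m
OPENSEARCH_JAVA_OPTS_MAX=4096m          # equal min/max avoids heap resizing pauses
OPENSEARCH_BOOTSTRAP_MEMORY_LOCK=true   # keep the heap out of swap
OPENSEARCH_INITIAL_ADMIN_PASSWORD=ChangeMe-Str0ng!   # placeholder — set your own
```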
| Variable | Default | Description |
| --- | --- | --- |
| PGVECTOR_PGUSER | postgres | PostgreSQL user for the PGVector container. |
| PGVECTOR_POSTGRES_PASSWORD | (auto-generated) | PostgreSQL password for the PGVector container. |
| PGVECTOR_POSTGRES_DB | dify | Database name in the PGVector container. |
| PGVECTOR_PGDATA | /var/lib/postgresql/data/pgdata | Data directory inside the container. |
| PGVECTOR_PG_BIGM_VERSION | 1.2-20240606 | Version of the pg_bigm extension. |
| Variable | Default | Description |
| --- | --- | --- |
| ORACLE_PWD | Dify123456 | Oracle database password for the container. |
| ORACLE_CHARACTERSET | AL32UTF8 | Oracle character set. |
| CHROMA_SERVER_AUTHN_CREDENTIALS | (auto-generated) | Authentication credentials for the Chroma server container. |
| CHROMA_SERVER_AUTHN_PROVIDER | chromadb.auth.token_authn.TokenAuthenticationServerProvider | Authentication provider for the Chroma server. |
| CHROMA_IS_PERSISTENT | TRUE | Enable persistent storage for Chroma. |
| KIBANA_PORT | 5601 | Kibana port (Elasticsearch UI). |
| Variable | Default | Description |
| --- | --- | --- |
| IRIS_WEB_SERVER_PORT | 52773 | IRIS web server management port. |
| IRIS_TIMEZONE | UTC | Timezone for the IRIS container. |
| DB_PLUGIN_DATABASE | dify_plugin | Separate database name for plugin data. |

Plugin Daemon Storage Configuration

The plugin daemon can store plugin packages in different storage backends. Configure only the provider matching PLUGIN_STORAGE_TYPE.
| Variable | Default | Description |
| --- | --- | --- |
| PLUGIN_STORAGE_TYPE | local | Plugin storage backend: local, aws_s3, tencent_cos, azure_blob, aliyun_oss, volcengine_tos. |
| PLUGIN_STORAGE_LOCAL_ROOT | /app/storage | Root directory for local plugin storage. |
| PLUGIN_WORKING_PATH | /app/storage/cwd | Working directory for plugin execution. |
| PLUGIN_INSTALLED_PATH | plugin | Subdirectory for installed plugins. |
| PLUGIN_PACKAGE_CACHE_PATH | plugin_packages | Subdirectory for cached plugin packages. |
| PLUGIN_MEDIA_CACHE_PATH | assets | Subdirectory for cached media assets. |
| PLUGIN_STORAGE_OSS_BUCKET | (empty) | Object storage bucket name (shared across S3/COS/OSS/TOS providers). |
| PLUGIN_PPROF_ENABLED | false | Enable Go pprof profiling for the plugin daemon. |
| PLUGIN_PYTHON_ENV_INIT_TIMEOUT | 120 | Timeout in seconds for initializing Python environments for plugins. |
| PLUGIN_STDIO_BUFFER_SIZE | 1024 | Buffer size in bytes for plugin stdio communication. |
| PLUGIN_STDIO_MAX_BUFFER_SIZE | 5242880 | Maximum buffer size in bytes (5 MB) for plugin stdio communication. |
| ENFORCE_LANGGENIUS_PLUGIN_SIGNATURES | true | Enforce signature verification for LangGenius official plugins. |
| ENDPOINT_URL_TEMPLATE | http://localhost/e/{hook_id} | URL template for plugin endpoints. {hook_id} is replaced with the actual hook ID. |
| EXPOSE_PLUGIN_DAEMON_PORT | 5002 | Host port mapped to the plugin daemon. |
| EXPOSE_PLUGIN_DEBUGGING_HOST | localhost | Host for plugin remote debugging. |
| EXPOSE_PLUGIN_DEBUGGING_PORT | 5003 | Host port for plugin remote debugging. |
| Variable | Default | Description |
| --- | --- | --- |
| PLUGIN_S3_USE_AWS | false | Use AWS S3 (vs S3-compatible services). |
| PLUGIN_S3_USE_AWS_MANAGED_IAM | false | Use IAM roles instead of explicit credentials. |
| PLUGIN_S3_ENDPOINT | (empty) | S3 endpoint URL. |
| PLUGIN_S3_USE_PATH_STYLE | false | Use path-style URLs instead of virtual-hosted-style. |
| PLUGIN_AWS_ACCESS_KEY | (empty) | AWS access key. |
| PLUGIN_AWS_SECRET_KEY | (empty) | AWS secret key. |
| PLUGIN_AWS_REGION | (empty) | AWS region. |
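Putting the shared bucket variable together with the S3 provider variables, a deployment storing plugin packages in AWS S3 might look like this sketch (bucket name, region, and credentials are all placeholders):

```shell
# .env — example plugin storage on AWS S3 with explicit credentials
PLUGIN_STORAGE_TYPE=aws_s3
PLUGIN_STORAGE_OSS_BUCKET=my-dify-plugins
PLUGIN_S3_USE_AWS=true
PLUGIN_S3_USE_AWS_MANAGED_IAM=false     # set true (and omit keys) when running on an IAM role
PLUGIN_AWS_ACCESS_KEY=your-access-key
PLUGIN_AWS_SECRET_KEY=your-secret-key
PLUGIN_AWS_REGION=us-east-1
```

For an S3-compatible service such as MinIO, you would instead leave PLUGIN_S3_USE_AWS=false and set PLUGIN_S3_ENDPOINT (and usually PLUGIN_S3_USE_PATH_STYLE=true).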
| Variable | Default | Description |
| --- | --- | --- |
| PLUGIN_AZURE_BLOB_STORAGE_CONTAINER_NAME | (empty) | Azure Blob container name. |
| PLUGIN_AZURE_BLOB_STORAGE_CONNECTION_STRING | (empty) | Azure Blob connection string. |
| Variable | Default | Description |
| --- | --- | --- |
| PLUGIN_TENCENT_COS_SECRET_KEY | (empty) | Tencent COS secret key. |
| PLUGIN_TENCENT_COS_SECRET_ID | (empty) | Tencent COS secret ID. |
| PLUGIN_TENCENT_COS_REGION | (empty) | Tencent COS region. |
| Variable | Default | Description |
| --- | --- | --- |
| PLUGIN_ALIYUN_OSS_REGION | (empty) | Aliyun OSS region. |
| PLUGIN_ALIYUN_OSS_ENDPOINT | (empty) | Aliyun OSS endpoint. |
| PLUGIN_ALIYUN_OSS_ACCESS_KEY_ID | (empty) | Aliyun OSS access key ID. |
| PLUGIN_ALIYUN_OSS_ACCESS_KEY_SECRET | (empty) | Aliyun OSS access key secret. |
| PLUGIN_ALIYUN_OSS_AUTH_VERSION | v4 | Aliyun OSS authentication version. |
| PLUGIN_ALIYUN_OSS_PATH | (empty) | Aliyun OSS path prefix. |
| Variable | Default | Description |
| --- | --- | --- |
| PLUGIN_VOLCENGINE_TOS_ENDPOINT | (empty) | Volcengine TOS endpoint. |
| PLUGIN_VOLCENGINE_TOS_ACCESS_KEY | (empty) | Volcengine TOS access key. |
| PLUGIN_VOLCENGINE_TOS_SECRET_KEY | (empty) | Volcengine TOS secret key. |
| PLUGIN_VOLCENGINE_TOS_REGION | (empty) | Volcengine TOS region. |