If you deployed using Docker Compose, you can reset the password with the following command:
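A minimal sketch, assuming the default Compose project name so the API container is called `docker-api-1` (adjust the container name to your setup):

```bash
docker exec -it docker-api-1 flask reset-password
```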
Enter the account email and the new password twice.
This error might be caused by changing the deployment method or by deleting the `api/storage/privkeys` directory. The keys in this directory are used to encrypt the model provider credentials, so their loss is irreversible. You can reset the encryption key pair with the following commands:
Docker Compose deployment
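A sketch assuming the default Compose project name, so the API container is `docker-api-1`:

```bash
docker exec -it docker-api-1 flask reset-encrypt-key-pair
```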
Source code startup
Navigate to the `api` directory and run the command shown below.
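A sketch, assuming the API's Python environment is active in the `api` directory:

```bash
flask reset-encrypt-key-pair
```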
Follow the prompts to reset.
This might be due to switching the domain/URL, causing cross-origin issues between the frontend and backend. Cross-origin and authentication issues involve the following configurations:

- `CONSOLE_CORS_ALLOW_ORIGINS`: Console CORS policy; the default is `*`, meaning all domains can access.
- `WEB_API_CORS_ALLOW_ORIGINS`: WebApp CORS policy; the default is `*`, meaning all domains can access.
This might be due to switching the domain/URL, causing cross-origin issues between the frontend and backend. Update the following configuration items in `docker-compose.yml` to the new domain:

- `CONSOLE_API_URL`: Backend URL of the console API.
- `CONSOLE_WEB_URL`: Frontend URL of the console web interface.
- `SERVICE_API_URL`: URL of the service API.
- `APP_API_URL`: Backend URL of the WebApp API.
- `APP_WEB_URL`: URL of the WebApp.

For more information, please refer to: Environment Variables
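For example, a minimal sketch of the relevant `environment` entries (the domain `dify.example.com` is a placeholder, and in the stock Compose file these variables typically appear on the `web` service as well):

```yaml
services:
  api:
    environment:
      CONSOLE_API_URL: https://dify.example.com
      CONSOLE_WEB_URL: https://dify.example.com
      SERVICE_API_URL: https://dify.example.com
      APP_API_URL: https://dify.example.com
      APP_WEB_URL: https://dify.example.com
```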
If you started with an image, pull the latest image to complete the upgrade. If you started with source code, pull the latest code and then start it to complete the upgrade.
For source code deployment updates, navigate to the `api` directory and run the following command to migrate the database structure to the latest version:

```bash
flask db upgrade
```
Notion integration configuration. When performing a private deployment, set the following configurations:

- `NOTION_INTEGRATION_TYPE`: set this value to `public` or `internal`. Since Notion's OAuth redirect address only supports HTTPS, use Notion's internal integration for local deployments.
- `NOTION_CLIENT_SECRET`: Notion OAuth client secret (for the public integration type).
- `NOTION_CLIENT_ID`: OAuth client ID (for the public integration type).
- `NOTION_INTERNAL_SECRET`: Notion internal integration secret. Configure this variable if `NOTION_INTEGRATION_TYPE` is `internal`.

Modify it in the `tenants` table of the database.
Find the `APP_WEB_URL` configuration domain in `docker-compose.yaml`.
Back up the database, configured storage, and vector database data. If deployed using Docker Compose, directly back up all data in the `dify/docker/volumes` directory.
`127.0.0.1` is the internal address of the container. The server address configured in Dify needs to be the host machine's local network IP address.
Refer to the official website Environment Variables Documentation for configuration.
In the local deployment version, invite members via email. After entering the email and sending the invitation, the page will display an invitation link. Copy the invitation link and forward it to the user. The user can open the link, log in via email, set a password, and log in to your space.
Refer to the official website Environment Variables Documentation for configuration, and the related Issue.
If port 80 is occupied, stop the service occupying it, or modify the port mapping in `docker-compose.yaml` to map port 80 to another port. Typically, Apache or Nginx occupies this port, which can be resolved by stopping those services.
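For example, a sketch mapping host port 8080 to the proxy container (the service name `nginx` is assumed from the default Compose file):

```yaml
services:
  nginx:
    ports:
      - "8080:80"
```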
Since OpenAI TTS implements audio stream segmentation, `ffmpeg` must be installed for source code deployments to work properly. Detailed steps:

Windows: download an FFmpeg build from the official site and add its `bin` directory to your system `PATH`.

Ubuntu: run `sudo apt-get update`, then `sudo apt-get install ffmpeg`.

CentOS:

```bash
sudo yum install epel-release
sudo rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
sudo yum update
sudo yum install ffmpeg ffmpeg-devel
```

macOS:

```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install ffmpeg
```
Download the complete project, navigate to the `docker` directory, and execute `docker-compose up -d`.
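A minimal sketch, assuming you clone from the official GitHub repository:

```bash
git clone https://github.com/langgenius/dify.git
cd dify/docker
docker-compose up -d
```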
To migrate from Weaviate to another vector database, follow these steps:

- For local source code deployment: in the `.env` file, set `VECTOR_STORE=qdrant` to migrate to Qdrant.
- For Docker Compose deployment: set the corresponding `VECTOR_STORE` value in `docker-compose.yaml`.
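After updating the configuration, run the vector database migration command. A sketch for the default Docker Compose setup (container name may differ; for source code deployments, run the `flask` command directly in the `api` directory):

```bash
docker exec -it docker-api-1 flask vdb-migrate
```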
Tested target database:
In the community edition's `docker-compose.yaml`, you might notice some services configured with `SSRF_PROXY` and `HTTP_PROXY` environment variables, all pointing to an `ssrf_proxy` container. This is to prevent SSRF attacks. For more information on SSRF attacks, you can read this article.
To avoid unnecessary risks, we configure a proxy for all services that could be exploited for SSRF attacks and force services like Sandbox to access external networks only through the proxy, ensuring the security of your data and services. By default, this proxy does not intercept any local requests, but you can customize its behavior by modifying the `squid` configuration file.
In `docker/volumes/ssrf_proxy/squid.conf`, you can find the `squid` configuration file. You can customize the proxy behavior here, for example by adding ACL rules to define address sets and `http_access` rules to restrict access. Suppose your local network can reach the `192.168.101.0/24` segment, but `192.168.101.19` holds sensitive data that you don't want users of your local Dify deployment to access, while other IPs remain reachable. You can add the following rules to `squid.conf`:
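A minimal sketch of such rules (the ACL name is arbitrary; place these above squid's final catch-all `http_access` rules):

```
acl sensitive_host dst 192.168.101.19
http_access deny sensitive_host
```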
This is just a simple example. You can customize the proxy behavior according to your needs. If your business is more complex, such as needing to configure an upstream proxy or cache, you can refer to the squid configuration documentation for more information.
Currently, setting an application you created as a template is not supported. The existing templates are provided officially by Dify for cloud version users to reference. If you are using the cloud version, you can add applications to your workspace or customize them to create your own applications. If you are using the community version and need to create more application templates for your team, you can contact our business team for paid technical support: business@dify.ai
This is because Nginx is forwarding the service to the wrong location. First, ensure the container is running, then run the following command with root privileges:
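One way to list each container's name and internal IP (an illustrative command, not necessarily the exact one from the original guide):

```bash
docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' $(docker ps -q)
```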
Find these two lines in the output:
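For example (illustrative output only; your container names and addresses will differ):

```
/docker-api-1 - 172.19.0.7
/docker-web-1 - 172.19.0.5
```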
Remember the IP addresses. Then open the directory where you store the Dify source code, open `dify/docker/nginx/conf.d`, replace `http://api:5001` with `http://172.19.0.7:5001` and `http://web:3000` with `http://172.19.0.5:3000`, then restart the Nginx container or reload the configuration.
These IP addresses are examples; you must execute the command to get your own IP addresses rather than copying these values. You might need to reconfigure the IP addresses when the relevant containers are restarted.
Find the `CSP_WHITELIST` parameter in the `.env` configuration file and enter the domains you want to allow, such as all URLs and API request addresses related to product use.
This behavior helps reduce potential XSS attacks. For more information on CSP recommendations, see Content Security Policy.
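A hypothetical example (the domains are placeholders, and the exact value format is an assumption; follow your `.env.example`):

```
CSP_WHITELIST=https://dify.example.com,https://api.example.com
```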
The API service port is consistent with the one used by the Dify platform. You can reassign the running port by modifying the `nginx` configuration in the `docker-compose.yaml` file.
To migrate files from local storage to cloud storage (e.g., Alibaba Cloud OSS), you'll need to transfer data from the `upload_files` and `privkeys` folders. Follow these steps:

1. Configure storage settings.
   - For local source code deployment: in the `.env` file, set `STORAGE_TYPE=aliyun-oss`.
   - For Docker Compose deployment: in `docker-compose.yaml`, set `STORAGE_TYPE: aliyun-oss`.
2. Execute the migration commands for your deployment (local source code or Docker Compose); an illustrative sketch follows this list.
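As one way to transfer the data, a sketch using Alibaba Cloud's `ossutil` CLI (an assumption, not necessarily Dify's own migration tooling; the bucket name is a placeholder and paths assume the default local storage layout):

```bash
ossutil cp -r api/storage/upload_files oss://<your-bucket>/upload_files
ossutil cp -r api/storage/privkeys oss://<your-bucket>/privkeys
```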
Dify does not automatically delete old logs in the database or unused files in storage. Instead, several commands are provided so that instance administrators can manually delete old logs and unused files.
Deleting old logs
You can delete old logs by specifying the number of days using the `clear-free-plan-tenant-expired-logs` command. For example, to delete logs older than 30 days, run the following command:
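A sketch for the default Docker Compose setup (the container name and tenant ID are placeholders; for source code deployments, run the `flask` command directly in the `api` directory):

```bash
docker exec -it docker-api-1 flask clear-free-plan-tenant-expired-logs --tenant_ids <your-tenant-id> --days 30
```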
1. Gather the tenant ID, for example `618b5d66-a1f5-4b6b-8d12-f171182a1cb2`. One way to list tenant IDs is sketched below.
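A minimal sketch querying the `tenants` table (assumes the default Compose database service and credentials; adjust to your setup):

```bash
docker exec -it docker-db-1 psql -U postgres -d dify -c "SELECT id, name FROM tenants;"
```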
2. Delete old logs by specifying the tenant ID and number of days. Use the `--tenant_ids` option to specify the tenant ID; logs older than the number of days given with the `--days` option will be deleted.
3. (Optional) Remove the exported `free_plan_tenant_expired_logs` directory.
The `flask clear-free-plan-tenant-expired-logs` command first exports any logs marked for deletion to the `free_plan_tenant_expired_logs` directory before actually deleting them. If you want to free up storage space, it's a good idea to delete this directory afterward. Note that the location of the `free_plan_tenant_expired_logs` directory may vary depending on your storage type; the sketch below assumes an environment with the default settings. If you'd like to reduce storage usage even further, you might consider reclaiming storage for the database (e.g., `VACUUM` for PostgreSQL) as well.
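A removal sketch for the default local storage layout (the path is an assumption and differs for other storage types):

```bash
rm -rf api/storage/free_plan_tenant_expired_logs
```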
Deleting unused files
You can delete unused files using the `clear-orphaned-file-records` command and the `remove-orphaned-files-on-storage` command.
Since not all patterns have been fully tested, please note that these commands may delete unintended file records or files. Make sure to back up your database and storage before proceeding. It is also recommended to run them during a maintenance window, as they may cause high load on your instance.
In the current implementation, deleting unused files is only supported when the storage type is OpenDAL (i.e., when the environment variable `STORAGE_TYPE` is set to `opendal`).
If you are using a storage type other than OpenDAL, you will need to delete unused files manually, or help implement the `scan` method for the storage interface.
If you want to skip the confirmation prompt, you can use the `--force` (`-f`) option with both commands.
Delete unused file records from the database
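A sketch for the default Docker Compose setup (container name may differ; for source code deployments, run the `flask` command directly in the `api` directory):

```bash
docker exec -it docker-api-1 flask clear-orphaned-file-records
```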
Delete files from storage that do not exist in the database
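And, under the same assumptions, to remove the orphaned files themselves:

```bash
docker exec -it docker-api-1 flask remove-orphaned-files-on-storage
```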