http://your_server_ip
Note: The "DeepSeek" option in Model Providers refers to the online API service, whereas the Ollama option is used for a locally deployed DeepSeek model.

Configure the model:
• Model Name: Enter the deployed model name, e.g., deepseek-r1:7b.
• Base URL: Set the Ollama client's local service URL, typically http://your_server_ip:11434. If you encounter connection issues, please refer to the FAQ.
• Other settings: Keep the default values. According to the DeepSeek model specifications, the maximum token length is 32,768.
Select the deepseek-r1:7b model under Ollama in the Model Provider section.

Chatflow / Workflow applications enable the creation of more complex AI solutions, such as document recognition, image processing, and speech recognition. For more details, please check the Workflow Documentation.
Select the deepseek-r1:7b model under Ollama, and insert the {{#sys.query#}} variable into the system prompt to connect it to the initial node. If you encounter any API issues, you can handle them via Load Balancing or the Error Handling node.
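For illustration, a minimal system prompt in the LLM node might reference the variable as follows; the wording around it is an assumption, only the {{#sys.query#}} syntax comes from Dify:

```text
You are a helpful assistant. Answer the user's question:
{{#sys.query#}}
```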
If a Docker-deployed Dify cannot reach Ollama, you usually need to expose the Ollama service to the network. If Ollama runs as a macOS application, environment variables should be set using launchctl: for each environment variable, call launchctl setenv, then restart the Ollama application.
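For example, to make Ollama listen on all interfaces (0.0.0.0 is an illustrative value that also makes the service reachable from Docker containers):

```bash
# Set the bind address for the Ollama macOS app, then restart Ollama
launchctl setenv OLLAMA_HOST "0.0.0.0"
```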
If the problem persists, it lies in Docker networking itself: inside a container, localhost refers to the container, while host.docker.internal resolves to the Docker host. Therefore, replacing localhost with host.docker.internal in the service URL will make it work effectively.
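In that case, the Base URL entered in Dify would look like this (assuming Ollama listens on its default port 11434 on the host):

```text
http://host.docker.internal:11434
```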
If Ollama runs as a systemd service on Linux, environment variables should be set using systemctl:
• Edit the systemd service by calling systemctl edit ollama.service. This will open an editor.
• For each environment variable, add an Environment line under the [Service] section:
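For example, a drop-in override that exposes Ollama to the network could look like this (the 0.0.0.0 value is illustrative):

```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```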
• Save and exit, then reload systemd and restart Ollama:
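The corresponding commands are the standard systemd ones:

```bash
# Pick up the edited unit file and restart the Ollama service
sudo systemctl daemon-reload
sudo systemctl restart ollama
```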
On Windows, Ollama inherits your user and system environment variables: quit Ollama from the taskbar, edit or create variables for your user account such as OLLAMA_HOST, OLLAMA_MODELS, etc., apply the changes, and then run ollama from a new terminal window.

By default, Ollama binds to 127.0.0.1 on port 11434; change the bind address with the OLLAMA_HOST environment variable.
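As a sketch, when starting the server manually from a shell (rather than as a managed service), the variable can be set inline:

```bash
# Bind Ollama to all network interfaces instead of the default 127.0.0.1
OLLAMA_HOST=0.0.0.0 ollama serve
```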