This page is being phased out as part of our documentation reorganization. Please refer to the updated version for the most current information. If you notice any discrepancies or areas needing improvement in the new documentation, please use the "Report an issue" button at the bottom of the page.
Create sub-modules for each model type the provider offers, such as `llm` or `text_embedding`, under the provider module. Ensure each model type has its own logical layer for easy maintenance and extension.

Then prepare YAML files named after each model (e.g., `claude-3.5.yaml`) in the corresponding model type module. Write the file content according to the AIModelEntity specifications, describing model parameters and functionality.
A model provider may offer different model types, such as `llm` or `text_embedding`. You need to create corresponding sub-modules under the provider module, ensuring each model type has its own logical layer for easy maintenance and extension.
Currently supported model types:

- `llm`: text generation models
- `text_embedding`: text embedding models
- `rerank`: rerank models
- `speech2text`: speech-to-text models
- `tts`: text-to-speech models
- `moderation`: content moderation models

Taking `Anthropic` as an example: since its model series only contains LLM-type models, you only need to create an `/llm` folder under the `/models` path and add YAML files for the different model versions. For the detailed code structure, please refer to the GitHub repository.
Next, create an `llm.py` code file under the `/models` path. Taking `Anthropic` as an example, create an Anthropic LLM class in `llm.py` named `AnthropicLargeLanguageModel`, inheriting from the `__base.large_language_model.LargeLanguageModel` base class.
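As a minimal sketch of that class shape — the base class below is a local stub standing in for `__base.large_language_model.LargeLanguageModel`, and the method signature is simplified relative to the real interface:

```python
from abc import ABC, abstractmethod

# Local stub standing in for Dify's __base.large_language_model.LargeLanguageModel;
# the real base class also defines hooks for token counting, credential
# validation, and invoke-error mapping.
class LargeLanguageModel(ABC):
    @abstractmethod
    def _invoke(self, model: str, credentials: dict, prompt_messages: list,
                model_parameters: dict, stream: bool = True):
        """Invoke the model and return (or stream) its response."""


class AnthropicLargeLanguageModel(LargeLanguageModel):
    """Anthropic LLM implementation; overrides _invoke with provider logic."""

    def _invoke(self, model: str, credentials: dict, prompt_messages: list,
                model_parameters: dict, stream: bool = True):
        # A real implementation would call the Anthropic SDK here with the
        # given credentials and parameters; this stub returns a placeholder.
        return {"model": model, "message": "stubbed response"}
```

The key point is that all provider-specific logic lives behind the `_invoke` override, so Dify's Runtime can treat every provider's LLM class uniformly.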
Here’s example code for some functionality:
Invoke error mapping: when a model invocation fails, the error needs to be mapped to the `InvokeError` type specified by Runtime, allowing Dify to handle different errors differently.
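A minimal, self-contained sketch of this mapping pattern — the `InvokeError` subclasses mirror the Runtime error types, while the provider-side exception names (`APIConnectionError`, `RateLimitError`, `AuthenticationError`) are stand-ins for a real SDK's exceptions:

```python
# Stubs mirroring the Runtime InvokeError hierarchy (the real classes live in Dify).
class InvokeError(Exception): ...
class InvokeConnectionError(InvokeError): ...
class InvokeServerUnavailableError(InvokeError): ...
class InvokeRateLimitError(InvokeError): ...
class InvokeAuthorizationError(InvokeError): ...
class InvokeBadRequestError(InvokeError): ...

# Stand-ins for the provider SDK's exception types.
class APIConnectionError(Exception): ...
class RateLimitError(Exception): ...
class AuthenticationError(Exception): ...

# Map each Runtime error type to the SDK exceptions it should absorb.
INVOKE_ERROR_MAPPING: dict[type, list[type]] = {
    InvokeConnectionError: [APIConnectionError],
    InvokeRateLimitError: [RateLimitError],
    InvokeAuthorizationError: [AuthenticationError],
}

def map_invoke_error(error: Exception) -> InvokeError:
    """Wrap a provider SDK error in the matching Runtime InvokeError."""
    for invoke_error_cls, sdk_error_classes in INVOKE_ERROR_MAPPING.items():
        if isinstance(error, tuple(sdk_error_classes)):
            return invoke_error_cls(str(error))
    return InvokeError(str(error))
```

With a mapping like this, Dify can catch the single `InvokeError` base type and still branch on the concrete subclass, for example retrying only on `InvokeRateLimitError`.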
Runtime Errors:

- `InvokeConnectionError`: connection error during invocation
- `InvokeServerUnavailableError`: service provider unavailable
- `InvokeRateLimitError`: rate limit reached
- `InvokeAuthorizationError`: authorization failure during invocation
- `InvokeBadRequestError`: invalid parameters in the invocation request

`claude-3-5-sonnet-20240620` model example code:
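The referenced example is not reproduced here; as an illustration, a model YAML written to the AIModelEntity specifications might look roughly like this (the field names follow Dify's published examples, but the specific values below are illustrative assumptions):

```yaml
model: claude-3-5-sonnet-20240620
label:
  en_US: claude-3-5-sonnet-20240620
model_type: llm
model_properties:
  mode: chat
  context_size: 200000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens
    use_template: max_tokens
    default: 4096
    min: 1
    max: 4096
pricing:
  input: '3.00'
  output: '15.00'
  unit: '0.000001'
  currency: USD
```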