Integrate the Predefined Model
Before accessing a predefined model, make sure you have created a model provider. Accessing predefined models is roughly divided into the following steps:
1. Create Module Structures by Model Type
Create corresponding sub-modules under the provider module based on model type (such as llm or text_embedding), so that each model type has its own logical layer for easy maintenance and extension.
2. Write Model Request Code
Create a Python file with the same name as the model type (e.g., llm.py) under the corresponding model type module, and define a class that implements the specific model logic and complies with the system's model interface specifications.
3. Add Predefined Model Configurations
If the provider offers predefined models, create YAML files named after each model (e.g., claude-3.5.yaml). Write the file contents according to the AIModelEntity specification, describing the model's parameters and functionality.
4. Test the Plugin
Write unit tests and integration tests for the newly added provider functionality to ensure all modules work as expected.
The details of each step are described below.
1. Create Module Structures by Model Type
A model provider may offer different model types. For example, OpenAI provides types such as llm and text_embedding. You need to create corresponding sub-modules under the provider module, ensuring each model type has its own logical layer for easy maintenance and extension.
Currently supported model types:
- llm: text generation models
- text_embedding: text embedding models
- rerank: rerank models
- speech2text: speech-to-text models
- tts: text-to-speech models
- moderation: content moderation models
Taking Anthropic as an example: since its model series only contains LLM-type models, you only need to create an /llm folder under the /models path and add YAML files for the different model versions. For the detailed code structure, please refer to the GitHub repository.
If the model provider offers multiple types of large models, for example the OpenAI family includes llm, text_embedding, moderation, speech2text, and tts models, you need to create a folder for each type under the /models path. The structure is as follows:
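(Folder and file names below are illustrative.)

```
models/
├── llm/
│   ├── llm.py
│   └── <model-name>.yaml
├── text_embedding/
│   ├── text_embedding.py
│   └── <model-name>.yaml
├── moderation/
│   └── moderation.py
├── speech2text/
│   └── speech2text.py
└── tts/
    └── tts.py
```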
It is recommended to prepare all model configurations before starting the model code implementation. For the complete YAML rules, please refer to the Model Design Rules. For more code details, please refer to the example GitHub repository.
2. Write the Model Request Code
Next, you need to create an llm.py code file under the /models path. Taking Anthropic as an example, create an Anthropic LLM class in llm.py named AnthropicLargeLanguageModel, inheriting from the __base.large_language_model.LargeLanguageModel base class.
Here's example code for some functionality:
LLM Request
The core method for invoking the LLM, supporting both streaming and synchronous returns.
In the implementation, take care to use two separate functions to handle the synchronous and streaming returns. In Python, a function containing the yield keyword is treated as a generator function whose return type is fixed to Generator, so the synchronous and streaming returns must be implemented independently to keep the logic clear and to accommodate the different return requirements.
Here's the sample code (the parameters are simplified in the example, so please follow the full parameter list in the actual implementation):
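A minimal sketch of what this could look like (the import paths follow the __base layout referenced above, and the _handle_stream_response / _handle_sync_response helper names are assumptions for illustration; adapt them to your plugin scaffold):

```python
from collections.abc import Generator
from typing import Union

# Assumption: import paths follow the __base layout referenced above;
# the exact module paths in your plugin scaffold may differ.
from core.model_runtime.entities.llm_entities import LLMResult
from core.model_runtime.entities.message_entities import PromptMessage
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel


class AnthropicLargeLanguageModel(LargeLanguageModel):
    def _invoke(
        self,
        model: str,
        credentials: dict,
        prompt_messages: list[PromptMessage],
        model_parameters: dict,
        stream: bool = True,
        # ...the real method accepts further parameters (tools, stop, user);
        # follow the full parameter list of the base class.
    ) -> Union[LLMResult, Generator]:
        """Invoke the model and dispatch to separate sync / streaming helpers."""
        if stream:
            return self._handle_stream_response(model, credentials, prompt_messages, model_parameters)
        return self._handle_sync_response(model, credentials, prompt_messages, model_parameters)

    def _handle_stream_response(self, model, credentials, prompt_messages, model_parameters) -> Generator:
        # Hypothetical helper: call the provider's streaming API here and yield
        # result chunks as they arrive.
        raise NotImplementedError

    def _handle_sync_response(self, model, credentials, prompt_messages, model_parameters) -> LLMResult:
        # Hypothetical helper: call the provider's API once and return a full LLMResult.
        raise NotImplementedError
```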
Pre-calculating the Number of Input Tokens
If the model does not provide an interface for pre-calculating tokens, you can simply return 0 to indicate that the feature is not applicable or not implemented. Example:
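A sketch of such a method on the class above (the signature mirrors the base class used by the Dify model runtime; verify it against the base class in your scaffold):

```python
    # PromptMessage, PromptMessageTool and Optional are imported from the same
    # modules as in the sketch above (assumption).
    def get_num_tokens(
        self,
        model: str,
        credentials: dict,
        prompt_messages: list[PromptMessage],
        tools: Optional[list[PromptMessageTool]] = None,
    ) -> int:
        # No token pre-calculation interface is available in this sketch,
        # so return 0 to indicate the feature is not applicable.
        return 0
```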
Request Exception Error Mapping Table
When a model call encounters an exception, it needs to be mapped to the InvokeError type specified by the Runtime, which allows Dify to handle different errors differently (see the sketch after the list below).
Runtime Errors:
- InvokeConnectionError: connection error during invocation
- InvokeServerUnavailableError: service provider unavailable
- InvokeRateLimitError: rate limit reached
- InvokeAuthorizationError: authorization failure during invocation
- InvokeBadRequestError: invalid parameters in the invocation request
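A sketch of such a mapping, continuing the class sketched above (the error import path and the Anthropic SDK exception names are assumptions; check the Runtime error module in your scaffold and the vendor SDK you actually use):

```python
import anthropic

# Assumption: the InvokeError types live in the Runtime's error module;
# the exact import path may differ in your plugin scaffold.
from core.model_runtime.errors.invoke import (
    InvokeAuthorizationError,
    InvokeBadRequestError,
    InvokeConnectionError,
    InvokeError,
    InvokeRateLimitError,
    InvokeServerUnavailableError,
)


class AnthropicLargeLanguageModel(LargeLanguageModel):
    # ...continuing the class sketched above (its imports are assumed here)...

    @property
    def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
        # Map exceptions raised by the vendor SDK to the Runtime's InvokeError types.
        return {
            InvokeConnectionError: [anthropic.APIConnectionError],
            InvokeServerUnavailableError: [anthropic.InternalServerError],
            InvokeRateLimitError: [anthropic.RateLimitError],
            InvokeAuthorizationError: [anthropic.AuthenticationError, anthropic.PermissionDeniedError],
            InvokeBadRequestError: [anthropic.BadRequestError, anthropic.NotFoundError],
        }
```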
See the GitHub code repository for the full code details.
3. Add Predefined Model Configurations
If the provider offers predefined models, create a YAML file for each model, named after the model (e.g., claude-3.5.yaml). Write the contents of the file according to the AIModelEntity specification, describing the model's parameters and functionality.
Example configuration for the claude-3-5-sonnet-20240620 model:
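A minimal sketch of such a file (the features, parameter rules, limits, and pricing values are illustrative; consult the Model Design Rules and the provider's documentation for the actual values):

```yaml
model: claude-3-5-sonnet-20240620
label:
  en_US: claude-3-5-sonnet-20240620
model_type: llm
features:
  - agent-thought
  - vision
model_properties:
  mode: chat
  context_size: 200000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 4096
    min: 1
    max: 4096
pricing:
  input: '3.00'
  output: '15.00'
  unit: '0.000001'
  currency: USD
```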
4. Debugging Plugins
Dify provides a remote debugging method. Go to the "Plugin Management" page to obtain the debugging key and the remote server address. For more details, see Debug Plugin.
Publishing Plugins
You can now publish your plugin by uploading it to the Dify Plugins code repository! Before uploading, make sure your plugin follows the plugin release guide. Once approved, the code will be merged into the master branch and automatically go live in the Dify Marketplace.
Exploring More
Quick Start:
Plugin Specification Definition Documentation: