After completing the supplier integration, the next step is to integrate the models under the supplier.
First, we need to determine the type of model to be integrated and create the corresponding model type module in the directory of the respective supplier.
The currently supported model types are as follows:
- `llm`: Text Generation Model
- `text_embedding`: Text Embedding Model
- `rerank`: Rerank Model
- `speech2text`: Speech to Text
- `tts`: Text to Speech
- `moderation`: Moderation
Taking Anthropic as an example: since Anthropic only supports LLMs, we create a module named llm under model_providers.anthropic.
For a predefined model, we first need to create a YAML file named after the model under the llm module, for example: claude-2.1.yaml.
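Assuming the layout used by the existing suppliers under model_providers, the resulting directory looks roughly like this (auxiliary files such as __init__.py are omitted):

```
model_providers/anthropic/
├── anthropic.py          # supplier implementation
├── anthropic.yaml        # supplier configuration
└── llm/
    ├── claude-2.1.yaml   # model configuration (this step)
    └── llm.py            # model invocation code (next step)
```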
Preparing the Model YAML
```yaml
model: claude-2.1  # Model identifier
# Model display name, can be set in en_US English and zh_Hans Chinese.
# If zh_Hans is not set, it will default to en_US.
# You can also omit the label, in which case the model identifier will be used.
label:
  en_US: claude-2.1
model_type: llm  # Model type, claude-2.1 is an LLM
features:  # Supported features: agent-thought supports Agent reasoning, vision supports image understanding
- agent-thought
model_properties:  # Model properties
  mode: chat  # LLM mode, complete for text completion models, chat for dialogue models
  context_size: 200000  # Maximum supported context size
parameter_rules:  # Model invocation parameter rules, only LLMs need to provide these
- name: temperature  # Invocation parameter variable name
  # There are 5 preset variable content configuration templates:
  # temperature/top_p/max_tokens/presence_penalty/frequency_penalty.
  # The template variable name can be set directly in use_template, which uses
  # the default configuration in entities.defaults.PARAMETER_RULE_TEMPLATE.
  # Any additional configuration parameters will override the default configuration.
  use_template: temperature
- name: top_p
  use_template: top_p
- name: top_k
  label:  # Invocation parameter display name
    zh_Hans: 取样数量
    en_US: Top k
  type: int  # Parameter type, supports float/int/string/boolean
  help:  # Help information describing the parameter's function
    zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
    en_US: Only sample from the top K options for each subsequent token.
  required: false  # Whether it is required; can be omitted
- name: max_tokens_to_sample
  use_template: max_tokens
  default: 4096  # Default parameter value
  min: 1  # Minimum parameter value, only applicable to float/int
  max: 4096  # Maximum parameter value, only applicable to float/int
pricing:  # Pricing information
  input: '8.00'  # Input unit price, i.e., prompt unit price
  output: '24.00'  # Output unit price, i.e., unit price of returned content
  unit: '0.000001'  # Price unit: the above prices are per 1M tokens
  currency: USD  # Price currency
```
It is recommended to prepare all model configurations before starting the implementation of the model code.
Similarly, you can refer to the YAML configuration information in the directories of other suppliers under the model_providers directory. The complete YAML rules can be found in: .
Implementing Model Invocation Code
Next, create a Python file of the same name, llm.py, under the llm module to write the implementation code.
Create an Anthropic LLM class in llm.py, which we will name AnthropicLargeLanguageModel (the name is arbitrary), inheriting from the __base.large_language_model.LargeLanguageModel base class, and implement the following methods:
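A skeleton for the class might look like this; the full import path is an assumption based on the module layout described above, so adjust it to your checkout:

```python
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel


class AnthropicLargeLanguageModel(LargeLanguageModel):
    # The methods described in the sections below are implemented here.
    ...
```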
LLM Invocation
Implement the core method for LLM invocation, supporting both streaming and synchronous responses.
```python
def _invoke(self, model: str, credentials: dict,
            prompt_messages: list[PromptMessage], model_parameters: dict,
            tools: Optional[list[PromptMessageTool]] = None, stop: Optional[List[str]] = None,
            stream: bool = True, user: Optional[str] = None) \
        -> Union[LLMResult, Generator]:
    """
    Invoke large language model

    :param model: model name
    :param credentials: model credentials
    :param prompt_messages: prompt messages
    :param model_parameters: model parameters
    :param tools: tools for tool calling
    :param stop: stop words
    :param stream: is stream response
    :param user: unique user id
    :return: full response or stream response chunk generator result
    """
```
When implementing, note that two separate functions are needed to return the data: one handling the synchronous response and one handling the streaming response. Because Python treats any function containing the yield keyword as a generator function whose return type is fixed to Generator, the synchronous and streaming responses must be implemented separately, like this (note that the example below uses simplified parameters; an actual implementation should follow the parameter list above):
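A minimal sketch of the split; here `response` is a placeholder for whatever the provider SDK returns:

```python
def _invoke(self, stream: bool, **kwargs) -> Union[LLMResult, Generator]:
    # Dispatch to one of two dedicated handlers so that the synchronous
    # path never contains `yield`.
    if stream:
        return self._handle_stream_response(**kwargs)
    return self._handle_sync_response(**kwargs)

def _handle_stream_response(self, **kwargs) -> Generator:
    # `yield` makes this a generator function: calling it returns a Generator.
    for chunk in response:  # `response` stands in for the provider SDK's streaming result
        yield chunk

def _handle_sync_response(self, **kwargs) -> LLMResult:
    return LLMResult(**response)  # `response` stands in for the provider SDK's full result
```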
Precomputing Input Tokens
If the model does not provide an interface for precomputing tokens, you can simply return 0.
```python
def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],
                   tools: Optional[list[PromptMessageTool]] = None) -> int:
    """
    Get number of tokens for given prompt messages

    :param model: model name
    :param credentials: model credentials
    :param prompt_messages: prompt messages
    :param tools: tools for tool calling
    :return:
    """
```
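For a provider that exposes no token-counting endpoint, the simplest conforming implementation is the zero fallback mentioned above:

```python
def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],
                   tools: Optional[list[PromptMessageTool]] = None) -> int:
    # The provider offers no precompute-tokens interface,
    # so the interface contract allows returning 0 directly.
    return 0
```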
Model Credentials Validation
Similar to supplier credentials validation, this validates the credentials for a single model.
```python
def validate_credentials(self, model: str, credentials: dict) -> None:
    """
    Validate model credentials

    :param model: model name
    :param credentials: model credentials
    :return:
    """
```
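One common pattern is to make a minimal real invocation and surface any failure. This sketch assumes the runtime's UserPromptMessage entity and CredentialsValidateFailedError exception are imported; the tiny prompt and token limit are illustrative choices:

```python
def validate_credentials(self, model: str, credentials: dict) -> None:
    try:
        # A cheap real call: one short message, a handful of output tokens.
        self._invoke(
            model=model,
            credentials=credentials,
            prompt_messages=[UserPromptMessage(content='ping')],
            model_parameters={'max_tokens_to_sample': 5},
            stream=False,
        )
    except Exception as ex:
        # Any provider-side failure means the credentials are unusable for this model.
        raise CredentialsValidateFailedError(str(ex))
```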
Invocation Error Mapping Table
When a model invocation error occurs, it needs to be mapped to the InvokeError type specified by the Runtime, so that Dify can handle different errors in different ways.
Runtime Errors:
- `InvokeConnectionError`: Invocation connection error
- `InvokeServerUnavailableError`: Invocation service unavailable
- `InvokeRateLimitError`: Invocation rate limit reached
- `InvokeAuthorizationError`: Invocation authorization failure
- `InvokeBadRequestError`: Invocation parameter error
```python
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
    """
    Map model invoke error to unified error
    The key is the error type thrown to the caller
    The value is the error type thrown by the model,
    which needs to be converted into a unified error type for the caller.

    :return: Invoke error mapping
    """
```
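As an illustration, a mapping for the anthropic Python SDK might look like the following. The exception classes shown are those exposed by recent versions of the SDK; verify them against the version you depend on:

```python
import anthropic  # at the top of llm.py

@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
    # Map the anthropic SDK's exceptions onto the Runtime's unified errors.
    return {
        InvokeConnectionError: [anthropic.APIConnectionError, anthropic.APITimeoutError],
        InvokeServerUnavailableError: [anthropic.InternalServerError],
        InvokeRateLimitError: [anthropic.RateLimitError],
        InvokeAuthorizationError: [anthropic.AuthenticationError, anthropic.PermissionDeniedError],
        InvokeBadRequestError: [anthropic.BadRequestError],
    }
```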
For interface method descriptions, see: Interfaces, and for specific implementation, refer to: llm.py.