This document provides detailed guidance on creating model provider plugins, including project initialization, choosing model configuration methods (predefined models and custom models), creating provider configuration YAML files, and the complete process of writing provider code.
Assuming you have downloaded the `dify` plugin scaffolding tool and copied it to the `/usr/local/bin` path, you can run the following command to create a new plugin project:
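For example (the subcommand shown here is the one used by recent releases of the scaffolding tool and may differ in older versions):

```bash
# Scaffold a new plugin project interactively; the tool prompts for the
# plugin name, author, and description before generating the project files.
dify plugin init
```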
When prompted, select the LLM type plugin template.
• `predefined-model` (Predefined Models): Common large model types that only require unified provider credentials to use the predefined models under the provider. For example, the OpenAI model provider offers a series of predefined models such as `gpt-3.5-turbo-0125` and `gpt-4o-2024-05-13`. For detailed development instructions, please refer to Integrating Predefined Models.
• `customizable-model` (Custom Models): Requires manually adding credential configurations for each model. For example, Xinference supports both LLM and Text Embedding, but each model has a unique model_uid; if you want to integrate both, you need to configure a model_uid for each model. For detailed development instructions, please refer to Integrating Custom Models.
The two configuration methods can coexist: a provider may support `predefined-model` + `customizable-model`, or `predefined-model` alone. This means that with configured unified provider credentials, you can use predefined models and models fetched from remote sources, and if you add new models, you can additionally use custom models on top of this foundation.
Next, create the provider configuration YAML file under the `/providers` path. Here’s an example of the `anthropic.yaml` configuration file for Anthropic:
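An abbreviated sketch of what such a file can look like (icon fields and localized labels are trimmed here; consult the provider schema reference for the full field list):

```yaml
provider: anthropic
label:
  en_US: Anthropic
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: anthropic_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        en_US: Enter your API Key
```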
For a provider such as OpenAI that offers fine-tuned models, you also need to add the `model_credential_schema` field.
Here’s a sample configuration for the OpenAI family of models:
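An abbreviated sketch of what the added field can look like (the `model` sub-field describes the input for the fine-tuned model name; localized texts are shortened):

```yaml
model_credential_schema:
  model:
    label:
      en_US: Model Name
    placeholder:
      en_US: Enter your model name
  credential_form_schemas:
    - variable: openai_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
```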
Then create a Python file with the same name, e.g., `anthropic.py`, in the `/providers` folder and implement a class that inherits from the `__base.provider.Provider` base class, e.g., `AnthropicProvider`.
Here’s example code for Anthropic:
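A self-contained sketch of the idea (the first two classes are stand-in stubs so the snippet runs on its own; a real plugin imports the base class and error type from the model runtime package, and the key-presence check shown is illustrative rather than Anthropic's real validation logic):

```python
# Stand-ins for the runtime's base class and error type, so this sketch
# runs outside the Dify runtime. A real plugin imports these instead.
class CredentialsValidateFailedError(Exception):
    pass


class ModelProvider:
    """Stand-in for the __base.model_provider.ModelProvider base class."""


class AnthropicProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        """Validate the unified provider credentials.

        Raises CredentialsValidateFailedError on failure. A real
        implementation would issue a lightweight request against one of
        the provider's models rather than just checking key presence.
        """
        api_key = credentials.get("anthropic_api_key")
        if not api_key:
            raise CredentialsValidateFailedError("anthropic_api_key is required")
```

In a real provider, validation typically delegates to one model's credential check so that the logic is written only once.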
The class needs to inherit from the `__base.model_provider.ModelProvider` base class and implement the `validate_provider_credentials` method for validating unified provider credentials. You can also reserve the `validate_provider_credentials` implementation for now and reuse it directly after the model credential verification method is implemented.
For a custom model provider such as Xinference, you can skip the full implementation step. Simply create an empty class called `XinferenceProvider` and implement an empty `validate_provider_credentials` method in it.
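A minimal sketch (the base class is abbreviated to a stand-in stub so the snippet runs on its own; a real plugin inherits from the runtime's base class instead):

```python
class ModelProvider:
    """Stand-in for the __base.model_provider.ModelProvider base class."""


class XinferenceProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        # Intentionally empty: custom-model providers validate credentials
        # per model, so this provider-level hook is a no-op placeholder.
        pass
```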
Detailed Explanation:
• `XinferenceProvider` is a placeholder class used to identify custom model providers.
• While the `validate_provider_credentials` method won’t actually be called, it must exist because its parent class is abstract and requires all child classes to implement this method. Providing an empty implementation avoids the instantiation error that would occur if the abstract method were left unimplemented.