Creating Model Providers
This document provides detailed guidance on creating model provider plugins, including project initialization, choosing model configuration methods (predefined models and custom models), creating provider configuration YAML files, and the complete process of writing provider code.
The first step in creating a Model type plugin is to initialize the plugin project and create the model provider file, followed by integrating specific predefined/custom models. If you only want to add a new model to an existing model provider, please refer to Quick Integration of a New Model.
Prerequisites
- Dify plugin scaffolding tool
- Python environment, version ≥ 3.12
For detailed instructions on preparing the plugin development scaffolding tool, please refer to Initializing Development Tools. Before you begin, it’s recommended that you understand the basic concepts and structure of Model Plugins.
Create New Project
In the scaffolding command-line tool path, create a new Dify plugin project. If you have renamed the binary file to dify and copied it to the /usr/local/bin path, you can run the following command to create a new plugin project:
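Assuming the scaffolding binary is on your PATH under the name dify (as described above), the command is:

```shell
dify plugin init
```

The tool then interactively prompts for the plugin's name, author, and description.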
Choose Model Plugin Template
All templates in the scaffolding tool provide complete code projects. Choose the LLM type plugin template.
Configure Plugin Permissions
Configure the following permissions for this LLM plugin:
- Models
- LLM
- Storage
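In the generated manifest.yaml, these permissions correspond to fields under the resource section. The fragment below is an illustrative sketch; the exact field set and values follow the plugin manifest specification (see General Specifications) and are project-specific:

```yaml
resource:
  permission:
    model:
      enabled: true
      llm: true
    storage:
      enabled: true
      size: 1048576  # illustrative storage quota in bytes
```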
Model Configuration Methods Explanation
Model providers support the following two model configuration methods:
- predefined-model (Predefined Models): Common large model types that only require unified provider credentials to use the predefined models under the provider. For example, the OpenAI model provider offers a series of predefined models such as gpt-3.5-turbo-0125 and gpt-4o-2024-05-13. For detailed development instructions, please refer to Integrating Predefined Models.
- customizable-model (Custom Models): Requires manually adding credential configurations for each model. For example, Xinference supports both LLM and Text Embedding, but each model has a unique model_uid. If you want to integrate both, you need to configure a model_uid for each model. For detailed development instructions, please refer to Integrating Custom Models.
These two configuration methods can coexist, meaning a provider can support combinations such as predefined-model + customizable-model, or predefined-model alone. With unified provider credentials configured, you can use the predefined models and models fetched from remote sources; if you then add new models, you can additionally use custom models on top of this foundation.
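In the provider YAML, the chosen method(s) are declared under the configurate_methods field. As an illustrative fragment, a provider supporting both methods might declare:

```yaml
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
  - customizable-model
```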
Adding a New Model Provider
Adding a new model provider mainly includes the following steps:
- Create the model provider configuration YAML file: Add a YAML file in the provider directory to describe the provider's basic information and parameter configuration. Write the content according to the ProviderSchema requirements to ensure consistency with the system specifications.
- Write the model provider code: Create the provider class code, implementing a Python class that meets the system interface requirements, for connecting with the provider's API and implementing the core functionality.
Here are the complete operation details for each step.
1. Create Model Provider Configuration File
Manifest is a YAML-format file that declares the model provider's basic information, supported model types, configuration methods, and credential rules. The plugin project template automatically generates configuration files under the /providers path.
Here is an example of the anthropic.yaml configuration file for Anthropic:
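The original example file is not reproduced here; the trimmed sketch below shows what such a file may look like, with field names following the Dify provider schema and values that are illustrative:

```yaml
provider: anthropic
label:
  en_US: Anthropic
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: anthropic_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        en_US: Enter your API Key
```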
If the provider you're integrating offers custom models, for example OpenAI provides fine-tuned models, you need to add the model_credential_schema field.
Here is sample code for the OpenAI family of models:
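The original sample is not reproduced here; the fragment below is a hedged sketch of the model_credential_schema portion for an OpenAI-style provider, with illustrative variable names:

```yaml
model_credential_schema:
  model:
    label:
      en_US: Model Name
    placeholder:
      en_US: Enter your model name
  credential_form_schemas:
    - variable: openai_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
    - variable: openai_api_base
      label:
        en_US: API Base
      type: text-input
      required: false
```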
For more complete model provider YAML specifications, please refer to the Model Schema documentation.
2. Write Model Provider Code
Create a Python file with the same name, e.g., anthropic.py, in the /providers folder and implement a class that inherits from the __base.provider.Provider base class, e.g., AnthropicProvider.
Here is example code for Anthropic:
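The official example depends on the Dify plugin SDK, which is not reproduced here. The following is a self-contained sketch of the same pattern: ModelProvider and CredentialsValidateFailedError are hypothetical stand-ins for the SDK types, and the credential check is simplified to a presence check (a real provider would typically make a cheap test call against the API):

```python
from abc import ABC, abstractmethod


class CredentialsValidateFailedError(Exception):
    """Raised when provider credentials fail validation."""


# Hypothetical stand-in for the SDK's abstract provider base class.
class ModelProvider(ABC):
    @abstractmethod
    def validate_provider_credentials(self, credentials: dict) -> None:
        """Raise CredentialsValidateFailedError if the credentials are invalid."""


class AnthropicProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        # Simplified check: a real implementation would reuse a model-level
        # credential validation, e.g. a minimal request against one model.
        if not credentials.get("anthropic_api_key"):
            raise CredentialsValidateFailedError("anthropic_api_key is required")
```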
Providers need to inherit from the __base.model_provider.ModelProvider base class and implement the validate_provider_credentials method for validating unified provider credentials.
Alternatively, you can leave validate_provider_credentials as a stub at first, and fill it in by reusing the model credential verification method once that is implemented.
Custom Model Providers
For other types of model providers, refer to the following configuration method.
For custom model providers such as Xinference, you can skip the full implementation step. Simply create an empty class called XinferenceProvider and implement an empty validate_provider_credentials method in it.
Detailed explanation:
• XinferenceProvider is a placeholder class used to identify custom model providers.
• While the validate_provider_credentials method won't actually be called, it must exist, because its parent class is abstract and requires all child classes to implement it. Providing an empty implementation avoids the instantiation error that would occur if the abstract method were left unimplemented.
After initializing the model provider, the next step is to integrate the specific LLM models offered by the provider. For detailed instructions, please refer to:
- Model Design Rules - Learn the standards for integrating predefined models
- Model Schema - Learn the standards for integrating custom models
- Publishing Overview - Learn the plugin publishing process
Reference Resources
- Quick Integration of a New Model - How to add new models to existing providers
- Basic Concepts of Plugin Development - Return to the plugin development getting started guide
- Creating New Model Provider Extra - Learn more advanced configurations
- General Specifications - Learn about plugin manifest file configuration