Provider Configuration Methods
Providers support three configuration methods:

- Predefined Model: Users only need to configure unified provider credentials to use the provider's predefined models.
- Customizable Model: Users need to add a credentials configuration for each model. For example, Xinference supports both LLM and Text Embedding, but each model has a unique model_uid, so to connect both you need to configure a model_uid for each model.
- Fetch from Remote: Similar to the predefined-model configuration method, users only need to configure unified provider credentials, and the models are fetched from the provider using the credential information.
For instance, with OpenAI, we can fine-tune multiple models based on gpt-3.5-turbo, all under the same api_key. When configured as fetch-from-remote, developers only need to configure a unified api_key to let Dify Runtime fetch all of the developer's fine-tuned models and connect them to Dify.
These three configuration methods can coexist, meaning a provider can support predefined-model + customizable-model, predefined-model + fetch-from-remote, and so on. In other words, predefined models and models fetched from remote can be used with the unified provider credentials, and additional custom models can be used once added.
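As an illustration, a provider supporting two methods at once would declare both in its YAML. This is a trimmed, hypothetical excerpt; field names follow Dify's Provider Schema and may vary by version:

```yaml
# Both methods declared: predefined models work with the unified
# provider credentials, and users can still add custom models.
configurate_methods:
  - predefined-model
  - customizable-model
```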
Configuration Instructions
Terminology: module — a Python package; more colloquially, a folder containing an __init__.py file and other .py files.
- Create a provider YAML file and write it according to the Provider Schema.
- Create the provider code and implement a provider class.
- Create the corresponding model type modules under the provider module, such as llm or text_embedding.
- Create same-named code files under the corresponding model module, such as llm.py, and implement a model class.
- If there are predefined models, create same-named YAML files under the model module, such as claude-2.1.yaml, and write them according to the AI Model Entity.
- Write test code to ensure the functionality works.
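Putting the steps together, the resulting module layout for a provider like Anthropic would look roughly like this (illustrative; actual file names depend on the provider and its supported model types):

```
model_providers/anthropic/
├── __init__.py
├── anthropic.yaml      # provider schema
├── anthropic.py        # provider class
└── llm/
    ├── __init__.py
    ├── llm.py          # model calling code
    └── claude-2.1.yaml # predefined model definition
```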
Let’s Get Started
To add a new provider, first determine the provider's English identifier, such as anthropic, and create a module named after it under model_providers.
Under this module, we need to prepare the provider’s YAML configuration first.
Preparing Provider YAML
Taking Anthropic as an example, preset the basic information of the provider, supported model types, configuration methods, and credential rules.
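A trimmed sketch of what anthropic.yaml might contain is shown below. Field names follow the Provider Schema; the real file includes additional labels, icons, and help text:

```yaml
provider: anthropic
label:
  en_US: Anthropic
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: anthropic_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
```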
For providers like OpenAI that offer fine-tuned models, we also need to add model_credential_schema. Taking OpenAI as an example:
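A minimal sketch of the model_credential_schema section, assuming the Provider Schema field names; the real OpenAI file declares more fields (base URL, organization, etc.):

```yaml
model_credential_schema:
  model:
    label:
      en_US: Model Name
    placeholder:
      en_US: Enter your fine-tuned model name
  credential_form_schemas:
    - variable: openai_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
```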
You can also refer to the YAML configuration files under the other provider directories in the model_providers directory.
Implement Provider Code
We need to create a Python file with the same name under model_providers, such as anthropic.py, and implement a class that inherits from the __base.model_provider.ModelProvider base class, such as AnthropicProvider.
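The shape of such a provider class can be sketched as follows. This is a self-contained sketch: the ModelProvider base class and CredentialsValidateFailedError below are stand-ins for the real Dify runtime types, and the key-presence check stands in for a real end-to-end credential check against the Anthropic API:

```python
from abc import ABC, abstractmethod


# Stand-ins so the sketch runs on its own; in the Dify codebase these
# come from the model_providers.__base module and the runtime's errors.
class CredentialsValidateFailedError(Exception):
    pass


class ModelProvider(ABC):
    @abstractmethod
    def validate_provider_credentials(self, credentials: dict) -> None:
        ...


class AnthropicProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        # In Dify this typically delegates to the LLM model's own
        # validate_credentials with a known model (e.g. claude-2.1),
        # so the API key is verified end to end. Here we only check
        # that the credential is present, as an illustration.
        if not credentials.get("anthropic_api_key"):
            raise CredentialsValidateFailedError("anthropic_api_key is required")
```

On invalid credentials the method should raise CredentialsValidateFailedError so the Dify UI can surface a clear error to the user.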
Custom Model Providers
For providers like Xinference that offer custom models, this step can be skipped. Just create an empty XinferenceProvider class and implement an empty validate_provider_credentials method. This method will not actually be used and is only to avoid abstract class instantiation errors.
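The empty class described above can be sketched like this (the ModelProvider base class here is a stand-in for the Dify __base class, so the snippet is self-contained):

```python
from abc import ABC, abstractmethod


class ModelProvider(ABC):  # stand-in for the Dify __base class
    @abstractmethod
    def validate_provider_credentials(self, credentials: dict) -> None:
        ...


class XinferenceProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        # Intentionally empty: custom-model providers validate credentials
        # per model, so this override exists only so the class is concrete
        # and can be instantiated without an abstract-class error.
        pass
```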
The provider class needs to inherit from the __base.model_provider.ModelProvider base class and implement the validate_provider_credentials method to validate the provider's unified credentials. You can refer to AnthropicProvider.
You can also stub out the validate_provider_credentials implementation first and reuse it directly once the model credential validation method is implemented.
Adding Models
Adding Predefined Models
For predefined models, we can connect them by simply defining a YAML file and implementing the calling code.
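Such a predefined-model YAML might look like the sketch below, assuming the AI Model Entity field names; the values shown (context size, parameter rules) are illustrative, not authoritative:

```yaml
model: claude-2.1
label:
  en_US: claude-2.1
model_type: llm
model_properties:
  mode: chat
  context_size: 200000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens
    use_template: max_tokens
```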
Adding Custom Models
For custom models, we only need to implement the calling code to connect them, but the parameters they handle may be more complex.
Testing
To ensure the connected provider/model works, each method written needs corresponding integration test code in the tests directory.
Taking Anthropic as an example: before writing test code, you need to add the credential environment variables required for testing the provider to .env.example, such as ANTHROPIC_API_KEY.
Before executing the tests, copy .env.example to .env, fill in the values, and then run them.
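A typical command sequence might be the following; the exact test path is an assumption and depends on how the repository lays out its integration tests:

```shell
cp .env.example .env
# fill in ANTHROPIC_API_KEY in .env, then run the provider's tests, e.g.:
pytest tests/anthropic
```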
Writing Test Code
Create a module with the same name as the provider under the tests directory, anthropic, and in it create test_provider.py and corresponding .py test files for each model type, as shown below:
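The resulting test module might be laid out like this (illustrative; one test file per supported model type):

```
tests/anthropic/
├── __init__.py
├── test_provider.py
└── test_llm.py
```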