This page is being phased out as part of our documentation reorganization.
Please refer to the updated version for the most current information.
If you notice any discrepancies or areas needing improvement in the new documentation, please use the “Report an issue” button at the bottom of the page.
Inherit the `__base.model_provider.ModelProvider` base class and implement the following interface:
- `credentials` (object): Credential information. Credential parameters are defined by the provider’s YAML configuration file’s `provider_credential_schema`, such as passing in `api_key`. If validation fails, throw the `errors.validate.CredentialsValidateFailedError` error.
Note: Predefined models must fully implement this interface, while custom model providers can implement it simply as follows:
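A minimal sketch of such a no-op implementation is shown below. The base class and error type are stubbed locally for illustration; a real plugin would import them from the runtime (`__base.model_provider.ModelProvider` and `errors.validate.CredentialsValidateFailedError`), and the provider class name is hypothetical:

```python
class ModelProvider:
    """Stand-in for the runtime's __base.model_provider.ModelProvider."""


class CredentialsValidateFailedError(Exception):
    """Stand-in for errors.validate.CredentialsValidateFailedError.

    A predefined-model provider would verify the credentials and raise
    this error when validation fails.
    """


class MyCustomModelProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        # A custom model provider defers validation to the individual
        # models, so provider-level validation is simply a no-op.
        pass
```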
- `model` (string): Model name
- `credentials` (object): Credential information. Credential parameters are defined by the provider’s YAML configuration file’s `provider_credential_schema` or `model_credential_schema`, such as passing in `api_key`. If validation fails, throw the `errors.validate.CredentialsValidateFailedError` error.

When a model invocation fails, map the error to one of the runtime’s invocation error types:

- `InvokeConnectionError`: Invocation connection error
- `InvokeServerUnavailableError`: Invocation service unavailable
- `InvokeRateLimitError`: Invocation rate limit reached
- `InvokeAuthorizationError`: Invocation authentication failed
- `InvokeBadRequestError`: Incorrect invocation parameters

This lets the plugin surface `InvokeConnectionError` and the other exceptions above in a way the platform can handle uniformly.
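Such a mapping is typically expressed as a dictionary from the runtime’s invocation error types to the underlying SDK’s exception classes. The sketch below stubs both sides locally for illustration; the real error hierarchy comes from the runtime, and the SDK exception names are assumptions:

```python
# Stand-ins for the runtime's invocation error hierarchy.
class InvokeError(Exception): ...
class InvokeConnectionError(InvokeError): ...
class InvokeServerUnavailableError(InvokeError): ...
class InvokeRateLimitError(InvokeError): ...
class InvokeAuthorizationError(InvokeError): ...
class InvokeBadRequestError(InvokeError): ...


# Stand-ins for a provider SDK's exception classes.
class APIConnectionError(Exception): ...
class RateLimitError(Exception): ...
class AuthenticationError(Exception): ...


# Map each runtime error type to the SDK exceptions that should raise it.
INVOKE_ERROR_MAPPING: dict[type, list[type]] = {
    InvokeConnectionError: [APIConnectionError, TimeoutError],
    InvokeRateLimitError: [RateLimitError],
    InvokeAuthorizationError: [AuthenticationError],
}


def map_invoke_error(exc: Exception) -> Exception:
    """Wrap an SDK exception in the matching runtime invocation error."""
    for invoke_error, sdk_errors in INVOKE_ERROR_MAPPING.items():
        if isinstance(exc, tuple(sdk_errors)):
            return invoke_error(str(exc))
    # Anything unrecognized is treated as an incorrect-parameters error.
    return InvokeBadRequestError(str(exc))
```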
Inherit the `__base.large_language_model.LargeLanguageModel` base class and implement the following interfaces:
- `model` (string): Model name
- `credentials` (object): Credential information. Credential parameters are defined by the provider’s YAML configuration file’s `provider_credential_schema` or `model_credential_schema`, such as passing in `api_key`.
- `prompt_messages` (array[PromptMessage]): Prompt list
- `model_parameters` (object): Model parameters defined by the model’s YAML configuration’s `parameter_rules`
- `tools` (array[PromptMessageTool]) [optional]: Tool list, equivalent to function-calling functions
- `stop` (array[string]) [optional]: Stop sequences; model output will stop before the defined strings
- `stream` (bool): Whether to stream output, default `True`. Streaming returns `Generator[LLMResultChunk]`; non-streaming returns `LLMResult`
- `user` (string) [optional]: Unique user identifier to help providers monitor and detect abuse

Inherit the `__base.text_embedding_model.TextEmbeddingModel` base class and implement the following interfaces:
- `model` (string): Model name
- `credentials` (object): Credential information. Credential parameters are defined by the provider’s YAML configuration file’s `provider_credential_schema` or `model_credential_schema`.
- `texts` (array[string]): Text list, can be processed in batch
- `user` (string) [optional]: Unique user identifier to help providers monitor and detect abuse

For token counting, you can use the `_get_num_tokens_by_gpt2(text: str)` method in the `AIModel` base class.
Inherit the `__base.rerank_model.RerankModel` base class and implement the following interfaces:
- `model` (string): Model name
- `credentials` (object): Credential information
- `query` (string): Search query content
- `docs` (array[string]): List of segments to be re-ranked
- `score_threshold` (float) [optional]: Score threshold
- `top_n` (int) [optional]: Take the top n segments
- `user` (string) [optional]: Unique user identifier to help providers monitor and detect abuse

Inherit the `__base.speech2text_model.Speech2TextModel` base class and implement the following interfaces:
- `model` (string): Model name
- `credentials` (object): Credential information
- `file` (File): File stream
- `user` (string) [optional]: Unique user identifier to help providers monitor and detect abuse

Inherit the `__base.text2speech_model.Text2SpeechModel` base class and implement the following interfaces:
- `model` (string): Model name
- `credentials` (object): Credential information
- `content_text` (string): Text content to be converted
- `streaming` (bool): Whether to stream output
- `user` (string) [optional]: Unique user identifier to help providers monitor and detect abuse

Inherit the `__base.moderation_model.ModerationModel` base class and implement the following interfaces:
- `model` (string): Model name
- `credentials` (object): Credential information
- `text` (string): Text content
- `user` (string) [optional]: Unique user identifier to help providers monitor and detect abuse