DeepSeek AI is an advanced plugin that leverages AI technology to optimize search and data analysis. It helps you quickly find relevant information, process large datasets, get personalized recommendations, and automate routine tasks, making it a great fit for professionals seeking smarter solutions.
The plugin lets you build a full-fledged chat powered by the DeepSeek artificial intelligence models, as well as generate text completions.
To get started, copy the API key you created and paste it into the plugin settings.
You also need to fund your DeepSeek account before the API calls will succeed.
Plugin Data Calls
Lists Models
Lists the currently available models, and provides basic information about each one such as the owner and availability. Check Models & Pricing for our currently supported models.
Return values:

| Name | Description | Type |
| --- | --- | --- |
| id | The model identifier, which can be referenced in the API endpoints. | Text |
| object | The object type, which is always "model". | Text |
| owned_by | The organization that owns the model. | Text |
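For reference, here is a minimal sketch of the REST call this data call wraps. It assumes the public DeepSeek endpoint `GET https://api.deepseek.com/models` and the Python requests library; these are assumptions about the underlying API, not part of the plugin itself.

```python
import requests

API_KEY = "sk-..."  # the key you pasted into the plugin settings

# List the available models; each entry carries the id, object and
# owned_by fields documented in the table above.
resp = requests.get(
    "https://api.deepseek.com/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"], model["object"], model["owned_by"])
```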
Get User Balance
Gets the user's current balance.
Return values:

| Name | Description | Type |
| --- | --- | --- |
| is_available | Whether the user's balance is sufficient for API calls. | Text |
| balance_infos | The balance details, one object per currency; each object contains the four fields below. | Object Array |
| currency | The currency of the balance. Possible values: CNY, USD | Text |
| total_balance | The total available balance, including the granted balance and the topped-up balance. | Text |
| granted_balance | The total not-yet-expired granted balance. | Text |
| topped_up_balance | The total topped-up balance. | Text |
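As with the models call, here is a minimal sketch of the underlying request, assuming the public DeepSeek endpoint `GET https://api.deepseek.com/user/balance` (an assumption about the backend, not part of the plugin itself):

```python
import requests

API_KEY = "sk-..."  # the key you pasted into the plugin settings

# Check that the account is funded before making chat calls.
resp = requests.get(
    "https://api.deepseek.com/user/balance",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()
print("API available:", data["is_available"])
for info in data["balance_infos"]:
    print(info["currency"],
          "total:", info["total_balance"],
          "granted:", info["granted_balance"],
          "topped up:", info["topped_up_balance"])
```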
Plugin Action Calls
Create Chat Completion
Creates a model response for the given chat conversation.
Fields:

| Name | Description | Type |
| --- | --- | --- |
| model | ID of the model to use. Possible values: deepseek-chat, deepseek-reasoner | Text |
| frequency_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | Number |
| max_tokens | Integer between 1 and 8192. The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. If max_tokens is not specified, the default value 4096 is used. | Number |
| presence_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | Number |
| temperature | The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or top_p, but not both. | Number |
| top_p | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both. | Number |
| logprobs | Whether to return log probabilities of the output tokens. If true, returns the log probabilities of each output token in the content of message. | true/false |
| top_logprobs | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used. | Number |
| stream | If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events (SSE) as they become available, with the stream terminated by a data: [DONE] message. See the streaming sketch after the return values below. | true/false |
| messages | The list of messages in the conversation. Each message has a role (possible values: system, user, assistant) and content (the contents of the message). Example: {role: "user", content: "Hi"} | Text |
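For illustration, here is a minimal non-streaming request, assuming the public DeepSeek endpoint `POST https://api.deepseek.com/chat/completions` and the Python requests library (assumptions about the API this action wraps, not part of the plugin itself):

```python
import requests

API_KEY = "sk-..."  # the key you pasted into the plugin settings

resp = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "deepseek-chat",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hi"},
        ],
        "temperature": 0.7,  # tune this or top_p, not both
        "max_tokens": 1024,
    },
    timeout=60,
)
resp.raise_for_status()
completion = resp.json()
# The return values documented below: id, created, model, choices, ...
print(completion["id"], completion["model"])
print(completion["choices"][0]["message"]["content"])
```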
Return values:

| Name | Description | Type |
| --- | --- | --- |
| id | A unique identifier for the chat completion. | Text |
| choices | A list of chat completion choices (see the example below). | List of Objects |
| created | The Unix timestamp (in seconds) of when the chat completion was created. | Text |
| model | The model used for the chat completion. | Text |
| system_fingerprint | This fingerprint represents the backend configuration that the model runs with. | Text |

Example choices value:
[
  {
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! How can I assist you today? 😊"
    },
    "logprobs": null,
    "finish_reason": "stop"
  }
]
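When stream is set to true, the response arrives as server-sent events rather than a single JSON object. A minimal sketch of consuming the stream, under the same endpoint assumptions as above:

```python
import json
import requests

API_KEY = "sk-..."  # the key you pasted into the plugin settings

resp = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Hi"}],
        "stream": True,
    },
    stream=True,
    timeout=60,
)
resp.raise_for_status()
# Tokens arrive as data-only SSE lines, terminated by "data: [DONE]".
for raw in resp.iter_lines():
    if not raw:
        continue
    line = raw.decode("utf-8")
    if line.startswith("data: "):
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        print(delta.get("content") or "", end="", flush=True)
```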
Create FIM Completion (Beta)
Creates a model completion for the given prompt via the FIM (Fill-In-the-Middle) Completion API. You must set base_url="https://api.deepseek.com/beta" to use this feature.
Fields:

| Name | Description | Type |
| --- | --- | --- |
| prompt | The prompt to generate completions for. Example: Once upon a time, | Text |
| model | ID of the model to use. Possible values: deepseek-chat, deepseek-reasoner | Text |
| echo | Echo back the prompt in addition to the completion. | true/false |
| frequency_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | Number |
| logprobs | Include the log probabilities on the logprobs most likely output tokens, as well as the chosen tokens. For example, if logprobs is 20, the API will return a list of the 20 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 20. | Number |
| max_tokens | The maximum number of tokens that can be generated in the completion. Default: 1024 | Number |
| presence_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | Number |
| stream | Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. | true/false |
| temperature | The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or top_p, but not both. | Number |
| top_p | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both. | Number |
Return values:

| Name | Description | Type |
| --- | --- | --- |
| id | A unique identifier for the completion. | Text |
| choices | The list of completion choices the model generated for the input prompt. | List of Objects |
| created | The Unix timestamp (in seconds) of when the completion was created. | Text |
| model | The model used for the completion. | Text |
| system_fingerprint | This fingerprint represents the backend configuration that the model runs with. | Text |
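A minimal sketch of a FIM request under the beta base URL noted above; the exact path `POST https://api.deepseek.com/beta/completions` is an assumption inferred from that base URL, not something the plugin exposes directly:

```python
import requests

API_KEY = "sk-..."  # the key you pasted into the plugin settings

# FIM completions live under the beta base URL, as noted above.
resp = requests.post(
    "https://api.deepseek.com/beta/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "deepseek-chat",
        "prompt": "Once upon a time,",
        "max_tokens": 128,
        "echo": False,
    },
    timeout=60,
)
resp.raise_for_status()
completion = resp.json()
# Completion choices carry generated text rather than a chat message.
print(completion["choices"][0]["text"])
```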