Full OpenAI GPT-4 Plugin for Bubble

Introduction

Enhance your Bubble app with some of the most powerful AI models available today.
The full list of supported features includes:
  • Composing short or long-form content
  • Chatting
  • Grammar correction
  • Translations from one language to another (25+ languages supported)
  • Image Generation and Editing
  • Writing code and tracking down errors in code (Go, JavaScript, Perl, PHP, Ruby, Swift, TypeScript, SQL)
The plugin comes with a chat bot powered by GPT-4 that offers high accuracy, improved contextual understanding, multilingual support, scalability, personalization, and increased efficiency. It can even process images to provide relevant and accurate information!
Use your own API keys obtained from OpenAI: https://openai.com/api/
New features of the GPT-4 model:
  • Can handle over 25,000 words of text at a time
  • Handles image input
  • Passes standardized tests and exams with high scores
  • Can write code in different programming languages
  • Safer and more helpful responses for users
  • More advanced reasoning: stronger, more creative, and smarter
To see many more examples of how you can use this plugin, please visit the examples page: https://beta.openai.com/examples
As you can see from the link above, some other possible use cases are:
  • Classify items into categories by example
  • Convert movie titles into emoji
  • Turn a product description into ad copy
  • A message-style chatbot that can answer questions about using JavaScript
  • Extract contact information from a block of text.
  • Convert the first-person POV to the third-person
  • Turn a few words into a restaurant review and much more!
📽️ Video tutorial on how to use the plugin: https://www.youtube.com/watch?v=XbKsdw85G2Y&feature=youtu.be

How to Setup

Image without caption
  • Copy the key and paste it into the plugin settings, prefixed with the word “Bearer”
Image without caption
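For reference, the plugin authenticates by sending your key in a standard OpenAI Authorization header. Below is a minimal sketch (Python with the requests library, not part of the plugin itself) of what that header looks like; the sk-... value is a placeholder for your own key:
  import requests
  API_KEY = "sk-..."  # placeholder: your own OpenAI API key
  headers = {
      "Authorization": f"Bearer {API_KEY}",  # the "Bearer" prefix is exactly what the plugin settings expect
  }
  # Listing the models is a quick way to confirm the key is accepted
  resp = requests.get("https://api.openai.com/v1/models", headers=headers)
  print(resp.status_code)  # 200 means the key works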

Plugin Data/Action Calls

1. Keywords

Extract keywords from a block of text. At a lower temperature, it picks keywords from the text; at a higher temperature, it will generate related keywords, which can be helpful for creating search indexes. This action is based on the GPT-3.5 model.
Image without caption
Param Name
Description
Type
Hash
hash is a field that receives a dynamic value, for example, the current time, to ensure that the calls are always updated
Text
Max_tokens
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
Integer Defaults to 16
Text
The text you provide for processing.
Text
Temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
Number Defaults to 1
Presence_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Number Defaults to 0
Top_p
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Number Defaults to 1
Frequency_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Number Defaults to 0
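Actions 1-6 all expose the same sampling parameters, and they map one-to-one onto the body of an OpenAI chat completion request. The sketch below (Python with requests) only illustrates that mapping with assumed values; the exact prompt wrapping the plugin uses internally may differ, and sk-... is a placeholder key:
  import requests
  headers = {"Authorization": "Bearer sk-...", "Content-Type": "application/json"}
  payload = {
      "model": "gpt-3.5-turbo",
      "messages": [{"role": "user", "content": "Extract keywords from this text: Bubble lets you build web apps without code."}],
      "max_tokens": 64,        # Max_tokens
      "temperature": 0.2,      # Temperature: low values stick to keywords found in the text
      "top_p": 1,              # Top_p
      "presence_penalty": 0,   # Presence_penalty
      "frequency_penalty": 0,  # Frequency_penalty
  }
  r = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
  print(r.json()["choices"][0]["message"]["content"])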

2. Summarize text like for 2nd grader

Translates difficult text into simpler concepts. This action is based on the GPT-3.5 model.
Image without caption
Param Name
Description
Type
Hash
hash is a field that receives a dynamic value, for example, the current time, to ensure that the calls are always updated
Text
Max_tokens
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
Integer Defaults to 16
Text
The text you provide for processing.
Text
Temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
Number Defaults to 1
Presence_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Number Defaults to 0
Top_p
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Number Defaults to 1
Frequency_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Number Defaults to 0
userID
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse
Text Optional

3. Translator

OpenAI can help you translate text into 50+ languages, including English, Spanish, French, German, Chinese, Japanese, Korean, Russian, Portuguese, Italian, Dutch, Swedish, and many more. However, please note that translations may not be perfect, as the model is an AI language model and not a professional human translator. This action is based on the GPT-3.5 model.
Image without caption
Param Name
Description
Type
Hash
hash is a field that receives a dynamic value, for example, the current time, to ensure that the calls are always updated
Text
Max_tokens
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
Integer Defaults to 16
Text
The text you provide for processing.
Text
Temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
Number Defaults to 1
Presence_penalty
A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Number Defaults to 0
Top_p
An alternative to sampling with temperature is called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Number Defaults to 1
Frequency_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Number Defaults to 0
userID
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse
Text Optional
Desired_language
OpenAI automatically detects the language the input is written in; just enter the language you want the text translated into.
Text

4. Creates a completion

“Creates a completion” refers to generating a response or output using GPT-3.5 Turbo, a version of OpenAI's powerful language model, GPT-3. When you provide an input or prompt to GPT-3.5, it processes the information and generates a completion: the model's coherent and contextually relevant response to the given input. This completion can be a piece of text, an answer to a question, or any other form of written output that the model generates based on the input provided. This action is based on the GPT-3.5 model.
Image without caption
Param Name
Description
Type
Hash
hash is a field that receives a dynamic value, for example, the current time, to ensure that the calls are always updated
Text
Content
The text you provide for processing.
Text
Max_tokens
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
Integer Defaults to 16
N
How many completions to generate for each prompt.
Integer Optional Defaults to 1
Top_p
An alternative to sampling with temperature is called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Number Optional Defaults to 1
stop
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
Text Optional Defaults to null
Presence_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Number Optional Defaults to 0
Frequency_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Number Defaults to 0

5. Grammar correction

Corrects a sentence or text that needs grammatical corrections. This action is based on the GPT-3.5 model.
Image without caption
Param Name
Description
Type
Hash
hash is a field that receives a dynamic value, for example, the current time, to ensure that the calls are always updated
Text
Max_tokens
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
Integer Defaults to 16
Temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
Number Defaults to 1
Top_p
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Number Defaults to 1
Presence_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Number Defaults to 0
Frequency_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Number Defaults to 0
Prompt
The text you provide for processing.
Text
user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse
Text Optional
Language
The language in which the text is written.
Text

6. Summarization

Image without caption
Param Name
Description
Type
Hash
hash is a field that receives a dynamic value, for example, the current time, to ensure that the calls are always updated
Text
Prompt
The text you provide for processing.
Text
Max_tokens
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
Integer Defaults to 16
Temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
Number Defaults to 1
Top_p
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Number Defaults to 1
Presence_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Number Defaults to 0
Frequency_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Number Defaults to 0
user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse
Text Optional

7. Chat

Given a chat conversation, the model will return a chat completion response. This action is based on the GPT-4 model.
Image without caption
Param Name
Description
Type
Hash
hash is a field that receives a dynamic value, for example, the current time, to ensure that the calls are always updated
Text
Role
The role of each message: either "system", "user", or "assistant"
Text
Content
The message you want to send to the OpenAI assistant
Text
Temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
Number Defaults to 1
N
How many chat completion choices to generate for each input message.
Integer Optional Defaults to 1
Stream
If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. See the OpenAI Cookbook for example code.
Boolean Defaults to false
Max_tokens
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
Integer Defaults to 16
Presence_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Number Defaults to 0
Frequency_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Number Defaults to 0
user
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse
Text Optional
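To illustrate how Role and Content fit together, here is a hedged sketch of the chat completion request this action wraps (Python with requests); the model name, key, and message text are placeholders:
  import requests
  headers = {"Authorization": "Bearer sk-...", "Content-Type": "application/json"}
  payload = {
      "model": "gpt-4",
      "messages": [
          {"role": "system", "content": "You are a helpful support assistant."},  # Role + Content
          {"role": "user", "content": "How do I reset my password?"},
      ],
      "temperature": 1,
      "n": 1,
      "max_tokens": 256,
      "user": "bubble-user-42",  # optional end-user identifier
  }
  r = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
  print(r.json()["choices"][0]["message"]["content"])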

8. Code generator

This call helps you write code. This action is based on the GPT-3.5 model.
Image without caption
Param Name
Description
Type
Hash
hash is a field that receives a dynamic value, for example, the current time, to ensure that the calls are always updated
Text
Prompt
Write in natural language what code you want to be generated
Text
Max_tokens
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
Integer Defaults to 16
Programming language
The programming language in which you want the code to be generated
Text

9. Image generations

The image generations endpoint allows you to create an original image given a text prompt. Generated images can have a size of 256x256, 512x512, or 1024x1024 pixels. Smaller sizes are faster to generate. You can request 1-10 images at a time using the n parameter.
Image without caption
Param Name
Description
Type
Prompt
A text description of the desired image(s). The maximum length is 1000 characters
Text
N
The number of images to generate. Must be between 1 and 10.
Integer Optional Defaults to 1
Size
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
Text Optional Defaults to 1024x1024
Response_format
The format in which the generated images are returned. Must be one of url or b64_json.
Text Optional Defaults to url
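For reference, this action corresponds to the OpenAI image generations endpoint. A minimal sketch of that request (Python with requests; the prompt and key are placeholders):
  import requests
  headers = {"Authorization": "Bearer sk-...", "Content-Type": "application/json"}
  payload = {
      "prompt": "a watercolor painting of a lighthouse at dawn",  # Prompt
      "n": 1,                    # N
      "size": "512x512",         # Size
      "response_format": "url",  # Response_format
  }
  r = requests.post("https://api.openai.com/v1/images/generations", headers=headers, json=payload)
  print(r.json()["data"][0]["url"])  # URL of the generated image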

10. Image Edits

The image edits endpoint allows you to edit and extend an image by uploading a mask. The transparent areas of the mask indicate where the image should be edited, and the prompt should describe the full new image, not just the erased area. The uploaded image and mask must both be square PNG images less than 4MB in size and must have the same dimensions as each other. The non-transparent areas of the mask are not used when generating the output, so they do not necessarily need to match the original image. This action is based on the DALL·E model.
Image without caption
Param Name
Description
Type
Image
The image to edit. Must be a valid PNG file, less than 4MB, and square. If the mask is not provided, the image must have transparency, which will be used as the mask.
File
Mask
An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where the image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as the image.
File Optional
Prompt
A text description of the desired image(s). The maximum length is 1000 characters
Text
N
The number of images to generate. Must be between 1 and 10.
Integer Optional Defaults to 1
Size
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
Text Optional Defaults to 1024x1024
Response_format
The format in which the generated images are returned. Must be one of url or b64_json.
Text Optional Defaults to url
User
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.
Text Optional
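Because this action sends files, the underlying request is multipart form data rather than JSON. A minimal sketch under assumed file names (original.png and mask.png are placeholders):
  import requests
  headers = {"Authorization": "Bearer sk-..."}  # no Content-Type: requests builds the multipart boundary itself
  files = {"image": open("original.png", "rb"), "mask": open("mask.png", "rb")}
  data = {
      "prompt": "the same scene with a hot air balloon in the sky",
      "n": 1,
      "size": "1024x1024",
      "response_format": "url",
  }
  r = requests.post("https://api.openai.com/v1/images/edits", headers=headers, files=files, data=data)
  print(r.json()["data"][0]["url"])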

11. Image Variation

The image variations endpoint allows you to generate a variation of a given image. This action is based on the DALL·E model.
Image without caption
Param Name
Description
Type
Image
The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.
File
N
The number of images to generate. Must be between 1 and 10.
Integer Optional Defaults to 1
Size
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models.
Text Optional Defaults to 1024x1024
Response_format
The format in which the generated images are returned. Must be one of url or b64_json.
Text Optional Defaults to url
style
The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for dall-e-3.
Text Optional

12. Fine-tuning

Manage fine-tuning jobs to tailor a model to your specific training data. This action is based on the OpenAI Fine-tuning API.
Image without caption
Param Name
Description
Type
training_file
The ID of an uploaded file that contains training data. See upload file action. Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose of fine-tune. See the fine-tuning guide for more details.
Text
model
The name of the model to fine-tune. You can select one of the supported models.
Text

13. Upload file

Upload a file that can be used across various endpoints. The size of all the files uploaded by one organization can be up to 100 GB.
The size of individual files can be a maximum of 512 MB. See the Assistants Tools guide to learn more about the types of files supported. The Fine-tuning API only supports .jsonl files.
Image without caption
Param Name
Description
Type
file
The File object (not file name) to be uploaded.
file
purpose
The intended purpose of the uploaded file. Use "fine-tune" for Fine-tuning and "assistants" for Assistants and Messages. This allows us to validate the format of the uploaded file is correct for fine-tuning.
Text
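The Upload file and Fine-tuning actions are typically used together: upload a JSONL file with purpose fine-tune, then start a job with the returned file ID. A minimal sketch of that sequence (training.jsonl, the key, and the model name are placeholder assumptions):
  import requests
  headers = {"Authorization": "Bearer sk-..."}
  # 1. Upload the training data with purpose "fine-tune"
  up = requests.post("https://api.openai.com/v1/files", headers=headers,
                     files={"file": open("training.jsonl", "rb")}, data={"purpose": "fine-tune"})
  file_id = up.json()["id"]
  # 2. Create the fine-tuning job that trains on the uploaded file
  job = requests.post("https://api.openai.com/v1/fine_tuning/jobs",
                      headers={**headers, "Content-Type": "application/json"},
                      json={"training_file": file_id, "model": "gpt-3.5-turbo"})
  print(job.json()["id"])  # job ID to monitor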

14. Text to speech

Generates audio from the input text.
Image without caption
Param Name
Description
Type
model
One of the available TTS models: tts-1 or tts-1-hd
Text
input
The text to generate audio for. The maximum length is 4096 characters.
Text
voice
The voice to use when generating the audio. Supported voices are alloy, echo, fable, onyx, nova, and shimmer. Previews of the voices are available in the Text to speech guide.
Text
response_format
The format of the generated audio. Supported formats are mp3, opus, aac, and flac.
String Optional Defaults to “mp3”
speed
The speed of the generated audio. Select a value from 0.25 to 4.0; 1.0 is the default.
number
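A minimal sketch of the text-to-speech request this action wraps; the endpoint returns raw audio bytes, so the example writes them to a file (the key, input text, and output file name are placeholders):
  import requests
  headers = {"Authorization": "Bearer sk-...", "Content-Type": "application/json"}
  payload = {
      "model": "tts-1",             # or tts-1-hd
      "voice": "alloy",             # alloy, echo, fable, onyx, nova, or shimmer
      "input": "Welcome to our app!",
      "response_format": "mp3",
      "speed": 1.0,
  }
  r = requests.post("https://api.openai.com/v1/audio/speech", headers=headers, json=payload)
  open("welcome.mp3", "wb").write(r.content)  # the response body is the audio itself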

15. Create transcription from audio

Transcribes audio into the input language.
Image without caption
Param Name
Description
Type
file
The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
file
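The transcription action uploads the audio file as multipart form data. A minimal sketch, assuming a whisper-1 speech-to-text model and a placeholder file name (the model the plugin selects internally may differ):
  import requests
  r = requests.post(
      "https://api.openai.com/v1/audio/transcriptions",
      headers={"Authorization": "Bearer sk-..."},
      files={"file": open("meeting.m4a", "rb")},  # placeholder audio file
      data={"model": "whisper-1"},
  )
  print(r.json()["text"])  # the transcribed text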

16. Translate audio into English

Translates audio into English.
Image without caption
Param Name
Description
Type
file
The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
file

17. Vision - Image understanding

This action will process an image and use the information to answer the question.
Image without caption
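Under the hood, image understanding is a chat completion whose message content mixes text and an image URL. A hedged sketch, assuming a vision-capable model such as gpt-4o and a placeholder image URL:
  import requests
  headers = {"Authorization": "Bearer sk-...", "Content-Type": "application/json"}
  payload = {
      "model": "gpt-4o",
      "messages": [{
          "role": "user",
          "content": [
              {"type": "text", "text": "What is shown in this picture?"},
              {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
          ],
      }],
      "max_tokens": 300,
  }
  r = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
  print(r.json()["choices"][0]["message"]["content"])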

18. Create Assistant (Beta)

Build assistants that can call models and use tools to perform tasks. This call creates an assistant with a model and instructions.
Image without caption
Param Name
Description
Type
instructions
The system instructions that the assistant uses. The maximum length is 32768 characters.
Text
types
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of type code_interpreter, retrieval, or function.
Text
description
The description of the assistant. The maximum length is 512 characters.
Text
name
The name of the assistant. The maximum length is 256 characters.
Text
model
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
Text
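Assistant calls go to the /v1/assistants endpoint and require the OpenAI-Beta header. A minimal sketch with placeholder name, instructions, and model (the "types" parameter above corresponds to the tools array here):
  import requests
  headers = {
      "Authorization": "Bearer sk-...",
      "Content-Type": "application/json",
      "OpenAI-Beta": "assistants=v2",
  }
  payload = {
      "model": "gpt-4o",                        # model
      "name": "Docs helper",                    # name
      "description": "Answers questions about our product docs",  # description
      "instructions": "You are a friendly assistant that answers questions about the product documentation.",
      "tools": [{"type": "code_interpreter"}],  # types
  }
  r = requests.post("https://api.openai.com/v1/assistants", headers=headers, json=payload)
  print(r.json()["id"])  # asst_... ID used by the other assistant calls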

19. Modify assistant (Beta)

Modifies an assistant.
Image without caption
Param Name
Description
Type
assistant_id
The ID of the assistant to modify.
Text
instructions
The system instructions that the assistant uses. The maximum length is 32768 characters.
Text
types
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of type code_interpreter, retrieval, or function.
Text
description
The description of the assistant. The maximum length is 512 characters.
Text
name
The name of the assistant. The maximum length is 256 characters.
Text
model
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
Text

20. Get the list of assistants (Beta)

Image without caption

21. Delete assistant (Beta)

Image without caption
Param Name
Description
Type
assistant_id
The ID of the assistant to delete
Text

22. Retrieve assistant (Beta)

Image without caption
Param Name
Description
Type
assistant_id
The ID of the assistant to retrieve
Text

23. Create thread (Beta)

Create threads that assistants can interact with.
Image without caption

24. Create messages (Beta)

Create messages within threads
Image without caption
Param Name
Description
Type
thread_id
The thread ID that this message belongs to.
Text
message
The content of the message, as an array of text and/or images.
Text

25. Retrieve message (Beta)

Retrieve a message.
Image without caption
Param Name
Description
Type
thread_id
The thread ID that this message belongs to.
Text
message_id
The ID of the message to retrieve.
Text

26. Retrieve messages list (Beta)

Returns a list of messages for a given thread.
Image without caption
Param Name
Description
Type
thread_id
The thread ID that this message belongs to.
Text

27. Create run (Beta)

Represents an execution run on a thread.
Image without caption
Param Name
Description
Type
thread_id
The ID of the thread to run.
Text
assistant_id
The ID of the assistant used for execution of this run.
Text

28. Retrieve run (Beta)

Represents an execution run on a thread.
Image without caption
Param Name
Description
Type
thread_id
The ID of the thread that the run belongs to.
Text
run_id
The ID of the run to retrieve.
Text
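Threads, messages and runs work together: create a thread, add a message, start a run with an assistant, and poll the run until it completes before reading the reply. A hedged end-to-end sketch of that lifecycle (the assistant ID asst_... and the message text are placeholders):
  import time
  import requests
  headers = {"Authorization": "Bearer sk-...", "Content-Type": "application/json", "OpenAI-Beta": "assistants=v2"}
  base = "https://api.openai.com/v1"
  thread = requests.post(f"{base}/threads", headers=headers, json={}).json()          # Create thread
  requests.post(f"{base}/threads/{thread['id']}/messages", headers=headers,
                json={"role": "user", "content": "Summarize our refund policy."})     # Create messages
  run = requests.post(f"{base}/threads/{thread['id']}/runs", headers=headers,
                      json={"assistant_id": "asst_..."}).json()                       # Create run
  while run["status"] in ("queued", "in_progress"):                                   # Retrieve run until it finishes
      time.sleep(1)
      run = requests.get(f"{base}/threads/{thread['id']}/runs/{run['id']}", headers=headers).json()
  msgs = requests.get(f"{base}/threads/{thread['id']}/messages", headers=headers).json()  # Retrieve messages list
  print(msgs["data"][0]["content"][0]["text"]["value"])  # the assistant's latest reply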

29. Moderations

Given some input text, outputs if the model classifies it as potentially harmful across several categories.
Image without caption
Name
Description
Type
Text
The input text to classify
Text
Model
Two content moderations models are available: text-moderation-stable and text-moderation-latest. The default is text-moderation-latest which will be automatically upgraded over time. This ensures you are always using our most accurate model. If you use text-moderation-stable, we will provide advanced notice before updating the model. Accuracy of text-moderation-stable may be slightly lower than for text-moderation-latest.
Text
Return Values: A moderation object.
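This action wraps the moderations endpoint, which returns category flags for the input text. A minimal sketch with a placeholder input:
  import requests
  headers = {"Authorization": "Bearer sk-...", "Content-Type": "application/json"}
  payload = {"input": "I want to hurt someone.", "model": "text-moderation-latest"}
  r = requests.post("https://api.openai.com/v1/moderations", headers=headers, json=payload)
  result = r.json()["results"][0]
  print(result["flagged"], result["categories"])  # overall flag plus per-category booleans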

30. Transcription in Verbose JSON format

Transcribes audio into the input language.
Image without caption
Name
Description
Type
File
The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
File
Return Values: A verbose transcription object.

31. Detecting the language

Detects the language of a text
Image without caption
Name
Description
Type
Model
ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.
Text
Text
The text whose language should be detected
Text
Return Values: Returns a chat completion object, or a streamed sequence of chat completion chunk objects if the request is streamed.

32. Create Vector Store

Vector stores are used to store files for use by the file_search tool.
Image without caption
Name
Description
Type
Name
The name of the vector store.
Text
Return Values: A vector store object.
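A minimal sketch of the underlying request, assuming the same OpenAI-Beta header used by the assistant calls and a placeholder store name:
  import requests
  headers = {"Authorization": "Bearer sk-...", "Content-Type": "application/json", "OpenAI-Beta": "assistants=v2"}
  r = requests.post("https://api.openai.com/v1/vector_stores", headers=headers, json={"name": "Product docs"})
  print(r.json()["id"])  # vs_... ID used by the vector store file calls below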

33. Retrieve File

Returns information about a specific file.
Image without caption
Name
Description
Type
File_id
The ID of the file to use for this request.
Text
Return Values: The File object matching the specified ID.

34. List files

Returns a list of files that belong to the user's organization.
Image without caption
Return Values: A list of File objects.

35. Delete file

Delete a file.
Image without caption
Name
Description
Type
File_id
The ID of the file to use for this request.
Text
Return Values: Deletion status.

36. Retrieve vector store

Retrieves a vector store.
Image without caption
Name
Description
Type
Vector_store_id
The ID of the vector store to retrieve.
Text
Return Values: The vector store object matching the specified ID.

37. Modify vector store

Modifies a vector store.
Image without caption
Name
Description
Type
Vector_store_id
The ID of the vector store to modify.
Text
Name
The name of the vector store.
Text
Return Values: The modified vector store object.

38. Delete vector store

Delete a vector store.
Image without caption
Name
Description
Type
Vector_store_id
The ID of the vector store to delete.
Text
Return Values: Deletion status

39. Create vector store file

Create a vector store file by attaching a File to a vector store.
Image without caption
Name
Description
Type
File_id
A File ID that the vector store should use. Useful for tools like file_search that can access files.
Text
Vector_store_id
The ID of the vector store for which to create a File.
Text
Return Values: A vector store file object.

40. Delete vector store file

Delete a vector store file. This will remove the file from the vector store but the file itself will not be deleted.
Image without caption
Name
Description
Type
Vector_store_id
The ID of the vector store that the file belongs to.
Text
File_id
The ID of the file to delete.
Text
Return Values: Deletion status

41. List vector store files

Returns a list of vector store files.
Image without caption
Name
Description
Type
Vector_store_id
The ID of the vector store that the files belong to.
Text
Return Values: A list of vector store file objects.

42. Retrieve vector store file

Retrieves a vector store file.
Image without caption
Name
Description
Type
Vector_store_id
The ID of the vector store that the file belongs to.
Text
File_id
The ID of the file being retrieved.
Text
Return Values: The vector store file object.

43. Create Assistant (function)

Create an assistant with a model and instructions (function tool).
Image without caption
Name
Description
Type
Instructions
The system instructions that the assistant uses. The maximum length is 32768 characters.
Text
Name
The name of the assistant. The maximum length is 256 characters.
Text
Description
The description of the assistant. The maximum length is 512 characters.
Text
Model
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
Text
Function
The function tool definition. Its fields are:
  • description (string, optional): A description of what the function does, used by the model to choose when and how to call the function.
  • name (string, required): The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
  • parameters (object, optional): The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format. Omitting parameters defines a function with an empty parameter list.
  • strict (boolean or null, optional, defaults to false): Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.
Text
Return Values: An assistant object.
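To make the Function parameter above concrete, here is a hedged sketch of an assistant created with one function tool; the function name, schema, and model are illustrative placeholders, not values the plugin requires:
  import requests
  headers = {"Authorization": "Bearer sk-...", "Content-Type": "application/json", "OpenAI-Beta": "assistants=v2"}
  payload = {
      "model": "gpt-4o",
      "name": "Weather bot",
      "instructions": "Use the get_weather function when the user asks about the weather.",
      "tools": [{
          "type": "function",
          "function": {
              "name": "get_weather",                      # required: a-z, A-Z, 0-9, underscores and dashes only
              "description": "Get the current weather for a city",
              "parameters": {                             # JSON Schema describing the arguments
                  "type": "object",
                  "properties": {"city": {"type": "string"}},
                  "required": ["city"],
              },
          },
      }],
  }
  r = requests.post("https://api.openai.com/v1/assistants", headers=headers, json=payload)
  print(r.json()["id"])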

44. Modify assistant (function)

Modifies an assistant that uses a function tool.
Image without caption
Name
Description
Type
Assistant_id
The ID of the assistant to modify.
Text
Instructions
The system instructions that the assistant uses. The maximum length is 32768 characters.
Text
Model
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
Text
Name
The name of the assistant. The maximum length is 256 characters.
Text
Description
The description of the assistant. The maximum length is 512 characters.
Text
Function
See the Function parameter description in the call above.
Text
Return Values: The modified assistant object.

45. Create Assistant (Code interpreter)

Image without caption
Name
Description
Type
Instructions
The system instructions that the assistant uses. The maximum length is 32768 characters.
Text
Name
The name of the assistant. The maximum length is 256 characters.
Text
Description
The description of the assistant. The maximum length is 512 characters.
Text
Model
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
Text
File_id
A list of file IDs made available to the code_interpreter tool. There can be a maximum of 20 files associated with the tool.
Text
Return Values: An assistant object.

46. Modify assistant (Code interpreter)

Image without caption
Name
Description
Type
Assistant_id
The ID of the assistant to modify.
Text
Instructions
The system instructions that the assistant uses. The maximum length is 32768 characters.
Text
Model
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
Text
Name
The name of the assistant. The maximum length is 256 characters.
Text
Description
The description of the assistant. The maximum length is 512 characters.
Text
File_id
A list of file IDs made available to the code_interpreter tool. There can be a maximum of 20 files associated with the tool.
Text

47. Create Assistant (File Search)

Image without caption
Name
Description
Type
Instructions
The system instructions that the assistant uses. The maximum length is 32768 characters.
Text
Name
The name of the assistant. The maximum length is 256 characters.
Text
Description
The description of the assistant. The maximum length is 512 characters.
Text
Model
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
Text
Vector_store_ids
The vector store attached to this assistant. There can be a maximum of 1 vector store attached to the assistant.
Text

48. Modify assistant (File Search)

Image without caption
Name
Description
Type
Assistant_id
The ID of the assistant to modify.
Text
Instructions
The system instructions that the assistant uses. The maximum length is 32768 characters.
Text
Model
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
Text
Name
The name of the assistant. The maximum length is 256 characters.
Text
Description
The description of the assistant. The maximum length is 512 characters.
Text
Vector_store_ids
The vector store attached to this assistant. There can be a maximum of 1 vector store attached to the assistant.
Text

Changelogs

Update: 14.04.23 - Version 1.7.0
  • Added new action GPT-3.5-turbo
Update: 10.05.23 - Version 1.8.0
  • The 'image generator' action has been initialized
Update: 12.09.23 - Version 1.11.0
  • Minor updates
Update: 10.11.23 - Version 1.15.0
  • Updated “Model”
Update: 16.11.23 - Version 1.16.0
  • Added new actions "Assistant API", "Fine-tuning API", "Vision API", "Text to speech API", "Speech to text API"
Update: 27.11.23 - Version 1.19.0
  • Added Retrieve Run API Call. Fixed Create completion and Grammar correction API Calls.
Update: 28.12.23 - Version 1.20.0
  • Update endpoint
Update: 16.01.24 - Version 1.22.0
  • Minor fix
Update: 15.02.24 - Version 1.24.0
  • Minor fix
Update: 03.04.24 - Version 1.25.0
  • updated description
Update: 14.05.24 - Version 1.26.0.
  • Added new actions "Retrieve message (Beta)", "Retrieve messages list (Beta)", "Retrieve run (Beta)"
Update: 14.05.24 - Version 1.27.0
  • Plugin upgraded for the "GPT-4o" model
Update 06.06.24 - Version 1.28.0
  • Minor update
Update 10.06.24 - Version 1.29.0
  • Minor update
Update 19.06.24 - Version 1.31.0
  • Minor update
Update 24.06.24 - Version 1.32.0
  • Fixed keys
Update 23.07.24 - Version 1.34.0
  • Minor update
Update 30.08.24 - Version 1.35.0
  • Assistant API calls upgraded to V2
Update 24.09.24 - Version 1.36.0
  • Added Vector Store actions and Moderations
Update 27.09.24 - Version 1.37.0
  • Files and vector stores added to Assistant calls