Introduction
Enhance your Bubble app with OpenAI's powerful language and image models.
The full list of supported features includes:
- Composing short or long-form content
- Chatting
- Grammar correction
- Translations from one language to another (25+ languages supported)
- Image Generation and Editing
- Writing code and tracking down errors in code (Go, JavaScript, Perl, PHP, Ruby, Swift, TypeScript, SQL)
The plugin comes with a chatbot powered by GPT-4, which offers high accuracy, improved contextual understanding, multilingual support, scalability, personalization, and increased efficiency. But that's not all: the plugin can even process images to provide relevant and accurate information!
Use your own API keys obtained from OpenAI: https://openai.com/api/
New features of the GPT-4 model:
- Can generate text of 25,000 words at a time
- Handles image input
- Passes standardized tests and exams with fairly high scores
- Ability to write code in different programming languages
- Safer and more responsive for users
- More advanced reasoning: stronger, more creative, and smarter
To see many more examples of how you can use this plugin, please visit: https://beta.openai.com/examples
As shown at the link above, some other possible use cases include:
- Classify items into categories
- Convert movie titles into emoji
- Turn a product description into ad copy
- A message-style chatbot that can answer questions about using JavaScript
- Extract contact information from a block of text.
- Convert the first-person POV to the third-person
- Turn a few words into a restaurant review and much more!
How to Setup
- Create an account: https://platform.openai.com/
- Create an API key: https://platform.openai.com/account/api-keys
- Copy the key and paste it into the plugin settings, prefixed with the word “Bearer” (for example, Bearer sk-...)
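To make the expected key format concrete, here is a small sketch of how that settings value could be built or validated. The function name is illustrative, not part of the plugin:

```python
def plugin_auth_header(api_key: str) -> str:
    """Return the value the plugin settings expect: the literal
    word "Bearer", a space, then your OpenAI API key."""
    key = api_key.strip()
    if not key.startswith("Bearer "):
        key = "Bearer " + key
    return key
```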
Plugin Data/Action Calls
1. Keywords
Extract keywords from a block of text. At a lower temperature, it picks keywords from the text. At a higher temperature, it will generate related keywords which can be helpful for creating search indexes.
This action is based on the GPT-3.5 model.

| Param Name | Description | Type |
| --- | --- | --- |
| Hash | A field that receives a dynamic value (for example, the current time) to ensure that calls are always re-run rather than cached | Text |
| Max_tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). | Integer (defaults to 16) |
| Text | The text you provide for processing | Text |
| Temperature | The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. | Number (defaults to 1) |
| Presence_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | Number (defaults to 0) |
| Top_p | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | Number (defaults to 1) |
| Frequency_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | Number (defaults to 0) |
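The Top_p parameter is easiest to understand with a concrete sketch of nucleus sampling: keep the smallest set of highest-probability tokens whose cumulative mass reaches top_p. This is a simplified illustration of what the API does internally, not the plugin's own code:

```python
def top_p_filter(probs, top_p):
    """Return the indices of the smallest set of tokens whose
    cumulative probability mass reaches top_p (nucleus sampling)."""
    # Rank token indices by probability, highest first.
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, mass = [], 0.0
    for idx, p in ranked:
        kept.append(idx)
        mass += p
        if mass >= top_p:  # stop once the nucleus covers top_p mass
            break
    return kept
```

With top_p = 1 every token stays in play; with a low value like 0.1 only the top 10% of probability mass survives, which is why low values make output more focused and deterministic.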
2. Summarize text for a 2nd grader
Translates difficult text into simpler concepts.
This action is based on the GPT-3.5 model.

| Param Name | Description | Type |
| --- | --- | --- |
| Hash | A field that receives a dynamic value (for example, the current time) to ensure that calls are always re-run rather than cached | Text |
| Max_tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). | Integer (defaults to 16) |
| Text | The text you provide for processing | Text |
| Temperature | The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. | Number (defaults to 1) |
| Presence_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | Number (defaults to 0) |
| Top_p | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | Number (defaults to 1) |
| Frequency_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | Number (defaults to 0) |
| userID | A unique identifier representing your end user, which can help OpenAI monitor and detect abuse | Text (optional) |
3. Translator
OpenAI can help you translate text into 50+ languages, including English, Spanish, French, German, Chinese, Japanese, Korean, Russian, Portuguese, Italian, Dutch, Swedish, and many more. However, please note that translations may not be perfect, as the model is not a professional human translator.
This action is based on the GPT-3.5 model.

| Param Name | Description | Type |
| --- | --- | --- |
| Hash | A field that receives a dynamic value (for example, the current time) to ensure that calls are always re-run rather than cached | Text |
| Max_tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). | Integer (defaults to 16) |
| Text | The text you provide for processing | Text |
| Temperature | The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. | Number (defaults to 1) |
| Presence_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | Number (defaults to 0) |
| Top_p | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | Number (defaults to 1) |
| Frequency_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | Number (defaults to 0) |
| userID | A unique identifier representing your end user, which can help OpenAI monitor and detect abuse | Text (optional) |
| Desired_language | OpenAI detects the source language automatically; just enter the language you want the text translated into. | Text |
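Under the hood, the Translator action presumably wraps your text in a translation instruction before sending it to the model. The plugin's actual internal prompt is not documented, so the wording below is purely illustrative:

```python
def translation_prompt(text: str, desired_language: str) -> str:
    """Hypothetical prompt the Translator action could send;
    the plugin's real internal wording may differ."""
    return f"Translate the following text into {desired_language}:\n\n{text}"
```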
4. Creates a completion
“Creates a completion” refers to generating a response or output using GPT-3.5 Turbo, a version of OpenAI's GPT-3 family of language models. When you provide an input or prompt, the model processes the information and generates a completion: a coherent and contextually relevant response to the given input. The completion can be a piece of text, an answer to a question, or any other form of written output the model generates based on the input provided.
This action is based on the GPT-3.5 model.

| Param Name | Description | Type |
| --- | --- | --- |
| Hash | A field that receives a dynamic value (for example, the current time) to ensure that calls are always re-run rather than cached | Text |
| Content | The text you provide for processing | Text |
| Max_tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). | Integer (defaults to 16) |
| N | How many completions to generate for each prompt. | Integer (optional, defaults to 1) |
| Top_p | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | Number (optional, defaults to 1) |
| stop | Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. | Text (optional, defaults to null) |
| Presence_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | Number (optional, defaults to 0) |
| Frequency_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | Number (defaults to 0) |
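Putting the defaults above together, the request body sent to OpenAI's completions endpoint might look like the following sketch. The parameter names and defaults come from the table; the model id is an assumption for illustration:

```python
def completion_payload(prompt, max_tokens=16, temperature=1, top_p=1, n=1,
                       stop=None, presence_penalty=0, frequency_penalty=0):
    """Build a completions request body using the table's defaults."""
    body = {
        "model": "gpt-3.5-turbo-instruct",  # illustrative model id
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
        "n": n,
        "presence_penalty": presence_penalty,
        "frequency_penalty": frequency_penalty,
    }
    if stop is not None:
        body["stop"] = stop  # optional: up to 4 stop sequences
    return body
```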
5. Grammar correction
Corrects a sentence or text that contains grammatical errors.
This action is based on the GPT-3.5 model.

| Param Name | Description | Type |
| --- | --- | --- |
| Hash | A field that receives a dynamic value (for example, the current time) to ensure that calls are always re-run rather than cached | Text |
| Max_tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). | Integer (defaults to 16) |
| Temperature | The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. | Number (defaults to 1) |
| Top_p | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | Number (defaults to 1) |
| Presence_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | Number (defaults to 0) |
| Frequency_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | Number (defaults to 0) |
| Prompt | The text you provide for processing | Text |
| user | A unique identifier representing your end user, which can help OpenAI monitor and detect abuse | Text (optional) |
| Language | The language in which the text is written. | Text |
6. Summarization
Summarizes a block of text.

| Param Name | Description | Type |
| --- | --- | --- |
| Hash | A field that receives a dynamic value (for example, the current time) to ensure that calls are always re-run rather than cached | Text |
| Prompt | The text you provide for processing | Text |
| Max_tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). | Integer (defaults to 16) |
| Temperature | The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. | Number (defaults to 1) |
| Top_p | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | Number (defaults to 1) |
| Presence_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | Number (defaults to 0) |
| Frequency_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | Number (defaults to 0) |
| user | A unique identifier representing your end user, which can help OpenAI monitor and detect abuse | Text (optional) |
7. Chat
Given a chat conversation, the model will return a chat completion response.
This action is based on the GPT-4 model.

| Param Name | Description | Type |
| --- | --- | --- |
| Hash | A field that receives a dynamic value (for example, the current time) to ensure that calls are always re-run rather than cached | Text |
| Role | The role of each message: either "system", "user", or "assistant" | Text |
| Content | The message you want to send to the OpenAI assistant | Text |
| Temperature | The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. | Number (defaults to 1) |
| N | How many chat completion choices to generate for each input message. | Integer (optional, defaults to 1) |
| Stream | If set, partial message deltas are sent, as in ChatGPT. Tokens are sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. See the OpenAI Cookbook for example code. | Boolean (defaults to false) |
| Max_tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). | Integer (defaults to 16) |
| Presence_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | Number (defaults to 0) |
| Frequency_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | Number (defaults to 0) |
| user | A unique identifier representing your end user, which can help OpenAI monitor and detect abuse | Text (optional) |
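The Stream parameter's data: [DONE] terminator is easiest to understand with a concrete parser. This sketch assembles the streamed text from the server-sent events described above; the event shape follows OpenAI's documented chat-completion chunks:

```python
import json

def collect_stream_content(sse_lines):
    """Concatenate the text deltas from a chat-completion SSE stream.

    Each event line looks like `data: {...}` and the stream ends
    with a `data: [DONE]` sentinel."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(payload)["choices"][0]["delta"]
        parts.append(delta.get("content", ""))  # first chunk may carry only the role
    return "".join(parts)
```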
8. Code generator
This call helps you write code.
This action is based on the GPT-3.5 model.

| Param Name | Description | Type |
| --- | --- | --- |
| Hash | A field that receives a dynamic value (for example, the current time) to ensure that calls are always re-run rather than cached | Text |
| Prompt | Describe in natural language what code you want generated | Text |
| Max_tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). | Integer (defaults to 16) |
| Programming language | The programming language in which you want the code generated | Text |
9. Image generations
The image generations endpoint allows you to create an original image given a text prompt. Generated images can have a size of 256x256, 512x512, or 1024x1024 pixels. Smaller sizes are faster to generate. You can request 1-10 images at a time using the n parameter.
| Param Name | Description | Type |
| --- | --- | --- |
| Prompt | A text description of the desired image(s). The maximum length is 1000 characters. | Text |
| N | The number of images to generate. Must be between 1 and 10. | Integer (optional, defaults to 1) |
| Size | The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. | Text (optional, defaults to 1024x1024) |
| Response_format | The format in which the generated images are returned. Must be one of url or b64_json. | Text (optional, defaults to url) |
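The constraints in this table (prompt length, n range, allowed sizes and response formats) can be checked before calling the API. A sketch of such validation follows; the function is illustrative, not part of the plugin:

```python
ALLOWED_SIZES = {"256x256", "512x512", "1024x1024"}

def image_payload(prompt, n=1, size="1024x1024", response_format="url"):
    """Validate the documented constraints and build the request body."""
    if len(prompt) > 1000:
        raise ValueError("prompt must be at most 1000 characters")
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    if size not in ALLOWED_SIZES:
        raise ValueError(f"size must be one of {sorted(ALLOWED_SIZES)}")
    if response_format not in {"url", "b64_json"}:
        raise ValueError("response_format must be 'url' or 'b64_json'")
    return {"prompt": prompt, "n": n, "size": size,
            "response_format": response_format}
```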
10. Image Edits
The image edits endpoint allows you to edit and extend an image by uploading a mask. The transparent areas of the mask indicate where the image should be edited, and the prompt should describe the full new image, not just the erased area.
The uploaded image and mask must both be square PNG images less than 4MB in size and must have the same dimensions as each other. The non-transparent areas of the mask are not used when generating the output, so they do not necessarily need to match the original image.
This action is based on the DALL·E model.

| Param Name | Description | Type |
| --- | --- | --- |
| Image | The image to edit. Must be a valid PNG file, less than 4MB, and square. If no mask is provided, the image must have transparency, which will be used as the mask. | File |
| Mask | An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where the image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as the image. | File (optional) |
| Prompt | A text description of the desired image(s). The maximum length is 1000 characters. | Text |
| N | The number of images to generate. Must be between 1 and 10. | Integer (optional, defaults to 1) |
| Size | The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. | Text (optional, defaults to 1024x1024) |
| Response_format | The format in which the generated images are returned. Must be one of url or b64_json. | Text (optional, defaults to url) |
| User | A unique identifier representing your end user, which can help OpenAI monitor and detect abuse. | Text (optional) |
11. Image Variation
The image variations endpoint allows you to generate a variation of a given image.
This action is based on the DALL·E model.

| Param Name | Description | Type |
| --- | --- | --- |
| Image | The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square. | File |
| N | The number of images to generate. Must be between 1 and 10. | Integer (optional, defaults to 1) |
| Size | The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3. | Text (optional, defaults to 1024x1024) |
| Response_format | The format in which the generated images are returned. Must be one of url or b64_json. | Text (optional, defaults to url) |
| style | The style of the generated images. Must be one of vivid or natural. vivid causes the model to lean towards generating hyper-real and dramatic images; natural causes the model to produce more natural, less hyper-real looking images. This parameter is only supported for dall-e-3. | Text (optional) |
12. Fine-tuning
Manage fine-tuning jobs to tailor a model to your specific training data.
This action is based on the OpenAI Fine-tuning API.

| Param Name | Description | Type |
| --- | --- | --- |
| training_file | The ID of an uploaded file that contains training data (see the Upload file action). Your dataset must be formatted as a JSONL file, and you must upload the file with the purpose fine-tune. See the fine-tuning guide for more details. | Text |
| model | The name of the model to fine-tune. You can select one of the supported models. | Text |
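The training file must be JSONL: one JSON object per line. For chat-style fine-tuning, OpenAI's guide uses a messages list per example; the content below is a minimal, made-up illustration of that shape:

```python
import json

# Each line of the training file is one JSON object with a
# "messages" list, mirroring the chat format.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Bubble?"},
        {"role": "assistant", "content": "A no-code app platform."},
    ]},
    {"messages": [
        {"role": "user", "content": "Say hello."},
        {"role": "assistant", "content": "Hello!"},
    ]},
]

# Serialize to JSONL: one compact JSON document per line.
jsonl = "\n".join(json.dumps(e) for e in examples)
```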
13. Upload file
Upload a file that can be used across various endpoints. The total size of all files uploaded by one organization can be up to 100 GB, and individual files can be at most 512 MB. See the Assistants Tools guide to learn more about the types of files supported. The Fine-tuning API only supports .jsonl files.

| Param Name | Description | Type |
| --- | --- | --- |
| file | The File object (not file name) to be uploaded. | File |
| purpose | The intended purpose of the uploaded file. Use "fine-tune" for Fine-tuning and "assistants" for Assistants and Messages. This allows OpenAI to validate that the format of the uploaded file is correct for fine-tuning. | Text |
14. Text to speech
Generates audio from the input text.

| Param Name | Description | Type |
| --- | --- | --- |
| model | The ID of the text-to-speech model to use. | Text |
| input | The text to generate audio for. The maximum length is 4096 characters. | Text |
| voice | The voice to use when generating the audio. Supported voices are alloy, echo, fable, onyx, nova, and shimmer. Previews of the voices are available in the Text to speech guide. | Text |
| response_format | The format of the generated audio. Supported formats are mp3, opus, aac, and flac. | Text (optional, defaults to mp3) |
| speed | The speed of the generated audio. Select a value from 0.25 to 4.0; 1.0 is the default. | Number |
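A sketch of the request body this action would send, with the documented input-length and speed limits enforced. The model id tts-1 is an assumption for illustration:

```python
def tts_payload(text, voice="alloy", response_format="mp3", speed=1.0):
    """Build a text-to-speech request body, enforcing the documented
    4096-character input limit and 0.25-4.0 speed range."""
    if len(text) > 4096:
        raise ValueError("input must be at most 4096 characters")
    speed = min(max(speed, 0.25), 4.0)  # clamp to the allowed range
    return {"model": "tts-1",  # assumed model id
            "input": text, "voice": voice,
            "response_format": response_format, "speed": speed}
```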
15. Create transcription from audio
Transcribes audio into the input language.

| Param Name | Description | Type |
| --- | --- | --- |
| file | The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm. | File |
16. Translate audio into English
Translates audio into English.

| Param Name | Description | Type |
| --- | --- | --- |
| file | The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm. | File |
17. Vision - image understanding
This action processes an image and uses the information to answer your question.
18. Create assistant (Beta)
Build assistants that can call models and use tools to perform tasks. An assistant can call the model and use tools; create one with a model and instructions.

| Param Name | Description | Type |
| --- | --- | --- |
| instructions | The system instructions that the assistant uses. The maximum length is 32768 characters. | Text |
| types | A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, retrieval, or function. | Text |
| description | The description of the assistant. The maximum length is 512 characters. | Text |
| name | The name of the assistant. The maximum length is 256 characters. | Text |
| model | ID of the model to use. You can use the List models API to see all of your available models, or see the Model overview for descriptions of them. | Text |
19. Modify assistant (Beta)
Modifies an assistant.

| Param Name | Description | Type |
| --- | --- | --- |
| assistant_id | The ID of the assistant to modify. | Text |
| instructions | The system instructions that the assistant uses. The maximum length is 32768 characters. | Text |
| types | A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, retrieval, or function. | Text |
| description | The description of the assistant. The maximum length is 512 characters. | Text |
| name | The name of the assistant. The maximum length is 256 characters. | Text |
| model | ID of the model to use. You can use the List models API to see all of your available models, or see the Model overview for descriptions of them. | Text |
20. Get the list of assistants (Beta)
21. Delete assistant (Beta)

| Param Name | Description | Type |
| --- | --- | --- |
| assistant_id | The ID of the assistant to delete. | Text |
22. Retrieve assistant (Beta)

| Param Name | Description | Type |
| --- | --- | --- |
| assistant_id | The ID of the assistant to retrieve. | Text |
23. Create thread (Beta)
Creates threads that assistants can interact with.
24. Create messages (Beta)
Creates messages within threads.

| Param Name | Description | Type |
| --- | --- | --- |
| thread_id | The ID of the thread the message belongs to. | Text |
| message | The content of the message, as an array of text and/or images. | Text |
25. Create run (Beta)
Represents an execution run on a thread.

| Param Name | Description | Type |
| --- | --- | --- |
| thread_id | The ID of the thread to run. | Text |
| assistant_id | The ID of the assistant to use to execute this run. | Text |
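The Beta calls above chain together: create a thread, add a user message to it, then start a run with an assistant. The sketch below expresses that sequence as a plan of (method, path, body) tuples; the endpoint paths follow OpenAI's Assistants API, and the thread id is a placeholder that would come from the first response:

```python
def run_plan(assistant_id: str, user_message: str):
    """Illustrative sequence of REST calls for the thread/message/run flow."""
    thread_id = "thread_123"  # placeholder: returned by the first call
    return [
        # 1. Create a thread for the conversation.
        ("POST", "/v1/threads", {}),
        # 2. Add the user's message to the thread.
        ("POST", f"/v1/threads/{thread_id}/messages",
         {"role": "user", "content": user_message}),
        # 3. Start a run so the assistant processes the thread.
        ("POST", f"/v1/threads/{thread_id}/runs",
         {"assistant_id": assistant_id}),
    ]
```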
Demo to preview the settings
Changelogs
Update: 14.04.23 - Version 1.7.0
- Added new action GPT-3.5-turbo
Update: 10.05.23 - Version 1.8.0
- The 'image generator' action has been initialized
Update: 12.09.23 - Version 1.11.0
- Minor updates
Update: 10.11.23 - Version 1.15.0
- Updated “Model”
Update: 16.11.23 - Version 1.16.0
- Added new actions: "Assistant API", "Fine-tuning API", "Vision API", "Text to speech API", "Speech to text API"
Update: 27.11.23 - Version 1.19.0
- Added Retrieve Run API call. Fixed the Create completion and Grammar correction API calls.