Link to plugin page: https://zeroqode.com/plugin/1689633204183x212637344857662180
Demo to preview the plugin:
Live Demo: https://ezcodedemo2.bubbleapps.io/openai
Introduction
With OpenAI & ChatGPT, you gain access to the following functionality:
- Get a list of available OpenAI models.
- Employ the moderation endpoint as a tool to verify content compliance with OpenAI's content policy. More details can be found at: Moderation API Reference
- Edit text by providing a prompt and an instruction, and the model will return an edited version of the prompt. For more information, visit: Edit Text API Reference
- Generate images using prompts and/or input images, with the model producing new images. Check out: Generate Images API Reference
- Employ the Generate Completion feature for a multitude of tasks, such as: answering questions based on existing knowledge; translating complex text into simpler concepts; converting text into programmatic commands; creating code to call the Stripe API; constructing tables from long-form text; determining the time complexity of a function; detecting sentiment from status updates; extracting keywords or contact information from text; writing ad copy for product descriptions; summarizing text with a "tl;dr:"; engaging in QA-style chatbot interactions or open-ended conversations with an AI assistant; generating color descriptions from text; crafting short horror stories from topics; creating interview questions; generating restaurant reviews from a few words; simulating text message conversations; creating product names based on examples; classifying items into categories via examples; translating English text into French, Spanish, and Japanese; correcting sentences into standard English; and composing email responses.
Prerequisites
To use this plugin, you need to register on the OpenAI dashboard and insert your API key into the plugin settings.
How to Setup
- Create an account: https://platform.openai.com/
- Create an API key: https://platform.openai.com/account/api-keys
- Copy the key and paste it into the plugin settings, prefixed with the word "Bearer" (i.e. Bearer YOUR_API_KEY)
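The "Bearer" prefix matters because the plugin forwards the key as an HTTP Authorization header. A minimal sketch of what that header looks like, assuming the standard OpenAI header format (the key value below is a placeholder, not a real key):

```python
api_key = "sk-...your-key..."  # placeholder; paste your real key in the plugin settings

# The OpenAI API expects the Authorization header in "Bearer <key>" form,
# with a single space between the word Bearer and the key itself.
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```

If calls return a 401 error, the most common cause is a missing or doubled "Bearer " prefix in the plugin settings.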
Plugin Data/Action Calls
1. Generate Completion - generates a response or output using GPT-3.5 Turbo, a version of OpenAI's powerful language model GPT-3. When you provide an input or prompt, the model processes it and generates a completion: a coherent, contextually relevant response to the given input. This completion can be a piece of text, an answer to a question, or any other form of written output the model generates based on the input provided.
Param Name | Description | Type |
Temperature | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. Defaults to 1. | Number |
Max_tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (the newest models support 4096). Defaults to 16. | Integer |
Top_p | An alternative to sampling with temperature, called nucleus sampling, where the model considers the tokens comprising the top_p probability mass. So 0.1 means only the tokens in the top 10% probability mass are considered. Defaults to 1. | Number (optional) |
Frequency_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Defaults to 0. | Number |
Presence_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Defaults to 0. | Number (optional) |
Prompt | The text you submit for processing | Text |
Model | Model name, e.g. text-davinci-003 | Text (optional) |
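To see how these parameters fit together, here is a sketch of the JSON body a completion request would carry, assuming the plugin forwards them to OpenAI's completions endpoint with these field names (the prompt text is illustrative):

```python
# Illustrative request body for a Generate Completion call.
# Field names follow the OpenAI Completions API; values are the
# documented defaults except where noted.
completion_request = {
    "model": "text-davinci-003",  # Model: which model to use
    "prompt": "Write a tagline for an ice cream shop.",  # Prompt: text to process
    "temperature": 1,             # 0-2; higher = more random output
    "max_tokens": 16,             # defaults to 16; raise for longer answers
    "top_p": 1,                   # nucleus sampling; use instead of temperature
    "frequency_penalty": 0,       # -2.0 to 2.0; discourages verbatim repetition
    "presence_penalty": 0,        # -2.0 to 2.0; encourages new topics
}
```

Note that the default max_tokens of 16 is quite small; for most real tasks you will want to raise it, keeping prompt tokens plus max_tokens within the model's context length.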
2. Generate Image - the image generations endpoint allows you to create an original image from a text prompt. Generated images can have a size of 256x256, 512x512, or 1024x1024 pixels; smaller sizes are faster to generate. You can request 1-10 images at a time using the Number of Images parameter.
Param Name | Description | Type |
Describe image | A text description of the desired image(s). The maximum length is 1000 characters. | Text |
Number of Images | The number of images to generate. Must be between 1 and 10. Defaults to 1. | Integer (optional) |
Image Resolution | The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. Defaults to 1024x1024. | Text (optional) |
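A sketch of the request body for an image generation call, assuming the plugin maps these parameters onto OpenAI's image generation endpoint fields (the prompt is illustrative):

```python
# Illustrative request body for a Generate Image call.
image_request = {
    "prompt": "A watercolor painting of a lighthouse at dawn",  # max 1000 chars
    "n": 1,               # Number of Images: must be between 1 and 10
    "size": "1024x1024",  # Image Resolution: 256x256, 512x512, or 1024x1024
}

# The constraints from the table above, expressed as checks:
valid = 1 <= image_request["n"] <= 10 and image_request["size"] in {
    "256x256", "512x512", "1024x1024",
}
```

Requesting a smaller size or fewer images reduces generation time, which matters in a Bubble workflow where the user is waiting on the response.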
3. Get Models - lists the currently available models and provides basic information about each one, such as the owner and availability.
Returns a list of model objects.
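The returned list has the shape sketched below; the model entries here are made-up samples to illustrate the structure, not a real response:

```python
# Illustrative shape of the Get Models response (sample data, not real output).
models_response = {
    "object": "list",
    "data": [
        {"id": "gpt-3.5-turbo", "object": "model", "owned_by": "openai"},
        {"id": "text-davinci-003", "object": "model", "owned_by": "openai-internal"},
    ],
}

# In Bubble you would typically bind the "data" list to a repeating group;
# each entry's "id" is the value to pass as the Model parameter elsewhere.
model_ids = [m["id"] for m in models_response["data"]]
```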
4. Edit Text - corrects a sentence or text if it needs grammatical correction. This action is based on the GPT-3.5 model.
Param Name | Description | Type |
Hash | A field that receives a dynamic value (for example, the current time) to ensure that the calls are always re-run rather than served from cache | Text |
Max_tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (the newest models support 4096). Defaults to 16. | Integer |
Temperature | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. Defaults to 1. | Number |
Top_p | An alternative to sampling with temperature, called nucleus sampling, where the model considers the tokens comprising the top_p probability mass. So 0.1 means only the tokens in the top 10% probability mass are considered. Defaults to 1. | Number |
Presence_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Defaults to 0. | Number |
Frequency_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Defaults to 0. | Number |
Prompt | The text you submit for processing | Text |
user | A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse | Text (optional) |
Language | The language in which the text is written | Text |
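A sketch of the values an Edit Text call would carry, assuming these parameter names; note how the Hash field is fed a changing value (here the current time) so Bubble re-runs the call instead of returning a cached result:

```python
import time

# Illustrative parameter set for an Edit Text call.
edit_request = {
    "prompt": "She no went to the market.",  # text to correct
    "language": "English",                    # language the text is written in
    "max_tokens": 256,                        # raised from the default of 16
    "temperature": 0,    # low temperature keeps corrections deterministic
    "hash": str(time.time()),  # dynamic cache-buster so the call always re-runs
}
```

A temperature of 0 is a sensible choice here: grammar correction has one best answer, so randomness only hurts.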
5. Moderation Text - classifies whether text violates OpenAI's Content Policy
Param Name | Description | Type |
Input | The input text to classify | Text |
Returns a moderation object
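The moderation object has roughly the shape below, modeled on the OpenAI moderation response; the category names and score values here are illustrative samples, not a real result:

```python
# Illustrative moderation object (sample data, not a real API response).
moderation_response = {
    "results": [
        {
            "flagged": False,  # True if the input violates the content policy
            "categories": {"hate": False, "violence": False},
            "category_scores": {"hate": 0.0001, "violence": 0.0002},
        }
    ],
}

# The top-level "flagged" boolean is usually all a workflow needs:
violates_policy = moderation_response["results"][0]["flagged"]
```

A typical pattern is to run Moderation Text on user input first and only proceed to Generate Completion when the result is not flagged.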
6. Create Assistants (Beta)
Build assistants that can call models and use tools to perform tasks. This action creates an assistant with a model and instructions.
Param Name | Description | Type |
instructions | The system instructions that the assistant uses. The maximum length is 32768 characters. | Text |
types | A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of type code_interpreter, retrieval, or function. | Text |
description | The description of the assistant. The maximum length is 512 characters. | Text |
name | The name of the assistant. The maximum length is 256 characters. | Text |
model | ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them. | Text |
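A sketch of a Create Assistant request body, assuming the plugin maps these parameters onto the OpenAI Assistants API fields; the model ID and all text values are placeholders:

```python
# Illustrative request body for a Create Assistant call (Beta).
assistant_request = {
    "model": "gpt-4-turbo-preview",  # placeholder model ID; list via Get Models
    "name": "Math Tutor",            # maximum 256 characters
    "description": "Helps students work through math problems.",  # max 512 chars
    "instructions": "You are a personal math tutor. Answer step by step.",  # max 32768 chars
    "tools": [{"type": "code_interpreter"}],  # up to 128 tools per assistant
}
```

The response includes the new assistant's ID (e.g. a value starting with "asst_"), which the later Modify, Delete, Retrieve, and Create run actions all take as their assistant_id parameter.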
7. Modify assistant (Beta)
Modifies an assistant.
Param Name | Description | Type |
assistant_id | The ID of the assistant to modify. | Text |
instructions | The system instructions that the assistant uses. The maximum length is 32768 characters. | Text |
types | A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of type code_interpreter, retrieval, or function. | Text |
description | The description of the assistant. The maximum length is 512 characters. | Text |
name | The name of the assistant. The maximum length is 256 characters. | Text |
model | ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them. | Text |
8. Get the list of assistants (Beta).
9. Delete assistants (Beta).
Param Name | Description | Type |
assistant_id | The ID of the assistant to delete | Text |
10. Retrieve assistant (Beta)
Param Name | Description | Type |
assistant_id | The ID of the assistant to retrieve | Text |
11. Create thread (Beta)
Create threads that assistants can interact with.
12. Create messages (Beta)
Create messages within threads
Param Name | Description | Type |
thread_id | The thread ID that this message belongs to. | Text |
message | The content of the message, as an array of text and/or images | Text |
13. Create run (Beta)
Represents an execution run on a thread.
Param Name | Description | Type |
thread_id | The thread ID that this message belongs to. | Text |
assistant_id | The ID of the assistant used for execution of this run. | Text |
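The three beta actions above chain together: create a thread, add a message to it, then start a run with an assistant. A sketch of that flow, with placeholder IDs (in the plugin, each ID comes from the previous action's response):

```python
# Illustrative Assistants (Beta) flow; all IDs are placeholders.
thread = {"id": "thread_abc123"}  # result of the Create thread action

# Create messages: attach a user message to the thread.
message_request = {
    "thread_id": thread["id"],    # the thread this message belongs to
    "message": "What is 2 + 2?",  # content of the message
}

# Create run: have an assistant process the thread.
run_request = {
    "thread_id": thread["id"],      # thread to execute on
    "assistant_id": "asst_abc123",  # assistant performing this run (placeholder)
}
```

The run executes asynchronously on OpenAI's side, so a workflow typically stores the thread and run IDs and fetches the assistant's reply afterwards.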
Workflow example
- Set an event to trigger the desired action, for example Generate Completion
- Set a custom state and store the action's result in it
- Display the result stored in the custom state