Prompt Generator

This ChatGPT prompt generation tool is designed to assist with prompt engineering: it lets users effortlessly configure the model's parameters and assemble them into a ready-to-use prompt for ChatGPT.

How to start:

1. Define the type of prompt you need. There are two types:
    A. Direct Prompt: you write only an input with your query.
    B. Contextual Prompt: it carries the context information the model needs first, followed by an input with instructions.
2. Select the task you want to use as a model.
3. Add the General Context if necessary.
4. Add the input of the query you are going to generate.
5. Define the variables and parameters/hyperparameters you want to use.
6. Generate the prompt (a rough sketch of the result appears below) and copy it into your ChatGPT account.
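
As a rough illustration, here is a hypothetical assembly in Python of the pieces configured in the steps above; the exact template the tool emits may differ, and the role, context, and task values are made up:

    # Hypothetical sketch of assembling a contextual prompt; the tool's
    # actual template may differ.
    role = "Technology Advisor"                               # Role
    context = "Our shop sells handmade ceramic mugs online."  # General Context
    task = "Write a product description for our new espresso mug."  # Input
    output_format = "two short paragraphs"                    # Format

    prompt = (
        f"Act as a {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Deliver the answer as {output_format}."
    )
    print(prompt)  # copy the printed prompt into ChatGPT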

IMPORTANT: We do not view or store any of your input information or your selections. Experiment and use the tool freely!

Task
We have defined some tasks that you can use as a model; replace the sample data with your own data and experiment!
General Context
Context is very IMPORTANT for an accurate answer. Enter here the relevant information that the model must take into account to solve the query.
Model
Refers to the different iterations or releases of the ChatGPT model. Each version is a specific variation or improvement of the model, typically distinguished by factors like training data, architecture, and performance capabilities.
Language
ChatGPT can reportedly translate into more than 95 languages. Just enter the input and select the language; the prompt will indicate which language ChatGPT should deliver the response in.
Role
LLMs can be positioned in a certain role, such as Technology Advisor or Copywriter, to deliver more accurate answers. You will find many predefined roles to help you with your task; if the one you need is missing, you can edit it in your prompt. First understand the structure of how roles work, then create your own.
Writing Style
You can tell ChatGPT the style of writing you want in its response, and it can be as varied as academic or thriller. Play with the styles and have fun.
Tone
Defines the tone of the response text that ChatGPT will give you. It can be as varied as funny or sarcastic; play around with them, mixing different parameters.
Sentiment
You can define the sentiment with which you want ChatGPT to write. This hyperparameter can also be used in the input, for example to analyze reviews, posts, or emails; look for "Review Classification" in the Task option.
Format
It is important to define the output format in which we want ChatGPT to deliver the content; this can save us work and processing time. The tool offers many formats, so try them. By default, the model will deliver a format according to its own interpretation of the query.
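
For instance, a hypothetical prompt that pins the format down explicitly; the wording is up to you, the point is to name the structure:

    # Hypothetical example of naming the output format directly in a prompt.
    prompt = (
        "List the three most populous countries. "
        "Format: a table with the columns Country and Population."
    )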
Audience
One of the guidelines is to be specific: by defining the audience, the model adjusts the register, difficulty, and language used in the response. Enter the target audience for the model's response; left blank, it will use medium complexity and language.
Temperature
Is like a knob that controls how random or focused the generated text is. High temperature (e.g., 0.8) makes the output more creative and unpredictable. Low temperature (e.g., 0.2) makes the output more focused and deterministic. Adjusting temperature helps strike a balance between creativity and sticking to a specific pattern.
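
A minimal sketch of that knob, using toy scores rather than a real model, showing how temperature rescales next-word probabilities before sampling:

    import math

    def softmax_with_temperature(logits, temperature):
        # Divide each score by the temperature, then normalize to probabilities.
        scaled = {word: score / temperature for word, score in logits.items()}
        total = sum(math.exp(v) for v in scaled.values())
        return {word: math.exp(v) / total for word, v in scaled.items()}

    logits = {"cat": 2.0, "dog": 1.0, "car": 0.1}  # toy next-word scores
    print(softmax_with_temperature(logits, 0.2))   # sharp: "cat" dominates
    print(softmax_with_temperature(logits, 0.8))   # flatter: more surprises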
Top P
Narrows word choices based on probability: it limits the selection of words to the most probable ones. For example, if you set Top P to 0.8, the model will consider words that make up 80% of the total probability mass. Temperature adjusts how evenly distributed these probabilities are, influencing randomness. Both control text generation, but in different ways.
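
A minimal sketch of that cutoff with toy probabilities; real implementations differ in detail:

    def top_p_filter(probs, top_p):
        # Keep the most probable words until their cumulative probability
        # reaches top_p, then renormalize the survivors.
        ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
        kept, cumulative = {}, 0.0
        for word, p in ranked:
            kept[word] = p
            cumulative += p
            if cumulative >= top_p:
                break
        total = sum(kept.values())
        return {word: p / total for word, p in kept.items()}

    probs = {"cat": 0.5, "dog": 0.3, "car": 0.15, "sun": 0.05}
    print(top_p_filter(probs, 0.8))  # keeps "cat" and "dog" (0.5 + 0.3 = 0.8)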
Top k
Limits the number of options considered during text generation. It keeps only the K most probable words in the selection process. For instance, if K is 50, the model will only choose from the 50 most likely words. In contrast, temperature adjusts the overall randomness of the output. Top P, on the other hand, keeps adding likely words until the total probability reaches a certain threshold (e.g., 80%). So, Top K limits options by quantity, temperature controls overall randomness, and Top P does so based on cumulative probability.
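
A minimal sketch of the same idea by count rather than by cumulative probability:

    def top_k_filter(probs, k):
        # Keep only the k most probable words and renormalize.
        ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
        kept = dict(ranked[:k])
        total = sum(kept.values())
        return {word: p / total for word, p in kept.items()}

    probs = {"cat": 0.5, "dog": 0.3, "car": 0.15, "sun": 0.05}
    print(top_k_filter(probs, 2))  # only "cat" and "dog" survive the cut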
Max Tokens
Refers to the maximum number of tokens (1 word = 1.3 est. tokens, in English) that the model can generate in a single response; the model's context window separately limits the combined input and output sequence. For example, if you set max tokens to 50, the model will generate a response of up to 50 tokens; if the response would exceed this limit, it is truncated or cut off accordingly.
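
A back-of-the-envelope estimator using the 1 word = 1.3 tokens rule of thumb above (English only; a real tokenizer gives exact counts):

    def estimate_tokens(text):
        # Rough English-only estimate; real tokenizers vary by vocabulary.
        return round(len(text.split()) * 1.3)

    answer = "The quick brown fox jumps over the lazy dog near the river bank."
    print(estimate_tokens(answer))  # 13 words -> about 17 estimated tokens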
Frequency Penalty
Is a setting used to control how often the model repeats certain words or phrases. It discourages the model from using the same word or phrase repeatedly in a response. A positive Frequency Penalty encourages the model to avoid repeating words and phrases it has already used; a negative Frequency Penalty, on the other hand, encourages repetition. It helps make the generated text more diverse and natural-sounding by reducing excessive repetition.
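
A minimal sketch of the idea with toy scores; real implementations differ in detail:

    def apply_frequency_penalty(logits, counts, penalty):
        # Subtract the penalty once per prior occurrence, so words repeated
        # often are pushed down the most; a negative penalty lifts them.
        return {w: s - penalty * counts.get(w, 0) for w, s in logits.items()}

    logits = {"the": 2.0, "a": 1.5, "cat": 1.0}   # toy next-word scores
    counts = {"the": 4, "a": 1}                   # occurrences so far
    print(apply_frequency_penalty(logits, counts, 0.5))
    # "the" falls from 2.0 to 0.0, "a" to 1.0, "cat" is untouched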
Presence Penalty
Is a setting that influences whether the model reuses words or entities that have already appeared in the text. A positive Presence Penalty discourages the model from reusing words once they have appeared, nudging it toward new words and topics; a negative Presence Penalty encourages it to return to them. For example, with the prompt "Generate a story about an astronaut exploring a distant planet." and {"presence_penalty": 0.8}, the model is nudged away from repeating astronaut, planet, or story over and over, and toward introducing new vocabulary.
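
A minimal sketch of the contrast with Frequency Penalty: here the deduction is flat, applied once to any word that has appeared at all, however often:

    def apply_presence_penalty(logits, seen_words, penalty):
        # One flat deduction for any word already present in the text;
        # a negative penalty would instead pull the model back to them.
        return {w: s - (penalty if w in seen_words else 0.0)
                for w, s in logits.items()}

    logits = {"astronaut": 2.0, "planet": 1.8, "nebula": 1.0}
    seen_words = {"astronaut", "planet"}          # already used in the story
    print(apply_presence_penalty(logits, seen_words, 0.8))
    # "astronaut" and "planet" each lose 0.8; "nebula" now looks better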