Types of prompts

A zero-shot prompt is the simplest type of prompt, and you've probably used it many times. It is just a direct instruction to perform a task, without any examples or additional conditions. A question, an instruction, the start of a story - all of these inputs are zero-shot prompts. They work well for simple or well-known tasks.
A one-shot prompt provides a single example, giving the model minimal guidance. It is useful for simple tasks that need slight clarification.
A few-shot prompt follows the same idea as a one-shot prompt, but provides multiple examples. This significantly increases the chance that the model will follow the pattern. You can provide as many examples as you like (just keep the input length limit in mind), but three to five are usually enough.
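As an illustration, here is a minimal sketch of assembling a few-shot prompt for sentiment classification. The task wording, labels, and `Text:`/`Sentiment:` formatting are illustrative choices, not a required convention:

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the new input."""
    lines = [task, ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Text: {query}")
    lines.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(lines)

# Three examples are usually enough for the model to pick up the pattern.
examples = [
    ("I loved this movie!", "positive"),
    ("Terrible service, never again.", "negative"),
    ("The package arrived on time.", "neutral"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of each text as positive, negative, or neutral.",
    examples,
    "The soundtrack was wonderful.",
)
```

With a single entry in `examples`, the same builder produces a one-shot prompt.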
System prompt. This type of prompt sets the general context for the LLM: it defines the overall picture of what the model should do, such as classifying a text or translating a language. It establishes the fundamental purpose of the model's responses.
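In chat-style APIs, the system prompt is typically the first entry in a list of role-tagged messages. The sketch below follows the common OpenAI-style message structure; exact field names vary by provider:

```python
# The "system" message sets the overall task before any user input arrives.
messages = [
    {
        "role": "system",
        "content": "You are a translator. Translate every user message into French "
                   "and return only the translation.",
    },
    {"role": "user", "content": "Good morning!"},
]
```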
Contextual prompt. Such a prompt is highly specific to the current task: it provides concrete details and background data for a particular conversation, rather than general background information.
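A contextual prompt simply bundles that task-specific background with the question itself. The dataset details below are invented purely for illustration:

```python
# Task-specific background the model could not know on its own.
context = (
    "You are answering questions about a support-ticket export. "
    "Tickets are tagged 'billing', 'login', or 'other'; "
    "priorities run from 1 (low) to 3 (urgent)."
)
question = "Which tag should a forgotten-password ticket get, and at what priority?"
prompt = f"Context: {context}\n\nQuestion: {question}"
```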
Role prompting. This kind of prompt gives the model a specific identity to adopt, so the LLM's responses stay consistent with the assigned role. You can even set the voice type and style for audio output. Such a prompt adds a kind of personality to your model.
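A role prompt is usually delivered as a system message that assigns a persona. The persona text below is just one possible example:

```python
# The persona shapes tone and content for every subsequent reply.
persona = (
    "You are a seasoned travel guide. Speak in a warm, enthusiastic tone, "
    "and always suggest one lesser-known spot alongside the famous ones."
)
messages = [
    {"role": "system", "content": persona},
    {"role": "user", "content": "What should I see in Amsterdam?"},
]
```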
Step-back prompting. Instead of asking a specific question directly, you first ask the LLM a more general question about the relevant principles or background, and then feed its answer back as context for the original question. This "step back" activates the model's background knowledge and reasoning before it generates a final answer. It can even turn a wrong answer into a correct one, which makes the approach extremely useful.
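The two-round flow can be sketched as below. The `llm` argument stands in for a real model call; `fake_llm` is a stub so the control flow can run without an API:

```python
# Step-back prompting: first elicit general principles, then answer with them as context.
def step_back(llm, question):
    step_back_q = f"What general principles or background knowledge are relevant to: {question}"
    principles = llm(step_back_q)          # round 1: the abstract "step-back" question
    final_prompt = (
        f"Background:\n{principles}\n\n"
        f"Using the background above, answer: {question}"
    )
    return llm(final_prompt)               # round 2: the original question, now grounded

# Stub "model" purely to demonstrate the flow; a real implementation calls an LLM API.
def fake_llm(prompt):
    if "principles" in prompt:
        return "Good levels balance pacing, challenge, and exploration."
    return "A storyline for a challenging first level."

answer = step_back(fake_llm, "Write a storyline for a new first-person-shooter level.")
```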
Chain of Thought (CoT) prompting helps enhance the reasoning capacity of your LLM by eliciting intermediate logical steps. With this type of prompt, the LLM produces more accurate replies. Combining it with few-shot (or at least one-shot) techniques can yield better answers on more sophisticated tasks than simply relying on a zero-shot chain of thought.
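A one-shot CoT prompt might look like the following: the worked example spells out its intermediate steps, nudging the model to reason the same way before giving its final answer. The arithmetic problems are illustrative:

```python
# One worked example with explicit reasoning, then the new question.
cot_prompt = """\
Q: When I was 3 years old, my partner was 3 times my age. I am now 20. How old is my partner?
A: When I was 3, my partner was 3 * 3 = 9, so 6 years older than me.
   Now I am 20, so my partner is 20 + 6 = 26. The answer is 26.

Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many balls does he have now?
A: Let's think step by step.
"""
```

In the zero-shot variant, the trailing "Let's think step by step." alone, without the worked example, is often enough to trigger step-by-step reasoning.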
Tree of Thoughts (ToT) prompting, as you may guess, is an expanded version of the CoT technique: it allows the LLM to explore several reasoning paths at the same time, rather than relying on a single chain of thought. It suits cases that demand more extensive reasoning. In each chain, every step represents an intermediate stage of the problem-solving process.
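The search skeleton behind ToT can be sketched as a breadth-limited tree search: propose several candidate next steps at each stage, score the partial paths, and expand only the most promising ones. The `propose` and `score` functions below are toy stand-ins; in practice both would be LLM calls:

```python
def tree_of_thoughts(propose, score, root, breadth=2, depth=2):
    """Breadth-limited search over partial reasoning paths."""
    frontier = [[root]]
    for _ in range(depth):
        # Expand every kept path with each of its proposed next steps.
        candidates = [path + [step] for path in frontier for step in propose(path)]
        # Keep only the `breadth` highest-scoring partial paths.
        frontier = sorted(candidates, key=score, reverse=True)[:breadth]
    return frontier[0]

# Toy stand-ins: "thoughts" are numbers, and a path's score is its sum.
best = tree_of_thoughts(
    propose=lambda path: [path[-1] + 1, path[-1] + 2],
    score=sum,
    root=0,
)
```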
Reason and act (ReAct) prompting is an approach that allows LLMs to handle sophisticated tasks by combining natural language reasoning with third-party tools (such as code interpreters, search engines, etc.). This paradigm enables an LLM to perform actions (such as using external APIs), which represents a first step towards agent-like behavior. ReAct mimics how humans operate in the real world: we reason verbally and take actions to gather information. Because of this, ReAct is a strong choice compared to other prompting techniques across a wide range of domains. It works as a combination of reasoning and acting in a thought-action loop: the LLM thinks about the problem, creates a plan of action, executes it, and observes the result. These observations are then used to refine its reasoning and generate a new plan. The loop continues until the LLM determines it has found a solution.
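The thought-action-observation loop described above can be sketched as follows. The `fake_llm` model and the `search` tool are stubs so the loop can run end to end; a real implementation would call an LLM API and an actual external tool:

```python
# ReAct loop: the model alternates Thought/Action steps; each Action's result
# is appended as an Observation, which the model sees on the next turn.
def react_loop(llm, tools, question, max_steps=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)                     # model proposes the next step
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if step.startswith("Action:"):
            tool_name, _, arg = step.removeprefix("Action:").strip().partition(" ")
            observation = tools[tool_name](arg)    # execute the chosen tool
            transcript += f"Observation: {observation}\n"
    return None  # gave up within the step budget

# Stub model: search first, then answer once an observation is available.
def fake_llm(transcript):
    if "Observation:" in transcript:
        return "Final Answer: Paris"
    return "Action: search capital of France"

answer = react_loop(
    fake_llm,
    {"search": lambda q: "Paris is the capital of France."},
    "What is the capital of France?",
)
```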
