17 Proven Prompt Engineering Techniques to Master LLM Interactions
This article presents 17 practical prompt‑engineering strategies—ranging from zero‑shot and few‑shot prompting to role, style, and chain‑of‑thought methods—explaining their usage, ideal scenarios, and concrete examples to help you obtain higher‑quality responses from large language models.
Prompt engineering is the practice of crafting inputs to large language models (LLMs) such as ChatGPT so that their outputs are more accurate, consistent, creative, and functional for a specific need.
1. Zero‑Shot Prompting
Usage: Issue a clear instruction without providing examples. Applicable scenarios: Simple, direct tasks like translation or factual queries. Example: Translate the English phrase “Flowers on the road” into Spanish.
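Programmatically, a zero-shot prompt is just the instruction plus the input, with no examples attached. A minimal sketch (the `zero_shot_prompt` helper and its wording are illustrative, not from any particular SDK):

```python
def zero_shot_prompt(instruction: str, text: str) -> str:
    """Build a zero-shot prompt: one clear instruction, no examples."""
    return f"{instruction}\n\nText: {text}"

prompt = zero_shot_prompt(
    "Translate the following English phrase into Spanish. "
    "Return only the translation.",
    "Flowers on the road",
)
print(prompt)
```

Spelling out the expected output format ("Return only the translation") is often the difference between a clean answer and a chatty one, even without examples.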
2. One‑Shot Prompting
Usage: Give a clear instruction together with one example of the input and the desired output. Applicable scenarios: When a single example is enough to make the task or output format unambiguous. Example: Provide the Spanish translation of the English word “basket” in uppercase letters. Example: English word (input): River → Spanish translation (output): RÍO.
3. Few‑Shot Prompting
Usage: Provide a clear instruction together with several examples. Applicable scenarios: When you want the model to adapt to a specific task or domain without fine‑tuning, expecting more stable and accurate output. Example: Determine the sentiment of the sentence “The lecture was quite boring”, output only “positive”, “negative”, or “neutral”. Example: “This movie was great!” → positive; “I hated the service.” → negative; “I don’t know how I feel about it.” → neutral.
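A few-shot prompt is typically assembled by looping over labeled examples and appending the new query in the same format. A sketch using the sentiment example above (the template layout is an assumption; any consistent input/output format works):

```python
# Few-shot: prepend labeled examples so the model infers the task format.
EXAMPLES = [
    ("This movie was great!", "positive"),
    ("I hated the service.", "negative"),
    ("I don't know how I feel about it.", "neutral"),
]

def few_shot_prompt(examples, query):
    lines = ["Classify the sentiment as positive, negative, or neutral.", ""]
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}\n")
    # The query reuses the exact same format, with the label left blank
    # so the model completes it.
    lines.append(f"Text: {query}\nSentiment:")
    return "\n".join(lines)

prompt = few_shot_prompt(EXAMPLES, "The lecture was quite boring")
print(prompt)
```

Keeping the examples and the query in an identical template is what lets the model lock onto the expected output.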
4. Role Prompting
Usage: Give a clear instruction and assign a specific role to the model. Applicable scenarios: Open‑ended tasks where the output should follow a particular perspective, personality, or tone. Example: Write a 500‑word short essay with four points about college life. Role: Play a sweet college girl who loves using Gen‑Z slang.
5. Style Prompting
Usage: Explicitly specify the desired style, tone, or genre in the prompt. Applicable scenarios: When the output needs to match a specific style or tone. Example: Write a formal email requesting a salary raise.
6. Emotion Prompting
Usage: Add emotionally‑charged statements or phrases to the prompt. Applicable scenarios: Creative text generation tasks such as storytelling or poetry. Example: Write a poem about an imaginary friend who never gives up, expressing longing.
7. Contextual Prompting
Usage: Provide background information or custom content before giving a clear instruction. Applicable scenarios: When background or domain details are needed for more accurate or relevant responses, e.g., in RAG chatbots. Context: I am Jennifer Luke, marketing manager at JL company. Example: Write a team email about an upcoming marketing campaign.
8. Rephrase and Respond (RaR)
Usage: Have the LLM first restate the question as a better prompt, then generate the final answer. Applicable scenarios: Complex tasks requiring higher accuracy, or when you want to assess the model’s understanding. Example: Restate and expand the question, then answer: What is the difference between correlation and causality?
9. Re‑reading (RE2)
Usage: Start with an instruction or question, then append “Read again:” and repeat the original instruction/question. Applicable scenarios: Complex reasoning tasks. Example: A farmer has a rectangular field whose length is three times its width. The perimeter is 400 m. What are the field’s dimensions? Read again: “A farmer has a rectangular field whose length is three times its width. The perimeter is 400 m. What are the field’s dimensions?”
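The RE2 mechanic is purely textual: repeat the question verbatim after a “Read again:” marker. A sketch (the helper name is illustrative):

```python
def re2_prompt(question: str) -> str:
    """Re-reading (RE2): state the question, then repeat it after 'Read again:'."""
    return f"{question}\nRead again: {question}"

q = ("A farmer has a rectangular field whose length is three times its width. "
     "The perimeter is 400 m. What are the field's dimensions?")
prompt = re2_prompt(q)
print(prompt)
```

The repetition must be verbatim; paraphrasing the second copy defeats the purpose of forcing a second pass over the same wording.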
10. System Prompting
Usage: Provide high‑level instructions or context that the LLM will consider throughout the interaction. In ChatGPT this can be done via “custom GPT”; in LLM applications it is set as a system prompt. Applicable scenarios: When you need to set the overall behavior and tone of the LLM in a conversational setting. Example: You are an assistant that provides concise factual answers.
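In LLM applications, the system prompt is conventionally the first entry in a role-tagged message list that is sent with every request. A generic sketch of that structure (the exact client call and field names vary by provider; the user question here is an arbitrary example):

```python
# Chat-style request payload: the "system" message sets persistent
# behavior; "user" messages carry each turn's actual input.
messages = [
    {
        "role": "system",
        "content": "You are an assistant that provides concise factual answers.",
    },
    {
        "role": "user",
        "content": "What is the capital of Australia?",
    },
]
print(messages[0]["content"])
```

Because the system message rides along with every turn, it shapes the whole conversation rather than a single response.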
11. Self‑Ask
Usage: Ask the LLM to break the problem into smaller sub‑questions, answer all sub‑questions, then produce the final answer. Applicable scenarios: Complex, multi‑step reasoning tasks. Example: Should I pursue a master’s in data science? Break the question into smaller sub‑questions, answer them, and give a final recommendation based on your reasoning.
12. Chain‑of‑Thought (CoT)
Usage: Add “let’s think step by step” when prompting the model. Applicable scenarios: Tasks that require reasoning, such as math or logic problems. Example: After a 10 % discount and a 7 % tax, what is the total cost? Let’s think step by step.
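The CoT trigger is a fixed suffix appended to the question; the model then produces the intermediate steps itself. A sketch, with the arithmetic the trigger is meant to elicit worked out in plain Python for an assumed $50 list price (the price is an assumption, since the example above does not state one):

```python
TRIGGER = "Let's think step by step."

def cot_prompt(question: str) -> str:
    return f"{question} {TRIGGER}"

prompt = cot_prompt(
    "An item costs $50. After a 10% discount and a 7% tax, what is the total cost?"
)

# The step-by-step reasoning the trigger should elicit:
price = 50.00
after_discount = price * (1 - 0.10)   # 45.00
total = after_discount * (1 + 0.07)   # ≈ 48.15
print(round(total, 2))
```

Note the order matters: applying tax before the discount would give the same total here only because both are multiplicative; the explicit steps make such assumptions visible.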
13. Step‑back Prompting
Usage: Pose a broad question first, then based on the model’s answer, prompt it to answer a specific question. Applicable scenarios: When analysis or decision‑making depends on multiple (broader) factors. Example: Explain key factors influencing a company’s decision to enter a new market. Based on that, should a tech company expand into Europe?
14. Self‑Consistency
Usage: After asking or instructing, have the LLM generate multiple outputs and return the most frequent answer. Applicable scenarios: When multiple possible answers exist and consistency and accuracy are needed. Example: Which programming language is best for machine learning? Generate five possible answers and return the most frequent one.
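Self-consistency reduces to a majority vote over several sampled outputs. A sketch in which the five samples are hypothetical canned strings standing in for real model calls at temperature > 0:

```python
from collections import Counter

def self_consistent_answer(samples):
    """Return the most frequent answer among several sampled outputs."""
    return Counter(samples).most_common(1)[0][0]

# Five hypothetical samples from the same prompt, sampled independently:
samples = ["Python", "Python", "R", "Python", "Julia"]
answer = self_consistent_answer(samples)
print(answer)
```

In practice the samples come from repeated calls with the same prompt; the vote smooths out the occasional off-track generation.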
15. Thread‑of‑Thought (ThoT)
Usage: Similar to CoT, but instead of “let’s think step by step”, say “guide me step by step”. Applicable scenarios: Complex Q&A with rich background, e.g., in RAG systems. Context: I have a party attendance problem: 10 guests each prefer one of three music types (jazz, rock, classical). A guest attends only if their preferred music is played at some point; only one music type plays at a time, and at most three types can be played over the evening. Example: Guide me step by step to determine the maximum number of guests that can attend the party.
16. Tree‑of‑Thought (ToT)
Usage: Have the model decompose a complex problem into smaller steps; at each step generate multiple possible solutions, evaluate them, and continue with the best option until a final solution is reached. Applicable scenarios: Deep reasoning, multi‑step planning with high accuracy requirements. Example: Design a new coffee cup that keeps drinks hot longer. Break the problem into steps, generate solutions, evaluate feasibility, cost, and impact, and iteratively select the best until a final design is produced.
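The branch-evaluate-select loop at the core of ToT can be sketched in a few lines. Here both the idea generator and the evaluator are stubs (fixed options for the coffee-cup design with toy scores) so the control flow runs offline; in a real system both would be LLM calls:

```python
# Each step offers candidate components with a toy "feasibility/cost/impact"
# score; higher is better. These options and scores are invented for
# illustration only.
STEPS = [
    {"vacuum wall": 3, "cork sleeve": 1},    # step 1: insulation
    {"silicone lid": 3, "bamboo lid": 2},    # step 2: lid
    {"ceramic coat": 2, "matte finish": 1},  # step 3: finish
]

def tree_of_thought(steps):
    design = []
    for options in steps:
        # Branch: extend the current design with each candidate.
        candidates = [design + [opt] for opt in options]
        # Evaluate each branch and continue with the best one (greedy, beam of 1).
        design = max(candidates, key=lambda c: options[c[-1]])
    return design

print(tree_of_thought(STEPS))
```

Real ToT implementations keep several branches alive (a beam) and may backtrack; this greedy single-branch version only shows the skeleton.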
17. ReAct (Reason and Act)
Usage: Instruct the LLM to generate an idea, act on it, observe the result, and use the observation to refine subsequent ideas and actions. Applicable scenarios: Tasks that need iterative decision‑making and interaction with external systems or data. Example: Research the latest market trends for electric vehicles. First generate relevant search keywords, call a search API, observe results, refine keywords, repeat until the most up‑to‑date trend data is found.
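The thought → action → observation loop can be sketched with the search tool stubbed out. The canned results and the hard-coded query refinement below are illustrative stand-ins for what the model and a real search API would produce:

```python
def search(query):
    """Stand-in for a real search API: returns canned results so the loop runs offline."""
    canned = {
        "EV market trends": "Battery costs are falling (see 'EV sales 2024').",
        "EV sales 2024": "Global EV sales grew to 17 million units.",
    }
    return canned.get(query, "No results.")

def react(initial_query, max_steps=3):
    trace, query = [], initial_query
    for _ in range(max_steps):
        obs = search(query)            # Action: call the tool; Observation: its result
        trace.append((query, obs))
        if "17 million" in obs:        # Thought: goal reached, stop
            break
        query = "EV sales 2024"        # Thought: refine the query from the observation
    return trace

for step, (q, obs) in enumerate(react("EV market trends"), 1):
    print(f"Step {step}: {q} -> {obs}")
```

In a real agent, both the stop condition and the refined query come from the model reasoning over the observation, not from hard-coded strings.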
In prompt engineering there is no universal formula that works for every situation. Each model has its own characteristics, and the most reliable path to good results is continual experimentation: adjusting instructions, adding context, and often combining several of these techniques.
Code Mala Tang
Read source code together, write articles together, and enjoy spicy hot pot together.