Advanced Prompting Techniques
From basic questions to sophisticated reasoning chains — the techniques that unlock AI's full potential.
Zero-Shot Prompting
The simplest approach: just ask, with no examples.
Classify the following customer message as
"complaint", "question", or "praise":
"Your product crashed three times today and
I lost all my work. This is unacceptable."
→ complaint
When it works: Simple, well-defined tasks where the AI's training data already covers the pattern.
When it fails: Ambiguous tasks, domain-specific formats, or when the expected output format is unusual.
Zero-shot is your baseline. If it works, great — don't over-engineer. If it doesn't, reach for few-shot.
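In code, a zero-shot prompt is nothing more than the task description plus the input. A minimal sketch (the `zero_shot_prompt` helper is illustrative, not any library's API; how you send the string to a model is up to your client of choice):

```python
def zero_shot_prompt(message: str) -> str:
    """Build a zero-shot classification prompt: instructions only, no examples."""
    return (
        'Classify the following customer message as '
        '"complaint", "question", or "praise":\n\n'
        f'"{message}"'
    )

prompt = zero_shot_prompt(
    "Your product crashed three times today and I lost all my work."
)
# Send `prompt` to the model; the expected reply is a single label.
```

Note there is nothing in the prompt except instructions and input. If the model's output is wrong or oddly formatted, that is the signal to move to few-shot.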
Few-Shot Prompting
Provide examples to teach the AI your expected pattern:
Classify these support tickets by priority:
Example 1: "Can't log in to my account" → P1-Critical
Example 2: "How do I change my password?" → P3-Low
Example 3: "Website is completely down" → P0-Emergency
Example 4: "Can you add dark mode?" → P4-Feature
Now classify: "Payment processing is failing for all customers"
→ P0-Emergency
Why it works: Examples are worth a thousand words of instruction. The AI pattern-matches against your examples to understand:
- Output format
- Classification criteria
- Edge cases
- Tone and style
Pro tip: Use 3-5 diverse examples that cover different categories. Include at least one edge case.
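The pattern above can be assembled programmatically, which keeps your examples in one place and makes them easy to swap. A sketch, assuming plain text prompts and a hypothetical `few_shot_prompt` helper (names are illustrative):

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt from (text, label) example pairs."""
    lines = [instruction]
    for i, (text, label) in enumerate(examples, start=1):
        lines.append(f'Example {i}: "{text}" -> {label}')
    lines.append(f'Now classify: "{query}"')
    return "\n".join(lines)

examples = [
    ("Can't log in to my account", "P1-Critical"),
    ("How do I change my password?", "P3-Low"),
    ("Website is completely down", "P0-Emergency"),
    ("Can you add dark mode?", "P4-Feature"),   # edge case: not a bug at all
]
prompt = few_shot_prompt(
    "Classify these support tickets by priority:",
    examples,
    "Payment processing is failing for all customers",
)
```

Keeping the examples as data also makes it trivial to test that every category appears at least once before you ship the prompt.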
Chain-of-Thought (CoT)
Force the AI to show its reasoning — dramatically improves accuracy on complex tasks:
Without CoT:
"If a shirt costs $25 and is 20% off, and tax is 8%, what's the final price?"
AI: "$21.60" (sometimes wrong)
With CoT:
"Solve this step by step: If a shirt costs $25 and is 20% off, and tax is 8%, what's the final price?"
AI: "Step 1: 20% of $25 = $5 discount
Step 2: $25 - $5 = $20 after discount
Step 3: 8% tax on $20 = $1.60
Step 4: $20 + $1.60 = $21.60"
The magic phrase: "Let's think step by step" or "Reason through this step by step."
CoT works because it forces the model to allocate tokens to intermediate reasoning instead of jumping to conclusions.
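The reasoning chain the model should produce is easy to verify yourself, which is exactly why CoT helps: each step is a small, checkable computation rather than one big guess.

```python
# Mirror the model's four reasoning steps as explicit arithmetic.
price = 25.00
discount = 0.20 * price             # Step 1: 20% of $25 = $5.00 discount
after_discount = price - discount   # Step 2: $25 - $5 = $20.00
tax = 0.08 * after_discount         # Step 3: 8% tax on $20 = $1.60
final = after_discount + tax        # Step 4: $20 + $1.60 = $21.60
```

When you see a CoT answer, spot-check the intermediate steps the same way; an error early in the chain propagates to the final number.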
Role Prompting
Assign a persona to dramatically change response quality and style:
You are a senior security engineer with 15 years of experience
in enterprise application security. You specialize in OWASP
Top 10 vulnerabilities and have conducted hundreds of code reviews.
Review this authentication code and identify security issues.
Be thorough and cite specific vulnerability categories.
Effective roles:
- "You are a senior [X] engineer at a Fortune 500 company"
- "You are a patient teacher explaining to a beginner"
- "You are a devil's advocate — find every flaw in this plan"
- "You are a technical writer who values clarity above all"
Why it works: The role activates relevant patterns in the model's training data. A "security engineer" role surfaces security-related knowledge that a generic prompt wouldn't access.
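In chat-style interfaces, the persona usually goes in a system message and the task in a user message. A sketch of that split, assuming a generic role-tagged message list (field names like `"role"` and `"content"` are common across providers, but check your API's exact schema):

```python
def role_prompt(persona: str, task: str) -> list[dict]:
    """Pair a persona (system message) with the actual task (user message)."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]

messages = role_prompt(
    "You are a senior security engineer with 15 years of experience "
    "in enterprise application security. You specialize in OWASP Top 10 "
    "vulnerabilities and have conducted hundreds of code reviews.",
    "Review this authentication code and identify security issues. "
    "Be thorough and cite specific vulnerability categories.",
)
```

Separating persona from task also lets you reuse one well-tuned persona across many different requests.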
Tree of Thought
For truly complex problems, explore multiple reasoning paths:
I need to redesign our database schema for a multi-tenant SaaS app.
Consider three different approaches:
1. Shared database, shared schema (with tenant_id column)
2. Shared database, separate schemas
3. Separate databases per tenant
For each approach, analyze:
- Query performance at 10K tenants
- Data isolation and security
- Operational complexity
- Cost at scale
Then recommend the best approach for our case:
500 tenants, healthcare data (HIPAA), moderate query volume.
Tree of Thought works by:
- Generating multiple solution paths
- Evaluating each path independently
- Selecting the best path based on criteria
This mirrors how experts actually solve problems — considering multiple options before committing.
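The generate-evaluate-select loop can be made concrete with a small scoring sketch. The scores below are hypothetical illustrations, not real benchmarks; in practice the model produces the per-criterion assessments and you (or a follow-up prompt) weight them:

```python
# Hypothetical 1-5 scores for each schema approach against each criterion.
scores = {
    "shared schema":      {"performance": 4, "isolation": 2, "ops": 5, "cost": 5},
    "separate schemas":   {"performance": 3, "isolation": 4, "ops": 3, "cost": 4},
    "separate databases": {"performance": 3, "isolation": 5, "ops": 2, "cost": 2},
}

def select_best(scores: dict, weights: dict) -> str:
    """Select the path with the highest weighted total across criteria."""
    totals = {
        path: sum(weights[c] * v for c, v in crit.items())
        for path, crit in scores.items()
    }
    return max(totals, key=totals.get)

# For HIPAA-regulated healthcare data, weight isolation heavily.
weights = {"performance": 1, "isolation": 4, "ops": 1, "cost": 1}
best = select_best(scores, weights)  # -> "separate databases"
```

Changing the weights changes the winner, which is the point: Tree of Thought makes the trade-off explicit instead of letting the model commit to the first path it generates.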
Combining Techniques
The real power comes from stacking techniques:
You are a senior data engineer specializing in ETL pipelines. [ROLE]
I have a CSV file with 50M rows of customer transactions.
I need to deduplicate records where the email matches but
the name has slight variations (typos, abbreviations). [CONTEXT]
Here are examples of duplicates: [FEW-SHOT]
- "Jon Smith" / "Jonathan Smith" / "Jon Smth" → same person
- "Sarah Connor" / "Sara Conner" → same person
- "Sarah Connor" / "Sarah Williams" → different people
Think through the best deduplication strategy step by step. [COT]
Consider at least two approaches before recommending one. [TREE]
Output: A Python script using pandas with comments
explaining each step. [FORMAT]
This single prompt layers a role, context, few-shot examples, chain-of-thought, multi-path comparison, and an explicit output format, and it will produce dramatically better results than "How do I deduplicate a CSV?"
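Stacked prompts like this are easiest to maintain when each layer lives in its own variable. A sketch (the `stacked_prompt` helper and its parameter names are illustrative):

```python
def stacked_prompt(role: str, context: str, examples: list[str], steps: list[str]) -> str:
    """Concatenate technique layers into one prompt, in reading order."""
    parts = [role, context, "Here are examples of duplicates:"]
    parts += [f"- {e}" for e in examples]
    parts += steps
    return "\n".join(parts)

prompt = stacked_prompt(
    role="You are a senior data engineer specializing in ETL pipelines.",
    context=(
        "I have a CSV file with 50M rows of customer transactions. I need to "
        "deduplicate records where the email matches but the name has slight "
        "variations (typos, abbreviations)."
    ),
    examples=[
        '"Jon Smith" / "Jonathan Smith" / "Jon Smth" -> same person',
        '"Sarah Connor" / "Sara Conner" -> same person',
        '"Sarah Connor" / "Sarah Williams" -> different people',
    ],
    steps=[
        "Think through the best deduplication strategy step by step.",
        "Consider at least two approaches before recommending one.",
        "Output: a Python script using pandas with comments explaining each step.",
    ],
)
```

Because each layer is separate, you can A/B test one layer at a time, for example swapping the role while holding the examples fixed.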
Technique Selection Guide
Which technique for which task?
| Task Type | Best Technique | Example |
|---|---|---|
| Simple classification | Zero-shot | Spam detection |
| Custom format needed | Few-shot | Ticket categorization |
| Math or logic | Chain-of-thought | Calculations, debugging |
| Creative writing | Role prompting | Marketing copy |
| Architecture decisions | Tree of thought | System design |
| Complex analysis | Combined | Code review, data analysis |
Rule of thumb: Start with the simplest technique that works. Add complexity only when the output quality isn't sufficient.
---quiz question: What is the key difference between zero-shot and few-shot prompting? options:
- { text: "Zero-shot is faster, few-shot is slower", correct: false }
- { text: "Zero-shot provides no examples, few-shot includes examples to demonstrate the pattern", correct: true }
- { text: "Zero-shot uses GPT-3, few-shot uses GPT-4", correct: false }
- { text: "Zero-shot is free, few-shot costs extra", correct: false } feedback: Zero-shot means asking directly with no examples. Few-shot provides 3-5 examples so the AI can pattern-match your expected format, criteria, and style.
---quiz question: Why does Chain-of-Thought improve accuracy on complex tasks? options:
- { text: "It uses a more powerful model behind the scenes", correct: false }
- { text: "It forces the model to allocate tokens to intermediate reasoning steps instead of jumping to conclusions", correct: true }
- { text: "It accesses the internet for verification", correct: false } feedback: Chain-of-Thought works by making the model "show its work." By generating intermediate reasoning tokens, the model computes step-by-step rather than guessing the final answer directly.
---quiz question: When should you use Tree of Thought prompting? options:
- { text: "For simple yes/no questions", correct: false }
- { text: "For complex decisions where multiple approaches should be explored and compared", correct: true }
- { text: "Only for creative writing tasks", correct: false } feedback: Tree of Thought is ideal for complex problems with multiple viable solutions — architecture decisions, strategy planning, and design choices where you want to evaluate several paths before committing.