Prompt Engineering Basics
The difference between a useless AI response and a brilliant one? Your prompt.
Prompt engineering is the skill of communicating effectively with AI models. It's not magic — it's structured communication. And it's the single most valuable AI skill you can learn.
What Makes a Good Prompt?
Four pillars of effective prompts:
1. Clarity — Be specific about what you want
Bad: "Help me with my code"
Good: "Fix the 'NoneType' object has no attribute error in this
Python function that processes user login data"
2. Context — Give the AI relevant background
Bad: "Write a welcome email"
Good: "Write a welcome email for new enterprise B2B customers
who just purchased our data analytics platform"
3. Constraints — Define boundaries
"Keep it under 200 words. Use a professional but friendly tone.
Include a call-to-action linking to our onboarding guide."
4. Output Format — Specify the shape of the response
"Return the result as a JSON object with fields: name, category, priority"
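The four pillars can be combined mechanically. A minimal sketch in Python, assuming a hypothetical `build_prompt` helper (the parameter names are illustrative, not a standard API):

```python
def build_prompt(clarity: str, context: str, constraints: str, output_format: str) -> str:
    """Join the four pillars into one prompt string (illustrative helper)."""
    return "\n\n".join([
        context,        # background the model needs
        clarity,        # the specific ask
        constraints,    # boundaries: length, tone, etc.
        output_format,  # shape of the response
    ])

prompt = build_prompt(
    clarity="Write a welcome email for our new customers.",
    context="They are enterprise B2B customers who just purchased our data analytics platform.",
    constraints="Keep it under 200 words. Use a professional but friendly tone.",
    output_format="Return plain text with the subject line as the first line.",
)
print(prompt)
```

Putting context first mirrors how you'd brief a colleague: background before the ask.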
The Anatomy of a Prompt
Every effective prompt has three layers:
┌─────────────────────────────────┐
│ ROLE (who should the AI be?)    │
│ "You are a senior Python        │
│ developer at a fintech firm"    │
├─────────────────────────────────┤
│ TASK (what should it do?)       │
│ "Review this code for           │
│ security vulnerabilities"       │
├─────────────────────────────────┤
│ FORMAT (how should it respond?) │
│ "List each issue with severity  │
│ (high/medium/low) and a fix"    │
└─────────────────────────────────┘
You don't need all three for every prompt, but the more complex the task, the more structure helps.
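That layering translates directly into code. A sketch with a hypothetical `layered_prompt` helper, where the FORMAT layer is optional, mirroring the note above:

```python
def layered_prompt(role: str, task: str, fmt: str = "") -> str:
    """Stack ROLE, TASK, and (optionally) FORMAT into one prompt."""
    parts = [f"You are {role}.", task]
    if fmt:  # simple tasks can skip the FORMAT layer
        parts.append(fmt)
    return "\n".join(parts)

review_prompt = layered_prompt(
    role="a senior Python developer at a fintech firm",
    task="Review this code for security vulnerabilities.",
    fmt="List each issue with severity (high/medium/low) and a fix.",
)
print(review_prompt)
```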
Context is King
The single biggest improvement you can make to any prompt: add context.
Without context:
"How do I fix this error?"
The AI has no idea what error, what language, what framework, what you've already tried.
With context:
"I'm building a React 18 app with TypeScript. When I call
useEffect with an async function, I get the warning 'useEffect must not return anything besides a function'. Here's my code: [code]. How do I fix this?"
Context includes:
- Technology stack and versions
- What you've already tried
- Error messages (exact text)
- Business requirements or constraints
- Your experience level (so the AI calibrates its explanation)
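One way to make that checklist routine is a small template that emits only the fields you fill in. A sketch (the labels and field names are my own, not a standard):

```python
def context_preamble(**fields: str) -> str:
    """Render filled-in checklist items as a context preamble; skip the rest."""
    labels = {
        "stack": "Tech stack and versions",
        "tried": "What I've already tried",
        "error": "Exact error message",
        "constraints": "Requirements or constraints",
        "level": "My experience level",
    }
    return "\n".join(
        f"{labels[key]}: {value}" for key, value in fields.items() if key in labels
    )

preamble = context_preamble(
    stack="React 18 with TypeScript",
    error="useEffect must not return anything besides a function",
)
print(preamble)
```

Paste the preamble ahead of your question and the model starts with everything it needs.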
Specifying Output Format
Tell the AI exactly what shape you want:
Markdown table:
"Compare these three databases. Return a markdown table with columns: Name, Type, Best For, Max Scale, Price"
JSON:
"Extract the key entities from this text. Return valid JSON:
{ entities: [{ name, type, relevance }] }"
Structured list:
"Give me 5 action items. For each, include: what to do, who owns it, deadline, and priority (P1/P2/P3)"
Code:
"Write the function in TypeScript with JSDoc comments. Include error handling and a usage example."
The more precisely you define the output, the less time you spend reformatting.
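A format you specify precisely is a format you can check by machine. A sketch of validating a JSON reply, where the `reply` string is a canned stand-in for a model response (no API call is made):

```python
import json

# Canned stand-in for a model's reply to the entity-extraction prompt above.
reply = '{"entities": [{"name": "Acme Corp", "type": "ORG", "relevance": 0.9}]}'

data = json.loads(reply)  # raises json.JSONDecodeError if the reply isn't valid JSON
for entity in data["entities"]:
    # Confirm each entity carries exactly the fields the prompt asked for.
    assert {"name", "type", "relevance"} <= entity.keys()
```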
The Iteration Loop
Great prompts rarely come on the first try. Use this loop:
1. Write initial prompt
2. Review the response
3. Identify what's wrong or missing
4. Refine the prompt (add constraints, context, or examples)
5. Repeat until satisfied
Common refinements:
- "Be more specific about X"
- "Don't include Y"
- "Format the output as Z"
- "The tone should be more formal"
- "Add error handling for edge cases"
Pro tip: Save your best prompts! Build a personal library of prompts that work well for your recurring tasks.
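A prompt library can be as simple as a dict of `str.format` templates. A sketch (the template names and wording are illustrative):

```python
# A tiny personal prompt library; fill the {placeholders} per task.
PROMPT_LIBRARY = {
    "code_review": (
        "You are a senior {language} developer. Review this code for "
        "security vulnerabilities:\n{code}\n"
        "List each issue with severity (high/medium/low) and a fix."
    ),
    "welcome_email": (
        "Write a welcome email for {audience} who just purchased {product}. "
        "Keep it under {max_words} words."
    ),
}

prompt = PROMPT_LIBRARY["welcome_email"].format(
    audience="new enterprise B2B customers",
    product="our data analytics platform",
    max_words=200,
)
```

Each refinement you discover in the iteration loop gets baked back into the template, so the next use starts from your best version.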
Prompt Length vs. Quality
A common misconception: shorter prompts are better because they use fewer tokens.
Reality: A well-structured prompt that's 200 tokens long will save you from 3-4 follow-up exchanges of 500+ tokens each.
| Approach | Tokens Used | Quality |
|---|---|---|
| Vague prompt + 4 follow-ups | ~3,000 | Medium |
| Detailed prompt, one shot | ~800 | High |
| Template with examples | ~1,200 | Very High |
The ROI of a good prompt:
- Fewer iterations = less cost
- Better first response = less wasted time
- Reusable templates = compounding returns
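The table's arithmetic holds at any per-token price. A back-of-the-envelope sketch, assuming an illustrative $0.01 per 1,000 tokens (real prices vary by model):

```python
PRICE_PER_1K_TOKENS = 0.01  # assumed illustrative price; check your model's pricing

def cost(tokens: int) -> float:
    """Dollar cost of a conversation that consumes `tokens` tokens."""
    return tokens / 1000 * PRICE_PER_1K_TOKENS

vague_total = cost(3000)    # vague prompt + 4 follow-ups
detailed_total = cost(800)  # detailed prompt, one shot
print(f"vague: ${vague_total:.3f} vs detailed: ${detailed_total:.3f}")
```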
---quiz question: What are the four pillars of an effective prompt? options:
- { text: "Speed, accuracy, length, complexity", correct: false }
- { text: "Clarity, context, constraints, output format", correct: true }
- { text: "Role, temperature, tokens, model", correct: false }
- { text: "Input, processing, output, feedback", correct: false } feedback: The four pillars are Clarity (be specific), Context (give background), Constraints (set boundaries), and Output Format (define the shape of the response).
---quiz question: Why is a longer, more detailed prompt often more cost-effective than a short one? options:
- { text: "Longer prompts are always cheaper per token", correct: false }
- { text: "A detailed prompt gets a good response in one shot, avoiding multiple expensive follow-ups", correct: true }
- { text: "AI models work better with more tokens", correct: false } feedback: A well-crafted detailed prompt (200 tokens) typically gets the right answer on the first try, saving you from 3-4 follow-up exchanges that would total 3,000+ tokens.
---quiz question: What is the most impactful improvement you can make to a prompt? options:
- { text: "Making it shorter", correct: false }
- { text: "Using technical jargon", correct: false }
- { text: "Adding relevant context", correct: true }
- { text: "Adding emojis for clarity", correct: false } feedback: Context is the single biggest lever. Telling the AI about your tech stack, what you've tried, exact error messages, and your constraints dramatically improves response quality.