Prompt Engineering: Write Better AI Prompts in 5 Minutes
Learn the fundamentals of prompt engineering. Get better results from ChatGPT, Claude, and other AI tools with these practical techniques.
The difference between a mediocre AI response and a great one often comes down to how you ask. Prompt engineering is the skill of writing instructions that get the results you want.
This guide covers the core techniques. You can apply them immediately.
Why Prompts Matter
AI models respond to what you write. Vague prompts get vague answers. Specific prompts get specific answers.
Example: Vague vs Specific
Vague prompt:
Write about productivity.
AI response: A generic 500-word essay about productivity tips.
Specific prompt:
Write 3 actionable tips for developers who work from home and struggle
with focus during afternoon hours. Keep each tip under 50 words.
AI response: Three targeted, concise tips for that exact situation.
Same AI, completely different results.
The Core Principles
1. Be Specific
Include details about:
- Format: How should the output look?
- Length: How long should it be?
- Audience: Who is this for?
- Tone: Formal, casual, technical?
- Constraints: What should it avoid?
Before:
Explain machine learning.
After:
Explain machine learning to a marketing manager with no technical
background. Use a real-world analogy. Keep it under 150 words.
2. Give Examples
Show the AI what you want. This is called "few-shot prompting."
Convert these product names to URL slugs:
Product: "Blue Running Shoes (Men's)"
Slug: blue-running-shoes-mens
Product: "Women's Leather Wallet - Brown"
Slug: womens-leather-wallet-brown
Product: "Kids' Backpack 2024 Edition"
Slug:
The AI sees the pattern and continues it, producing a slug in the same style.
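If you build few-shot prompts in code, a small helper keeps the example formatting consistent. This is a minimal sketch in Python; the function and variable names are illustrative, not from any particular SDK.

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the new input."""
    lines = [instruction, ""]
    for product, slug in examples:
        lines.append(f'Product: "{product}"')
        lines.append(f"Slug: {slug}")
        lines.append("")
    lines.append(f'Product: "{query}"')
    lines.append("Slug:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Convert these product names to URL slugs:",
    [("Blue Running Shoes (Men's)", "blue-running-shoes-mens"),
     ("Women's Leather Wallet - Brown", "womens-leather-wallet-brown")],
    "Kids' Backpack 2024 Edition",
)
print(prompt)
```

Ending the prompt with a dangling `Slug:` invites the model to complete the pattern rather than explain it.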
3. Assign a Role
Tell the AI who it should be.
You are a senior software engineer reviewing code for a junior developer.
Be constructive but direct. Point out issues and explain why they matter.
Review this code:
[code here]
Roles provide context that shapes the response style and expertise level.
4. Break Down Complex Tasks
Instead of one massive prompt, chain smaller ones.
Instead of:
Write a complete marketing strategy for a new SaaS product.
Do this:
Step 1: List 5 target customer segments for a project management SaaS.
Step 2: For segment 1, describe their main pain points.
Step 3: Write 3 marketing messages that address those pain points.
Each step builds on the previous one, giving you more control.
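Chaining like this is easy to script: each prompt embeds the previous step's output. In the sketch below, `call_model` is a stand-in for whatever API you actually use, stubbed here so the example runs on its own.

```python
def call_model(prompt):
    # Stand-in for a real API call (e.g. an OpenAI or Anthropic client).
    # It just echoes a placeholder so this sketch is runnable.
    return f"[model output for: {prompt[:40]}...]"

steps = [
    "List 5 target customer segments for a project management SaaS.",
    "For segment 1 in the list below, describe their main pain points.\n\n{previous}",
    "Write 3 marketing messages that address these pain points:\n\n{previous}",
]

previous = ""
for step in steps:
    prompt = step.format(previous=previous)
    previous = call_model(prompt)

print(previous)
```

Because each step is a separate call, you can inspect or edit the intermediate output before feeding it forward.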
5. Specify the Output Format
Tell the AI exactly how to structure the response.
Analyze this customer feedback. Return your analysis as JSON:
{
  "sentiment": "positive" | "negative" | "neutral",
  "main_issues": ["issue 1", "issue 2"],
  "suggested_actions": ["action 1", "action 2"],
  "priority": "high" | "medium" | "low"
}
Feedback: [customer feedback here]
Structured output is easier to parse and use programmatically.
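Once the model replies in JSON, a few lines of validation catch malformed responses before they reach the rest of your code. A minimal Python sketch using only the standard library; the field names match the prompt above.

```python
import json

REQUIRED_KEYS = {"sentiment", "main_issues", "suggested_actions", "priority"}

def parse_analysis(raw):
    """Parse the model's JSON reply and check that the expected fields exist."""
    data = json.loads(raw)  # raises an error on malformed JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"response missing fields: {sorted(missing)}")
    return data

reply = ('{"sentiment": "negative", "main_issues": ["slow checkout"], '
         '"suggested_actions": ["optimize page load"], "priority": "high"}')
analysis = parse_analysis(reply)
print(analysis["sentiment"], analysis["priority"])
```

If a response fails to parse, a common tactic is to retry the call with the error message included in the prompt.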
Common Prompt Patterns
The "Act As" Pattern
Act as a [role] with [years] of experience in [field].
Your task is to [specific task].
Example:
Act as a UX designer with 10 years of experience in e-commerce.
Review this checkout flow and identify 3 usability issues.
The Constraint Pattern
[Task description]
Constraints:
- Maximum 200 words
- Use bullet points
- Avoid jargon
- Include one example
Constraints prevent rambling and keep outputs focused.
The Persona Pattern
You are writing for [audience]. They care about [values].
They already know [context]. They want to learn [goal].
Example:
You are writing for startup founders. They care about speed and ROI.
They already know basic marketing. They want to learn growth tactics
that work with a small budget.
The Step-by-Step Pattern
Think through this step by step:
1. First, identify [X]
2. Then, analyze [Y]
3. Finally, recommend [Z]
Show your reasoning for each step.
This forces the AI to work through the problem methodically instead of jumping to a conclusion.
What to Avoid
1. Ambiguous Instructions
Bad: "Make it better."
Good: "Make it more concise. Remove filler words. Keep the key points."
2. Conflicting Requirements
Bad: "Write a comprehensive guide that's also very short."
Good: "Write a 500-word overview covering the 3 most important concepts."
3. Missing Context
Bad: "Continue from where we left off."
Good: "We were discussing X. The last point was Y. Continue with Z."
AI models have context limits. Don't assume they remember everything.
4. Overcomplicated Prompts
If your prompt is 500 words, you might be asking too much at once. Break it into steps.
Testing Your Prompts
Iterate Quickly
Run your prompt, check the output, adjust, repeat. Most good prompts take 2-3 iterations.
Test Edge Cases
Does your prompt work with unusual inputs? Test with edge cases to find weaknesses.
Document What Works
When you find a prompt that works well, save it. Build a library of tested prompts for repeated tasks.
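A prompt library can be as simple as a dictionary of templates with named placeholders. A sketch; the template names here are made up for illustration.

```python
# Reusable prompt templates with named placeholders.
PROMPT_LIBRARY = {
    "code_review": (
        "You are a senior software engineer reviewing code for a junior developer. "
        "Be constructive but direct.\n\nReview this code:\n{code}"
    ),
    "explain_simply": (
        "Explain {topic} to a {audience} with no technical background. "
        "Use a real-world analogy. Keep it under {max_words} words."
    ),
}

prompt = PROMPT_LIBRARY["explain_simply"].format(
    topic="machine learning", audience="marketing manager", max_words=150
)
print(prompt)
```

Keeping templates in one place also makes it easy to version them and compare results when you tweak the wording.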
Tools to Help
Prompt Optimizer
Use our Prompt Optimizer to improve your prompts automatically. Choose a style (clearer, more detailed, concise, or structured) and get an optimized version.
Token Counter
Check your prompt length with our Token Counter. Longer prompts cost more and may hit context limits.
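If you just need a ballpark figure before reaching for a counter, a common rule of thumb is roughly 4 characters per token for English text. The sketch below is only that heuristic, not a real tokenizer; exact counts vary by model.

```python
def estimate_tokens(text):
    """Rough token estimate using the ~4 characters-per-token heuristic for English."""
    return max(1, round(len(text) / 4))

prompt = ("Explain machine learning to a marketing manager "
          "with no technical background.")
print(estimate_tokens(prompt))
```

For billing or context-limit decisions, use the model's actual tokenizer instead of an estimate.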
Quick Reference
| Want This | Do This |
|---|---|
| Specific answer | Add constraints (format, length, audience) |
| Consistent format | Give examples |
| Expert-level response | Assign a role |
| Complex task done well | Break into steps |
| Structured data | Specify output format (JSON, table, list) |
Next Steps
Start with one technique. Apply it to a prompt you use regularly. See the difference.
As you get comfortable, combine techniques. A prompt with a role, constraints, and an example will almost always outperform a basic request.
Try our Prompt Optimizer to see how your prompts can improve. Or explore our other AI tools to build your workflow.
Want to understand how prompt length affects your costs? Read our Token Counting guide. And for a bigger picture on combining AI tools, check How to Build Your Personal AI Stack.