Prompting That Actually Works
Why 'write me a marketing email' gets garbage — and the 4-part formula that gets gold.
Same tool, wildly different results
Jake and Sana both work on the same marketing team. Both use Claude. Both need to write a product launch email for the same feature. Jake types this:
"Write me a marketing email."
He gets a generic, bland, 400-word email about "an exciting new product" that could be about literally anything. It reads like spam. He spends 45 minutes rewriting it from scratch and mutters, "AI is overhyped."
Sana types this:
"You are a senior product marketer at a B2B SaaS company that sells project management software to mid-size engineering teams. Write a launch email for our new Gantt chart feature. The audience is existing customers who currently use our kanban boards. Tone: professional but warm, like a helpful colleague — not salesy. Format: subject line, 3 short paragraphs (max 50 words each), and a single CTA button labeled 'Try Gantt Charts.' Avoid exclamation marks and buzzwords like 'revolutionary' or 'game-changing.'"
She gets a polished, on-brand email in 20 seconds. She edits two sentences, adds a customer quote, and sends it. Total time: 4 minutes.
Same tool. Same AI model. Same task. The difference? The prompt.
Prompting is not about finding magic words. It's about being specific. And there's a formula for it.
Why vague prompts fail (the restaurant analogy)
Imagine walking into a restaurant and telling the waiter: "Bring me food."
What happens? The waiter stares at you. Do you want breakfast or dinner? Steak or sushi? Are you allergic to anything? Do you want it fast or slow? Hot or cold?
The waiter might bring you something — maybe a plain chicken breast with rice — because that's the safest, most generic option. It's technically food. But it's not what you wanted.
That's exactly what happens when you type "write me a marketing email" into an AI tool. The AI doesn't know:
- Who you are or who you're writing for
- What the email is about
- How it should sound
- What to avoid
So it gives you the AI equivalent of a plain chicken breast: generic, safe, and completely uninspiring.
Now imagine telling the waiter: "I'd like a medium-rare ribeye, no sauce, with roasted vegetables, and could you bring the salad first? I'm allergic to shellfish."
That's a great prompt. Clear role (dinner guest), clear context (allergies), clear task (specific meal), clear format (salad first, then main course). The waiter knows exactly what to do.
There Are No Dumb Questions
"So I need to write a paragraph-long prompt every time?"
Not every time. For simple tasks ("translate this sentence to French"), a short prompt works fine. But for anything creative, analytical, or business-critical — emails, reports, strategies, analysis — the extra 30 seconds you spend on your prompt saves you 30 minutes of rewriting the output.
"What if I don't know how to describe what I want?"
Start with what you DON'T want. "Don't use jargon." "Don't exceed 200 words." "Don't use bullet points." Constraints are often easier to articulate than positive instructions, and they're surprisingly effective at shaping the output.
The 4-part formula: Role + Context + Task + Format
Every great prompt has four parts. Miss one, and the output gets worse. Miss two, and you're back to plain chicken breast.
| Part | What it does | Example |
|---|---|---|
| Role | Tells the AI who to be | "You are a senior financial analyst at a Fortune 500 company" |
| Context | Gives background the AI needs | "Our Q3 revenue dropped 12% due to supply chain disruptions" |
| Task | States exactly what you want | "Write a 1-page executive summary of Q3 performance for the board" |
| Format | Specifies the shape of the output | "Use 3 sections: highlights, challenges, next steps. Each section has 2-3 bullet points. No jargon." |
Here's how the formula flows:

- **Role:** "You are an expert marketing strategist with 15 years of B2B experience."
- **Context:** "Our target audience is CFOs at mid-market SaaS companies (200-1000 employees)."
- **Task:** "Write three subject lines for a cold email promoting our spend analytics tool."
- **Format:** "Each subject line should be under 50 characters. Output as a numbered list."

Why this order matters: Role sets the voice. Context gives the AI the information it needs to be accurate. Task tells it what to build. Format tells it how to shape it. Skip straight to "task" without role and context, and the AI has to guess everything else.
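If you assemble prompts in code, the formula is just string concatenation in a fixed order. Here's a minimal sketch; the `build_prompt` helper and the four example strings are illustrative, not part of any particular tool's API:

```python
def build_prompt(role: str, context: str, task: str, fmt: str) -> str:
    """Combine the four parts in order: Role, Context, Task, Format."""
    return "\n\n".join([role, context, task, fmt])

prompt = build_prompt(
    role="You are an expert marketing strategist with 15 years of B2B experience.",
    context="Our target audience is CFOs at mid-market SaaS companies (200-1000 employees).",
    task="Write three subject lines for a cold email promoting our spend analytics tool.",
    fmt="Each subject line should be under 50 characters. Output as a numbered list.",
)
print(prompt)
```

Keeping the four parts as separate variables also makes iteration easier later: you can swap out just the format or just the context without rewriting the whole prompt.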
Show, don't tell (examples beat instructions)
Here's a secret that separates good prompters from great ones: one example does more work than a paragraph of instructions.
Watch this. Instead of writing:
"Write the summary in a professional tone, using active voice, keeping sentences short, and starting each bullet point with an action verb"
Just show it:
"Write the summary like this example:
- Reduced customer churn by 8% through targeted retention campaigns
- Launched three new product features ahead of schedule
- Identified $2.3M in cost savings through vendor consolidation"
The AI instantly understands the pattern: action verb, specific metric, brief explanation. You didn't need to explain active voice, short sentences, or action verbs. You just showed it.
The rule: If your instructions about format or tone are longer than 2 sentences, replace them with one example. The example communicates faster, more accurately, and with less ambiguity.
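In code, this "show, don't tell" pattern (often called few-shot prompting) is just appending your example(s) to the task. A minimal sketch; the helper name and example strings are made up for illustration:

```python
def few_shot_prompt(task: str, examples: list[str]) -> str:
    """Append up to three style examples to a task; one is usually enough."""
    shown = "\n".join(examples[:3])  # more than three rarely helps
    return f"{task}\n\nWrite the bullet points in this style:\n{shown}"

prompt = few_shot_prompt(
    "Summarize our Q3 performance for the board.",
    ["- Reduced customer churn by 8% through targeted retention campaigns"],
)
print(prompt)
```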
There Are No Dumb Questions
"What if I can't think of a good example?"
Ask the AI to generate one. Seriously. Say: "Give me 3 examples of a great executive summary bullet point." Pick the one you like best, then use it as the example in your real prompt: "Write all bullet points in this style: [paste example]."
"How many examples should I give?"
One is usually enough. Two examples confirm the pattern. Three examples lock it in. More than three is almost never necessary — and it wastes tokens. Think of it like training a smart intern: you show them one good email, maybe two, and they get it.
Iterating: your first prompt is a rough draft
Here's the mindset shift that changes everything: your first prompt is never your final prompt. Treat it like a rough draft.
Most people type a prompt, get a mediocre result, and conclude that AI doesn't work. The pros treat the first output as feedback: they keep the parts of the prompt that worked, add one correction, and run it again.
Real example of iteration:
| Attempt | Prompt adjustment | What improved |
|---|---|---|
| 1st | Original 4-part prompt | Got the structure right, but tone was too formal |
| 2nd | Added: "Write like you're emailing a colleague, not a client" | Tone improved, but it included buzzwords |
| 3rd | Added: "Avoid words like 'synergy,' 'leverage,' and 'ecosystem'" | Nailed it |
Three attempts. Total time: 2 minutes. Compare that to writing from scratch: 30+ minutes.
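The iteration pattern in the table is simple to see as string accumulation: keep the base prompt, append one fix per attempt. A sketch with illustrative strings:

```python
base = (
    "You are a product marketer. Write a launch email for our new "
    "Gantt chart feature, aimed at existing kanban-board customers."
)
fixes = [
    "Write like you're emailing a colleague, not a client.",
    "Avoid words like 'synergy,' 'leverage,' and 'ecosystem'.",
]

prompt = base
for fix in fixes:
    prompt += "\n" + fix  # keep what worked; add exactly one correction
print(prompt)
```

Note that each attempt builds on the last rather than starting over; that's what keeps each iteration down to seconds.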
The cheat sheet: common prompt upgrades
When your output isn't right, here are the fastest fixes:
| Problem | Add this to your prompt |
|---|---|
| Too formal | "Write like you're talking to a colleague over coffee" |
| Too long | "Keep it under [X] words" or "Maximum 3 bullet points" |
| Too vague | Add a specific example of what good looks like |
| Wrong audience | "The reader is [specific person]. They know [X] but don't know [Y]" |
| Too many buzzwords | "Avoid jargon. Use plain English a high schooler would understand" |
| Missing key info | "Make sure to include [specific data point or topic]" |
| Wrong structure | "Use this structure: [heading 1], [heading 2], [heading 3]" |
| Sounds like AI | "Don't use phrases like 'delve into,' 'it's important to note,' or 'in conclusion'" |
That last one is worth memorizing. AI has favorite phrases — "delve," "crucial," "landscape," "it's worth noting" — and adding a simple "avoid AI-sounding phrases" instruction makes the output dramatically more natural.
Back to Jake
The next day, Jake sat down and tried again. Same AI, same product launch email — but this time he wrote the role, added context, defined the format, and included one example of the tone he wanted.
The first output was 80% of the way there. He iterated once. The second output he could send.
Total time: four minutes.
He's not "good at AI." He learned one formula and applied it once. That's the whole thing.
Key takeaways
- Vague prompts get vague results. "Write me an email" is like telling a waiter "bring me food." You'll get something, but it won't be what you wanted.
- Use the 4-part formula: Role + Context + Task + Format. Every part you skip makes the output worse.
- One example beats a paragraph of instructions. Show the AI what good looks like instead of describing it.
- Your first prompt is a rough draft. Iterate 2-3 times. Each iteration takes 15 seconds and dramatically improves the result.
- Constraints are powerful. "Don't use jargon" and "keep it under 100 words" shape the output more than positive instructions.
Knowledge Check
1. A colleague complains that ChatGPT always gives them "generic, useless answers." You look at their most recent prompt: "Write me a marketing email." What is the most likely reason for the poor output?
2. You're writing a prompt for a quarterly business review summary. Which of these is the strongest prompt?
3. Your prompt's output has the right content but the wrong tone — it sounds like a legal document when you wanted a friendly internal update. What is the fastest way to fix this?
4. When should you include an example (few-shot) in your prompt instead of describing the format in words?