Advanced ChatGPT Techniques
Power user techniques: chain-of-thought reasoning, few-shot learning, system prompts, memory and context management, multi-turn strategies, and ChatGPT plugins.
The consultant who charges $500/hour for prompting
A strategy consultant named Elena bills $500/hour. A third of her billable work is now done through ChatGPT — competitive analyses, market sizing, financial modeling assumptions, client presentation drafts. Her clients don't know and don't care. They pay for the quality of her thinking, not the tools she uses.
But here's the thing: Elena's ChatGPT outputs are dramatically better than what most people get from the same tool. She writes prompts that would fill half a page. She chains conversations across 15-20 messages. She uses techniques that most users have never heard of. The gap between a casual user and a power user isn't access to a better model — it's knowing how to squeeze 10x more value from the same one.
This module teaches you Elena's techniques.
Advanced chain-of-thought techniques
You learned basic chain-of-thought in Module 2 ("think step by step"). Here's how power users take it further.
Self-consistency prompting:
Instead of asking for one chain of reasoning, ask for three — then pick the answer that appears most often.
"I need to estimate the total addressable market for AI-powered legal document review in the US. Work through this three different ways: (1) top-down from total legal spending, (2) bottom-up from number of law firms and average document volume, (3) by analogy with a similar market that has already been disrupted by AI. Show your reasoning for each approach. Then compare the three estimates and give me your best single number with a confidence range."
Why this works: a single chain of reasoning can go wrong at any step. Three independent approaches that converge on a similar number give you much higher confidence.
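If you work through the API instead of the chat window, self-consistency can be automated: request several independent completions, extract each final answer, and majority-vote. A minimal sketch, assuming the answers have already been extracted from the model's responses (the dollar figures are stand-ins, not real outputs):

```python
from collections import Counter

def self_consistent_answer(answers):
    """Majority-vote across final answers from independent reasoning chains.

    Returns the winning answer and the number of chains that agreed,
    so the agreement count can serve as a rough confidence signal.
    """
    best, votes = Counter(answers).most_common(1)[0]
    return best, votes

# Three independent market-size estimates (stand-ins for model outputs):
answer, votes = self_consistent_answer(["$2B", "$2B", "$3B"])
# answer == "$2B", votes == 2 — two of three approaches converged
```

In practice you would run the same prompt three or more times (or ask for three approaches in one prompt, as above) and treat low agreement as a signal to dig deeper.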
Structured reasoning with constraints:
"Evaluate whether we should expand into the European market. Use this framework: (1) List 5 arguments FOR expansion, rated 1-10 on strength. (2) List 5 arguments AGAINST, rated 1-10. (3) Identify the single biggest unknown that would change your recommendation. (4) Give a final recommendation with explicit assumptions. Think step by step through each argument before rating it."
✗ Basic chain-of-thought
- ✗ Think step by step.
- ✗ Show your reasoning.
- ✗ Walk me through the logic.
✓ Advanced chain-of-thought
- ✓ Work through this three independent ways and compare results.
- ✓ Rate each argument 1-10 on strength before reaching a conclusion.
- ✓ Identify your assumptions explicitly — which ones, if wrong, would reverse your conclusion?
- ✓ Play devil's advocate against your own reasoning before finalizing.
Few-shot learning: advanced patterns
Basic few-shot: you show 2-3 examples. Advanced few-shot: you carefully design examples that teach nuanced behavior.
Teaching tone through examples:
"I want you to write product descriptions in our brand voice. Here are three examples that capture it perfectly:
Product: Wireless earbuds
Our style: 'Twelve hours of battery. Zero hours of fiddling with Bluetooth. They connect when you open the case. They disconnect when you close it. That's it.'

Product: Standing desk
Our style: 'Your back has been asking you to stand up for years. This desk finally makes it easy. Electric motor. Four memory presets. Moves in three seconds. No wobble at any height.'

Product: Laptop sleeve
Our style: 'It fits a 14-inch laptop. It has one zipper pocket for your charger. It doesn't have seventeen compartments you'll never use. Simple, padded, done.'
Now write one for: Noise-canceling headphones. Match the voice exactly."
Notice: the examples teach sentence length (short), punctuation style (periods, not exclamation marks), attitude (opinionated, minimal), and information density (specific numbers, no fluff). Three examples communicate all of this more effectively than any description could.
Teaching classification with edge cases:
"Classify customer feedback into: Bug, Feature Request, Praise, or Complaint. Here are examples including edge cases:
'The app crashes when I upload photos over 5MB' → Bug
'Would be great if you added dark mode' → Feature Request
'I love how fast the search works!' → Praise
'Your support team took 3 days to respond' → Complaint
'The search is fast but sometimes returns wrong results' → Bug (functionality issue takes priority over embedded praise)
'Can you fix the login issue? Also, dark mode would be nice' → Bug (primary issue; note secondary Feature Request)
Now classify these 10 items: [paste items]"
The last two edge cases are the key. They teach the model how to handle ambiguity — something a handful of clean examples would never cover.
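For API users, a few-shot classifier like this is usually assembled as alternating user/assistant turns rather than one long prompt. A sketch: the dict shape matches the common chat-completions `messages` format, but the function name and labels here are illustrative, and nothing is sent to a model.

```python
def build_fewshot_messages(system_prompt, examples, query):
    """Assemble a chat-style message list that teaches by example.

    Each (text, label) pair becomes a user/assistant turn, so edge
    cases are demonstrated rather than described in prose.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for text, label in examples:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": query})
    return messages

examples = [
    ("The app crashes when I upload photos over 5MB", "Bug"),
    ("Would be great if you added dark mode", "Feature Request"),
    # Edge case: embedded praise, but the functionality issue wins.
    ("The search is fast but sometimes returns wrong results", "Bug"),
]
msgs = build_fewshot_messages(
    "Classify feedback as Bug, Feature Request, Praise, or Complaint.",
    examples,
    "Your support team took 3 days to respond",
)
```

Putting the labels in assistant turns (instead of inside one prompt) mirrors how the model will be asked to respond, which tends to make the pattern easier for it to follow.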
System prompts and Custom Instructions
Every ChatGPT conversation has an invisible layer: the system prompt. This is the set of instructions that shapes the AI's behavior before you say anything. In the ChatGPT interface, you control this through Custom Instructions.
Custom Instructions (Settings → Personalization → Custom Instructions):
The two fields are:
- "What would you like ChatGPT to know about you?" — Your role, industry, expertise level, goals
- "How would you like ChatGPT to respond?" — Format, tone, length, things to include/avoid
Power user Custom Instructions example:
About me: "I'm a product director at a B2B SaaS company (50 employees, Series B, $12M ARR). I manage a team of 3 PMs. I present to the executive team weekly. I'm data-driven and prefer specificity over generality."
Response style: "Be direct and concise. Use bullet points over paragraphs. Include specific metrics or examples when possible. Challenge my assumptions — don't just agree with me. If I ask for something vague, ask me a clarifying question instead of guessing. Never use phrases like 'great question' or 'absolutely.' Format tables in markdown."
These instructions apply to EVERY conversation. You set them once and every interaction becomes more relevant.
There Are No Dumb Questions
What's the difference between Custom Instructions and a system prompt?
Custom Instructions are the consumer version of system prompts. If you use the ChatGPT API, you can set an explicit system prompt at the start of each conversation. In the ChatGPT app, Custom Instructions serve the same purpose — they're injected as context before your first message. Developers building on the API have more granular control over system prompts.
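For API users, the equivalent of Custom Instructions is an explicit system message at the top of the message list. A minimal sketch of the request body — the model name is a placeholder, and nothing is actually sent here:

```python
def chat_request(custom_instructions, user_message, model="gpt-4o"):
    """Build a chat-completion request body.

    The 'system' message plays the role Custom Instructions play in
    the app: it is injected as context before the user's first message.
    """
    return {
        "model": model,  # placeholder; use whichever model you have access to
        "messages": [
            {"role": "system", "content": custom_instructions},
            {"role": "user", "content": user_message},
        ],
    }

req = chat_request(
    "Be direct and concise. Use bullet points. Challenge my assumptions.",
    "Should we expand into the European market?",
)
```

Unlike the app, the API lets you vary the system prompt per conversation, which is what makes the granular control mentioned above possible.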
Do Custom Instructions use up my context window?
Yes. They're included in every message, so very long Custom Instructions reduce the space available for your conversation. Keep them under 300 words. Focus on the information that applies to MOST of your conversations, not edge cases.
Memory and context management
ChatGPT has a limited context window — the amount of text it can "see" at once. Managing this context is what separates power users from everyone else.
The context window is like a whiteboard. Everything you write stays visible — but once the whiteboard is full, the oldest content gets erased. In long conversations, ChatGPT literally forgets your early messages.
ChatGPT Memory (when enabled) lets the AI remember facts across conversations. It stores specific things you tell it: your name, your role, your preferences. Think of it as a sticky note on the whiteboard that never gets erased.
Context stuffing is the technique of putting essential information at the START of your prompt, where it gets the most attention. Models attend more to the beginning and end of the context window than to the middle — the "lost in the middle" effect.
Summarization checkpoints keep long conversations productive. Every 10-15 messages, say: "Summarize everything we've discussed so far in 5 bullet points. I'll paste this summary into a new conversation if we need to continue."
The context refresh technique:
When a conversation is getting long and ChatGPT starts forgetting earlier context:
"Before we continue, let me restate the key context:
- We're building a pricing page for [product]
- Target audience: SMB owners, non-technical
- Tone: confident, simple, no jargon
- We decided on 3 tiers: Starter ($29), Pro ($79), Enterprise (custom)
- We've drafted the Starter and Pro descriptions
Now let's write the Enterprise tier description."
This "re-grounding" prompt takes 30 seconds and prevents the drift that ruins long conversations.
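The summarization-checkpoint idea can be sketched as a small helper for API users: once a conversation grows past a threshold, compress the older turns into one summary message and keep only the most recent turns verbatim. Assumed here: `summarize` is a callable (in practice, a model call) that condenses a list of messages into short text; the thresholds are illustrative.

```python
def checkpoint(messages, summarize, every=12, keep_recent=4):
    """Apply a summarization checkpoint to a long conversation.

    If the conversation exceeds `every` messages, replace the older
    turns with a single summary message and keep the last
    `keep_recent` turns verbatim.
    """
    if len(messages) <= every:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize(old)  # in practice: a model call that returns bullets
    return [{"role": "user", "content": "Key context so far:\n" + summary}] + recent

msgs = [{"role": "user", "content": f"msg {i}"} for i in range(15)]
out = checkpoint(msgs, lambda old: f"{len(old)} turns condensed")
# 15 messages become 5: one summary plus the 4 most recent turns
```

This is the programmatic version of the manual "re-grounding" prompt above: the summary message does the same job as restating the key context by hand.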
Multi-turn conversation strategies
A single prompt gets a single answer. A multi-turn conversation builds something.
The iterative refinement pattern:
| Turn | What you say | What happens |
|---|---|---|
| 1 | "Draft a value proposition for [product] aimed at [audience]" | You get a first draft |
| 2 | "The second sentence is too vague. Make it specific — use a number or a concrete outcome" | One targeted improvement |
| 3 | "Good. Now write 3 variations — one emphasizing speed, one emphasizing savings, one emphasizing quality" | Three alternatives to choose from |
| 4 | "I like variation 2 but with the opening from variation 1. Combine them." | Custom hybrid |
| 5 | "Cut this to under 25 words. Every word must earn its place." | Final polish |
Five turns. Two minutes. A value proposition that's tighter than what most teams produce in a brainstorming session.
The persona-switching technique:
"You are a harsh literary critic. Tear apart this blog post — what's weak, what's cliched, what would you cut?"
Then:
"Now switch roles. You are an encouraging writing coach. What's working well in this post? What are its strengths?"
Then:
"Now you're an SEO specialist. How would you optimize this post for search without sacrificing quality?"
Three perspectives on the same piece. Each catches things the others miss.
ChatGPT plugins and tools
ChatGPT Plus users have access to plugins and tools that extend the AI's capabilities beyond text generation.
| Tool | What it does | Best for |
|---|---|---|
| Browsing | Searches the web in real time | Current events, price checks, recent data |
| Code Interpreter | Runs Python code, reads files | Data analysis, charts, file conversion |
| DALL-E | Generates images from text | Concepts, illustrations, presentations |
| Custom GPTs | Pre-configured ChatGPT with specific instructions | Repeated workflows (covered in Module 8) |
Combining tools in one conversation:
"Search the web for the latest quarterly revenue figures for Shopify, Stripe, and Square. Then use Code Interpreter to create a bar chart comparing them. Include year-over-year growth rate as a second axis."
This single prompt uses browsing (to find current data) and Code Interpreter (to analyze and visualize it). The key is that these tools work together within a conversation.
There Are No Dumb Questions
Do I need ChatGPT Plus for all of these techniques?
No. Chain-of-thought, few-shot learning, multi-turn strategies, and context management work on the free tier. Browsing, Code Interpreter, DALL-E, and Custom GPTs require Plus ($20/month) or Team ($25/user/month). Custom Instructions are available to all users.
Are these techniques specific to ChatGPT?
Most of them work with any large language model — Claude, Gemini, Copilot. Chain-of-thought, few-shot learning, system prompts, and context management are universal prompting techniques. The specific tools (Code Interpreter, plugins) are ChatGPT-specific, but competing models have equivalents.
The meta-prompt: asking ChatGPT to improve your prompt
The most advanced technique: use ChatGPT to make your prompts better.
"I want to write a prompt that gets ChatGPT to create a detailed competitive analysis. Here's my current prompt: [paste your prompt]. Improve this prompt. Make it more specific, add constraints that will improve output quality, and suggest any context I should include. Explain your changes."
Or even simpler:
"I want to accomplish [goal]. Write the optimal prompt I should use to get the best possible result from ChatGPT. Include role assignment, context, format specifications, and any techniques (few-shot, chain-of-thought) that would improve the output."
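The first meta-prompt above is mechanical enough to template. A trivial sketch that wraps any draft prompt in the improvement request (wording taken from the example above; the function name is illustrative):

```python
def meta_prompt(draft_prompt):
    """Wrap a draft prompt in a request to improve it."""
    return (
        "Here is my current prompt:\n\n"
        f"{draft_prompt}\n\n"
        "Improve this prompt. Make it more specific, add constraints "
        "that will improve output quality, and suggest any context I "
        "should include. Explain your changes."
    )

improved_request = meta_prompt(
    "Create a detailed competitive analysis of the project-management software market."
)
```

Run the wrapped prompt once, adopt the improved version, and only then run the real task.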
Temperature and creativity control
You can't set temperature directly in the ChatGPT interface, but you can simulate it with language:
| What you want | What to say |
|---|---|
| Predictable, factual | "Give me the most likely, conventional answer. Stick to established facts." |
| Slightly creative | "Be creative but grounded. Suggest ideas that are innovative but realistic." |
| Highly creative | "Be bold and unconventional. I want surprising ideas, even if some are risky. Quantity over quality — I'll filter later." |
| Maximum divergence | "Give me the weirdest, most unexpected ideas you can think of. Break conventions. I want ideas nobody else would suggest." |
For API users, temperature 0.0-0.3 gives near-deterministic, factual output. Temperature 0.7-1.0 gives creative, varied output. Above roughly 1.5, output turns chaotic and is rarely useful.
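In the API, the table above collapses to a single numeric parameter. A sketch mapping the creativity levels to temperature values — the exact picks within the stated ranges are a judgment call, and the model name is a placeholder:

```python
# Illustrative mapping from creativity level to API temperature.
# Values chosen from the ranges in the text; tune to taste.
TEMPERATURE = {
    "factual": 0.2,    # predictable, conventional answers
    "grounded": 0.7,   # creative but realistic
    "divergent": 1.0,  # bold, varied, riskier ideas
}

def completion_params(creativity, prompt, model="gpt-4o"):
    """Build chat-completion parameters with an explicit temperature."""
    return {
        "model": model,  # placeholder model name
        "temperature": TEMPERATURE[creativity],
        "messages": [{"role": "user", "content": prompt}],
    }

params = completion_params("divergent", "Brainstorm 20 product names.")
```

In the chat interface, the phrasings in the table are the only lever you have; in the API, set the number directly.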
Back to Elena
Elena's prompts are long, but they're not complicated. She assigns specific roles, provides detailed context, uses few-shot examples to lock in tone, manages context across long conversations, and iterates relentlessly. None of these techniques took more than five minutes to learn. The gap between her and a casual user isn't a secret — it's practice. She uses ChatGPT for 20+ tasks per day, and each task teaches her something about what works. After six months, her prompts are refined through hundreds of iterations. The $500/hour value isn't in the prompts themselves. It's in knowing which prompts to write for which problems — and that comes from using the tool every single day.
Key takeaways
- Advanced chain-of-thought: use self-consistency (three reasoning paths), structured frameworks, and explicit assumption identification
- Few-shot learning is most powerful when examples include edge cases that teach nuanced judgment
- Custom Instructions set once and shape every conversation — invest 10 minutes setting them up well
- Manage context actively: refresh key information in long conversations, summarize periodically, front-load critical instructions
- Multi-turn strategies (iterative refinement, persona switching, self-critique) produce dramatically better output than single-prompt approaches
- The meta-prompt technique: ask ChatGPT to improve your prompts before you run them
Knowledge Check
1. What is self-consistency prompting?
2. What makes advanced few-shot learning more effective than basic few-shot?
3. What is the 'lost in the middle' phenomenon in language models?
4. What is a meta-prompt?