Written by Beginners in AI. Last updated: March 2026

The secret to getting great results from AI is simple: be specific about who you are, what you need, and how you want it delivered. A vague prompt like "write me an email" will get you generic slop. A specific prompt like "write a friendly 3-paragraph email to my team announcing our new hybrid work policy, using a warm but professional tone" will get you something you can actually send. The difference isn’t magic — it’s structure. And anyone can learn it in about 10 minutes.

This guide covers the exact formula that works across every major AI tool — ChatGPT, Claude, Grok, Gemini, and Perplexity — so you’ll never stare at a blank chat window wondering what to type again.

The 3-Part Prompt Formula: Who + What + How

Every great AI prompt has three ingredients. You don’t always need all three, but including them consistently will noticeably improve your results — the official prompting guides from both OpenAI and Anthropic recommend exactly this kind of structure.

Part 1: WHO (Give the AI a Role)

Tell the AI who it should be. This sets the lens through which it processes your request.

  • "You are an experienced hiring manager..."

  • "Act as a patient math tutor for a 10th grader..."

  • "You are a professional copywriter who specializes in email marketing..."

Part 2: WHAT (State Your Task Clearly)

Be specific about what you need. Include the topic, the goal, and any critical details.

  • "Write a cover letter for a marketing coordinator position at a tech startup"

  • "Explain how compound interest works using a $1,000 example"

  • "Create a weekly meal plan for two adults, budget of $75, no seafood"

Part 3: HOW (Define the Format and Constraints)

Tell the AI how you want the output delivered. Length, tone, structure, format — spell it out.

  • "Keep it under 200 words"

  • "Use bullet points, not paragraphs"

  • "Write in a casual, conversational tone — no corporate speak"

  • "Include a table comparing the top 3 options"

10 Before-and-After Prompt Examples

These real examples show the difference between vague prompts and structured ones. Every "after" prompt uses the Who + What + How formula.

Example 1: Writing an Email

Bad prompt: "Write an email to my boss."

Good prompt: "Write a professional but friendly email to my manager requesting 3 days off next month (March 18-20) for a family event. Keep it to 4-5 sentences. Mention that I’ll finish the Q1 report before I leave and that Sarah can cover urgent items."

Why it’s better: The AI knows the relationship, the specific dates, the tone, the length, and the context. No guessing required.

Example 2: Research

Bad prompt: "Tell me about solar panels."

Good prompt: "I’m a homeowner considering solar panels for a 2,000 sq ft house in Texas. Give me a cost-benefit breakdown including average installation cost, monthly savings, payback period, and the main pros and cons. Use current data and organize it with headers."

Why it’s better: Instead of a generic Wikipedia-style overview, you get a personalized analysis you can actually use for decision-making.

Example 3: Learning a Concept

Bad prompt: "Explain blockchain."

Good prompt: "Explain blockchain technology to someone with zero technical background. Use an everyday analogy (like a shared notebook or ledger). Keep it under 200 words and avoid jargon. Then give me 3 real-world examples of how regular people encounter blockchain."Why it’s better: The AI knows your knowledge level, your preferred explanation style, your length limit, and what to do after the explanation.

Example 4: Meeting Preparation

Bad prompt: "Help me prepare for a meeting."

Good prompt: "I’m meeting with a potential client (a mid-size accounting firm with 50 employees) to pitch our project management software. Create a list of 8 questions I should ask to understand their needs, and 5 common objections they might raise with suggested responses."

Why it’s better: Specific context about the client, the product, and the exact deliverables you need.

Example 5: Resume Help

Bad prompt: "Fix my resume."

Good prompt: "Review this resume for a senior marketing manager role. Focus on: (1) making the bullet points more results-oriented with specific numbers, (2) removing any weak or vague language, and (3) suggesting 3 skills I should add based on current job market trends. Here’s my resume: [paste resume]"

Why it’s better: Three specific focus areas instead of a vague "fix it" that could go in any direction.

Example 6: Social Media Content

Bad prompt: "Write a LinkedIn post."

Good prompt: "Write a LinkedIn post (150-200 words) sharing a lesson I learned about managing remote teams. The lesson: daily standups don’t work for every team — async check-ins (written updates instead of meetings) increased my team’s productivity by 30%. Tone: thoughtful and conversational, not preachy. End with a question to drive comments."

Why it’s better: Specific topic, word count, tone guidance, a concrete data point, and a structural request for engagement.

Example 7: Data Analysis

Bad prompt: "Analyze this data."

Good prompt: "I’m uploading a spreadsheet of our Q4 sales data. Identify the top 3 performing products by revenue, any month-over-month trends, and products that declined more than 10%. Present findings in a summary table, then write 3 key takeaways my VP would care about."

Why it’s better: Specific analysis criteria, a clear output format, and an audience awareness that shapes the takeaways.

Example 8: Creative Writing

Bad prompt: "Write a story."

Good prompt: "Write the opening scene (300 words) of a mystery story set in a small-town bookshop. The protagonist is a retired librarian who finds a coded message inside a donated book. Tone: cozy mystery, not dark or violent. Write in third person, past tense."

Why it’s better: Genre, setting, character, plot hook, tone, length, and point of view are all specified.

Example 9: Brainstorming

Bad prompt: "Give me business ideas."

Good prompt: "I’m a graphic designer with 10 years of experience, $5,000 to invest, and 10 hours per week of free time. Give me 7 side business ideas I could start within 30 days. For each, include: what it is, startup cost estimate, income potential in months 1-6, and the first 3 steps to get started."

Why it’s better: Your background, budget, time constraints, and desired output format eliminate generic suggestions.

Example 10: Feedback and Critique

Bad prompt: "Is this good?"

Good prompt: "Critique this product description for an online store selling handmade candles. Score it 1-10 on: clarity, persuasiveness, SEO potential, and emotional appeal. Then rewrite it to improve the weakest area. Here’s the description: [paste text]"

Why it’s better: A scoring framework forces structured feedback, and the rewrite request turns critique into action.

Want prompt templates you can copy and paste? Subscribe to Beginners in AI for our free weekly prompt playbook. Subscribe at beginnersinai.com

5 Advanced Techniques That Dramatically Improve Results

Once you’ve mastered the basics, these techniques will push your AI output from good to genuinely impressive. They work in ChatGPT, Claude, Grok, and all major AI tools.

1. Give It a Role (and a Backstory)

Go beyond "act as a marketer." Try: "You are a senior content strategist at a B2B SaaS company with 15 years of experience. You specialize in turning complex technical products into simple, compelling marketing copy. Your writing style is clear, confident, and avoids buzzwords."

The more specific the role, the more focused the output. Think of it as casting an actor — the more detailed the character brief, the better the performance.

2. Ask for Alternatives

Don’t accept the first answer. After any output, follow up with:

  • "Give me 3 alternative approaches"

  • "Now write a version that’s half the length"

  • "Rewrite this but make it more casual / formal / persuasive"

AI tools are extremely good at iterating. The first output is a draft, not a final product — people who iterate on AI outputs at least once or twice consistently end up with noticeably better results than those who accept the first attempt.

3. Chain Your Prompts (Multi-Step Workflows)

Break complex tasks into steps instead of asking for everything at once.

Instead of: "Write a complete marketing strategy for my bakery."

Try this sequence:

  1. "First, help me define my target customer. I run a sourdough bakery in Portland. Ask me 5 questions about my business."

  2. [Answer the questions]

  3. "Based on my answers, identify my top 3 customer segments."

  4. "Now create a marketing strategy focused on segment #1, with specific tactics for Instagram, local SEO, and email marketing."

Each step builds on the last, and the AI maintains context throughout the conversation. This technique works especially well in Claude, which has a larger context window (the amount of conversation it can remember) — up to 200K tokens (roughly 150,000 words).
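Under the hood, a chained conversation is just a growing list of messages that gets sent back to the AI at every step. The sketch below illustrates that idea with plain Python data structures — the message format mirrors what chat-based AI APIs typically use, but the send() function here is a placeholder, not a real API call.

```python
# A minimal sketch of how context accumulates in a prompt chain.
# send() is a stand-in for a real API call: it appends the user's
# message, produces a placeholder reply, and keeps both in history.

def send(history, user_message):
    """Add a user turn and a (placeholder) assistant turn to the history."""
    history.append({"role": "user", "content": user_message})
    reply = f"[AI response to: {user_message[:40]}...]"  # placeholder reply
    history.append({"role": "assistant", "content": reply})
    return reply

conversation = []
send(conversation, "First, help me define my target customer. "
                   "I run a sourdough bakery in Portland. Ask me 5 questions.")
send(conversation, "Based on my answers, identify my top 3 customer segments.")

# Every new step sees the full history, so step 2 can build on step 1.
print(len(conversation))  # 4 messages: two prompts, two replies
```

Because the entire history travels with each request, later steps can refer back to "my answers" or "segment #1" without restating anything — which is exactly why chaining beats one giant prompt.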

4. Set Constraints That Force Creativity

Constraints make AI outputs more useful:

  • "Explain this concept using only words a 12-year-old would know"

  • "Give me a marketing plan I can execute with zero budget"

  • "Write this email in exactly 50 words"

  • "Provide only options that can be done in under 2 hours"

Without constraints, AI tends toward generic, middle-of-the-road responses. Constraints force specificity.

5. Ask It to Critique Itself

This is one of the most powerful techniques available, and most people never use it.

After getting any output, say: "Now critique your own response. What are the 3 biggest weaknesses, and how would you fix them?"

Then: "Now rewrite the original response addressing those weaknesses."

This self-critique loop produces significantly better output because it forces the AI to evaluate its work against higher standards. Claude is particularly strong at self-critique — Anthropic has specifically trained it to be honest about its limitations.

7 Common Mistakes That Ruin AI Prompts

  1. Being too vague. "Help me with marketing" gives the AI nothing to work with. What product? What audience? What channel? What budget?

  2. Asking multiple unrelated questions at once. "Write me a cover letter, plan my vacation, and explain quantum computing." Stick to one topic per conversation or clearly separate tasks.

  3. Not specifying format. If you don’t say "use bullet points" or "organize with headers," you’ll get a wall of text.

  4. Forgetting to give context. The AI doesn’t know your situation unless you explain it. Your industry, experience level, audience, goals — share them.

  5. Accepting the first response. Always iterate. Say "make it shorter," "make it more specific," or "try a different angle." The second or third attempt is almost always better.

  6. Using AI-speak. You don’t need to say "As a large language model, please generate..." Just talk normally. "Write me a..." works perfectly.

  7. Not proofreading. AI output is a draft, not a finished product. Read it, check the facts, adjust the tone, and make it yours. AI-generated text regularly contains factual errors on complex topics, so always verify before you publish.

Level up your AI skills every week — for free. Subscribe to Beginners in AI at beginnersinai.com

Quick Reference: The Prompt Formula Cheatsheet

Copy this template and fill in the brackets:

You are a [ROLE] with expertise in [SPECIFIC AREA]. I need you to [SPECIFIC TASK] for [CONTEXT/AUDIENCE]. Requirements: [FORMAT: bullet points / paragraphs / table], [LENGTH: word count or page count], [TONE: casual / professional / academic / persuasive], [ANY CONSTRAINTS: budget, time, skill level]. [Paste any reference material, data, or examples here]

This template works in ChatGPT, Claude, Grok, Gemini, Perplexity, and any other AI tool that accepts text prompts. Save it somewhere you’ll remember — it’s the single most useful thing in this entire guide.
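If you find yourself filling in the same brackets over and over, the cheatsheet can be turned into a small reusable function. This is just an illustrative sketch — the function and parameter names are made up for this example, not part of any AI tool.

```python
# A sketch of the fill-in-the-brackets cheatsheet as a reusable function.
# All names here (build_prompt and its parameters) are illustrative.

def build_prompt(role, area, task, audience, fmt, length, tone,
                 constraints, material=""):
    """Assemble a Who + What + How prompt from the cheatsheet's slots."""
    prompt = (
        f"You are a {role} with expertise in {area}. "
        f"I need you to {task} for {audience}. "
        f"Requirements: format: {fmt}; length: {length}; "
        f"tone: {tone}; constraints: {constraints}."
    )
    if material:  # optional reference material, data, or examples
        prompt += f"\n\nReference material:\n{material}"
    return prompt

print(build_prompt(
    role="patient math tutor",
    area="high school algebra",
    task="explain how slope works",
    audience="a 9th grader",
    fmt="short paragraphs with one worked example",
    length="under 150 words",
    tone="friendly",
    constraints="no jargon",
))
```

Paste the resulting string into any chat window — the point is simply that the Who, What, and How slots are filled every time, so nothing gets forgotten.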

Pro tip: Claude’s collaboration features let you work on documents in real time, and Claude Code can handle complex analysis tasks even if you’ve never written a line of code. Learn more at claude.ai.

Frequently Asked Questions

Do I need to learn a different prompt style for each AI tool?

No. The Who + What + How formula works across all major AI tools. ChatGPT, Claude, Grok, Gemini, and Perplexity all respond to the same principles: be specific, provide context, and define your desired output. You might notice slight differences in personality (Claude is more cautious, Grok is more direct), but the prompting technique is universal.

How long should my prompts be?

As long as they need to be to communicate your request clearly — but not longer. Most effective prompts are 3-8 sentences. For complex tasks, a prompt of 100-200 words with clear structure (bullets or numbered requirements) tends to produce the best results.

Can I just tell the AI "make it better" if I don’t like the result?

You can, but "make it better" is vague. Instead, tell it exactly what to improve: "Make it shorter," "Use simpler language," "Add more specific examples," or "Make the conclusion stronger." Specific feedback produces specific improvements.

Is prompt engineering (the skill of writing effective AI prompts) a real job skill?

Yes. LinkedIn has reported dramatic growth in job postings mentioning prompt engineering or AI proficiency between 2023 and 2025. Even if your job title never includes "prompt engineer," the ability to get great results from AI tools is increasingly valued across every industry.

What’s the fastest way to practice?

Pick one task you do every week — writing an email, summarizing a meeting, brainstorming ideas — and do it with AI using the formula from this guide. Within 2-3 weeks, structured prompting will become second nature.

Subscribe free at beginnersinai.com for daily AI news and tips.
