We all use AI every day, whether it’s to write an email, summarize a doc, or generate an image. And it works. Mostly.
You type a prompt, and you get an image back.
It’s not wrong, but it’s not it either. The lighting feels off, the mood isn’t right, and the details don’t match what you were picturing.
So you tweak a word or two. Maybe try once more. Then you settle.
Most people quit right there and blame the tool. But that gap isn’t about the model being bad. It’s about how we describe what we see in our heads.
And today, we are closing that gap.
We will see why one-shot prompts underperform, and how iteration, feedback, and structure turn “almost right” answers into outputs you can actually use.
Let’s go!
The Models Changed. Did You?
Google just dropped Nano Banana Pro.
It’s built on top of Gemini 3 Pro, their most capable model yet. But what makes it good is that it thinks while it generates images.
Not metaphorically. Literally.
When you ask it to create an orthographic blueprint of a building (plan, elevation, section views), it doesn’t just guess.
It uses chain-of-thought reasoning.
It creates an initial render. Then it goes through what it made. Checks if the user’s requirements are there. Adjusts. Outputs a final version.
That’s not a tool; that’s a collaborator. And if you are still prompting like it’s 2023, you are leaving most of that capability on the table.
Why “Just Describe What You Want” Falls Flat
We are told generative AI is like a genie. Give it a wish, and it delivers.
That’s half-true. A low-effort prompt gets a low-effort answer.
But the issue isn’t just specificity. It’s process.
The best results don’t come from single prompts. They come from iteration. A conversation, not a command. Take this example:
Prompt 1: “A person sitting alone at a desk working on a laptop in a room.”
Output:

Prompt 2: “Same scene. Late night, only light source is the laptop screen illuminating their face. Coffee cup beside them. City lights visible through window behind.”
Output:

Prompt 3: “Add rain on the window. Reflection of code on the window glass. Lo-fi aesthetic, muted colors except for the warm laptop glow. Cinematic aspect ratio 21:9.”
Output:

Each loop makes your instructions sharper. In every loop, the model “learns” your taste within that conversation. This is what prompt engineering actually is.
It’s not a magic formula but an iterative design process.
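Under the hood, this loop is just accumulated conversation state. Here’s a minimal sketch in plain Python (no real model API, just the data flow) of how each turn layers new constraints onto the brief:

```python
# Sketch of the iteration loop as data: each turn adds constraints on top
# of the previous ones, the way a chat model keeps conversation state.
def refine(history, tweak):
    """Append one refinement to the running list of instructions."""
    return history + [tweak]

turns = ["A person sitting alone at a desk working on a laptop in a room."]
turns = refine(turns, "Late night, only light source is the laptop screen. "
                      "Coffee cup beside them. City lights through the window.")
turns = refine(turns, "Add rain on the window. Reflection of code on the "
                      "glass. Lo-fi aesthetic, muted colors, 21:9.")

# The effective brief the model ends up working from is the whole history:
brief = " ".join(turns)
print(brief)
```

Each `refine` call stands in for one round of “try again, but…” feedback; the model never sees a single giant prompt, it sees the accumulated conversation.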
Common Mistakes I See People Make
Vague prompts. “Make it better” tells the model nothing. Say what to improve and how.
No examples. You are asking the model to read your mind. Give it one or two examples of what “good” looks like.
Too much at once. If your prompt is 500 words with six different asks, break it up. Or even better, keep one task per prompt.
No feedback loop. The first output is rarely the best. “Try again, but...” is the most powerful phrase in prompt engineering.
Assuming it knows context. It doesn’t remember your last conversation. It doesn’t know your industry. State what matters.
The Anatomy of a Prompt That Works
There’s a reason most prompts underperform: they’re missing pieces. A good prompt has four components:

Context: Why are you asking? What’s the situation?
Instruction: What specific task should the model perform?
Input data: What raw material are you giving it to work with?
Output indicator: What format do you want the answer in?
Here’s a weak prompt:
“Summarize this article.”
Here’s the same prompt, fixed with the listed components:
“You’re a business analyst preparing a briefing for executives. Summarize the following article on Tesla’s India policy. Use the pyramid principle: start with the conclusion, then key arguments, then supporting details. Output should be under 200 words. Here’s the article: [paste]”
Same task. Completely different result. The first prompt makes the AI guess what you want. The second tells it exactly how to think.
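If you write prompts programmatically, the four-part anatomy maps neatly onto a small builder function. This is a sketch, not a real library; the field names just mirror the components above:

```python
# Assemble the four components (context, instruction, input data,
# output indicator) into a single prompt string.
def build_prompt(context, instruction, input_data, output_indicator):
    """Combine the four prompt components in a fixed, readable order."""
    return (
        f"{context}\n"
        f"Task: {instruction}\n"
        f"Format: {output_indicator}\n"
        f"---\n{input_data}"
    )

prompt = build_prompt(
    context="You're a business analyst preparing a briefing for executives.",
    instruction="Summarize the following article on Tesla's India policy "
                "using the pyramid principle: conclusion first, then key "
                "arguments, then supporting details.",
    input_data="[paste article here]",
    output_indicator="Under 200 words.",
)
print(prompt)
```

The payoff is consistency: every prompt you send has all four slots filled, so a missing component is a bug you can see.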
The Checklist (Steal This)
Before you write the next prompt, run through this:

Goal defined? What exactly do you want the output to be?
Format specified? Table, paragraph, bullet points, CSV?
Role assigned? “Act as an analyst” changes how the model approaches the problem.
Audience clarified? A 10-year-old and a PhD need different explanations.
Examples included? Show, don’t just tell. One good example beats three paragraphs of instructions.
Restrictions stated? Word limits, what to avoid, what not to include.
Style indicated? Formal, casual, technical, conversational?
You don’t need all seven every time.
But the more complex your task, the more of these you need.
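You can even turn the checklist into a rough pre-flight lint. The cue words below are illustrative guesses, not a validated heuristic, and this only covers the three items that are easy to spot in text:

```python
# A rough checklist lint for draft prompts. Each item maps to cue words
# that suggest the prompt covers it; the cue lists are assumptions.
CHECKS = {
    "format specified": ["table", "bullet", "csv", "paragraph", "words"],
    "role assigned": ["act as", "you're a", "you are a"],
    "examples included": ["example", "e.g."],
}

def missing_items(prompt):
    """Return checklist items the prompt shows no sign of covering."""
    lowered = prompt.lower()
    return [item for item, cues in CHECKS.items()
            if not any(cue in lowered for cue in cues)]

print(missing_items("Summarize this article."))
# All three items flagged: the prompt names no format, role, or example.
```

A weak prompt lights up the whole list; a well-specified one comes back empty.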
Three Prompt Patterns That Work
1. Persona Pattern
Tell the model who to be.
“Act as a yoga instructor. Create a beginner-friendly routine for joint mobility.”
“Act as a skeptical investor. Poke holes in this business plan.”
The persona shapes the response. A “yoga instructor” gives different advice than a “physical therapist,” even for the same question.
2. Recipe Pattern
You know the task has steps. Tell the model what they are.
“I want a complete travel itinerary from Bangalore to Darjeeling. I know I’ll need to fly to Bagdogra, then take ground transport. Please fill in the details, timings, and options.”
Instead of asking AI to figure out the structure, you are asking it to fill in the details. That makes it much easier for the model to get right.
3. Template Pattern
Give the model placeholders. Let it fill them in.
“Generate a day-by-day travel itinerary for Paris. Use this format for each day: Day [X]: Visit [location] at [time] for [activity].”
Now the output is predictable. Easy to scan. Easy to use.
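The template pattern is literally string formatting. Here’s the Paris template from above in plain Python, with a made-up two-day plan standing in for what the model would generate:

```python
# Template pattern in miniature: the placeholders are the ones from the
# Paris example; string formatting fills them the way the model would.
TEMPLATE = "Day {day}: Visit {location} at {time} for {activity}."

plan = [
    {"day": 1, "location": "the Louvre", "time": "10:00",
     "activity": "the highlights tour"},
    {"day": 2, "location": "Montmartre", "time": "09:00",
     "activity": "a walking tour"},
]

for stop in plan:
    print(TEMPLATE.format(**stop))
```

Because every line comes out of the same template, downstream code (or a skimming human) can rely on the shape.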
Zero-Shot, Few-Shot, Chain-of-Thought
Sounds technical, doesn’t it? It isn’t.
They are types of prompting techniques.
Zero-shot:
No examples. Just tell it what to do.
“Classify this feedback as positive, negative, or neutral: ‘The delivery was late but the product was fine.’”
Works for simple tasks. Falls apart on complex ones.
Few-shot:
Give it examples first. Then ask.
“Example 1: ‘I loved it!’ → Positive
Example 2: ‘Terrible experience.’ → Negative
Example 3: ‘It was okay.’ → Neutral
Now classify: ‘Not bad, but I expected more.’”
The model learns the pattern from your examples. Much more reliable.
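Few-shot prompts are easy to assemble programmatically, which keeps the example format perfectly consistent. A sketch (the `Example:` label and `->` separator are my choices, not a required format):

```python
# Build a few-shot classification prompt from (text, label) pairs.
def few_shot_prompt(examples, query):
    """Prefix labeled examples before the item to classify."""
    lines = [f"Example: {text!r} -> {label}" for text, label in examples]
    lines.append(f"Now classify: {query!r}")
    return "\n".join(lines)

examples = [
    ("I loved it!", "Positive"),
    ("Terrible experience.", "Negative"),
    ("It was okay.", "Neutral"),
]
print(few_shot_prompt(examples, "Not bad, but I expected more."))
```

Swapping in new examples or a new query is now a data change, not a rewrite.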
Chain-of-thought:
Show your reasoning. Ask it to do the same. This is the best one for logic problems, math, and anything that involves steps in the process.
“Michael says, ‘Patrick’s mother is the only daughter of my mother.’ How is Michael related to Patrick?
Let’s think step by step: Patrick’s mother is Michael’s mother’s only daughter. That means Patrick’s mother is Michael’s sister. So Michael is Patrick’s uncle.”
When you model the reasoning, the AI follows.
Without it, these problems often fail.
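In its simplest form, chain-of-thought is just a cue appended to the question so the model writes its reasoning before the answer. The cue below is the classic “let’s think step by step” phrasing; nothing about it is model-specific:

```python
# Append a step-by-step cue so the model reasons before answering.
def with_cot(question):
    """Turn a bare question into a chain-of-thought prompt."""
    return f"{question}\nLet's think step by step:"

q = ("Michael says, 'Patrick's mother is the only daughter of my mother.' "
     "How is Michael related to Patrick?")
print(with_cot(q))
```

For harder tasks you’d show a full worked example (reasoning included) before the cue, as in the riddle above.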
Where the Models Are Actually Good Now
Gemini 3 Pro hit 1500+ on the LM Arena leaderboard.
It is the first model to do that. But benchmarks don’t tell you what matters. Here’s what I have seen actually work:
UI generation. For the first time, AI-generated web designs don’t look like templates. Single-shot landing pages look like a human designer made them.
Temporal consistency. Ask for an animation of a crowd forming words as the camera shifts to a bird’s-eye view. Previous models couldn’t handle it; this one can.
Image iteration. Ask it to change the weather to winter, and it freezes the lake, adds snow, and adjusts the lighting without breaking the rest of the image.
Code that works. You can make it a 24/7 assistant. Rubik’s Cube solvers. Physics simulations. Playable Minecraft clones. Each from a single prompt.
The ceiling has shifted.
The question is whether YOU are still operating at the old floor.
The Real Skill
Prompt engineering isn’t about memorizing formulas.
It’s about thinking clearly about what you want, then communicating it in a way a very capable, very literal machine can understand.
The models will keep getting better. The people who learn to work with them will pull ahead of those who keep typing vague wishes and hoping for magic.
The gap between “AI is a toy” and “AI is a competitive advantage” isn’t the model.
It’s the prompt. And the willingness to treat it like a skill worth developing.
If this was useful, drop a like. Got a prompt that flopped or one that worked surprisingly well? Reply below. I read everything.
Until next time,
Sid
