Most people who are disappointed with AI tools are not using the wrong ones. They are using the right ones wrong.

That is a meaningful distinction, because the fix is different. If it were a tool problem, the answer would be: switch tools, find a better one, wait for the technology to improve. If it is an interaction problem, the answer is closer at hand than that.

These three principles are not about prompt engineering in the technical sense. They go deeper, to the shape of how you engage. Change that shape, and the tools you probably already have start producing results that are genuinely different.


The model has no idea who you are

Every time you open a fresh conversation with an LLM, you are talking to something that has never met you before. It has no memory of the conversation you had last week. It does not know what you do for a living, what you are building, who your audience is, what constraints you are working under, or what a useful answer would actually look like for your specific situation.

If you do not tell it, it will produce something generically useful for a generic person.

For some things, that is fine. If you want to know how photosynthesis works or how long to cook a leg of lamb, generic is entirely adequate. But if you are using an LLM for anything that requires judgment — anything where your specific circumstances matter, where the difference between a decent answer and a genuinely useful one is that the useful one is built around your actual situation — generic is the ceiling.

The fix is not complicated. Before you ask the question you actually want answered, spend thirty seconds telling it who you are and what you are trying to do. Not in exhaustive detail. Enough that it stops responding to a generic version of your problem and starts responding to the real one.

I run a consulting and training practice. When I am working through something practice-related, I tell the model that. I tell it who the client is. I tell it what I am trying to achieve. I tell it what would make a response genuinely useful as opposed to merely accurate. That context produces dramatically different output — not because of a trick, but because I stopped asking generic questions.


One-shot use is the least valuable use

Most people interact with LLMs the same way they interact with a search engine. They type a query, read the result, and decide whether it is good enough. The interaction ends there, and they either use it or start again from scratch.

That is the least valuable way to use a conversational tool.

The model does not know what it missed. It cannot adjust unless you tell it to. When you read a response, decide it is not quite right, and silently start a new query, you throw away the entire context of that exchange and begin again — which means the next response has no more to work with than the first one did.

The shape that produces real value is iterative. You read the response and you react to it. “That second point — go deeper on that.” “The first section missed something; here is what I actually needed.” “The last suggestion is the one that landed; build from there.”

The first response is a draft. The draft opens a conversation. The conversation is where the work gets done.

This is less comfortable than one-shot use because it requires active engagement rather than passive evaluation. You are not sitting back and judging whether the output is good enough. You are in it, shaping it, directing it. That shift is where the value is.


What you ask shapes what you get, more than you think

If you ask a vague question, you get a useful answer to a vague question. If you ask a specific one, you get something specific and actionable. That sounds obvious. It is, until you notice how rarely people actually ask specific questions.

“Help me write a better email” produces generic email-writing advice. “I am following up with a client who went quiet after I sent a proposal three weeks ago. I want to reopen the conversation without sounding desperate and without putting pressure on them. Help me write a short, confident message that does that” produces something you could actually send.

But the more interesting point is not about output quality. It is about what happens when you try to formulate the specific question.

To ask a specific question, you have to know what you actually want. That sounds trivially obvious. It is surprisingly uncommon. Most people have a vague sense of the problem they are trying to solve, a vague idea of what help might look like, and a vague expectation of what good output would be. The act of writing a precise question forces clarity that was not there before. Frequently, the most valuable thing that happens in a conversation with an LLM is what you figure out on the way to asking it a good question.

That is not a limitation of the tool. It is what happens when any thinking tool is used well.


These three things are not difficult. They do not require technical knowledge or any fluency with how these systems work under the hood. They require a different posture toward the interaction — not a passive consumer waiting to see if the output is good enough, but someone who shows up with real context, engages with what comes back, and thinks clearly about what they actually need.

The interesting question, once you have these three in place, is what you actually want to do with the access they open up. That is a different question entirely. And it is worth asking.
