How to Use AI Without Sounding Like a Robot
Most AI outputs sound generic because you're prompting wrong. Here's the four-element framework we use to generate content that actually sounds human.
You can always tell when something was written by AI. That overly polished, tryhard, corporate voice that sounds like every other AI-generated piece on the internet. The difference between generic outputs and content that sounds like you? How well you brief the tool. Here's the framework.
The Context Problem
I spent two hours last week arguing with Claude about a single sentence.
Not because it was wrong, exactly. More because it kept giving me technically correct but soulless versions of what I was trying to say. "Drive synergistic alignment across touchpoints" when what I meant was "get everyone on the same page."
The thing about AI tools is they're incredibly powerful, but only if you know how to talk to them. And most people don't. They treat prompts like Google searches, get mediocre results, and conclude that AI is overhyped.
Six months ago, I was getting the same generic outputs everyone else was getting. Then I figured out what was going wrong.
Why Your AI Outputs Feel Generic
Ask AI to "write a cold email" and you'll get something that looks professional and reads like every other AI-generated cold email on the planet. Generic. Too polished. Forgettable.
The issue isn't the AI. It's what you're feeding it.
Most people treat prompts like Google searches. A few words, hit enter, hope for magic. But AI needs two things: context about your specific situation, and clear instructions on how to use that context.
Here's what this looks like in practice.
We were generating personalized opening lines for a client selling to the aviation and aerospace industry. First batch of outputs? Generic garbage:
"Have you considered how a custom booth design could showcase your engineering innovations more effectively at industry events?"
Could've been written for any company in any industry. It said nothing about who the client actually was, what made them different, who they were targeting, or what the prospects cared about.
So we added context: the client's business model, their value prop, competitive landscape. Better, but still flat. The AI had the information but didn't know what to do with it.
Then we added another layer: detailed research on each prospect's specific business and what they actually sell. Same AI model, same campaign, but suddenly the outputs got specific:
"Have you considered how to make your precision NXT sensor technology's complexity feel intuitive and unmatched to aerospace buyers?"
Now it's talking about their specific product. It's about the prospect, not our client.
That's the shift: stop thinking of prompts as commands. Think of them as briefs. You're not barking orders at an intern. You're briefing someone who has access to all the information you have, but zero judgment about what matters.
The Project-Based Approach
Here's what I wish someone had told me from the start: stop thinking in one-off prompts. Start thinking in projects.
Both ChatGPT and Claude let you create Projects: separate workspaces where the AI remembers context across multiple conversations.
Every client gets their own project. Inside that project, we create different threads for different purposes: research tasks, personalization work, copywriting, strategy conversations.
Then we upload what I call "knowledge files":
Client briefs explaining who they are and what they do
ICP documents showing exactly who we're targeting
Examples of past emails that performed well
Framework docs explaining our process
Performance data from previous campaigns
Before we started organizing into projects, every new request meant re-explaining everything. Who's the client? What's their voice? What have we tried? What worked?
Exhausting. Inefficient. And we'd forget details between conversations.
Now all that context lives in the project. Multiple people can work on the same client without stepping on each other. You can reference past work without digging through chat history trying to find that one conversation from two weeks ago.
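The chat UIs handle this for you, but if you ever script against an AI API instead, the same idea reduces to loading your knowledge files into the system prompt at the start of every run. A minimal sketch, assuming a flat directory of markdown files per client (the file names and the `build_system_prompt` helper are illustrative, not a real SDK feature):

```python
from pathlib import Path

# Illustrative knowledge files -- the same roles as files uploaded to a project.
KNOWLEDGE_FILES = [
    "client_brief.md",    # who the client is and what they do
    "icp.md",             # exactly who we're targeting
    "winning_emails.md",  # past emails that performed well
    "process.md",         # framework docs explaining our process
]

def build_system_prompt(project_dir: str) -> str:
    """Concatenate a project's knowledge files into one system prompt,
    so every new conversation starts with full client context."""
    parts = ["You are writing on behalf of our client. Context follows."]
    for name in KNOWLEDGE_FILES:
        path = Path(project_dir) / name
        if path.exists():  # skip files a project hasn't added yet
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)
```

The payoff is the same as in the chat UI: you write the context once, and every thread inherits it.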
What Actually Works: The Four-Element Prompt Structure
Good prompts aren't about being clever. They're about being specific. Almost annoyingly specific.
When we need AI to generate personalized suggestions for prospects, we don't just ask it to "personalize this email." We structure prompts around four elements:
1. Context: Set the stage
Don't just say "write an email." Explain what the email is for, who it's going to, what you want them to do after reading it. The AI needs the bigger picture to make smart decisions about tone, angle, and approach.
2. Desired outcome: Be ridiculously specific
Not "write something personal" but "suggest one specific, actionable idea this prospect could implement based on their business context. Keep it under 20 words. It should sound natural in conversation and give them a reason to reply."
Include format requirements. Word counts. Structure preferences. Reading level. Whether you want bullets or paragraphs. The more specific you are upfront, the less time you spend on revisions.
3. Constraints: Define what to avoid
This is huge. Tell the AI exactly what NOT to do: no generic compliments, no references we can't verify, no phrasing that sounds like we're trying too hard. Name the specific phrases to avoid and the topics not to reference.
The "don't do this" section is often longer than the "do this" part. Negative constraints keep the AI from going off the rails in predictable ways.
4. Examples over explanations
You know what works better than saying "write in a conversational but professional tone that's engaging but not too casual"?
Showing 3-4 examples of content that nails exactly what you want.
The AI picks up on patterns fast. Sentence length. Use of questions. Formality level. Industry jargon versus plain English. Show it what good looks like, and you'll get closer to what you actually wanted.
Think of it like teaching the AI your standards. The more specific your guardrails, the less time you spend fixing what comes back.
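To make the structure concrete, here's one way to assemble the four elements into a single brief before sending it to the model. The function and section labels are my sketch, not a fixed template:

```python
def build_prompt(context: str, outcome: str,
                 constraints: list[str], examples: list[str]) -> str:
    """Assemble the four elements -- context, desired outcome,
    negative constraints, and examples -- into one brief."""
    sections = [
        f"CONTEXT\n{context}",
        f"DESIRED OUTCOME\n{outcome}",
        "CONSTRAINTS -- do NOT:\n" + "\n".join(f"- {c}" for c in constraints),
        "EXAMPLES of the standard to match:\n"
        + "\n".join(f"{i}. {e}" for i, e in enumerate(examples, 1)),
    ]
    return "\n\n".join(sections)

# Hypothetical values for illustration, echoing the aerospace example above.
prompt = build_prompt(
    context="Cold outreach for a trade-show booth designer targeting aerospace firms.",
    outcome="Suggest one specific, actionable idea this prospect could implement. Under 20 words.",
    constraints=["use generic compliments", "reference anything we can't verify"],
    examples=["How do you make your NXT sensor's complexity feel intuitive to aerospace buyers?"],
)
```

Notice the constraints section gets its own block rather than being buried in prose: the model treats an explicit "do NOT" list very differently from a vague "keep it natural."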
Why Human Review Is Non-Negotiable
AI doesn't write our final anything. It drafts ideas, helps with research, suggests angles. But every single thing that goes out gets reviewed and adjusted by an actual human.
We learned this the hard way.
AI once suggested an auction featuring community leaders for a Black community nonprofit. Another time it recommended a "chopsticks-themed" campaign for an Asian cultural organization.
Both would've been disasters if they'd gone out unchecked.
Now we run everything through multiple filters before it reaches clients.
The review system:
We don't review every single email—that would be impossible at scale. We review every email variation. A sequence might have three steps with four variations on step one, three on step two, and one on step three. We check all variations to ensure the AI output matches our template and the copy we're using.
But the real quality control happens earlier, during the AI enrichment phase:
Keyword filters flag outputs containing words we want to avoid
AI reviews AI—we run outputs through a content safety review model
Human review for anything flagged by either system
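The first of those filters is simple enough to automate yourself. A hedged sketch of a keyword flagger (the banned list here is invented for illustration; in practice it comes from campaign learnings):

```python
import re

# Illustrative banned list -- words that mark an output as AI-flavored.
BANNED = ["synergistic", "touchpoints", "revolutionize", "game-changing"]

def flag_output(text: str, banned: list[str] = BANNED) -> list[str]:
    """Return the banned words found in an AI output.
    A non-empty result routes the draft to human review."""
    lowered = text.lower()
    return [w for w in banned if re.search(rf"\b{re.escape(w)}\b", lowered)]
```

Anything this returns non-empty for never goes out as-is; it lands in the human-review queue alongside whatever the content safety model flags.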
Any output that gets flagged gets rewritten by hand. We're especially careful with cultural sensitivity, tone matching, and avoiding anything that could come across as patronizing or tone-deaf.
The key is knowing when to use AI and when to trust your own instincts. AI can accelerate research and help polish copy, but it can't replace understanding your audience or knowing what message will actually resonate.
What Still Needs Human Judgment
Strategy decisions. What campaigns should we run? Who should we target? How should we position this?
Anything involving tone or cultural sensitivity. AI will give you technically correct suggestions that completely miss the mark on appropriateness.
Client relationships. AI can help you prep, but it can't replace understanding someone's actual needs. And for the love of God, don't use AI to reply to interested prospects and book calls for you. The entire sales cycle should be handled by humans.
Creative judgment. Knowing when something is good enough versus when it needs another pass. Recognizing when an idea is clever but wouldn't actually work in practice.
How to Implement This (Starting Today)
If you're reading this thinking "okay but how do I actually implement this," here's where to start:
First, document your business and processes. Before you even open Claude or ChatGPT, create a document that explains who you are, what you do, how you operate, and what matters to your business. How detailed? Depends on complexity, but aim for at least 2-3 pages. Include your value proposition, how your business works, who your ideal customers are, what's worked in the past, what hasn't. This becomes the foundation. Garbage in, garbage out.
Create a project in your preferred AI tool. Most people skip this and wonder why outputs feel random. Don't skip it. This is your workspace.
Upload your context files. Drop that business document in, along with anything else that helps the AI understand your situation. Brand guidelines. Past work examples. Customer data. Whatever's relevant.
Start with one specific use case. Don't try to use AI for everything at once. Pick one thing—research summaries, first draft emails, whatever—get good at prompting for that specific thing, then expand. Your first prompts won't be perfect. You'll iterate. Test different approaches. Refine your instructions. That iteration is where the actual quality comes from.
Review everything before it goes out. AI makes mistakes. It invents facts. It misses nuance. A human needs to check. Always.
The difference between generic AI outputs and something that actually sounds like you comes down to how well you brief the tool. Most people underestimate how much context and instruction AI needs to produce anything worth using.
But once you get the prompting right? It's like having someone who can draft at your speed, think through angles you might've missed, and handle the grunt work while you focus on strategy and judgment calls.
That's the whole point. Not to replace thinking. To amplify it.
Recap
Generic AI outputs happen because of weak prompts. Fix that by giving AI four things: context, specific outcomes, constraints, and examples. Use projects to maintain consistent context. Review everything. And keep humans in the loop for strategy, relationships, and judgment calls. AI should amplify your thinking, not replace it.