How to Write AI Content That Doesn't Read Like Slop (My Claude Code Setup)
The CLAUDE.md banned phrase list, source verification rules, and 7-step content workflow that produced 224 ChatGPT citations on a brand-new domain. With the exact config file.
Key Takeaways
- 52% of new articles online are AI-generated (Futurism, 2025), and most of them share the same voice, vocabulary, and rhetorical patterns because models train on each other's output
- A CLAUDE.md config file is the fix, not better prompting. It applies editorial rules to every output automatically, without you remembering to paste instructions
- A banned phrase list of 20+ AI patterns forces Claude Code to find alternative phrasing that doesn't match what readers have learned to associate with AI-generated content
- A source verification rule stops most fabricated statistics: every claim needs a URL and date, or it gets removed
- The workflow produced 224 ChatGPT citations on a brand-new domain (creatorflow.so) in 90 days, with 14.4K Google clicks and 1,000+ waitlist signups
- Human editorial passes are non-negotiable. AI handles structure, research, and source verification. Humans handle voice, proprietary data, and judgment
52% of new articles online are AI-generated (Futurism, 2025). Most of them sound like the same person wrote them.
I use Claude Code to produce 2-3 long-form articles per week across multiple sites. AI handles research, structure, source verification, and first drafts. I handle voice, data, and editorial judgment. Readers don't flag the output as AI. Detection tools don't either.
This is the full breakdown. The config file, the banned phrase list, the skills and agents that handle research, and the line I draw between what AI does and what stays human.
I'll also show you two real articles this system produced, with the data to back up why it works.
Why All AI Content Sounds Like the Same Person
AI content sounds the same because models learn from the internet, and the internet is now full of AI writing. Each new model trains on the last model's output. The voice gets flatter with every generation. The output converges toward a stylistic mean that sounds like nobody and everybody at once.
You can probably spot it by now. These patterns show up in roughly every other AI-generated article:
| Pattern | Example | What Gives It Away |
|---|---|---|
| False tricolon | "No fluff. No theory. Just results." | Claude and ChatGPT's favorite trick |
| Em-dash overuse | "The problem — and it's a big one — is..." | One per article is fine. Five per paragraph is a machine |
| Corporate filler | "leverage," "robust," "streamline" | Padding when the model has nothing specific to say |
| Hedging phrases | "It's worth noting that..." | AI avoids taking a position; real experts don't |
| Dramatic hooks | "Enter: Claude Code." / "And here's the kicker" | Fake spontaneity that replaces actual narrative |
| Question openers | "The best part?" / "Want access?" | Every model's default section transition |
| Arrow lists | "Step 1 → Do this → Get that" | A formatting habit almost no human has |
You can't fix this with a better prompt. A single instruction can't override patterns learned from billions of words. You need rules that apply to every output, every time, without you remembering to paste them.
The Config File That Makes the Difference
Claude Code reads a CLAUDE.md file at the root of every project. Think of it as a permanent instruction manual. It's not a prompt you paste into a chat. It's a file the tool reads automatically, every time, on every task.
You write your content rules once. Claude follows them on every article after that.
Here's what the content section of my production CLAUDE.md looks like:
```markdown
## Banned Language
- Em-dashes
- Filler: just, very, actually, basically
- Corporate: leverage, robust, scalable, streamline, delve
- Hedging: "It's worth noting", "You may want to consider"
- AI cliche structures:
  - "No X. No Y. Just Z."
  - "It's not just about X. It's about Y."
  - "game-changer" / "supercharge"
  - "Enter: [thing]"
  - "And here's the kicker"
  - "X changed everything"
  - Arrow formatting for lists
  - "The best part?" / "Want access?"
  - "If you're serious about X, [CTA]"
  - "To your success" sign-offs
  - "Not because of X. But because of Y."

## Writing Style
- Lead with data, not opinions. Every claim needs a source.
- No filler intros ("In today's digital landscape..."). Start with the point.
- 2-3 sentences per paragraph maximum.
- No sentences over 30 words.
- Active voice by default.
- Target Flesch-Kincaid grade 8-10.
```
The trick is being specific, not vague. "Write in a natural tone" does nothing. "Never use the phrase 'It's worth noting'" is a rule Claude Code actually follows.
When you ban 20+ specific patterns, the model has to find other ways to say things. Those alternatives sound less like AI because they don't match the patterns readers have learned to spot.
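Specific bans are also mechanically checkable, which vague tone instructions never are. Here is a minimal sketch of the same idea as a pre-publish lint script; the pattern list is a short hypothetical excerpt of the banned list above, not the full production set:

```python
import re

# Hypothetical excerpt of the CLAUDE.md banned list; extend to your full set.
BANNED_PATTERNS = [
    r"\bit'?s worth noting\b",
    r"\bgame-changer\b",
    r"\bsupercharge\b",
    r"\bdelve\b",
    r"\benter:\s",
    r"\band here'?s the kicker\b",
    r"\u2014",                            # em-dash
    r"\u2192",                            # arrow formatting
    r"\bno \w+\. no \w+\. just \w+\.",    # false tricolon
]

def find_banned(text: str) -> list[tuple[str, str]]:
    """Return (pattern, matched text) pairs for every banned hit in a draft."""
    hits = []
    for pattern in BANNED_PATTERNS:
        for m in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((pattern, m.group(0)))
    return hits

draft = "Enter: the new tool. It's worth noting that this is a game-changer."
for pattern, hit in find_banned(draft):
    print(f"banned: {hit!r}")
```

A script like this can run as a final gate before publishing, catching anything the model slipped past the config.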
Stop doing marketing ops by hand
AI opportunity audits, custom automation blueprints, and implementation roadmaps for teams ready to eliminate manual ops work.
Free 45-minute discovery call
The Rule That Stops AI From Making Things Up
AI fabricates statistics. Not on purpose, but because models generate things that sound true even when they aren't. "73% of marketers say..." with no study, no URL, no source. Confident nonsense.
One CLAUDE.md rule fixes most of this:
```markdown
## Source Verification Protocol
- Every external claim requires a source with URL and date
- Citation format: "81% of SEOs prioritize AI (Source, Month Year)"
- If a claim cannot be verified:
  1. Remove the claim, OR
  2. Reframe: "Many SEOs report...", OR
  3. Mark [NEEDS VERIFICATION] for manual research
- Never guess. Never fabricate statistics.
```
Here's the difference in practice:
Without the rule:

> Studies show that 73% of content marketers are now using AI tools to accelerate their workflow, making it more important than ever to stand out.

With the rule:

> 52% of new online articles are AI-generated as of mid-2025 (Futurism, 2025). On YouTube, 21-33% of recommended content qualifies as AI slop, generating $117 million in annual ad revenue across 278 synthetic content channels (Search Engine Journal, 2025).
The first version sounds confident but says nothing verifiable. The second has paper trails. Readers trust it because they can check it. AI answer engines cite it for the same reason.
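The paper-trail requirement can also be checked mechanically. This is a rough sketch, assuming the "(Source, Month Year)" citation format from the protocol above; it flags percentage claims that have no citation in the same sentence:

```python
import re

# Matches a percentage claim, and a percentage followed by a "(Source, Year)"
# citation in the format the protocol above prescribes. Adjust to your format.
STAT = re.compile(r"\d+(?:\.\d+)?%")
CITED = re.compile(r"\d+(?:\.\d+)?%[^.]*\([^)]+,\s*(?:\w+\s+)?\d{4}\)")

def unsourced_stats(text: str) -> list[str]:
    """Return sentences containing a percentage with no (Source, Year) trail."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if STAT.search(sentence) and not CITED.search(sentence):
            flagged.append(sentence)
    return flagged

print(unsourced_stats(
    "73% of marketers now use AI tools. "
    "52% of new articles are AI-generated (Futurism, 2025)."
))
```

The first sentence gets flagged; the second passes because it carries a source and a date.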
How an Article Actually Gets Made
Let me walk through the real process. Not a theoretical workflow. The steps I follow every time.
Step 1: Research With a Content Brief
I start by running a /content-brief skill. This analyzes the top 10 Google results for my target keyword. It pulls their heading structures, word counts, and topic coverage. Then it builds a brief: what to write, how to structure it, and what gaps exist in the content that already ranks.
```shell
claude /content-brief "instagram dm automation"
```
Behind the scenes, Claude Code launches a deep-web-researcher subagent. This agent runs multiple web searches in parallel, cross-references what it finds, and returns structured data with URLs and publication dates. It does in 2 minutes what would take me 45 minutes of tab-switching.
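The subagent's internals aren't public, so this is only a sketch of the fan-out idea described above: several searches dispatched in parallel, results collected as structured data. `fetch_results` is a placeholder stub, not a real search API:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_results(query: str) -> dict:
    # Stand-in for a real web search call; returns a canned structured result.
    return {"query": query, "urls": [f"https://example.com/{query.replace(' ', '-')}"]}

def parallel_research(queries: list[str]) -> list[dict]:
    """Run several searches concurrently and collect results in input order."""
    with ThreadPoolExecutor(max_workers=5) as pool:
        return list(pool.map(fetch_results, queries))

results = parallel_research([
    "instagram dm automation pricing",
    "instagram dm automation tools 2025",
])
print(len(results))
```

The speedup over serial tab-switching comes from exactly this: the waits overlap instead of stacking.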
Step 2: Check for Cannibalization
Before writing, I verify the target keyword isn't already covered on my site. The /cannibalization skill checks Google Search Console data and flags conflicts.
```shell
claude /cannibalization --gsc-data ./gsc-export.csv
```
This step has saved me from writing articles that would have competed with my own pages. Sounds obvious, but it's the kind of check most people skip because it takes time. With a skill, it takes 30 seconds.
Step 3: Draft With All Constraints Active
The /copywriting skill generates the draft. Because the CLAUDE.md file is always active, the draft automatically follows the banned language list, the citation protocol, the paragraph limits, and the readability targets.
The draft comes out structured like this:
- Every H2 section opens with a 50-70 word "citation block" written in factual, third-person tone. AI search engines pull these as source material.
- Every stat has a source in parentheses.
- Paragraphs are 2-3 sentences. No exceptions.
- No filler intros. Sections start with the point.
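The paragraph and sentence caps above are easy to verify after drafting. A minimal sketch, assuming paragraphs are separated by blank lines:

```python
import re

MAX_WORDS = 30       # per-sentence cap from the style rules
MAX_SENTENCES = 3    # per-paragraph cap

def structure_violations(draft: str) -> list[str]:
    """Flag paragraphs and sentences that break the structural constraints."""
    problems = []
    paragraphs = [p for p in draft.split("\n\n") if p.strip()]
    for i, para in enumerate(paragraphs):
        sentences = re.split(r"(?<=[.!?])\s+", para.strip())
        if len(sentences) > MAX_SENTENCES:
            problems.append(f"paragraph {i + 1}: {len(sentences)} sentences")
        for s in sentences:
            if len(s.split()) > MAX_WORDS:
                problems.append(f"paragraph {i + 1}: {len(s.split())}-word sentence")
    return problems

print(structure_violations("One. Two. Three. Four.\n\nShort paragraph here."))
```

The sentence splitter here is naive (it trips on abbreviations), but it is enough to catch the obvious violations before the human pass.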
Step 4: Fact-Check With a Dedicated Agent
After the draft is ready, I launch a fact-checker subagent. It takes every claim in the article that references external data and independently verifies it. If it can't find the source, it gives me three options: remove the claim, reframe it ("Many marketers report..."), or mark it for manual research.
This runs in the background while I start the human editorial pass.
Step 5: The Human Pass (This Part Can't Be Skipped)
This is where the article stops being a good AI draft and becomes something worth publishing.
I read every paragraph. I add:
- My own data. Screenshots from Google Search Console, Ahrefs dashboards, or terminal outputs from real Claude Code sessions. Nobody else has my data.
- Opinions that require judgment. "Are these numbers good? Here's what I think and why." AI can provide data. It can't evaluate data.
- Things that went wrong. What didn't work. What surprised me. What I'd do differently. This is the part readers remember.
- Voice. The way I phrase things. Short. Direct. Sometimes blunt. The CLAUDE.md constrains AI's bad habits, but voice comes from the human.
Step 6: SEO and Schema Checks
After the human pass, I run two more skills:
/schema-gen creates Article and FAQ schema (JSON-LD structured data). Every article gets this. It helps both Google and AI search engines understand what the page covers.
/seo-audit crawls the page and checks title tag length, meta description, heading hierarchy, canonical URLs, and structured data validation. It's the final gate.
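The field names below follow schema.org's Article and FAQPage types, but the actual output shape of /schema-gen is an assumption. A minimal sketch of what the generated JSON-LD looks like:

```python
import json

def article_schema(headline: str, date_published: str,
                   faqs: list[tuple[str, str]]) -> str:
    """Build Article + FAQPage JSON-LD using schema.org field names."""
    article = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "datePublished": date_published,
    }
    faq = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in faqs
        ],
    }
    return json.dumps([article, faq], indent=2)

print(article_schema(
    "Instagram DM Automation: Setup in 10 Min",
    "2025-06-01",
    [("What is DM automation?", "Software that replies to Instagram DMs automatically.")],
))
```

The output goes into a `<script type="application/ld+json">` tag in the page head.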
Step 7: Track AI Citations After Publishing
A week or two after publishing, I run /ai-visibility to check whether the article shows up in AI-generated answers across ChatGPT, Perplexity, Google AI Overviews, and Grok.
This step closes the loop. If the article isn't getting cited, I can see what's missing and adjust the content.
Two Real Articles This System Produced
Here are two articles made with this exact workflow, with results you can verify.
Example 1: Instagram DM Automation Guide
Article: Instagram DM Automation: Setup in 10 Min
This is a comprehensive guide for CreatorFlow, an Instagram DM automation tool. It covers setup steps, pricing comparisons, tool breakdowns, and use cases.
What the workflow handled:
- /content-brief identified gaps in existing DM automation guides (most were outdated or focused on a single tool)
- The draft followed all CLAUDE.md constraints: no filler, sourced claims, short paragraphs, citation blocks under every H2
- /schema-gen added FAQ and Article schema
- Human pass added: real pricing data, tool-specific screenshots, the author's take on which tool fits which use case
The CreatorFlow domain earned 224 ChatGPT citations across 61 pages within 90 days, and this article was one of the highest-performing pieces in the set.
Example 2: Content Strategy Case Study (With Real Numbers)
Article: How Claude Code Helped Us Get 1,000+ Waitlist Signups in 2 Months
This one shows the workflow's results on a new domain (creatorflow.so). The numbers, for a brand-new site with zero backlinks:
| Metric | Result |
|---|---|
| Waitlist signups | 1,000+ in 2 months |
| ChatGPT citations | 224 across 61 pages |
| Gemini citations | 49 across 57 pages |
| GSC clicks | 14.4K in 2 months |
| GSC impressions | 1.68M |
| Domain Rating | 25 (new domain) |
| Articles published | ~40 over 12 weeks |
The majority of waitlist signups came from AI search referrals, not traditional Google organic. ChatGPT and Perplexity cited the content, users asked follow-up questions about the tool, and those conversations converted to waitlist visits.
What made this work wasn't the volume (though 40 articles in 12 weeks matters). It was the structure. Every article opened sections with citation-ready blocks. Every claim had a source. The CLAUDE.md rules prevented slop from creeping in, even at that publishing pace.
Without the workflow, that volume would have required either a team of writers or a pile of generic AI content nobody would cite. Claude Code let a solo founder produce structured, citation-ready articles at a pace that landed 224 ChatGPT mentions in 90 days on a domain nobody had heard of before.
What Stays Human (Non-Negotiable)
Some content should never be delegated, no matter how good the CLAUDE.md rules are:
| What Stays Human | Why |
|---|---|
| Your own data | GSC exports, GA4 screenshots, terminal outputs. Only you have this |
| Evaluative judgment | "Are these numbers good?" requires context no model has |
| Personal experience | What failed. What surprised you. What you'd skip next time |
| Strategic decisions | What to write about and why it matters right now |
| Voice | The way you phrase things that readers recognize across articles |
Google calls this "Experience" in its E-E-A-T framework. It's the hardest quality signal to fake because it requires proof that the author has done the thing they're writing about.
The CreatorFlow case study is a good example. Anyone can write about content strategy. Only someone who built CreatorFlow can share its actual Ahrefs dashboard, GSC data, and waitlist numbers. That data is the article's defensibility. No competitor can replicate it.
How to Start (25 Minutes)
If you want to try this, you need four things in your CLAUDE.md:
- Banned language list (10 min). Copy the one from this article. Add or remove phrases to match your voice. 15-20 patterns minimum.
- Source verification rule (5 min). The protocol shown above. This single rule stops most hallucinated statistics.
- Structural constraints (5 min). Max 2-3 sentences per paragraph. 30-word sentence cap. Active voice. Sections lead with facts.
- Human-only boundaries (5 min). Write down what you always add yourself: your data, your opinions, your experience. This prevents you from gradually outsourcing the parts that make your content unique.
That's a working setup. The banned list and source verification handle the 80% that matters most.
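The four pieces above can be combined into a single starter file. This is a condensed sketch based on the rules shown earlier in this article; expand each section to fit your own voice:

```markdown
# CLAUDE.md

## Banned Language
- Em-dashes, arrow-formatted lists
- Filler: just, very, actually, basically
- Corporate: leverage, robust, scalable, streamline, delve
- Cliches: "No X. No Y. Just Z.", "Enter: [thing]", "game-changer"

## Source Verification
- Every external claim needs a source with URL and date: "(Source, Month Year)"
- Unverifiable claims: remove, reframe, or mark [NEEDS VERIFICATION]

## Structure
- 2-3 sentences per paragraph, 30-word sentence cap, active voice
- Open each H2 with a 50-70 word factual citation block

## Human-Only
- Proprietary data, screenshots, opinions, and voice come from the author
```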
The Results So Far
Since putting this system into production:
- Zero articles flagged by AI detection tools
- Brief-to-published time dropped from 6 hours to 90 minutes per article
- Every article carries 3-5 verified external sources with URLs
- 224 ChatGPT citations on a brand-new domain in 90 days
- The AI draft is good enough to publish as-is. The human pass adds value on top instead of fixing problems underneath
Every article still gets a full human read. I still add data, commentary, and judgment by hand. The AI handles the parts that benefit from speed and consistency: structure, research scaffolding, source verification, schema markup. The human handles the parts that benefit from experience: voice, data, and editorial judgment.
The internet doesn't need more content. It needs more content that was worth writing. A well-configured CLAUDE.md is how you make AI help with that instead of adding to the noise.
FAQ
What is AI slop?
AI slop is low-quality digital content produced in quantity by artificial intelligence with minimal human oversight. Merriam-Webster named "slop" its 2025 Word of the Year, defining it as content that floods platforms with generic, repetitive material that adds little value for readers. As of mid-2025, 52% of new online articles are AI-generated (Futurism, 2025).
What is a CLAUDE.md file?
A CLAUDE.md file is a plain-text configuration file placed at the root of a Claude Code project. It contains persistent instructions that Claude Code follows on every task within that project. For content production, it functions as an automated editorial style guide, enforcing rules about language, structure, citations, and formatting without requiring manual reminders on each task.
Does Google penalize AI-generated content?
Google does not penalize content specifically for being AI-generated. Google's algorithms target low-quality content and scaled content abuse regardless of how the content was produced. A well-edited, source-verified AI-assisted article performs the same as a human-written one. The penalty risk comes from publishing low-quality AI output at volume without editorial oversight (Google Search Central, 2026).
How do you prevent AI from using cliche phrases?
The most effective method is a banned language list in your CLAUDE.md file. Instead of vague instructions like "write naturally," list specific phrases and patterns the AI cannot use. Claude Code follows explicit prohibitions reliably. Ban the false tricolon ("No X. No Y. Just Z."), dramatic introductions ("Enter: [thing]"), em-dash overuse, and corporate filler words. The AI adapts by finding more natural alternatives.
Can AI content get cited by ChatGPT and Perplexity?
Yes. AI search engines evaluate content based on direct answer quality, entity clarity, recency, and structured data. Content that opens sections with short factual blocks (50-70 words), defines entities consistently, and includes verifiable sources with dates is more likely to be cited. The CreatorFlow case study showed 224 ChatGPT citations on a DR 25 domain within 90 days using this approach.
How long does this Claude Code content setup take?
The initial CLAUDE.md configuration takes about 25 minutes: 10 minutes for the banned phrase list, 5 minutes for the source verification rule, 5 minutes for structural constraints, and 5 minutes for defining human-only boundaries. After that, the config applies automatically to every content task in the project.