AI Can’t Read Your Mind — And Design Lives in Your Head

Designing code with AI is where most experienced engineers quietly get stuck. Not because the code is wrong — but because it isn't shaped the way they would have built it. The output works, the tests pass, and yet something feels off. This article is about that gap between working code and intentional design, and why trying to close it with AI alone turns into a frustrating, non-converging loop.

Intro

If you’re building a real SaaS product right now, you already know this feeling.

You ask AI to implement something.
It responds instantly.
The code compiles.
Tests pass.
You skim it and think:

“Yeah… this works.
But that’s not how I would have built it.”

And then the real work starts.

Not fixing bugs.
Not adding features.

Explaining — again — how you actually want things shaped.

This article is about that gap.
Not because AI is weak.
But because design intent is hard, implicit, and mostly invisible.

And AI can’t read your mind.


This should be easier by now

Let’s get this out of the way: AI is legitimately good at writing code.

Most of the time it:

  • Uses the right language features
  • Picks reasonable abstractions
  • Produces something that runs

If this were 2018, we’d be losing our minds.

But here’s the uncomfortable part.

The pain hasn’t gone away.
It’s just moved.

Instead of fighting syntax, you’re fighting shape.
Instead of writing code, you’re correcting it.
Instead of designing once, you’re re-explaining yourself over and over.

The bottleneck isn’t generation anymore.
It’s alignment.


Why is it always almost right?

AI is really good at getting you 80–90% of the way there.

That last 10–20% is brutal.

Not because it’s technically hard.
But because it’s where all the judgment lives.

The code works.
But responsibilities are slightly off.
Naming feels wrong.
Logic lives in places you wouldn’t put it.
Boundaries are fuzzy.

Nothing is wrong enough to justify a rewrite.
But nothing is right enough to feel settled.

This is the worst possible state for a SaaS codebase.

Because “almost right” systems don’t fail loudly.
They just drift.


This isn’t a prompting problem

The first instinct is to blame yourself.

“If I just explain it better…”
“If I add more constraints…”
“If I give it more context…”

So you do.

Your prompts get longer.
You start writing little manifestos.
You add rules like:

  • “Don’t create extra helpers”
  • “Keep logic centralized”
  • “Follow existing patterns”

It helps.
A bit.

But it never fully converges.

That’s because prompts are a terrible medium for design intent.

Design isn’t a list of instructions.
It’s a mental model.
A set of tradeoffs.
A sense for what should not exist.

You know it when you see it.
AI doesn’t.


Design lives in the things you didn’t say

Here’s the part that hurts to admit.

Most of your design decisions are implicit.

You don’t say:

  • “Auth decisions must live here”
  • “This layer is not allowed to know that”
  • “This job can retry, this one cannot”
  • “We never branch on tenant inside domain logic”

You just don’t do those things.

That’s design.

AI doesn’t have access to your internal “no” list.
So it fills the gaps with reasonable defaults.

And reasonable defaults are where systems go to die.


Example: auth logic drifting everywhere

This one shows up constantly.

The naive version

You ask AI to add authorization checks.

It does what you’d expect:

  • Some checks in controllers
  • Some in middleware
  • A few inside service functions
  • Background jobs do their own thing

It’s all reasonable.
Nothing obviously broken.

The moment it breaks

You add a new plan tier.
Or a new role.
Or a new tenant rule.

You update most of the checks.
You miss one.

The symptom

Users can’t do something in the UI…
…but background jobs still do it.

Or worse:
A side effect runs for a user who shouldn’t have access.

Now you’re debugging “impossible” states.

The fix

You refactor.
You centralize authorization decisions.
You make it explicit where auth is allowed to happen.
You delete code.

The key insight:
AI didn’t know which place was allowed to decide.
So it decided everywhere.
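
To make that concrete, here's a minimal sketch of "one place decides". The `can` / `assertCan` functions and the `Action` union are hypothetical names, not from any particular framework; the point is the single gate.

```typescript
// Hypothetical sketch: one module owns every authorization decision.
// Controllers, services, and background jobs ask it; none of them decide on their own.

type Action = "invoice.create" | "invoice.send" | "tenant.delete";

interface Actor {
  id: string;
  tenantId: string;
  roles: string[];
  planTier: "free" | "pro" | "enterprise";
}

// The only place in the codebase allowed to answer "is this allowed?".
export function can(actor: Actor, action: Action, resourceTenantId: string): boolean {
  if (actor.tenantId !== resourceTenantId) return false; // tenant isolation, always

  switch (action) {
    case "invoice.create":
      return actor.roles.includes("billing") || actor.roles.includes("admin");
    case "invoice.send":
      return actor.planTier !== "free" && actor.roles.includes("billing");
    case "tenant.delete":
      return actor.roles.includes("owner");
    default:
      return false; // deny by default: new actions get added here, on purpose
  }
}

// Every caller, HTTP handler or background job, goes through the same gate.
export function assertCan(actor: Actor, action: Action, resourceTenantId: string): void {
  if (!can(actor, action, resourceTenantId)) {
    throw new Error(`Forbidden: ${action} for actor ${actor.id}`);
  }
}
```

The structure now answers the question the AI couldn't: which place is allowed to decide.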


Why fixing one thing keeps breaking the shape

Another familiar pattern.

You spot a bug.
You ask AI to fix it.
The bug goes away.

But now:

  • A helper got introduced
  • A boundary got crossed
  • Something that used to be obvious is now indirect

Nothing is technically wrong.
But the shape changed.

This happens because AI optimizes locally.

It fixes the problem in front of it.
It has no sense of architectural cost.
No memory of why things were shaped a certain way.

It doesn’t feel the pain of future changes.
You do.


Example: background jobs that “worked” until they didn’t

The naive version

AI generates background jobs that:

  • Fetch some models
  • Run business logic
  • Send emails
  • Update state

They look clean.
Readable.
Even elegant.

The moment it breaks

You add retries.
Or concurrency.
Or partial failure handling.

The symptom

Duplicate emails.
Double side effects.
State transitions that happen twice.

Now you’re grepping logs at 2am.

The fix

You refactor:

  • Idempotency boundaries
  • Explicit job contracts
  • Clear separation between “decide” and “execute”

You stop letting jobs “just do stuff”.

The insight:
AI optimized for clarity.
Production optimized for pain.
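
Here's a rough sketch of that split. The `decideRenewalEmail` / `executeRenewalEmail` names and the idempotency-key store are hypothetical, not tied to any specific queue library.

```typescript
// Hypothetical sketch: split "decide" (pure, retry-safe) from "execute" (side effects),
// and fence the side effect with an idempotency key so retries can't double-send.

interface RenewalDecision {
  shouldSend: boolean;
  emailTemplate?: "renewal_reminder";
}

// Decide: a pure function of its inputs. Safe to run as many times as the queue retries.
function decideRenewalEmail(
  subscription: { status: string; renewsAt: Date },
  now: Date
): RenewalDecision {
  const daysLeft = (subscription.renewsAt.getTime() - now.getTime()) / 86_400_000;
  if (subscription.status === "active" && daysLeft <= 7) {
    return { shouldSend: true, emailTemplate: "renewal_reminder" };
  }
  return { shouldSend: false };
}

// Execute: one side effect, guarded by an idempotency key.
async function executeRenewalEmail(
  subscriptionId: string,
  decision: RenewalDecision,
  deps: {
    claimKey: (key: string) => Promise<boolean>; // returns false if already claimed
    sendEmail: (template: string, subscriptionId: string) => Promise<void>;
  }
): Promise<void> {
  if (!decision.shouldSend || !decision.emailTemplate) return;

  const key = `renewal-email:${subscriptionId}:${decision.emailTemplate}`;
  const firstTime = await deps.claimKey(key);
  if (!firstTime) return; // a retry or a concurrent worker already sent it

  await deps.sendEmail(decision.emailTemplate, subscriptionId);
}
```

Retries can hammer "decide" as often as they like. "Execute" fires once per key.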


Observability is where this really shows

AI is great at adding logs.
It’s bad at knowing which questions you’ll need to answer later.

The naive version

Logs where they seem useful.
Metrics when something feels slow.
Tracing added after an incident.

The moment it breaks

A real production issue.
One you can’t reproduce.
One that crosses boundaries.

The symptom

You know something is wrong.
You don’t know why.
You don’t know where.

The fix

You stop treating observability as decoration.
You make it structural.
Context flows intentionally.
Decisions are traceable.

The insight:
AI can add instrumentation.
It can’t anticipate the future questions your system will force you to ask.
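
A rough sketch of what "structural" means here, assuming a hypothetical request context and a plain JSON logger (no particular tracing library implied):

```typescript
// Hypothetical sketch: every log line carries the same request context,
// so "what happened to request X?" has one answer path across boundaries.

interface RequestContext {
  requestId: string;
  tenantId: string;
  userId?: string;
}

// A logger bound to a context: callers can't "forget" to include it.
function loggerFor(ctx: RequestContext) {
  return {
    info(event: string, fields: Record<string, unknown> = {}) {
      console.log(JSON.stringify({ ...ctx, event, ...fields, ts: new Date().toISOString() }));
    },
  };
}

// The context is created once at the boundary and passed through,
// including into any background work the request triggers.
const ctx: RequestContext = { requestId: "req_123", tenantId: "t_42", userId: "u_7" };
const log = loggerFor(ctx);

log.info("invoice.decision", { allowed: true, action: "invoice.send" });
log.info("job.enqueued", { job: "send-invoice-email" });
```

The decision about what every log line must carry is made once, structurally. Not per call, and not by whoever (or whatever) happens to be writing the next log statement.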


So what’s actually missing?

It’s not intelligence.
It’s not effort.
It’s not better prompts.

It’s that design intent has nowhere durable to live.

Right now:

  • The intent lives in your head
  • The code lives in the repo
  • The AI lives in between

Every generation is a translation.
Every translation loses information.

Humans compensate with experience.
AI can’t.


What experienced teams eventually learn

After enough pain, teams stop trying to explain everything.

They:

  • Reduce degrees of freedom
  • Encode decisions into structure
  • Make the right thing easy
  • Make the wrong thing hard

Not because they love constraints.
But because constraints scale.

This helps humans.
And it helps AI.

Because now the AI is operating inside a shape instead of inventing one.
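
One small illustration of "encode decisions into structure": a hypothetical tenant-scoped repository where "always filter by tenant" isn't a rule anyone has to remember. It's the only thing the API can express.

```typescript
// Hypothetical sketch: instead of documenting "always filter by tenant",
// expose only a query surface where forgetting the tenant is not expressible.

class TenantScopedRepo<T extends { tenantId: string }> {
  constructor(private readonly tenantId: string, private readonly rows: T[]) {}

  // Every read is tenant-filtered by construction; there is no unscoped variant to misuse.
  find(predicate: (row: T) => boolean = () => true): T[] {
    return this.rows.filter((r) => r.tenantId === this.tenantId && predicate(r));
  }
}

const invoices = new TenantScopedRepo("t_42", [
  { tenantId: "t_42", id: "inv_1", paid: false },
  { tenantId: "t_99", id: "inv_2", paid: false },
]);

invoices.find((i) => !i.paid); // only tenant t_42's unpaid invoices can ever come back
```

A reasonable default can't leak another tenant's data here, because the wrong thing has no API.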


This isn’t about rejecting AI

AI is a force multiplier.
Used well, it’s incredible.

But it needs something solid to work inside.
A system that resists drift.
A structure that encodes intent.
A foundation that doesn’t need to be re-explained every time.

Otherwise you’ll keep shipping code…
…and quietly fighting it.

That’s what this series is about.

Not how to write code.
But how to stop rewriting your intent.
