AI Does Not Code Poorly

It Executes Poor Direction

There is a popular claim doing the rounds:

“AI codes worse than a junior developer.”

That has not been my experience.

My experience is different, and uncomfortably familiar.

Working with an LLM feels like working with a room full of fast developers. They ship confidently. They rarely ask clarifying questions. They will happily build the wrong thing if you let them.

That is not a code quality problem.

It is a leadership problem.

The Junior Comparison Misses the Point

The junior developer comparison sounds technical, but it hides a social failure.

Juniors do not lack ability. They lack experience and shared context.

They do not yet know:

  • What matters
  • What is risky
  • What is deliberately boring
  • What must not change

AI is the same.

How Good Teams Build Context

In a good team, context is not accidental. It is built deliberately.

Through:

  • Stand-ups
  • Estimation
  • Design discussions
  • Code review
  • Shared ownership of decisions

This is not overhead. It is how people learn the shape of the system.

AI does not attend these sessions. It does not absorb intent through conversation. It does not pick up nuance through repetition.

Human context is reinforced constantly, often subconsciously, by working with other people.

AI gets none of this by default. It only sees the final instruction.

If the instruction lacks context, the output will too.

It will produce fluent, confident output even when the intent is vague, contradictory, or wrong.

That is not incompetence. That is obedience.

This mirrors a familiar failure mode.

It behaves like an offshore team fed only tickets. Context is stripped away. Decisions are implicit. Assumptions live in people’s heads, not in writing.

The output is then judged against an unwritten standard.

The problem is not capability. It is isolation.

AI Is an Intent Amplifier

LLMs do not reason about outcomes. They amplify direction.

Clear intent gets clearer. Vague intent becomes confidently wrong. Conflicting intent turns into brittle systems.

This is exactly what happens in teams with weak technical leadership.

When no one owns the shape of the system, work still happens. Code still ships. Velocity still looks good.

Until it does not.

Prompts Are Just Compressed Leadership

People talk about prompting as a new skill.

It is not.

A good prompt is:

  • A clear problem statement
  • Explicit constraints
  • Named trade-offs
  • Defined non-goals

That is just technical direction, compressed into text.

Hand-wavy prompts produce hand-wavy systems. Mixed goals produce mixed concerns. Avoided decisions get invented.

This should feel familiar.

Why This Makes People Uncomfortable

Some of the loudest critics of AI code quality come from spaces that reward solo optimisation.

That is not a criticism. It is an observation.

A lot of online content optimises for individual speed, autonomy, and visible output. Talking to other people looks like friction. Context sharing looks like overhead.

Those skills make you very effective as a solo developer. They are less useful when the work involves directing others.

AI behaves much more like a team member than an extension of your hands. It needs clarity. It needs repetition. It needs decisions made explicit.

When that work is missing, the failure feels new.

It is not.

The output reflects the clarity of thought upstream.

That is confronting.

The Real Skill Gap

The interesting question is not whether AI can code.

It clearly can.

The real gap is this:

Can you give clear technical direction without outsourcing thinking to the tool?

Many teams could not do this with humans either. AI just makes it obvious faster.

Where This Goes Next

There is a leadership lesson hiding here.

If you struggle to work with other people, working with AI will not be easier.

This was a real step-change for me when I moved into leadership. I had to trust others to build things I could have built myself. I lost some fidelity by not doing everything solo. In return, I gained scale.

And, slightly annoyingly, this is the part people avoid admitting. Sometimes the team delivered higher quality than I ever could alone.

AI offers the same trade. Scale. Speed. Even quality.

But it is not free. It costs clarity, context, and leadership effort.