Why Agentic Coding? It's Not About the AI

Agentic coding isn't a smarter autocomplete. It's a workflow shift — and the structure you build around the agent is 90% of the value, even without the AI.

Before diving in, make sure you’ve read Configure Your AI Coding Environment — that series sets up the tooling this one depends on. If you want the broader ecosystem view first, The Agentic Developer’s Field Guide has you covered.

You gave Claude a prompt, and it wrote something reasonable. You ran it again on a similar task — same model, same project, same kind of request — and it wrote something completely different. Different approach, different naming, different abstraction. You thought: this model is inconsistent.

It isn’t. The model is doing exactly what models do: synthesising from context. The context changed between sessions, so the output changed. That’s not a model problem. It’s a context problem. And context is yours to control.


Copilot-style vs. agentic

These are genuinely different workflows, not points on a spectrum.

Copilot-style: Inline suggestions, one file at a time. You’re at the keyboard. The tool proposes, you approve each line, you move to the next file. The human remains the executor. The AI is an educated autocomplete.

Agentic: You hand off a task. The agent reads multiple files, writes multiple files, runs commands, creates branches, opens PRs. You review the result. The human becomes the reviewer. The AI is the executor.

That reversal changes what matters. With autocomplete, the quality of individual suggestions matters most. With agents, the quality of context matters most — what the agent knows before it starts, what constraints it operates inside, what it does when it hits an ambiguous decision.

| Dimension | Copilot-style | Agentic |
| --- | --- | --- |
| Who executes? | Human | Agent |
| Who reviews? | Human (line by line) | Human (PR / spec) |
| Quality lever | Suggestion accuracy | Context quality |
| Scale | One file per session | Many files per task |
| Interruption cost | Low | High — context is lost |

The agent’s three inputs

Every agentic session runs on exactly three sources:

  1. Rules documents — your CLAUDE.md, .claude/rules/*.md. Static. Deliberate. You wrote them.
  2. Files read at runtime — whatever the agent opens during the session. Dynamic. Changes as the codebase changes.
  3. Training priors — what the model learned before you ever opened the project. Immutable.

Here’s the uncomfortable truth: when your rules document contradicts the code the agent reads, the code wins. The nearest example in the working directory outweighs the abstract rule in the config file. “Never put business logic in controllers” in your CLAUDE.md loses to the three controller methods with business logic that the agent reads in the first five files.

The implication: rules documents are necessary but not sufficient. The codebase itself is the loudest signal. Structure — how you organise files, how you name branches, where you put specs — is a form of context you can control.
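Concretely, the static layer is just files in the repo. A minimal layout might look like this (the individual rule file names are illustrative — only CLAUDE.md and the .claude/rules/ directory come from the convention above):

```text
CLAUDE.md                    # project overview, commands, core conventions
.claude/rules/
  git-workflow.md            # branch naming, commit style, PR expectations
  architecture.md            # e.g. "never put business logic in controllers"
  testing.md                 # test framework, what must be covered before merge
```

Because the codebase outweighs these files when they disagree, the layout only works if the code it describes actually follows it — treat a contradiction between a rule and the nearest example as a bug in one of them.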


The real value: one developer, three features

The pitch for agentic coding is usually framed around speed. That’s not the right frame. Speed is a side effect. The actual value is parallelism.

With a disciplined agentic workflow, one developer can have three features genuinely in flight simultaneously:

  • Port 3042: an agent implementing an avatar upload feature in its own isolated directory, with its own database, on its own branch
  • Port 3015: you, debugging a login edge case
  • Port 3033: another agent extracting a billing module into a service object

You are not juggling three things at once mentally. You are managing three things at once structurally. The agent handles execution while you handle review and decision-making.

This is not magic. It is workflow design that happens to be powered by AI.
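The three-lanes setup above can be sketched in a few git commands. This is a minimal, self-contained sketch — repository, directory, and branch names are all illustrative, and the commented-out server commands assume a dev server that reads a PORT variable:

```shell
# Sketch: one repo, three parallel working directories (names are illustrative).
set -e
git init -q demo && cd demo
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial commit"

# Each feature gets its own directory on its own branch.
git worktree add -b feature/avatar-upload ../demo-avatar
git worktree add -b refactor/billing-service ../demo-billing

# Each directory can now run its own server, e.g. (hypothetical commands):
#   (cd ../demo-avatar  && PORT=3042 bin/dev)
#   (cd ../demo-billing && PORT=3033 bin/dev)
git worktree list
```

Each directory is a full checkout sharing one object store, which is what makes "three features in flight" a filesystem fact rather than a mental juggling act.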


The honest trade-off

This workflow has an upfront cost. Before it pays off, you need:

  • A CLAUDE.md that accurately describes your project
  • Rules files in .claude/rules/ covering git workflow, architecture, test conventions
  • A habit of writing specs before asking the agent to implement
  • A local task tracker (more on this in Part 4)

That’s real investment. Probably two to four hours for a mature project. Ongoing maintenance as the project evolves.

The return: every new session starts oriented. Every agent output is anchored to a spec you approved. Every task has a named artifact with a status. You can hand off a task mid-day and come back to a PR without losing the thread.

The compounding effect matters too. The tenth session in a well-structured project costs you less cognitive overhead than the first session in a bare one. Agents that have been “here before” via their rules produce more consistent output. Your review time shrinks as your structure improves.


The four tools in this series

This series covers four things that make parallel development concrete:

  1. Git worktrees — branches as physical places, not checkout states. Each feature lives in its own directory with its own running server and its own database. Covered in Part 2.

  2. OpenSpec — a three-command workflow (/opsx:explore, /opsx:propose, /opsx:apply) that produces a machine-readable spec before any code is written. Covered in Part 3.

  3. Beads — a local-first task graph that tells you (and your agent) exactly what’s unblocked, in priority order, right now. Covered in Part 4.

  4. The agentic loop — how worktrees, OpenSpec, and Beads wire together into a repeatable 10-step workflow where the human does design and review, and the agent does implementation. Covered in Part 5.

You don’t need all four. Worktrees alone are a massive quality-of-life win even if you never touch an AI agent. OpenSpec alone catches misunderstandings before an hour of wrong code gets written. Beads alone removes the “what’s next?” decision that kills momentum after every interruption. The AI makes the investment pay back faster — it doesn’t create the value. The structure does.


Coming next

Part 2: Git Worktrees — Branches as Places, Not States covers the single most immediately useful tool in this series. One repo, three running apps, zero context switches. Git worktrees have existed since 2015. Almost nobody uses them. That’s about to change.

About the author

Prakash Poudel Sharma

Engineering Manager · Product Owner · Varicon

Engineering Manager at Varicon, leading the Onboarding squad as Product Owner. Eleven years of building software — first as a programmer, then as a founder, now sharpening the product craft from the inside of a focused team.

The Parallel Developer

5 parts in this series.

A five-part series on running multiple features in flight simultaneously using git worktrees, OpenSpec, Beads, and AI agents.

  1. 01 Why Agentic Coding? It's Not About the AI
  2. 02 Git Worktrees: Branches as Places, Not States
  3. 03 OpenSpec: Write the Contract Before the Code
  4. 04 Beads: A Local-First Task Graph for Developers and AI Agents
  5. 05 AI Agents That Work: Give Them Structure, Not Just Prompts

What did you take away?

Thoughts, pushback, or a story of your own? Drop a reply below — I read every one.
