
On Intent

These days, the programming meta has a problem: everyone and their mother is slapping "intent" onto everything.

Makonea·Apr 22, 2026·17 min

Introduction

These days, when people talk about programming, Intent is the buzzword of the moment. As with every trending keyword, people are attaching "intent" to architectures (MVI), to programming methodologies (Intent-Driven Programming), and generally making a big fuss about how important it is.

The content itself is fine. We should value intent. We should understand the consumer's intent. We should grasp the end user's intent and incorporate it into our programming.

And so on — the rhetoric sounds truly grand.

The problem is that when you actually read through these posts, it becomes clear that "intent" is a multilayered concept.

We casually throw around the phrase "good abstraction," but when you actually sit down and write code, you repeatedly find that designs which looked clean at first gradually fall apart over time. This isn't simply a matter of mistakes or skill level. The problem is more fundamental: the process by which intent is translated into a real system passes through multiple layers of abstraction, and it is the information loss and context dependency that occur between those layers that cause the breakdown.

(Thomas Kuhn's The Structure of Scientific Revolutions)

A way of papering over the problem

Personally, I think the way this concept of Intent is used is similar to Thomas Kuhn's concept of "paradigm." The problem is that the word has been used far too broadly. Kuhn himself has been criticized for using the term "paradigm" in The Structure of Scientific Revolutions in more than 20 different senses.

(Masterman's classic critique)

In other words, this concept is less a precise theoretical term and more a meta-concept whose meaning shifts depending on context. As a result, in contemporary software discussions, the word "paradigm" is often used not to explain something but to avoid explaining it.

The same pattern appears in the recently fashionable discourse around "Intent." The concept of intent is likewise used without clear definition, as a term that simultaneously points to meanings at multiple layers.

- User requirements

- Business goals

- Domain model

- Code-level intent

When all of these get bundled under the single word "intent," the problem is not simplified but rather obscured. Therefore, instead of treating intent as a single unified concept, we need to decompose it layer by layer.

From that perspective, Intent is not one thing; it is a collection of multiple intents, each defined at a different level. The examples above already illustrate this.

- User intent (UX level)

- Domain intent (business rules)

- System intent (architectural decisions)

- Implementation intent (code-level choices)

These share the same word, but each operates under different constraints and optimization criteria.

At this point, one more important shift in perspective is needed. Rather than framing the problem as "developer skill" or "design completeness," we need to look at how intent itself gets translated into a system. Intent does not map directly to code. Several stages exist in between.

Intent
→ Domain Model
→ Abstraction (Architecture, Interfaces)
→ Code
→ Compilation
→ Execution Environment
→ Hardware

This process is not a simple conversion; at each stage, information is discarded and the remaining content is interpreted in a particular direction.

Information is lost when extracting core business logic from intent into a domain model, when deriving abstractions from that model, and again when writing code. The information lost at each step is not merely a matter of reduced precision; more importantly, it structurally narrows the options available later.

And once an abstraction is chosen, it constrains every representation that follows.
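To make the loss concrete, here is a small illustrative sketch (TypeScript for brevity, since the idea is language-independent; the 30-day rule and all names are invented for this example) of how each layer discards part of the original meaning:

```typescript
// Intent (natural language): "recent orders matter most to the customer."
// Domain model: "recent" gets pinned to a concrete business rule,
// and the reasoning behind the number is already gone.
const RECENT_WINDOW_DAYS = 30;

// Code: the rule is further pinned to one signature and one query shape.
// A later requirement such as "recency depends on how often this customer
// shops" no longer fits this function without changing its shape.
function recentOrders(orders: { id: number; ageDays: number }[]): number[] {
  return orders
    .filter(o => o.ageDays <= RECENT_WINDOW_DAYS)
    .map(o => o.id);
}
```

Each encoding is individually reasonable; the point is only that the question "why 30 days?" can no longer be answered from the code alone.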

Why Abstractions Always Break Down in Practice

If abstraction breakdowns were purely a matter of design quality, they could be solved by having a skilled developer do the design. But reality doesn't work that way. Even abstractions designed by seasoned developers fall apart over time. The reason is that the choice of abstraction itself structurally limits the options available afterward.

Case 1: Repository Pattern and Transaction Boundary

The intent behind the Repository Pattern is clear: separate data access logic from the domain layer. A by-the-book design looks like this.

public interface IOrderRepository
{
    Order GetById(int id);
    void Save(Order order);
}

public interface IInventoryRepository
{
    Inventory GetByProductId(int productId);
    void Save(Inventory inventory);
}

The service layer composes two Repositories together.

public class OrderService
{
    private readonly IOrderRepository orderRepo;
    private readonly IInventoryRepository inventoryRepo;

    public OrderService(IOrderRepository orderRepo, IInventoryRepository inventoryRepo)
    {
        this.orderRepo = orderRepo;
        this.inventoryRepo = inventoryRepo;
    }

    public Result<OrderId, OrderError> PlaceOrder(PlaceOrderCommand command)
    {
        var inventory = inventoryRepo.GetByProductId(command.ProductId);

        if (inventory.Stock < command.Quantity)
            return Result.Failure(OrderError.OutOfStock);

        var order = Order.Create(command);
        inventory.Decrease(command.Quantity);

        orderRepo.Save(order);         // DB hit 1
        inventoryRepo.Save(inventory); // DB hit 2

        return Result.Success(order.Id);
    }
}

The design is clean. The problem is that if the process dies between orderRepo.Save and inventoryRepo.Save, the order has been created while the inventory remains unchanged.

What do you do when you need a transaction? The most common choice is to add an IUnitOfWork.

public interface IUnitOfWork
{
    void BeginTransaction();
    void Commit();
    void Rollback();
}

But the moment you do that, the abstraction starts to crack.

IUnitOfWork rests on the assumption that a specific database connection context is shared. The Repositories must share the same DbContext instance. In other words, the intent of "separating data access logic" is actually narrowed to "separating it, as long as the same connection is shared."
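That hidden assumption is easier to see in a stripped-down sketch (TypeScript for brevity; the in-memory UnitOfWork and repositories are invented for illustration). Atomicity exists only because both repositories enlist their writes in the same shared instance:

```typescript
// A unit of work that guarantees "both writes or neither" only because
// every repository funnels its writes through this one shared object.
class UnitOfWork {
  private pending: Array<() => void> = [];
  private active = false;

  beginTransaction(): void {
    this.active = true;
    this.pending = [];
  }

  // Repositories don't write immediately; they enlist writes here.
  // This shared context is the hidden coupling.
  enlist(write: () => void): void {
    if (!this.active) throw new Error("no active transaction");
    this.pending.push(write);
  }

  commit(): void {
    for (const write of this.pending) write();
    this.active = false;
  }

  rollback(): void {
    this.pending = [];
    this.active = false;
  }
}

// Both repositories must be handed the SAME UnitOfWork instance,
// just as the article's repositories must share one DbContext.
class OrderRepository {
  constructor(private uow: UnitOfWork, private db: Map<number, string>) {}
  save(id: number, order: string): void {
    this.uow.enlist(() => this.db.set(id, order));
  }
}

class InventoryRepository {
  constructor(private uow: UnitOfWork, private db: Map<number, number>) {}
  save(productId: number, stock: number): void {
    this.uow.enlist(() => this.db.set(productId, stock));
  }
}

const orders = new Map<number, string>();
const inventory = new Map<number, number>([[42, 10]]);

const uow = new UnitOfWork();
const orderRepo = new OrderRepository(uow, orders);
const inventoryRepo = new InventoryRepository(uow, inventory);

uow.beginTransaction();
orderRepo.save(1, "order-1");
inventoryRepo.save(42, 9);
uow.commit(); // both writes land together, or neither does
```

Splitting the two repositories into separate services breaks exactly this: there is no longer a single object both can enlist in.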

Migrating to a microservices architecture (MSA) makes this worse. If OrderRepository and InventoryRepository live in different services, a shared DbContext is simply impossible. At that point the options are the Saga Pattern or Two-Phase Commit (2PC), both of which represent a completely different conceptual model from the original abstraction design. The initial abstraction choice of "separate via Repository" structurally blocked the distributed transaction options that came later.
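How different that conceptual model is can be seen in a minimal saga sketch (TypeScript; runSaga and the step shape are invented for illustration, not a real framework). Instead of begin/commit/rollback on one connection, each step carries an explicit compensation that undoes it:

```typescript
// Each saga step pairs an action with a compensation that undoes it.
type Step = { name: string; act: () => void; compensate: () => void };

function runSaga(steps: Step[]): { ok: boolean; compensated: string[] } {
  const done: Step[] = [];
  for (const step of steps) {
    try {
      step.act();
      done.push(step);
    } catch {
      // Undo completed steps in reverse order. Unlike a rollback,
      // the intermediate states were visible to other readers.
      const compensated = done.reverse().map(s => {
        s.compensate();
        return s.name;
      });
      return { ok: false, compensated };
    }
  }
  return { ok: true, compensated: [] };
}

let stock = 10;
const placedOrders: number[] = [];
const qty = 99; // more than available, so the second step fails

const result = runSaga([
  {
    name: "createOrder",
    act: () => { placedOrders.push(1); },
    compensate: () => { placedOrders.pop(); },
  },
  {
    name: "reserveStock",
    act: () => {
      if (qty > stock) throw new Error("out of stock");
      stock -= qty;
    },
    compensate: () => { stock += qty; },
  },
]);
// The already-created order is compensated away; stock is untouched.
```

Nothing in the original Repository interfaces anticipates this shape, which is the sense in which the early abstraction choice blocked the later option.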

Where you draw the Repository boundary is a choice made at the intent level.

Yet that choice collides with the runtime constraint of the Transaction Boundary. These two layers of intent had different optimization criteria from the start. At design time, this collision is invisible.

Case 2: Event-Driven Design and the Demand for Synchronous Responses

The intent of Event-Driven Architecture is to reduce coupling.

Developers working in C# are already comfortable with event-driven coding, so it's not particularly difficult to separate inventory deduction, notification dispatch, and point accrual from the order creation flow by publishing events instead of making direct calls.

public class OrderCreatedEvent
{
    public int OrderId { get; init; }
    public int ProductId { get; init; }
    public int Quantity { get; init; }
    public DateTime OccurredAt { get; init; }
}

public class OrderService
{
    private readonly IEventBus eventBus;

    public OrderService(IEventBus eventBus)
    {
        this.eventBus = eventBus;
    }

    public OrderId PlaceOrder(PlaceOrderCommand command)
    {
        var order = Order.Create(command);
        // ... persist order

        eventBus.Publish(new OrderCreatedEvent
        {
            OrderId = order.Id,
            ProductId = command.ProductId,
            Quantity = command.Quantity,
            OccurredAt = DateTime.UtcNow
        });

        return order.Id;
    }
}

InventoryHandler, NotificationHandler, and PointHandler each subscribe to events. Coupling is low, and adding a new handler requires no changes to OrderService. The intent appears to have been cleanly realized.

About three months in, the boss gets in touch with a new requirement.

"When an order is completed, I need the inventory deduction result displayed on screen in real time. If inventory is insufficient, the order itself should be marked as failed."

This requirement breaks the fundamental premise of event-driven design. Events are fire-and-forget. The publisher has no knowledge of what subscribers do with them. To include the inventory deduction result in the response, a synchronous call is needed.
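The premise is easiest to see in a stripped-down bus (TypeScript sketch; this in-memory EventBus is invented for illustration). Publish returns nothing, so the publisher structurally cannot observe what subscribers did:

```typescript
type Handler<E> = (event: E) => void;

class EventBus {
  private handlers = new Map<string, Handler<any>[]>();

  subscribe<E>(topic: string, handler: Handler<E>): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }

  // Fire-and-forget: the return type is void, so no subscriber
  // result can flow back to the publisher.
  publish<E>(topic: string, event: E): void {
    for (const handler of this.handlers.get(topic) ?? []) handler(event);
  }
}

const bus = new EventBus();
let stock = 10;
bus.subscribe<{ quantity: number }>("OrderCreated", e => { stock -= e.quantity; });

bus.publish("OrderCreated", { quantity: 3 });
// The deduction happened, but the publishing side has no value to
// inspect; satisfying the new requirement means changing this premise.
```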

There are three options, and each carries a cost.

Option A: Add a synchronous inventory check before the event is published

public Result<OrderId, OrderError> PlaceOrder(PlaceOrderCommand command)
{
    var stockResult = inventoryService.CheckAndReserve(command.ProductId, command.Quantity);

    if (stockResult.IsFailure)
        return Result.Failure(OrderError.OutOfStock);

    var order = Order.Create(command);
    eventBus.Publish(new OrderCreatedEvent { ... });

    return Result.Success(order.Id);
}

Event-driven and direct-call styles become mixed. OrderService now has direct knowledge of InventoryService, and the coupling that publishing events was meant to remove is reintroduced.

Option B: Extend the event bus with a Request/Reply Pattern

var reservationResult = await eventBus.Request<ReserveStockCommand, StockReservationResult>(
    new ReserveStockCommand { ProductId = command.ProductId, Quantity = command.Quantity }
);

The event bus must support synchronous replies, increasing infrastructure complexity. At that point it is closer to a message broker than an event bus. Using RabbitMQ's RPC pattern or MediatR's Request/Response structure, the design drifts further from the original event-driven intent.
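What Option B asks of the bus can be sketched minimally (TypeScript; RequestReplyBus is invented for illustration, and a real broker such as RabbitMQ's RPC pattern would route the reply through a reply queue keyed by a correlation ID rather than a direct call):

```typescript
class RequestReplyBus {
  private responders = new Map<string, (req: any) => any>();

  respond<Req, Res>(topic: string, responder: (req: Req) => Res): void {
    this.responders.set(topic, responder);
  }

  // The bus now tracks who answers what and must deliver a reply:
  // message-broker machinery, not publish/subscribe.
  request<Req, Res>(topic: string, req: Req): Res {
    const responder = this.responders.get(topic);
    if (!responder) throw new Error(`no responder for ${topic}`);
    return responder(req) as Res;
  }
}

let stock = 10;
const bus = new RequestReplyBus();
bus.respond<{ quantity: number }, { reserved: boolean }>("ReserveStock", req => {
  if (stock < req.quantity) return { reserved: false };
  stock -= req.quantity;
  return { reserved: true };
});

const reply = bus.request<{ quantity: number }, { reserved: boolean }>(
  "ReserveStock",
  { quantity: 3 }
);
// The caller now depends on a specific responder existing, which is
// exactly the coupling the event bus was supposed to remove.
```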

Option C: Redesign from scratch

Accept that the synchronous requirement changes the model, and rebuild the boundary around it. This is the most honest option and by far the most expensive.

In practice, most teams go with Option A. The service then hardens into a state where event-driven and direct-call styles are mixed together.

The system-level intent of "reducing coupling" and the business-level intent of "receiving the inventory deduction result synchronously" were in conflict from the very beginning.

Yet this conflict was invisible at design time. The business requirement didn't change six months later; the conflict was latent from the start, and it only surfaced when the additional requirement was added.

The common structure across both cases is as follows.

Intent at design time (decoupling / separation of data access)
  -> Abstraction choice (Repository / EventBus)
  -> Constraint on future options
  -> New requirements collide with existing constraints
  -> Patch or full redesign

This doesn't happen because the design was bad. It happens because abstraction is context-dependent.

The context at design time and the context when requirements are added differ, and the abstraction cannot absorb that gap.

If a "good abstraction" exists, it is not a design that predicts every future context but a design that can absorb the friction cost when context changes.

And that standard cannot be fully defined at design time; no programmer can anticipate every future context.

What matters here is that this problem is not the failure of any particular methodology. Repository, EventBus, DDD: all are attempts to structure the intent of a particular moment. The problem is that once that structure is fixed, there is less room for different intents to enter in the future.

(Karpathy's post where "vibe coding" was first used)

And it is precisely on this point that vibe coding makes the problem more extreme. An LLM takes the intent revealed in the current prompt and quickly generates a plausible abstraction, but it barely considers how that abstraction constrains the future space of requirements.

This problem goes beyond simply "LLMs generate too much code." More precisely, an LLM takes the intent given in the current prompt and locally optimizes the structure around it: it assembles abstractions around the requirements visible right now, the constraints spelled out in the current prompt, and the separation criteria that seem plausible at this moment.

The problem is that the intent of a real system doesn't end there. Real-world intent always arrives with a delay. Today's requirements are stated explicitly, while tomorrow's requirements lie latent, not yet put into words.

A human developer cannot fully predict these latent requirements, but at least knows from experience that more requirements will come. So when establishing a structure, the developer asks not only "is this right for now?" but also "will this break less badly later?"

An LLM, by contrast, largely treats the intent revealed in the current prompt as the linguistic surface it has to satisfy. As a result, it generates structures that fit the current requirements well but that, with very high probability, carry large future change costs. This is one form of what is commonly called over-engineering.

What matters is that the over-engineering here is not simply a matter of having too many classes or too many files. The real problem is that by over-structuring the current intent, the paths through which other intents, not yet arrived, could enter are actually narrowed. In other words, the LLM creates abstractions, but it barely senses how much those abstractions lock away future options.

In this sense, the LLM's tendency to sprawl may not be an accidental mistake but something almost inevitable.

An LLM does not bear the failure costs over the long term. Whether the code will clash with a demanding customer's requirements six months from now, which boundary will collide with transaction semantics, which interface will become a bottleneck in a distributed environment: that friction does not surface sufficiently in the linguistic layer at the time of generation.

Ultimately, an LLM takes the intent visible right now and produces the most plausible-looking structure. But from personal experience, a good structure is not one that best expresses the current intent; it is one that can absorb the friction cost even when a different, future intent arrives. And it is precisely this difference that I believe represents the fundamental gap between a human developer's design experience and an LLM's generative capability.

This is why code generated by an LLM often appears, on the surface, to be even better designed than code written by a human developer.

Interfaces are cleanly separated, layers are clear, names sound right, and the patterns are familiar. The problem is that the structure is not the result of resolving a context but the result of recombining structures that appeared frequently in the training distribution. In other words, the form of the structure is present, but the history of friction that the structure was supposed to absorb is absent.

What humans learn through failure is not syntax but friction.

Where drawing a boundary causes transactions to tangle later, which event separations eventually revert to synchronous calls, which abstractions look elegant at first but become poison after just two requirement changes: these instincts come not from success stories but from paying the cost of failure. An LLM has no memory of those failure costs.

Therefore, the problem is not that the LLM doesn't know the patterns. On the contrary, because it knows the patterns too well, it erases the friction of the contexts in which those patterns don't work.

In this sense, the dogmatization of methodology and the LLM's tendency to sprawl resemble each other.

Both share the impulse to reduce a living context to a fixed structure.

(Refactoring To Patterns)

It is perhaps impossible to avoid being patterns happy on the road to learning patterns. In fact, most of us learn by making mistakes. I've been patterns happy on more than one occasion.

The true joy of patterns comes from using them wisely. Refactoring helps us do that by focusing our attention on removing duplication, simplifying code, and making code communicate its intention. When patterns evolve into a system by means of refactoring, there is less chance of over-engineering with patterns. The better you get at refactoring, the more chance you'll have to find the joy of patterns.

Fixing Context

Humans fundamentally dislike complexity. So they discover "patterns" and try to simplify and explain complex systems.

The problem is that those simplified elements are always dependent on context and circumstance, yet people sometimes become so captivated by them as if they were the definitive answer. You can see this in books on pattern programming, where the authors themselves recount becoming deeply absorbed in the idea of pattern programming.

In that sense, the way Intent is used acts as a clean higher-level concept that papers over complexity by collapsing it into something simple.

Like TDD or pattern programming, people often fail to understand the time and context in which a methodology emerged, and instead of grasping its essential implications, the methodology becomes dogma, used like a kind of religious doctrine that must be strictly followed.

Personally, I think TDD can be one of the worst methodologies.

If the abstraction is wrong, what meaning do the tests even have?

I've seen many cases where people chase passing tests while progressively destroying the architecture.

For example, the SOLID principles don't always need to be followed, yet in some organizations they are treated almost like religious doctrine.

Applying the Liskov Substitution Principle too rigidly in UI component design can actually hinder the diversity and flexibility of the UI.

In the end, what we call intent might simply be the ability to maintain flexibility depending on the situation and to find the optimal solution within that situation.

Ultimately, Intent is not a single concept.

Multiple intents coexist at the user level, domain level, system level, and implementation level, each with different constraints and optimization criteria, and they conflict with one another.

Good design is not about eliminating these conflicts.

It is about delaying when the conflict surfaces, or building a structure that can absorb the friction cost when it does surface.

That instinct does not come from knowing patterns. It comes from having experienced the contexts in which patterns fail.

Talking about intent is easy. Knowing what is lost each time intent passes through a layer is hard.

And knowing that, I think, is ultimately my own goal.