On AI and No Silver Bullet
Is AI a Silver Bullet?

People often notice that coding YouTubers and instructors will say everything they want to say and then tack on the phrase "No Silver Bullet" at the end.
Appended after a fully stated opinion, the phrase works to preemptively shut down any rebuttal or criticism.
This amounts to a strategy of deflecting accountability, essentially saying, "There's no magic solution anyway, so there's no point arguing further, and I take no responsibility for my claims." This is one way the phrase gets used,
and it raises the question:
where does this expression actually come from?
It comes from "No Silver Bullet: Essence and Accidents of Software Engineering," a 1986 paper by Frederick P. Brooks, Jr., a Turing Award laureate.
The paper's central message is simple:
there is no silver bullet in software engineering.

So why a silver bullet, of all things?
The paper draws on folklore, noting that nothing is as terrifying as a werewolf, because a werewolf transforms from a familiar being into an alien horror,
and that a silver bullet is the only thing that can bring one down.
Brooks uses this as an analogy: a software project may seem harmless under normal circumstances, but when schedules slip and budgets overrun, it can turn into a terrifying monster of a defective product.
The paper then asks whether there exists a silver bullet that could make software development easier, and its conclusion is clear:
Brooks states unequivocally that no single development, whether in technology or in management technique, would by itself yield even a tenfold (order-of-magnitude) improvement in software productivity, reliability, or simplicity within a decade of 1986.
He further argues that software development is, by its very nature, incapable of achieving such rapid progress.
In 2025, how should we receive this paper?

The paper identifies two categories of difficulty that make it hard to achieve a tenfold or greater increase in software development productivity:
Essential Difficulty and Accidental Difficulty.
Essential Difficulty arises from the inherent nature of software itself,
and the paper presents four essential properties of software whose difficulty cannot be fundamentally eliminated.
- Complexity: Systems contain enormous intrinsic complexity. A system made up of hundreds of thousands of lines of code carries a level of complexity that exceeds human comprehension and makes maintenance difficult. Factor in interactions between components, exception handling, and state combinations, and that complexity grows exponentially.
- Conformity: Software must be developed to fit the complex requirements and constraints of the real world, meaning it must "conform" to a variety of external environments (hardware, operating systems, networks, and so on), all of which are interdependent and subject to change.
- Changeability: Software is far easier to modify and update than hardware or other engineered products, yet paradoxically this very flexibility creates difficulty. User requirements shift endlessly, and software must continuously evolve to keep pace with technological progress.
- Invisibility: Software is an abstract entity that cannot be seen, and unlike architecture or engineering drawings, it cannot be understood intuitively through a physical model, which gives rise to its own set of problems.
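The combinatorial growth mentioned under Complexity is easy to make concrete. A minimal Python sketch (the flag names are hypothetical, invented for this illustration): every independent on/off option doubles the number of configurations the code must behave correctly under.

```python
from itertools import product

# Hypothetical feature flags in a small module; each can be on or off.
flags = ["caching", "retries", "tls", "compression", "tracing"]

# Every combination of flags is a distinct configuration to reason about
# and test -- the state space doubles with each flag added.
states = list(product([False, True], repeat=len(flags)))
print(len(flags), "flags ->", len(states), "configurations")  # 5 flags -> 32 configurations
```

Five flags already mean 32 configurations; a system with dozens of interacting options, exception paths, and state variables quickly exceeds what any one person can hold in their head, which is exactly Brooks's point.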

Here is the original passage from the paper that best explains the source of these essential difficulties:
"The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence. For three centuries, mathematics and the physical sciences made great strides by constructing simplified models of complex phenomena, deriving properties from those models, and verifying those properties by experiment. This paradigm worked because the complexities ignored in the models were not the essential properties of the phenomena. It does not work when the complexities are the essence."
Accidental Difficulty refers to the difficulties that arise in the software development process from inadequate tools, technologies, or methodologies.
Back in 1986,
the paper discussed several candidates for a silver bullet that might eliminate these accidental difficulties:
1) High-level languages
2) Object-Oriented Programming (OOP): breaking complex problems into smaller object units, each designed to operate independently
3) Incremental Development
These were proposed as means of reducing accidental difficulties.
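As a toy illustration of the object-oriented idea in item 2, here is a minimal Python sketch (the classes are hypothetical, invented for this example): each object owns one concern and hides its internals, so the pieces can be designed, understood, and tested independently.

```python
# Toy sketch: each object handles one concern behind a small interface,
# so the complexity of the whole is the sum of small, separate parts.
class Thermostat:
    def __init__(self, target: float) -> None:
        self.target = target

    def needs_heat(self, current: float) -> bool:
        # The decision logic lives here and nowhere else.
        return current < self.target

class Heater:
    def __init__(self) -> None:
        self.on = False

    def set_running(self, running: bool) -> None:
        # The actuation state lives here and nowhere else.
        self.on = running

# The coordinating code composes objects instead of mixing their logic.
thermostat = Thermostat(target=21.0)
heater = Heater()
heater.set_running(thermostat.needs_heat(current=18.5))
print(heater.on)  # True
```

This reduces accidental difficulty (the pieces are easier to write and change) but, as Brooks argues, the essential decisions of what the system should do and how its parts should relate remain.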

High-level languages have indeed become mainstream, OOP has become the dominant coding style, and automation tools have reduced repetitive work, yet creative problem solving still requires human beings.
Incremental Development returned in the form of Agile, and even as technology advanced, complexity remains with us and software engineering is still hard.
Accidental complexity can be addressed through technological progress, but essential complexity persists.
For that reason, one might argue that software development cannot see dramatic productivity gains, and that this still holds in 2025.
But people will ask:
does that still hold in the age of AI?

In fact, Brooks himself mentioned AI as a silver bullet candidate back then.
His paper reveals the perception of AI at the time, and Brooks takes a skeptical stance toward AI as a future silver bullet candidate.
The paper cites Parnas's definitions to divide AI into two perspectives.
AI-1 (Artificial Intelligence as Human Intelligence Replacement): "The use of computers to solve problems that previously required the application of human intelligence" - a definition focused on the attempt to replicate human cognitive capabilities in computers.
For example, when a computer performs tasks such as problem solving, decision making, and learning that were once considered exclusively human, that is what AI-1 refers to.
AI-2 (Heuristic Programming): "The use of a specific set of programming techniques known as heuristic or rule-based programming" - an approach that encodes expert knowledge and experience as rules or heuristics to solve problems.
Expert systems are the representative example here,
aiming to mimic the problem-solving strategies of domain experts.
Brooks is skeptical of both.
On AI-1: he criticizes the definition of AI-1 as too subjective and fluid.
Indeed, the fact that in 2025 the definition of what we now call AGI still shifts from scholar to scholar is one illustration of exactly that.
On AI-2: even if such a system could mimic a domain expert, it still cannot reduce all of the essential complexity.
Today's 2025 models encompass both AI-1 and AI-2, yet Brooks's assessment remains valid.

AI still cannot build entire software systems on its own, and beyond a certain volume of code its output begins to develop structural problems.
As most prominent developers today have noted,
AI can be used as a tool to boost developer productivity, but it still cannot fully replace human developers.
This is because the success of a software project still depends on human capabilities such as clearly defining requirements and designing architecture.
AI still cannot fully and precisely understand human requirements and produce exactly what is wanted.
AI-based programming also has a black-box character: its internals are opaque, and operating it means adjusting parameters rather than fully understanding the underlying principles,
which adds yet another layer of complexity on top of AI's own outputs.
In 2025, AI technology continues to advance at a remarkable pace, yet the No Silver Bullet paper remains as relevant as ever.
AI is a powerful tool for improving software development productivity,
but it cannot serve as a "cure-all" that resolves the essential complexity of software.
The No Silver Bullet paper teaches us to let go of illusions and face reality squarely,
and to keep walking the path of software engineering through continuous effort and incremental improvement.
In 2025, the message of No Silver Bullet hits closer to home than ever.
2025.
No Silver Bullet.
Still valid.
And even in 2026, when this post was revised and agentic coding had arrived, it is still valid.