Part of an ongoing series examining how AI is reshaping enterprise technology decisions.

There is a conversation unfolding in boardrooms across the country that feels both new and familiar. It typically begins with a sense of urgency and optimism: with the rise of AI-powered development tools, companies can finally build the software they have always needed. The assumption is that what once required large teams, long timelines, and outside partners can now be accomplished internally, faster and at lower cost. The conclusion often follows quickly: we should just build it ourselves.
The instinct is not wrong.
Should companies build software to solve their most pressing operational problems? In many cases, yes. What is often overlooked, however, is that this answer has not changed. It was just as true ten years ago as it is today.
Most organizations already have a version of this story buried somewhere in their operations. There is a known problem—often significant in both cost and impact—that has persisted for years. It may take the form of margin leakage caused by pricing systems that cannot keep pace with contractual complexity, or a rebate process that consumes substantial human effort while still producing inconsistent results. In other cases, it is a quoting process so slow that opportunities are lost before they can be pursued effectively. At some point, someone within the organization recognized that software could address the issue. The economics were likely sound. The return on investment was clear.
And yet, the decision was made not to proceed.
Those decisions were rarely irrational. Competing priorities took precedence. ERP migrations, acquisitions, and internal initiatives consumed available resources. In many cases, the safer or more visible path won out. The result is that the same problem remains, now reframed in the context of new technological capabilities. With AI reducing the apparent cost of development, the conversation has resurfaced.
What is often missing is an honest assessment of why the work did not happen the first time.
The prevailing narrative suggests that AI has fundamentally changed the economics of software development. There is truth in this. Modern tools can generate code more quickly, assist with testing and documentation, and accelerate the early stages of development in meaningful ways. Prototypes that once took weeks can now be assembled in days. Well-defined features can be implemented with greater speed and efficiency.
These are real gains.

What has not changed is where most of the effort in software delivery actually resides. Writing code has never represented the majority of the work in complex enterprise environments. The greater challenge lies in defining what should be built, aligning stakeholders with competing perspectives, navigating inconsistencies in business processes, and reconciling fragmented data across systems. It involves determining which system serves as the source of truth, addressing longstanding data quality issues, and ensuring that new solutions are adopted by the people who must use them.
Beyond that, there are the demands of security, compliance, scalability, and long-term maintainability.
These are not marginal concerns. They represent the bulk of the work, and they have not become easier. If anything, they have grown more complex as systems have become more interconnected and expectations for performance have increased.
AI has made the mechanical act of writing code faster.
It has not meaningfully reduced the complexity of delivering reliable, scalable software within a real business environment.
This leads to a more difficult question—one that is often left unspoken. When an organization decides to build software internally, it is not simply choosing to write code. It is choosing to take ownership of architecture, security, data integrity, and system reliability. It is committing to maintaining and evolving that system over time. For organizations whose core competency is not software development, this represents a significant shift in responsibility.
At the same time, the tools that make development feel more accessible can introduce new risks. AI-generated code is often effective in the short term, but it can lack the architectural cohesion and long-term perspective required for sustainable systems. Teams may move quickly at the outset, only to encounter increasing friction as complexity grows and the underlying structure proves difficult to manage.
The distinction between generating code and delivering durable software becomes more apparent over time.
The gap is most visible in what might be called the “last mile” of software delivery. Early progress is often rapid. Demonstrations are compelling, and initial functionality aligns closely with expectations. It is in the later stages that the true complexity emerges. Business rules expand beyond initial assumptions. Legacy systems behave in ways that were not documented. Performance requirements exceed projections. Real users introduce edge cases that no specification fully anticipated.
This is where projects succeed or fail.

Addressing these realities requires experience, judgment, and a deep understanding of both technology and the business context in which it operates.
If AI has not fundamentally altered these dynamics, what has changed?
Two shifts are worth noting.
First, the economics of development have improved, but primarily when AI is applied within a disciplined delivery framework. Used effectively, these tools can compress timelines and reduce costs without sacrificing quality.
Second, the potential consequences of poor implementation have increased. Because code can now be generated so readily, systems can be built quickly—but not necessarily well. As a result, organizations may accumulate technical debt at a faster pace, creating challenges that are more difficult to unwind later.
Seen in this light, the central question is not whether to build software. For many organizations, that decision was made implicitly long ago, by the very persistence of these unresolved operational challenges.
The more important question is how that software will be built—and who is accountable for the outcome.
Organizations that succeed in this environment will not be distinguished by their ability to adopt AI tools alone. Those capabilities will become widely distributed. The differentiator will be the ability to combine the speed these tools provide with the domain expertise, delivery discipline, and architectural rigor required to translate code into reliable systems and, ultimately, into measurable business performance.
In that sense, the fundamentals have not changed as much as current discourse suggests.
What has changed is the pace—and the stakes.
This is where the conversation is headed.
If this resonates—or if you see it differently—it is worth continuing. The most useful insights will come from technology and business leaders comparing notes in the open.