AI-First Software: Engineering the Post-Interface Era
Generative AI has solved language. That's enough to change software design—not in the future, but right now.
Revision note (February 2026): This essay has been revised since its original publication to reflect how my thinking has developed, and updated to clarify how AI-First interface design relates to EKA governance requirements. The version here is authoritative. See Update for February 2026 and Answers to Critics for the full discussion.
“These systems have unequivocally solved language. They possess what experts call formal competence.” — Andrej Karpathy
That statement deserves to sit with you for a moment. Not because it’s hyperbole—it isn’t—but because its implications are so easy to miss. We’re so busy debating whether generative AI will lead to AGI that we’re ignoring the revolution already underway.
Whether generative AI achieves artificial general intelligence is immaterial. The fact that AI can interpret and act on natural language—reliably, at scale—is sufficient to change how we design software. We’re not waiting on future capability. This is a design and engineering problem, not a technology challenge. The building blocks exist. What remains is imagination.
The Limits of Traditional Software
Every piece of software you’ve ever used shares a core constraint: it’s limited by the imagination and technical ability of its design team. This isn’t a criticism of developers. It’s a structural reality. When we build traditional software, we’re essentially trying to anticipate every possible user need and encode responses to each one. Users are confined to the interface and functions envisioned by the people who built it.
Think about that. Billions of people, each with unique contexts, goals, and ways of thinking, all funneled through the same predetermined buttons and fields and workflows. We’ve accepted this as the nature of software. It isn’t. It’s just the best we could do before now.
If we create software that understands us—actually understands us—that anticipates our needs and evolves with us, we redefine the experience of using technology.
I’m proposing an AI-first approach to software: solutions built with generative AI as the foundation, not as a feature bolted onto a traditional architecture.
Rethinking the Interface
User interfaces based on predetermined fields and buttons operate on an assumption: that software cannot know what you want. You must translate your intent into the vocabulary of the system. Click here. Enter this. Select from these options. The cognitive burden sits with the user.
Generative AI’s ability to understand natural language removes this constraint. Interaction no longer has to be predefined. “What do you want to do?” becomes a legitimate entry point: not a search box that matches keywords, but actual comprehension of intent.
But even that undersells the opportunity. A better entry point isn’t a question at all. It’s a statement: “It’s Tuesday, and based on your goals, you need to see this and take this action.” The system that knows you, that has context about your work and your patterns and your objectives, doesn’t wait for you to ask. It anticipates. It presents what matters before you know to look for it.
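As a toy illustration of that anticipatory entry point, here is a deterministic sketch in Python. Everything in it is hypothetical: the `WorkItem` shape, the tag-overlap scoring, the three-item briefing. In a real system a model would weigh the user's full context rather than a tag intersection; the sketch only shows the shape of "present what matters before you ask."

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class WorkItem:
    title: str
    tags: set[str]   # topics this item touches
    due: date

def morning_briefing(goals: set[str], items: list[WorkItem],
                     today: date) -> list[WorkItem]:
    """Surface the items most relevant to the user's stated goals.

    Tag overlap stands in for the model-driven relevance judgment;
    ties break toward the nearest deadline.
    """
    def score(item: WorkItem) -> tuple[int, int]:
        overlap = len(item.tags & goals)
        days_left = (item.due - today).days
        return (-overlap, days_left)  # most relevant first, then most urgent
    return sorted(items, key=score)[:3]
```

The design point is the inversion: the system computes the briefing from stored context and presents it unprompted, instead of waiting for a query.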
This is engineering, not speculation.
Dynamic Functionality
Traditional software can only do what it’s programmed to do. Every capability must be specified, implemented, tested, deployed. Adding new functionality means development cycles, release schedules, update processes. The software is frozen at the moment of its creation, thawed only through deliberate effort.
An AI-first system operates differently. It can dynamically select algorithms and approaches based on user-expressed requirements in natural language. Need statistical analysis? The system selects appropriate methods based on your data and your question, not based on what a product manager decided to include in version 2.3.
The functionality becomes fluid. The system’s capabilities expand beyond what was explicitly built into it. The user describes outcomes; the system determines approach—within the scope the user has authorized. For consequential work, this means the user has already decided to route through executable, verifiable artifacts. The system selects implementation details, not whether to produce inspectable methodology. [See Answers to Critics for how this reconciles with EKA.]
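A minimal sketch of that routing, with keyword matching standing in for the model's comprehension of intent. The `METHODS` table and `run_analysis` are hypothetical names, not any particular product's API. What it illustrates is the governance point above: the chosen method is named in the output, so the work routes through an inspectable artifact the user can audit.

```python
import statistics

# Stand-in dispatcher: a real system would have the model choose the
# method from the user's request and data; simple keyword matching
# plays that role here.
METHODS = {
    "average": statistics.mean,
    "median": statistics.median,
    "spread": statistics.stdev,
}

def run_analysis(request: str, data: list[float]) -> dict:
    """Map a natural-language request to an analysis, and report
    which method ran so the result is verifiable, not a black box."""
    for keyword, fn in METHODS.items():
        if keyword in request.lower():
            return {"method": fn.__name__, "result": fn(data)}
    raise ValueError("no method matched; escalate to the user")

# run_analysis("What's the average revenue?", [10, 20, 30])
# → {"method": "mean", "result": 20}
```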
Generative Results
Traditional software outputs what it was designed to deliver. Reports look like the report template. Dashboards display the metrics someone decided should be displayed. The format is fixed; only the data changes.
An AI-first system provides outputs desired by the user in the fashion requested—by generating them from the underlying data. Want a summary? A detailed analysis? A comparison with last quarter? A visualization that highlights a specific trend? The output is synthesized on demand, shaped by need rather than by predetermined templates.
This is the difference between a vending machine and a chef. One dispenses pre-packaged options. The other creates.
The Data Is the System
Think about what this means for enterprise computing. Today, organizations maintain vast, complex software portfolios. ERP systems, CRM platforms, analytics tools, reporting infrastructure—layer upon layer of applications, each with its own logic, its own interface, its own maintenance burden.
In an AI-first approach, the focus shifts. Instead of complex software applications, organizations concentrate on creating rich, interconnected data assets. The AI becomes the interface to all of it—parsing user intent, identifying relevant data sources, selecting appropriate techniques, then dynamically generating insights.
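That pipeline, parse intent, locate the relevant data asset, synthesize a response, can be caricatured in a few lines of Python. The `DATA_ASSETS` dictionary, its figures, and the keyword matching are all illustrative stand-ins; a real system would use the model for both the parsing and the synthesis steps.

```python
# Toy "the AI is the interface" pipeline. All names and figures are
# hypothetical, chosen only to show the routing.
DATA_ASSETS = {
    "sales": {"2024-Q4": 1.2, "2025-Q1": 1.5},
    "headcount": {"2024-Q4": 40, "2025-Q1": 44},
}

def answer(question: str) -> str:
    # Step 1: intent parsing (keyword match standing in for comprehension)
    topic = next((t for t in DATA_ASSETS if t in question.lower()), None)
    if topic is None:
        return "I need more context: which data asset is this about?"
    # Step 2: pick the relevant slice of the data asset
    latest = max(DATA_ASSETS[topic])
    # Step 3: synthesize a response from the data, not from a template
    return f"{topic} for {latest}: {DATA_ASSETS[topic][latest]}"
```

Note what is absent: there is no application layer between the question and the data, only the routing logic.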
The application layer, in the traditional sense, begins to thin out. Data and AI remain; the rest is intermediary.
On Hallucination and Control
The objection that arrives at this point is predictable: what about hallucinations? What about accuracy? What about reliability?
These are legitimate engineering concerns, not fundamental barriers.
Hallucinations are a manageable risk. We can engineer around current limitations. Retrieval-augmented generation, confidence scoring, human-in-the-loop verification for high-stakes decisions, structured output validation—the techniques exist and are improving rapidly. What a solution includes as controls and protections is a function of design and engineering choices, not technological impossibility.
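One of those controls, structured output validation, can be sketched in a few lines. The schema here (a `claim` field plus cited `sources`) is an assumption for illustration, not a standard; the idea is simply to reject any model output that fails to parse, misses a required field, or cites material outside the retrieved set.

```python
import json

def validate_answer(raw: str, allowed_sources: set[str]) -> dict:
    """Structured-output validation: one control layer among several,
    not a complete safety story. The claim/sources schema is
    illustrative."""
    answer = json.loads(raw)  # non-JSON model output fails here
    for field in ("claim", "sources"):
        if field not in answer:
            raise ValueError(f"missing field: {field}")
    cited = set(answer["sources"])
    # Every citation must come from the retrieved (trusted) set
    if not cited or not cited <= allowed_sources:
        raise ValueError("claim not grounded in retrieved sources")
    return answer
```

In practice a rejected answer would trigger a retry or escalate to a human, which is where the human-in-the-loop layer picks up.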
We don’t reject automobiles because they can crash. We engineer safety systems. The same principle applies here.
Now, Not Later
There’s a tendency in technology discourse to frame transformative capabilities as perpetually five years away. Artificial general intelligence might be. Practical AI-first software is not.
The formal competence—the ability to interpret and act on natural language reliably—is here. The language understanding is here. The ability to reason about user intent and generate appropriate responses is here. What’s missing is the willingness to redesign from first principles rather than simply augmenting existing approaches.
Software as we’ve known it—rigid, predetermined, one-size-fits-all—is changing. What emerges will be more adaptive, more personal, more useful. The interface recedes. The data persists. The AI mediates.
Software is dead. Long live software.
Every day that passes, someone is building this way. The organizations that grasp what’s possible will outpace those still debugging their chatbot integrations.
→ For the mechanism behind this shift, see Agentic AI as Universal Interface → For what this means as capabilities improve, see Automating Expertise Gets Easier and Easier