The Universal Interface Thesis
Code generation isn't just for knowledge work. It's the architecture for human interaction with any control surface—shop floors, building systems, medical devices, enterprise software. The implications restructure how we think about training, expertise, and vendor lock-in.
Agentic AI as Universal Interface argued that code generation removes the barriers between domain expertise and system access. That essay focused on knowledge work—software, analysis, business logic.
The thesis is broader than that.
Code generation as interface applies wherever a control surface exists. Manufacturing equipment. Building automation. Medical devices. Network infrastructure. Enterprise software. The pattern doesn’t require a keyboard and a desk. It requires a system that accepts commands and a user who knows what outcome they want.
The Current Model
Every piece of equipment with digital controls has spawned a bespoke interface. The equipment manufacturer—or a third party—designed screens, menus, and workflows based on their assumptions about what users need.
Users learn the interface. This takes training. The training is specific to that interface. When new equipment arrives with a different interface, users need new training.
The math: with N machines and M users, there are N interface-specific curricula, and each of the M users must absorb all of them, so training effort scales with N × M. Adding machine N+1 means training everyone again.
This model made sense when the interface was the only way to bridge human intent and machine action. The designer’s assumptions, frozen in the UI, were better than no bridge at all.
But the assumptions are always wrong for someone. The interface that works for the common case frustrates the edge case. The workflow the designer imagined doesn’t match how work actually happens. Users adapt to the interface rather than the interface adapting to users.
The Alternative
The interface layer becomes:
- A control surface: API, command set, protocol. The machine accepts instructions in some structured form.
- Validated scripts: Pre-tested translations from parameterized intents to safe command sequences.
- A conversational layer: Code generation that understands user intent and selects/parameterizes appropriate scripts.
The user describes what they want to accomplish. The system asks clarifying questions. Once intent is clear, validated scripts translate that intent to machine-specific commands.
Adding machine N+1 means creating validated scripts for its control surface—once. Users who understand the domain can operate it immediately through the same conversational interface they use for everything else.
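As a sketch (names and structure are illustrative, not a reference implementation), the whole pattern fits in a few lines: a validated script bundles its required parameters, its safety bounds, and a pre-tested translation function, and the conversational layer's only job is to fill in the parameters and invoke it.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical structure: a validated script bundles required parameters,
# numeric safety bounds, and a pre-tested translation from parameters to commands.
@dataclass
class ValidatedScript:
    name: str
    required: set[str]                          # parameters the user must supply
    bounds: dict[str, tuple[float, float]]      # safety limits for numeric parameters
    translate: Callable[[dict], list[str]]      # params -> machine-specific command sequence

    def missing(self, params: dict) -> set[str]:
        """What the conversational layer still needs to ask the user about."""
        return self.required - params.keys()

    def run(self, params: dict) -> list[str]:
        """Enforce the encoded limits, then translate intent to commands."""
        if self.missing(params):
            raise ValueError(f"unresolved parameters: {sorted(self.missing(params))}")
        for key, (low, high) in self.bounds.items():
            if not low <= params[key] <= high:
                raise ValueError(f"{key}={params[key]} is outside the safe range [{low}, {high}]")
        return self.translate(params)
```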
Domain Examples
Shop Floor Control
Current model: Each CNC machine, robot cell, or processing station has its own control panel and software. Operators learn each system’s specific interface. A skilled machinist might know three different control paradigms for three different machines. When new equipment arrives, even experienced operators need training on its particular UI.
Alternative: The machinist describes intent: “Cut this aluminum stock to these dimensions with this tolerance.” The system asks: “What tooling? What feed rate? Coolant?” Once parameters are specified, a validated script generates the G-code or proprietary commands for the specific machine.
The machinist’s expertise is in machining—materials, tolerances, tooling selection, process optimization. That expertise transfers across any machine whose control surface has been mapped. New equipment means new script validation, not new operator training.
Safety architecture: Scripts include bounds checking. “Feed rate of 800 exceeds recommended maximum of 500 for this material/tool combination. Proceed with override, or adjust to 500?” Spindle limits, travel limits, collision detection—all encoded in the validation layer, not dependent on the operator remembering each machine’s constraints.
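A minimal sketch of that bounds check, with made-up limits and a deliberately simplified G-code template; real limits come from the tooling and material tables the script author validates against.

```python
# Hypothetical feed-rate limits per (material, tool) pair, set and validated
# by the script author rather than remembered by each operator.
MAX_FEED = {("aluminum", "2-flute-endmill"): 500}

def facing_pass(material: str, tool: str, feed: float, depth_mm: float,
                override: bool = False) -> list[str]:
    """Validated script: emit a simplified cutting move, enforcing feed limits."""
    limit = MAX_FEED.get((material, tool))
    if limit is not None and feed > limit and not override:
        raise ValueError(
            f"Feed rate of {feed} exceeds recommended maximum of {limit} for this "
            f"material/tool combination. Proceed with override, or adjust to {limit}?")
    return [
        "G21",                          # millimeter units
        "G90",                          # absolute positioning
        f"G1 Z-{depth_mm} F{feed}",     # single cutting move at the requested feed
    ]
```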
Building Automation
Current model: Each building management system vendor has a proprietary interface. Facility managers learn Honeywell, or Johnson Controls, or Siemens, or whatever the building has. Multi-building portfolios mean multiple interfaces. Optimizing across systems requires specialized integrators.
Alternative: The facility manager describes intent: “Reduce energy consumption in Building A’s east wing during unoccupied hours while maintaining minimum ventilation.” The system asks: “Define unoccupied hours? Minimum temperature bounds? Air quality thresholds?”
Validated scripts translate these parameters to vendor-specific commands—BACnet writes, proprietary API calls, whatever the equipment requires. The facility manager’s expertise is in building operations, occupant comfort, energy optimization. The translation to specific actuator commands is handled.
Safety architecture: Scripts enforce equipment constraints. Chillers don’t cycle too frequently. Dampers don’t close completely when occupancy is detected. Override requests surface explicitly: “This would reduce ventilation below code minimum. Confirm exception for maintenance period?”
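A sketch of what such a script might look like, with hypothetical point names and illustrative thresholds; the vendor-specific layer beneath it would map these writes to BACnet objects or proprietary API calls.

```python
# Illustrative constraints encoded by the script author; real values come from
# code requirements and equipment documentation for the specific building.
CODE_MIN_VENT_CFM = 400

def unoccupied_setback(zone: str, start: str, end: str, vent_cfm: float,
                       setback_temp_c: float,
                       maintenance_exception: bool = False) -> list[dict]:
    """Validated script: translate an energy-setback intent into vendor-neutral
    point writes; a lower layer maps these to BACnet or proprietary API calls."""
    if vent_cfm < CODE_MIN_VENT_CFM and not maintenance_exception:
        raise ValueError(
            f"This would reduce ventilation below the code minimum of "
            f"{CODE_MIN_VENT_CFM} CFM. Confirm exception for maintenance period?")
    return [
        {"point": f"{zone}/schedule/unoccupied", "value": [start, end]},
        {"point": f"{zone}/ventilation/min_cfm", "value": vent_cfm},
        {"point": f"{zone}/temperature/setback_c", "value": setback_temp_c},
    ]
```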
Medical Devices
Current model: Each infusion pump, ventilator, or monitoring system has its own interface. Clinicians learn device-specific workflows. Alarm fatigue partly stems from inconsistent interfaces—each device presents information differently. Training time competes with patient care time.
Alternative: The clinician describes clinical intent: “Administer 500ml normal saline over 4 hours with KVO after completion.” The system confirms: “Patient weight? Verify no fluid restriction? Line access confirmed?”
Validated scripts translate to device-specific programming. The clinician’s expertise is in patient care—assessment, decision-making, monitoring. The translation to specific device commands is handled.
Safety architecture: This domain requires the most rigorous validation. Scripts enforce hard limits—no flow rates outside device capabilities, no concentrations outside safe ranges, mandatory confirmation for high-alert medications. Every parameter checked against patient data where available. The conversation creates an audit trail of clinical reasoning that current UIs don’t capture.
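Illustration only, with invented limits: in practice these come from the device's drug library and institutional policy, and anything real in this domain requires regulatory validation. But the shape of the script is the same: hard limits, explicit confirmation, and an audit record of intent.

```python
from datetime import datetime, timezone

# Invented limits for illustration; a real drug library and institutional
# policy define these, and a real system requires regulatory validation.
PUMP_MAX_RATE_ML_H = 999
HIGH_ALERT = {"heparin", "insulin"}

def program_infusion(drug: str, volume_ml: float, hours: float,
                     kvo_after: bool, confirmed_by: str | None = None) -> dict:
    """Validated script: check hard limits, require confirmation for high-alert
    drugs, and return a device order plus an audit record of clinical intent."""
    rate = volume_ml / hours
    if rate > PUMP_MAX_RATE_ML_H:
        raise ValueError(f"Rate {rate:.0f} mL/h exceeds device maximum {PUMP_MAX_RATE_ML_H}.")
    if drug.lower() in HIGH_ALERT and not confirmed_by:
        raise ValueError(f"{drug} is a high-alert medication; second confirmation required.")
    order = {"drug": drug, "rate_ml_h": round(rate, 1), "kvo_after": kvo_after}
    audit = {"intent": f"{volume_ml} mL {drug} over {hours} h",
             "confirmed_by": confirmed_by,
             "at": datetime.now(timezone.utc).isoformat()}
    return {"device_order": order, "audit": audit}
```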
Network Infrastructure
Current model: Each vendor has a CLI or GUI. Cisco IOS, Juniper Junos, Arista EOS—similar concepts, different syntax. Network engineers memorize command variations. Multi-vendor environments require polyglot expertise or expensive standardization.
Alternative: The network engineer describes topology intent: “Create a VLAN for the finance department, spanning switches A through D, with gateway on the core router, DHCP from the Windows server.” The system asks: “VLAN ID? Subnet? Any ACL requirements?”
Validated scripts generate vendor-specific configurations. The engineer’s expertise is in network architecture—topology design, security boundaries, traffic management. The translation to specific CLI commands is handled.
Safety architecture: Scripts validate before applying. “This would create an IP conflict with existing subnet. Adjust or override?” Rollback procedures embedded. Change windows enforced.
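A sketch of the validation step, using the standard library's ipaddress module and deliberately simplified IOS-style output; renderers for other vendors would follow the same shape behind the same check.

```python
import ipaddress

# Hypothetical record of subnets already deployed in this environment.
EXISTING_SUBNETS = [ipaddress.ip_network("10.10.20.0/24")]

def create_vlan(vlan_id: int, name: str, subnet: str) -> list[str]:
    """Validated script: check for conflicts, then render simplified IOS-style
    configuration lines for the new VLAN and its gateway interface."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError(f"VLAN ID {vlan_id} is outside the valid range 1-4094.")
    network = ipaddress.ip_network(subnet)
    for existing in EXISTING_SUBNETS:
        if network.overlaps(existing):
            raise ValueError(
                f"This would create an IP conflict with existing subnet {existing}. "
                "Adjust or override?")
    gateway = next(network.hosts())          # first usable address for the gateway
    return [
        f"vlan {vlan_id}",
        f" name {name}",
        f"interface Vlan{vlan_id}",
        f" ip address {gateway} {network.netmask}",
    ]
```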
Enterprise Software
Current model: SAP, Oracle, Salesforce—each has navigation paradigms users must learn. Power users memorize transaction codes and shortcuts. Casual users struggle with interfaces designed for different workflows. Training is ongoing as interfaces update.
Alternative: The user describes business intent: “Create a purchase order for 500 units of part XYZ from our preferred vendor, delivery next week.” The system asks: “Cost center? Approval workflow? Expedited shipping?”
Validated scripts generate the transactions. The user’s expertise is in their business process—what needs to happen and why. The translation to specific system navigation is handled.
Safety architecture: Scripts enforce business rules. Approval thresholds trigger routing. Budget checks validate before commitment. Segregation of duties maintained. The conversation captures business reasoning that transaction logs miss.
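A sketch with invented thresholds; a real deployment would read the approval and budget rules from the ERP's own configuration rather than hard-coding them.

```python
# Illustrative business rule; in practice this comes from ERP configuration.
APPROVAL_THRESHOLD = 10_000.00

def create_purchase_order(part: str, quantity: int, unit_price: float,
                          cost_center: str, budget_remaining: float,
                          reason: str) -> dict:
    """Validated script: apply approval and budget rules, then emit a
    system-neutral transaction payload plus the captured business reasoning."""
    total = quantity * unit_price
    if total > budget_remaining:
        raise ValueError(f"PO total {total:,.2f} exceeds remaining budget {budget_remaining:,.2f}.")
    routing = "manager_approval" if total > APPROVAL_THRESHOLD else "auto_approve"
    return {
        "transaction": {"type": "PO_CREATE", "part": part, "qty": quantity,
                        "cost_center": cost_center, "total": total},
        "routing": routing,
        "audit_reason": reason,   # business reasoning the transaction log would miss
    }
```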
The Common Structure
Each domain shares the same architecture:
| Component | Shop Floor | Building | Medical | Network | Enterprise |
|---|---|---|---|---|---|
| Control surface | G-code, proprietary | BACnet, APIs | Device protocols | CLI, NETCONF | Transaction APIs |
| Domain expertise | Machining, materials | HVAC, energy | Clinical judgment | Network design | Business process |
| Validated scripts | Machine-specific | Vendor-specific | Device-specific | Vendor-specific | System-specific |
| Safety layer | Bounds, collision | Equipment limits | Clinical limits | Config validation | Business rules |
The human provides domain expertise—what they want to accomplish and why. The AI provides translation—selecting and parameterizing validated scripts. The scripts provide safety—encoding expert knowledge about what commands are valid under what conditions.
This is the assistant pattern generalized beyond knowledge work. The expert (script creator) specifies methodology. The artifact (validated script) encodes it. The user invokes it through conversation. The AI translates intent to invocation. Verification happens at the script layer, not at runtime.
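Concretely, the script shape sketched earlier means one conversational loop can serve every row of the table; this is a sketch, not a prescribed API.

```python
def converse(script, params: dict, ask) -> list[str]:
    """One loop serves every domain: ask for whatever the script still needs,
    then let the script's own safety layer accept or reject the result."""
    while script.missing(params):
        for name in sorted(script.missing(params)):
            params[name] = ask(name)      # "What tooling?", "VLAN ID?", "Cost center?"
    return script.run(params)             # bounds checking happens inside the script
```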
Training Economics Invert
Current model: Training cost scales with equipment diversity. Add a new machine, train everyone who might use it. Add a new software system, train everyone who needs access. Training departments exist to manage this ongoing burden.
Alternative model: Training focuses on domain expertise and intent articulation. “What is this equipment capable of? What outcomes do you need? What parameters matter?” The translation to specific controls is handled.
Adding new equipment requires:
- Mapping its control surface (one-time engineering effort)
- Creating validated scripts (one-time domain expert effort)
- Distributing scripts to the code generation layer (trivial)
It does not require retraining users who already understand the domain.
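What "distributing scripts" might amount to, sketched with a hypothetical registry: onboarding machine N+1 is one registration call, visible to every user at once.

```python
# Hypothetical distribution mechanism: a registry the conversational layer
# consults, keyed by equipment model.
SCRIPT_REGISTRY: dict[str, dict] = {}

def register_scripts(equipment_model: str, scripts: dict) -> None:
    """Onboarding machine N+1: one registration, available to all users at once."""
    SCRIPT_REGISTRY[equipment_model] = scripts

def scripts_for(equipment_model: str) -> dict:
    """What the conversational layer can invoke for a given machine."""
    try:
        return SCRIPT_REGISTRY[equipment_model]
    except KeyError:
        raise KeyError(f"No validated scripts for {equipment_model}; "
                       "its control surface has not been mapped yet.") from None
```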
The training investment shifts from “how to operate this interface” to “how to think about this domain.” The latter is more durable (domain principles change more slowly than vendor UIs) and more transferable (domain expertise works across any equipment whose control surface is mapped).
Vendor Lock-in Erodes
Current lock-in mechanisms:
- Users know this vendor’s interface → switching means retraining
- Integrations are built to this vendor’s APIs → switching means redevelopment
- Data is in this vendor’s format → switching means migration
When the interface is conversational and equipment-specific commands are generated on demand, these mechanisms weaken:
- Users interact through conversation → interface knowledge doesn’t create lock-in
- Scripts abstract vendor APIs → switching means new scripts, not new architecture
- Data can be accessed through generated queries → format matters less
Equipment competes on capability, reliability, and cost—not on interface familiarity or integration convenience. This is healthier competition.
The Skill Shift
Current differentiation: “Can you operate this machine?” = “Do you know this interface?”
Future differentiation: “Can you operate this machine?” = “Do you understand what this machine does, what it’s capable of, what outcomes you need, and can you articulate that clearly?”
Interface proficiency becomes table stakes. Everyone has access to the same conversational layer. The differentiator is domain expertise—understanding materials, processes, constraints, trade-offs, and being able to articulate intent precisely.
This is a return to domain expertise as primary qualification. The machinist who deeply understands cutting processes matters more than the machinist who memorized one vendor’s control panel. The facility manager who understands building physics matters more than the one who memorized one BMS vendor’s menu structure.
What This Requires
For any domain, implementing this pattern requires:
Control surface documentation: What commands does the equipment accept? What parameters? What responses?
Script creation: Domain experts create validated scripts that translate parameterized intents to safe command sequences. This is the methodology encoding step from EKA.
Safety validation: Scripts are tested against edge cases, bounds conditions, and failure modes. Safety constraints are embedded in the scripts, not dependent on user knowledge.
Distribution mechanism: Scripts are made available to the code generation layer. New equipment means new scripts deployed to all users simultaneously.
Feedback loop: When scripts fail or produce unexpected results, experts refine them. The library improves over time.
The upfront investment is script creation and validation. The ongoing benefit is elimination of per-user, per-equipment training and the accumulation of validated methodology that scales without limit.
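The safety-validation step can be ordinary regression testing. A sketch, reusing the hypothetical shop-floor script from earlier and assuming a pytest-style runner:

```python
import pytest  # assumed test runner; any framework works

from shop_floor_scripts import facing_pass  # hypothetical module holding the earlier sketch

def test_feed_rate_above_limit_is_rejected():
    """Edge case: the script, not the operator, must catch the unsafe feed rate."""
    with pytest.raises(ValueError):
        facing_pass("aluminum", "2-flute-endmill", feed=800, depth_mm=1.0)

def test_override_is_explicit():
    """The override path exists, but only when the operator asks for it."""
    commands = facing_pass("aluminum", "2-flute-endmill", feed=800, depth_mm=1.0,
                           override=True)
    assert any(line.startswith("G1 ") for line in commands)
```

When a script fails one of these cases in the field, the fix lands in the script and its tests, and every user gets it on the next invocation.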
The Limiting Cases
Where does this not apply?
Truly novel situations: If no one has mapped the intent-to-commands pathway, the system can’t help. But novelty is rare in most operational contexts, and each novel solution can be scripted for future use.
Real-time feedback loops: If operators must respond to continuous sensory feedback faster than conversation allows—certain types of manual machining, emergency response, performance driving—direct control remains necessary. But even here, setup and configuration can be conversational; only the execution requires direct interaction.
Irreducibly tacit judgment: Some decisions depend on factors that resist articulation. But tacit space is shrinking, and much of what seems irreducibly tacit turns out to be representationally blocked—articulable with the right interlocutor.
These limiting cases define the boundary, not the center. Most operational work—routine, repeated, following known patterns—falls well within where this model applies.
Implications
The thesis generalizes beyond any single domain:
For equipment manufacturers: The value shifts from interface design to control surface quality. Expose clean APIs, document thoroughly, enable integration. Proprietary interfaces become liabilities, not assets.
For software vendors: The application layer faces pressure. If transactions can be generated through conversation, what exactly is the product? Data and business logic remain valuable. Navigation paradigms become optional.
For training organizations: Interface training loses value. Domain training—principles, trade-offs, judgment—gains value. The curriculum inverts.
For organizations: Capital equipment decisions change. “Our people know this vendor’s system” stops being a switching cost. Equipment competes on capability. Integration costs drop. The constraint becomes script validation capacity, not training capacity.
For workers: Interface proficiency depreciates. Domain expertise appreciates. The machinist who understands cutting theory adapts to any machine. The machinist who only knows one control panel is vulnerable.
This is the return of domain expertise as primary qualification, extended from knowledge work to operational work. The secretary’s typing skill became irrelevant when executives could type. The interface operator’s navigation skill becomes irrelevant when conversation generates commands.
What remains valuable is knowing what should happen and why.