Agent-Relative Tacitness: A New Framework
Rethinking what makes knowledge tacit or explicit, and why the answer depends on who is doing the knowing.
This essay has been revised to integrate later developments in my thinking, including the foundational work on knowledge-as-capability that it now references. The version here is authoritative.
Your dog knows things it cannot tell you. The boundaries of its territory, the reliable routes through the neighborhood, the spots where interesting smells accumulate. This knowledge lives in the dog’s behavior. There is no statement the dog could make about it. Michael Polanyi, the philosopher who gave us tacit knowledge as a concept—knowledge we possess but cannot fully articulate—used examples like this to illustrate a basic epistemological puzzle: we know more than we can say.
But here is the question that has gone underexplored: tacit relative to whom?
Your dog cannot articulate its territorial knowledge. A biologist studying animal cognition could. She could map the territory, identify the scent markers, describe the behavioral patterns, and produce a comprehensive account of what the dog “knows.” The knowledge that was locked inside the dog’s practice is now written in a research paper.
This observation leads to a reconceptualization of tacit knowledge that matters, especially as we enter an era of increasingly capable artificial agents.
This framework depends on a foundational claim I develop elsewhere: that knowledge is capability to produce outcomes, not justified true belief. Once we define knowledge operationally rather than epistemologically, tacitness becomes a question about transfer relationships between agents. It stops being an intrinsic property of knowledge itself.
The Standard View and Its Limits
The traditional understanding of tacit knowledge treats tacitness as if it were a property of the knowledge itself. Some knowledge, on this view, is simply inarticulable. The expert’s intuition, the craftsman’s feel for materials, the native speaker’s grammatical sense. These are held up as typical cases of knowledge that resists explicit formulation.
But this framing is quietly anthropocentric. When we say knowledge is tacit, we typically mean tacit relative to human expressive capabilities. We treat the boundaries of human articulation as if they were the boundaries of articulation as such.
That assumption deserves scrutiny.
Tacitness Is Agent-Relative
The core insight is simple but has real consequences: whether knowledge is tacit or explicit depends on the capabilities of the agent or agents in question, not on intrinsic properties of the knowledge itself.
An expert radiologist examines a scan and announces, “Something is wrong here,” before she can specify what. Her diagnostic knowledge is operating tacitly. She cannot, in that moment, articulate the features that triggered her judgment. But this tacitness is contingent, not absolute. With effort, she might introspect and identify the relevant patterns. A research team might study expert radiologists and codify the diagnostic heuristics. An AI system trained on thousands of scans might learn to highlight the specific anomalies that human experts detect but struggle to name.
The knowledge was tacit relative to the radiologist in that moment. It need not be tacit relative to a sufficiently capable system of agents working together.
The Encoding-Decoding Asymmetry
One of the more interesting features of this framework emerges when we examine how knowledge moves between tacit and explicit forms. Agent capabilities have internal structure that we often overlook.
Consider the capacity to encode knowledge: to take something known and render it into a representation. Now think about the capacity to decode: to take a representation and recover operative knowledge from it. These are separable abilities. They need not reside in the same agent.
A poet encodes experiences and perceptions into verse. The resulting poem is a representation, but the poet herself may be unable to decode it propositionally. She cannot explain in plain prose exactly what the poem conveys or how it works its effects. Meanwhile, a literary critic may have the opposite profile: tremendous decoding ability, capable of articulating what poems mean and how they achieve their effects, but unable to write poetry of his own.
Knowledge becomes explicit when a complete representational path exists. That means a viable pipeline from tacit knowledge through encoding, representation, and decoding back to operative understanding. But the encoder and decoder need not be the same agent. The path can run through multiple agents with complementary capabilities.
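The structure described above can be sketched as a toy formalization. This is an illustrative model, not anything from the essay itself: the `Agent` type, the domain strings, and the `is_explicit` predicate are all invented for the sketch. The point it captures is that explicitness is a property of a *system* of agents: a complete path needs some agent that can encode a domain and some agent, possibly a different one, that can decode it.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """An agent with possibly asymmetric representational capabilities."""
    name: str
    can_encode: set = field(default_factory=set)  # domains it can render into representations
    can_decode: set = field(default_factory=set)  # domains whose representations it can interpret

def is_explicit(domain, agents):
    """Knowledge in `domain` is explicit relative to `agents` if a complete
    representational path exists: at least one agent can encode it and at
    least one (not necessarily the same) agent can decode it."""
    has_encoder = any(domain in a.can_encode for a in agents)
    has_decoder = any(domain in a.can_decode for a in agents)
    return has_encoder and has_decoder

# The poet/critic pair from the essay: neither alone completes the path,
# but the two-agent system does.
poet = Agent("poet", can_encode={"poetic insight"})
critic = Agent("critic", can_decode={"poetic insight"})

print(is_explicit("poetic insight", [poet]))          # encoder only
print(is_explicit("poetic insight", [critic]))        # decoder only
print(is_explicit("poetic insight", [poet, critic]))  # complete path
```

On this model, "tacitness shrinks as capabilities expand" just means that adding agents to the list can flip `is_explicit` from false to true, while the knowledge itself is unchanged.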
The Shrinking Domain of the Tacit
This framework suggests that tacit territory is not fixed. It contracts as agent capabilities expand.
When we had only human capabilities to work with, vast domains of knowledge remained tacit. The expert’s intuition, the artist’s sensibility, the body’s procedural knowledge of skilled movement. These seemed permanently resistant to articulation.
But capabilities accumulate. Scientific instruments extend our perceptual access. Research methodologies let us study implicit processes. And now, artificial agents are developing encoding and decoding capabilities that differ from human abilities. In some domains, they exceed them.
An AI system might detect patterns in expert behavior that the expert cannot introspect. It might encode those patterns into representations that other systems, or humans aided by those systems, can decode. Knowledge that was tacit relative to unaided human capability becomes explicit relative to the extended system of human and artificial agents working together.
This is not mere speculation. It describes work already underway in medical diagnostics, materials science, and other fields where machine learning systems are beginning to articulate what human experts know but cannot say.
The Residual Question
If tacit territory shrinks as capabilities improve, an important question remains: is there a floor?
Is there knowledge that is necessarily tacit? Knowledge for which no representational path could exist regardless of how capable our agents become? Or does all tacitness reduce, in principle, to limitations in our current encoding and decoding capabilities?
This is a question at the boundary of epistemology and metaphysics. It asks whether there are hard limits on what representation can capture, or whether representation can in principle reach everywhere that knowledge does, given sufficient ingenuity in its design and sufficient capability in its interpretation.
The framework of agent-relative tacitness gives us a sharper way to ask the question. Humans clearly cannot articulate everything. What matters is whether there exists any possible agent, or system of agents, capable of completing the representational path for any given piece of knowledge.
Implications for a Hybrid Future
We are entering a period in which human and artificial agents will increasingly work together, combining complementary capabilities. The framework of agent-relative tacitness suggests that one of the most important functions of this collaboration will be the progressive explication of knowledge that was tacit relative to humans alone.
This is not just a technical development. It has real implications for how we understand expertise, how we transmit knowledge across generations, and how we make decisions in domains that have traditionally relied on inarticulate judgment.
The dog cannot tell you what it knows about its territory. But the biologist can. The radiologist cannot always say what she sees. But perhaps the combined system of radiologist, researcher, and AI will be able to.
What was tacit becomes explicit. Not because the knowledge changes, but because our collective capacity for representation expands.
The boundaries of the sayable are not fixed. They move with us as we develop new ways of encoding and decoding what we know.