Tacit Space Shrinkage: When AI Articulates the Inexpressible
As AI capabilities expand, the domain of knowledge that organizations cannot express or operationalize is shrinking, with real implications for expertise and governance.
This essay has been revised to integrate later developments in my thinking, including the foundational claims it now references. The version here is authoritative.
Every organization has knowledge that nobody can quite put into words. The machinist who “just knows” when a tolerance is off. The sales director who senses when a deal is going sideways. The nurse who recognizes deterioration before the monitors do. We call this tacit knowledge, and for decades we’ve treated it as a fixed feature of organizational life. Valuable, irreplaceable, and resistant to capture.
That assumption is becoming obsolete.
As AI capabilities expand, the domain of practically ineffable knowledge is shrinking. Not because tacit knowledge was never real, but because much of what we labeled “tacit” was something else: knowledge blocked by representational limits rather than knowledge that genuinely resists representation.
This distinction matters significantly for how organizations think about expertise, training, and governance.
This argument builds on two foundational claims. First, knowledge is capability to produce outcomes. That’s an operational definition that sidesteps epistemological questions about belief and justification. Second, tacitness is agent-relative. It’s a property of transfer relationships between agents, not an intrinsic feature of knowledge itself. Together, these foundations make “tacit domain shrinkage” coherent: as AI capabilities expand, transfer paths open that didn’t exist before.
This framing differs from Nonaka’s influential SECI model, which treats externalization as converting tacit knowledge to explicit form—a change in the knowledge itself. But if tacitness is agent-relative, there’s nothing to convert. What changes isn’t the knowledge; it’s which agents can access it. “Tacit space shrinkage” describes this boundary shift: knowledge that was tacit-to-humans becomes accessible to AI systems without ever becoming “explicit” in the SECI sense. The spiral model assumed human-to-human knowledge transfer. AI breaks that assumption by introducing agents with different representational and perceptual capabilities. The knowledge doesn’t change state; the population of agents who can work with it expands.
Two Kinds of Tacitness
Michael Polanyi, who gave us the concept of tacit knowledge, was pointing at something real: we know more than we can tell. The cyclist cannot fully articulate how she balances. The chess master cannot exhaustively explain his positional intuition. Some knowledge resists explicit formulation.
But look closer at organizational “tacit knowledge” and you’ll find two distinct phenomena mixed together.
Structurally tacit knowledge is inseparable from embodied skills, perceptions, and value commitments. It functions as the background against which explicit statements make sense. Attempts to spell out every element fail or degrade the performance. This is Polanyi’s tacit dimension in its purest form: knowledge that genuinely resists articulation, because articulation would destroy the integrated whole that makes it work.
Representationally tacit knowledge could be expressed in some representational system without destroying its usefulness. It just hasn’t been, because the knower lacks the vocabulary or conceptual repertoire to articulate it. The process engineer who “just knows” when a system is drifting may lack the control theory vocabulary to explain what she’s detecting. Her knowledge feels ineffable to her, but it’s not ineffable in principle. Give her the right concepts, or the right interlocutor, and she can articulate it.
The real insight: much organizational “tacit knowledge” is representationally tacit, not structurally tacit. It’s unexpressed because the knower lacks the words, not because the knowledge resists words.
LLMs as Expressive Prosthetics
Large language models have a capability that matters here: they can serve as expressive prosthetics for human experts.
Think about what happens when an expert works with an LLM to articulate knowledge they’ve previously experienced as inexpressible:
- The model proposes candidate formulations when the expert “knows what they mean but cannot say it”
- It offers terminology and distinctions from other domains that might capture what the expert is trying to express
- It suggests ways to structure process descriptions or narratives
- It generates examples and counterexamples that help sharpen explanations
This directly addresses the expressive-deficit problem. Knowledge that felt ineffable to an individual knower becomes articulate once the model supplies concepts and language they didn’t know were available.
The test: If an expert, working with an interlocutor with broader vocabulary and conceptual repertoire, can iteratively articulate knowledge until a competent third party can act on it, the knowledge was likely representationally tacit. If articulation fails even with extensive assistance, if “you just have to do it yourself to know,” the knowledge may be structurally tacit.
LLMs are extraordinarily capable interlocutors for this purpose. They have access to vocabulary and conceptual frameworks from across human knowledge. They’re patient. They can propose formulation after formulation until something clicks. They make the test practical in ways it never was before.
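The test above can be sketched as a loop. This is an illustrative toy, not an implementation: `propose` stands in for an LLM interlocutor offering successive formulations, and `actionable` stands in for the check that a competent third party could act on the result. Both are hypothetical stubs invented for this sketch; in practice each would be a human-in-the-loop or model-backed step.

```python
# Sketch of the articulation test: iterate candidate formulations
# until a third party could act on one, or give up. If the loop
# converges, the knowledge was representationally tacit; if it never
# converges despite extensive assistance, it may be structurally tacit.

def articulation_test(expert_signal, propose, actionable, max_rounds=10):
    """Classify knowledge as representationally or structurally tacit."""
    formulation = None
    for _ in range(max_rounds):
        formulation = propose(expert_signal, formulation)
        if actionable(formulation):
            # A competent third party can act on it: the knowledge
            # was blocked by missing vocabulary, not by its structure.
            return "representationally tacit", formulation
    # Articulation failed even with extensive assistance.
    return "structurally tacit", None

# Stub interlocutor: refines a description by borrowing vocabulary
# from another domain (here, control theory), one concept per round.
def propose(signal, prev):
    vocab = ["drift", "oscillation", "phase lag", "setpoint error"]
    used = prev.split("; ") if prev else []
    nxt = next((v for v in vocab if v not in used), None)
    return "; ".join(used + [nxt]) if nxt else prev

# Stub check: the description becomes actionable once it names the
# concept the expert was implicitly tracking.
def actionable(formulation):
    return formulation is not None and "phase lag" in formulation

label, text = articulation_test("the system feels off", propose, actionable)
```

The structure mirrors the argument: the classification is not a property of the knowledge alone but of the knowledge plus the interlocutor. Swap in a `propose` with a richer conceptual repertoire and knowledge that previously tested as structurally tacit may turn out to be representationally tacit after all.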
Learning from Showing, Not Just Telling
Expressive prosthetics are only part of the story. Multimodal models are developing another capability that directly attacks tacit knowledge: they can learn from demonstrations.
Many classic examples of tacit knowledge involve “you have to see it done” or “you learn it at the elbow of a master.” The knowledge was tacit in part because the only transmission mechanism was experiential. Watching, imitating, practicing under guidance.
Multimodal models ingest video, audio, and sensor streams. They can process vastly more demonstrations than any human apprentice, identify patterns across them, and encode those patterns into representations that can be deployed, transferred, and refined.
What was once “you have to see it done” becomes training data. The master’s demonstration becomes a signal rather than an ineffable transmission. The knowledge doesn’t disappear. It moves from tacit-for-humans to explicit-for-systems.
The Governance Challenge
This brings us to a complication that organizations are only beginning to recognize.
As knowledge moves from tacit-for-humans to explicit-for-systems, it doesn’t necessarily become transparent-for-humans. A model can encode the expert’s pattern recognition into weights and parameters that produce reliable outputs. But those weights may not be interpretable. The knowledge is effectively explicit (reproducible, transferable, operationalizable) without being inspectably explicit (readable, auditable, contestable by humans).
This creates a new governance gap. Traditional tacit knowledge was at least located in identifiable humans who could be consulted, questioned, and held accountable. System-level knowledge may be operational without being legible. The organization can reliably act on it without anyone being able to fully explain it.
This isn’t necessarily a problem. We operate complex systems all the time without full understanding. But it demands new governance structures. Who is accountable when the system acts on knowledge that no human can fully articulate? How do we audit processes that work but that we cannot fully inspect? How do we contest decisions grounded in patterns we cannot consciously identify?
These are real governance questions. They deserve real answers.
What Remains
The practically ineffable region for organizations is shrinking. Knowledge that was blocked by expressive deficits is yielding to LLM-assisted articulation. Knowledge that was transmitted only through demonstration is yielding to multimodal learning. Knowledge that was locked in individual expertise is yielding to systems that can detect, encode, and deploy patterns at scale.
But a structurally tacit human core remains. Lived experience. Deep value commitments. The felt sense of what matters. The integration of perception, emotion, and action that constitutes skilled human judgment in its fullest form. These are not representational problems waiting for better tools.
More of what organizations have called “tacit” is representationally blocked, rather than structurally inexpressible, than they assume. Those blocks are dissolving. The boundaries of the articulable are moving faster than most leaders recognize, and the organizations that grasp this will capture knowledge their competitors still believe is uncapturable.
The machinist’s feel for tolerance. The sales director’s deal intuition. The nurse’s pattern recognition. Some of this is genuinely human, genuinely irreducible. Much of it is just waiting for the right interlocutor. [See Answers to Critics for why the original “most” was too strong a claim.]