I’m further exploring the notion of models and mental models. We often speak of mental models as though they’re neat packages of knowledge stored somewhere in the mind. These models are typically treated as internal blueprints and as simplified representations of the world that help us navigate and make decisions. But what exactly do we mean when we call something a model? And are we always speaking about the same kind of thing?
The term model, in both technical and informal contexts, carries more ambiguity than we often acknowledge. In classical cybernetics, W. Ross Ashby gave the concept a central role. For him, a model was a representation that could simulate the behavior of a system. A good regulator, he argued (in a theorem proved with Roger Conant), must contain a model of the system it seeks to control. This model didn’t need to be a literal image or a complete mirror. It simply needed the right kind of functional correspondence: just enough structure to predict the system’s behavior and act on it.
Ashby’s definition is rigorous and functional. The model need not share the same physical form or medium as the system it regulates. What matters isn’t material resemblance but structural correspondence across selected variables: the model must preserve the relations and transformations that enable viable regulation. Ashby called this kind of correspondence “isomorphism” (and, where the model simplifies the system, “homomorphism”). It doesn’t demand total replication; it requires only that the model preserve those relations necessary for viable control. This is the basic premise of first-order cybernetics.
This isomorphic correspondence is what makes the model useful for regulation. The regulator can manipulate the model, run it forward, test interventions, explore possibilities, and trust that the results will map back to the actual system. The model becomes a kind of structural analog—a way of capturing pattern without requiring material similarity.
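To make this concrete, here’s a minimal sketch of Ashby’s idea in Python. Everything in it is illustrative and mine rather than Ashby’s: a toy room whose temperature drifts toward the outside air, and a regulator whose internal model preserves the same update rule, so that actions tested on the model map back to the room.

```python
# Illustrative sketch of a regulator that "contains a model" of the
# system it controls. The linear dynamics and all names are assumptions
# for the sake of the example, not Ashby's own formalism.

def room_dynamics(temp, heater_power, outside_temp=10.0):
    """The 'real' system: temperature drifts toward the outside air
    and rises with heater power."""
    return temp + 0.1 * (outside_temp - temp) + 0.5 * heater_power

class ModelBasedRegulator:
    """A regulator whose internal model mirrors the structure of the
    system (the same update rule over the relevant variables), not its
    material form."""

    def __init__(self, target):
        self.target = target

    def model(self, temp, heater_power):
        # Structural correspondence with room_dynamics across the
        # variables that matter for regulation; not a copy of the room.
        return temp + 0.1 * (10.0 - temp) + 0.5 * heater_power

    def choose_action(self, temp, actions=(0.0, 1.0, 2.0, 3.0)):
        # Run each candidate action forward on the model and pick the
        # one predicted to land closest to the target.
        return min(actions, key=lambda a: abs(self.model(temp, a) - self.target))

temp = 15.0
regulator = ModelBasedRegulator(target=21.0)
for _ in range(20):
    temp = room_dynamics(temp, regulator.choose_action(temp))
print(f"regulated temperature: {temp:.1f}")  # settles near the target
```

Note what carries the regulatory work here: not any resemblance between regulator and room, but the preserved relation between temperature and heater power.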
When we look deeper, something about this view of models can feel distant. It risks separating the observer from the observed, the knower from the known. It tends toward a view of knowledge that’s separated from lived experience. What does it mean for an organism to contain a model of its world if that organism isn’t a computer but a living, breathing being?
This is where the “Thousand Brains Theory” of intelligence offers a helpful contrast. Jeff Hawkins, in developing his hypothesis, suggests that intelligence arises not from a single unified model of the world but from many partial models working in parallel. Here, however, Hawkins seems to use model in a markedly different sense than Ashby’s isomorphic structures. For Hawkins, a cortical column’s model isn’t a representation that stands apart from experience but a learned pattern of prediction embedded within sensorimotor engagement itself.
Each cortical column builds what Hawkins calls a model of objects in the world. But this model is constituted by the column’s capacity to predict sensory sequences as the body moves through space. The column doesn’t store a picture of a coffee cup. Instead, it develops expectations about what sensations will follow from particular movements when encountering cup-like patterns. Some of these expectations may be visual, some tactile, and others may come from different senses altogether. The model isn’t a static thing but a dynamic process. It’s a way of being attuned to specific sensorimotor regularities.
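A toy sketch may help here, with the caveat that it’s my illustration and not Hawkins’s actual cortical algorithms. The “model” below is nothing but a growing table of expectations about which sensation follows which movement from a given location; the cup, the cues, and the mechanics are all assumptions for the sake of the example.

```python
# A toy sketch (not Hawkins's HTM algorithms) of a model as a learned
# pattern of sensorimotor prediction: the column stores no picture of
# the cup, only expectations about which sensation follows which
# movement from a given location.

class ToyColumn:
    def __init__(self):
        self.expectations = {}  # (location, movement) -> expected sensation
        self.location = 0

    def step(self, movement, sensation):
        key = (self.location, movement)
        predicted = self.expectations.get(key)
        surprise = predicted is not None and predicted != sensation
        self.expectations[key] = sensation  # expectations update with experience
        self.location += movement           # crude stand-in for path integration
        return predicted, surprise

# A finger tracing a (hypothetical) cup rim: the first pass is all
# novelty; on the second, each sensation is anticipated before it arrives.
column = ToyColumn()
rim = [(1, "smooth"), (1, "handle"), (1, "smooth")]
for _ in range(2):
    column.location = 0
    for movement, sensation in rim:
        print(column.step(movement, sensation))

# A violated expectation registers as surprise, not as a failed lookup
# against a stored image.
column.location = 0
print(column.step(1, "rough"))  # -> ('smooth', True)
```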
Although Hawkins retains the term model, his use stretches its meaning. These patterns may not be models in the traditional sense at all. When we say a cortical column builds a model or learns expectations, we may still be trapped in representational thinking. The cortical column doesn’t store information about objects. It maintains patterns of connectivity shaped by experience. These patterns don’t represent the world per se. Instead, they enact a way of being responsive to it. A column’s knowledge of a coffee cup isn’t a stored description, but a readiness to engage with cup-like affordances. This is the key nuance I’d like to offer.
This view of modeling resonates with Martin Heidegger’s phenomenological understanding of being-in-the-world. Heidegger once noted that a hammer isn’t first known through its shape or composition, but through its use. It becomes present to us as ready-to-hand, as something we know by doing. Similarly, a cortical column knows an object by interacting with it, not by storing a detached image of it. As Heinz von Foerster once said, “If you want to see, learn how to act.”
In earlier reflections, I explored the limitations of treating mental models as internal representations. When we interact with a system or object, we’re not retrieving stored pictures. Instead, we’re drawing upon a history of lived engagement. Our orientation isn’t merely cognitive, but bodily and situated. The notion of a model here becomes something that reveals itself through action, not inspection.
The Thousand Brains Theory reinforces this idea by showing how perception and prediction are distributed. A single cortical column may know only part of an object in a single sensory dimension, but through movement and integration with other columns, it participates in a kind of collective intelligence. There’s no master map, only partial perspectives constantly updating and coordinating with one another. The columns aren’t comparing models. They’re participating in a dynamic process of mutual constraint and coordination.
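Here’s one way to picture that coordination in code. It’s a deliberately crude sketch of my own, not the voting mechanism Hawkins describes: each hypothetical column contributes only the set of objects compatible with its partial evidence, and a consensus emerges from mutual constraint rather than from any column holding a complete model.

```python
# Illustrative sketch of "no master map, only partial perspectives":
# consensus emerges by intersecting each column's partial hypotheses.
# The cues and object sets below are toy assumptions.

compatible = {
    "curved-surface": {"cup", "bowl", "ball"},
    "handle":         {"cup", "pitcher"},
    "flat-bottom":    {"cup", "bowl", "pitcher"},
}

def column_hypotheses(cue):
    """Each column knows only one sensory dimension of the object."""
    return compatible[cue]

cues_from_three_columns = ["curved-surface", "handle", "flat-bottom"]

# Mutual constraint: every column's partial view narrows the shared set.
consensus = set.intersection(
    *(column_hypotheses(cue) for cue in cues_from_three_columns)
)
print(consensus)  # -> {'cup'}
```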
This is what Humberto Maturana and Francisco Varela would recognize as structural coupling. Each column’s activity is shaped by its coupling with other columns, with the body, and with the environment. The result is a network of mutual specification, rather than a collection of independent representations.
Intelligence, in this view, emerges not from the integration of discrete models but from the ongoing attunement of multiple sensorimotor streams. This attunement is guided not by accuracy but by viability: the organism’s capacity to maintain its structure and continue its pattern of living. It’s a common misconception that accuracy directly translates into viability. The external world presents more complexity than any cognitive system can represent in full. The response, shaped by both constraint and energetic efficiency, isn’t to build exhaustive models but to maintain abstractions that are good enough. These aren’t symbolic summaries but embodied dispositions formed through recurrent interaction.
This isn’t a flaw but a feature of adaptive beings. Cognitive structures aren’t designed to capture the world exhaustively but to filter it selectively. The principle of structural coupling rests on repetition. It rests on the organism’s ability to reinforce useful patterns over time. What endures aren’t accurate representations but habits of orientation that have proven viable. Cortical columns don’t construct truthful depictions of the world. They cultivate ways of engaging that preserve continuity and coherence within the organism’s domain of living.
This stands in contrast to the classical view where the model is assumed to be singular, coherent, and representational. The model isn’t something we hold apart from the world but something we become a part of through interaction with it.* This framing aligns with the constructivist view that organisms are informationally closed. An organism doesn’t passively receive information from an objective world. It brings forth a world through its own structural coupling.
What we call a model, then, isn’t a mirror of external reality but a structure of engagement, a dynamic fit between the organism and its environment.
The language of structure is important. Rather than thinking of models as things organisms have, we might think of them as patterns organisms are. A cortical column’s responsiveness to a coffee cup isn’t something it possesses but something it enacts. The pattern of connectivity isn’t a representation of the cup, but a way of being coupled to the cup’s affordances. Whether we call these models, structures of prediction, or patterns of skilled engagement, what unites them is that they’re not static descriptions. They’re emergent dispositions formed through repeated interaction. Each term foregrounds a different aspect, such as structure, process, or habit. However, they all point to intelligence as enacted rather than mirrored.
This isn’t to dismiss Ashby’s insight. His use of the term model was never about mirroring for its own sake. It was about enabling viable regulation and constructing just enough structure to explain and act. Perhaps it’s more accurate to think of such models as habits of expectation. They’re not representations but anticipations. They don’t describe the world as it is but orient us toward what’s likely to come. They’re pragmatic, situated, and always in motion.
Or perhaps the term model itself is too burdened. What we call a model may be better understood as a form of skilled attunement. It becomes a pattern of responsiveness that’s cultivated through history, shaped by constraints, and sustained by viability. The cortical column doesn’t model the coffee cup. It simply becomes responsive to it.
This reframing opens up deeper questions. If intelligence isn’t the construction of better representations but the cultivation of more viable engagements, what does this mean for artificial intelligence? Can machines learn to be responsive rather than simply predictive? Can they participate in the world, rather than map it?
The Thousand Brains Theory, interpreted through the lens of structural coupling and lived engagement, suggests that intelligence emerges not from central models but from richly distributed interactions. It implies that robust intelligence doesn’t require more accurate representations but more diverse ways of being coupled to the world.
To model, in this deeper sense, is to engage. It is to live into a world that reveals itself not all at once, but gradually through action, adjustment, and care. Perhaps, the real power of what we call a model may not lie in what it represents but in what it enables us to do—or more accurately, in what it allows us to become.
Final words
This shift from models as internal representations to models as patterns of skilled engagement challenges deeply held assumptions about knowledge, cognition, and intelligence. It’s not merely a technical redefinition; it’s a philosophical turning. If cognition isn’t about mirroring the world but about maintaining a viable relation to it, then intelligence becomes a matter of fitting rather than mapping. It’s not about what we store, but about how we respond. Even this article isn’t free of modeling. It draws distinctions, frames structures, and builds conceptual pathways. But it does so with an orientation toward viability, not toward finality.
The second-order reflexive nature of this inquiry (modeling the limits of models) underscores the point. Intelligence isn’t found in having the final answer, but in remaining open to reframing, recoupling, and reengaging as the world shifts around us.
This reframing also casts new light on the ambitions of artificial intelligence. If intelligence isn’t the construction of better representations but the cultivation of more viable engagements, then it becomes clear that AI systems, as currently conceived, may be fundamentally limited. The limitation isn’t merely technical; it’s existential. Intelligence, in this deeper sense, emerges from embodied interaction, historical coupling, and recursive responsiveness to a world that matters. Machines that manipulate symbols or detect statistical regularities may approximate aspects of intelligent behavior, but they remain ungrounded in the affective, bodily, and experiential dynamics that make living cognition what it is. Responsiveness isn’t a product of prediction alone. It emerges from vulnerability, concern, and the need to maintain coherence amid complexity.
Without changes in their environment shaping how they persist, machines may simulate participation, but they don’t truly engage. They act without inhabiting; they process without perspective. Perhaps this is one of the main reasons artificial intelligence may fall short of achieving sentience: it relies on static, internal representations and lacks the embodied, experiential living necessary for understanding, concern, or care. Without lived coupling, there may be behavior but not presence.
While navigating complexity, my hope is that this reframing offers both humility and hope: humility, because it reminds us that our understanding is always partial and situated; hope, because it suggests that intelligence isn’t a fixed capacity but a living process, co-created and transformed through our engagements with the world and with each other in the social realm.
I’ll finish with an excellent quote from Ezequiel Di Paolo, Marieke Rohde, and Hanne De Jaegher:
“Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems are simply not in the business of accessing their world in order to build accurate pictures of it. They participate in the generation of meaning through their bodies and action, often engaging in transformational and not merely informational interactions; they enact a world.”
* Hat tip to Heinz von Foerster’s wonderful quote: “Am I apart from the universe, or am I a part of the universe?”