On Diversity as a Cybernetic Necessity

Machines and societies must maintain openness, embrace difference, and preserve the friction that keeps life viable

Eric Prouzet / Unsplash

Harish Jose

Harish’s Notebook

Wed, 12/03/2025 - 12:02

In this article, I want to explore an idea that often is framed in moral terms but is actually a cybernetic imperative: the necessity of diversity for viable systems. Whether we’re talking about societies, organizations, or even artificial intelligence systems, the principle remains consistent. A system that suppresses differences suppresses the very disturbances that give it life.

This insight comes from cybernetics, and it helps us understand why diversity matters beyond moral arguments.

The cybernetic case for diversity

A society’s resilience, and therefore viability, emerges more from difference than agreement. When I think about what makes communities sustainable over time, I keep returning to this basic insight from cybernetics: Without variation, a system can’t absorb disturbance. This is, of course, a simpler rephrasing of Ashby’s Law of Requisite Variety. Without challenge, a system can’t correct itself. Without friction, a system can’t renew its distinctions.
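Ashby's point can be made concrete with a toy sketch (my own illustration, not from the article): a regulator can hold an outcome at its goal only if it commands at least as many responses as there are kinds of disturbance. The outcome model `(d + r) % 4` and goal value 0 are arbitrary assumptions chosen for the demonstration.

```python
def best_outcomes(disturbances, responses):
    """For each disturbance d, the closest-to-goal outcome the regulator
    can force. Toy outcome model: (d + r) % 4, with goal outcome 0."""
    return {d: min((d + r) % 4 for r in responses) for d in disturbances}

disturbances = range(4)

# A regulator whose variety matches the disturbances' variety...
full = best_outcomes(disturbances, responses=range(4))
# ...versus one whose variety has been suppressed.
poor = best_outcomes(disturbances, responses=[0, 1])

print(full)  # every disturbance can be driven to the goal outcome 0
print(poor)  # disturbances 1 and 2 can no longer be regulated away
```

With only two responses, some disturbances land away from the goal no matter what the regulator does: only variety can absorb variety.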

This becomes clearer when we think about information, distinction, and correction. Every observer draws distinctions. Every distinction creates a horizon of what can be noticed next. Every act of understanding sets the conditions for future understanding. For this reason, no observer, no community, and no language can remain viable without exposure to other perspectives. A view from nowhere is an impossibility.

Difference isn’t an obstacle to communication. Difference is what makes communication meaningful. As Gregory Bateson described it, information is the difference that makes the difference.
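Bateson's formula can be given a rough quantitative reading (a sketch of my own, using Shannon entropy as a stand-in): a community whose "voices" are more evenly spread can carry more information per utterance than one dominated by a single voice.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the average amount of difference
    a message drawn from this distribution can carry."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

diverse = [0.25, 0.25, 0.25, 0.25]      # four voices, equally likely
homogeneous = [0.97, 0.01, 0.01, 0.01]  # one voice dominates

print(entropy(diverse))      # 2.0 bits
print(entropy(homogeneous))  # ~0.24 bits
```

When one pattern crowds out the rest, each new message is almost certain in advance, so it makes almost no difference.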

A community that supports only one way of thinking, one way of speaking, or one way of being slowly loses the very conditions that allow it to remain viable. When everyone thinks alike, quality begins to decay. Ideas become smoother but thinner. Creativity doesn’t disappear because people stop trying. It disappears because nothing pushes back. Nothing resists. Nothing surprises.

I’ve written before about von Foerster’s ethical imperative to increase the number of choices. Here, I want to extend that thinking to show why diversity is a condition for viability, not merely a moral preference.

The negotiable space

A society with many ways of speaking has many ways of seeing. It has many ways to reframe a problem, many ways to interpret events, many ways to challenge assumptions, and many ways to correct errors. It’s able to sustain what I call a “negotiable space” in which ideas can be contested, sharpened, and sometimes abandoned.

This continual negotiation is what keeps concepts alive. It’s what makes meaning robust. It’s what makes a collective capable of navigating uncertainty.

The negotiable space is the environment in which language, ideas, and understanding evolve and error correction happens. It’s created not by agreement but by the friction of difference. Human cognition isn’t viable in isolation. It’s viable only when embedded in a world where every utterance is exposed to other minds—resisted, questioned, corrected, or refined.

I see this friction as the medium of viability. When someone challenges your idea, asks for clarification, or pushes back against an assertion, they’re not merely disagreeing. They’re sustaining the recursive loop that keeps understanding alive. Without friction, distinctions decay. Without challenge, knowledge becomes brittle.

A word is never alone. It survives only through the continual friction of conversation. It carries a lineage of previous uses and a horizon of possible future uses. It remains viable only because a social world holds it accountable.

When homogeneity replaces diversity

When we reduce the diversity of perspectives, the negotiable space begins to shrink and close. Without enough difference, language becomes flatter. Categories become rigid. Distinctions become dull. Error correction becomes weak. The collective loses the source of renewal that once made it resilient.

Attempts to homogenize societies have produced similar outcomes throughout history. They create environments that look orderly from the outside but are fragile on the inside. Homogeneity amplifies the illusion of stability while stripping away the mechanisms that produce actual stability. A system without variation becomes a system without resilience. It stops promoting learning and curiosity. It stops promoting error correction. Eventually it stops being able to sustain itself at all.

We see this pattern repeat in several contexts. A social world in which every voice echoes the same pattern begins to collapse inward. Its range of distinctions shrinks. Its ability to adapt weakens. Its capacity to navigate uncertainty fades.

Recursion requires disturbance

In cybernetics, stability isn’t the absence of disturbance. It’s the capacity to absorb disturbance without collapse. This requires variation. It requires the presence of alternatives. It requires a dynamic interplay of perspectives.

A system that eliminates disturbance doesn’t become more stable. It becomes brittle. Without contradiction, the recursive loop of understanding begins to stagnate. Without challenge, the distinctions that support cognition degrade. Without tension, the structures that produce meaning weaken.

Human cognition remains viable because its recursion is continually informed by a social world rich in disagreement. An individual doesn’t refine understanding alone. Understanding is sharpened by exposure to other interpretations. These interpretations emerge from diverse backgrounds, diverse experiences, and diverse cognitive histories.

Now I want to take this train of thought to large language models.

The case of large language models

Large language models (LLMs) are often described as systems that learn from vast amounts of data. But what they learn isn’t raw experience. They learn from the residue of human meaning-making. They learn from language that has already passed through the recursive loops of human correction. They inherit the stability produced by these loops, but they don’t participate in the loops themselves—at least not in the same way.

An artificial intelligence doesn’t inhabit a social world where its utterances are corrected by others. It doesn’t participate in the negotiable space through which language evolves. It doesn’t receive feedback proportional to the scale of its output. It doesn’t face the resistances that keep human cognition aligned with the world. This is an important distinction that leads to interesting outcomes.

A human remains viable because every use of language is exposed to correction. An AI remains unchallenged because its output overwhelms the capacity for correction to flow back.

The collapse of the negotiable space

A living language depends on a balance between output and correction. Human linguistic communities have historically generated meaning at a rate the community can digest. New terms emerge. Old terms fade. Misunderstandings provoke clarification. Disagreements produce refinement.

This equilibrium is now disrupted. The scale of machine-generated text has exceeded the capacity of human communities to critique it. The negotiable space, the space where meaning is contested and corrected, is now flooded. Variations in meaning that once signaled novelty are drowned in statistical smoothness. The system receives too much of its own output and too little balanced resistance.

A system that receives little correction can’t maintain the integrity of its distinctions. It will start to drift. It will begin to feed on its own unchallenged productions. Its range of distinctions therefore shrinks. The recursive loop that once sharpened meaning begins to flatten it.

At first, the effects are subtle. Over time, the trajectory becomes clearer. A structure that can’t renew itself through grounded critique will drift toward diminishing returns. More scale won't resolve this. Faster generation will only accelerate the loss of the very conditions that once made the system appear intelligent.
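The drift described above resembles what machine learning researchers call model collapse. The following toy simulation (my own sketch, not a model of any real training pipeline) shows the mechanism: a distribution repeatedly refit to a smoothed version of its own output loses its spread, generation after generation.

```python
import random
import statistics

random.seed(0)

mu, sigma = 0.0, 1.0   # the original 'human' distribution of meanings
spreads = [sigma]

for generation in range(10):
    # The model emits samples; statistical smoothing keeps only the most
    # typical ones, and the next model is fit to that filtered corpus.
    # No outside correction flows back in.
    samples = [random.gauss(mu, sigma) for _ in range(1000)]
    kept = [x for x in samples if abs(x - mu) <= sigma]
    mu = statistics.mean(kept)
    sigma = statistics.stdev(kept)
    spreads.append(sigma)

print(spreads[0], round(spreads[-1], 4))  # the spread collapses toward zero
```

Each pass discards the atypical tail, so the variance shrinks geometrically; after a few generations almost nothing but the mode survives. The friction of outside correction is exactly what this loop lacks.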

Here, we see what I call the amplification of constraints in action: The model grows in output yet declines in viability. It’s simultaneously expansive and fragile.

The coming burst

All of this seems to indicate that the AI bubble may burst in the near future. The ability of LLMs to be trained fast and to generate fast may become their downfall. Paradoxically, the better the LLM becomes, the faster this downfall may occur. Each improvement accelerates the collapse of the negotiable space. Each refinement increases the volume of uncorrected output flowing back into the system. Each new iteration tightens the closure that limits its future.

This is also a cautionary insight for societies that reject diversity and embrace homogeneity. Any system that narrows its space of variation, whether a community or a computational model, risks collapsing under the weight of its own uniformity.

The burst may come not because the models are weak, but because they’re strong in the wrong direction. They refine themselves into a narrowing corridor. They amplify a recursion that can’t sustain itself. They accelerate toward diminishing returns.

The lesson for ‘systems’ design

Human cognition has survived because it’s recursive from the inside and embedded in a social realm. Artificial intelligence’s recursion is lifeless. This difference matters. A system that doesn’t participate in the social negotiation that gives words their life can’t maintain the vitality of its distinctions. It can’t renew its closure through lived coordination with others. It can only repeat the patterns it has been given.

Large language models are unlikely to become artificial general intelligence while accelerating the very process that undermines their viability. They’re not suited to replace the human capacity for negotiated meaning-making. Their true value lies in augmentation, not imitation. They support human thought. They don’t replace the recursive, socially grounded, diversity-dependent mechanisms that make human thought viable.

Every viable system must remain open to disturbance. The observer must remain open to being surprised. The language community must remain open to contradiction. A system that avoids disturbance doesn’t stabilize; it stagnates.

Final words

The warning is clear for both machines and societies: Maintain openness, embrace difference, and preserve the friction that keeps life viable.
• A system without diversity collapses.
• A recursion without resistance decays.
• A language without a negotiable space drifts into incoherence.

This isn’t merely about being open-minded or tolerant. It’s about understanding the conditions that allow any system (biological, social, or computational) to remain viable over time. Diversity is a cybernetic requirement. Without it, we lose the capacity to correct ourselves, to adapt, and ultimately, to survive.

Always keep learning...

Published Nov. 8, 2025, in Harish Jose’s blog.

Comments

Submitted by William A. Levinson on Wed, 12/03/2025 - 11:17

Danger of Groupthink

General Patton, as I recall, wrote that, if everybody is thinking the same way, nobody is thinking. This reinforces the article's point about the need for diverse viewpoints.

From https://www.rhodeshouse.ox.ac.uk/unlikeminded/neurodiversity-and-the-perils-of-groupthink/: "Groupthink. An invisible force that blew up the Challenger space shuttle, sapped billions of dollars from The Coca-Cola Company as it stumbled through its 'New Coke' disaster in the 1980s, bankrupted Swissair in 2002, and nearly plunged the world into nuclear armageddon in the wake of the Bay of Pigs disaster in 1961."

Another reference cited Pearl Harbor, to the effect that US planners thought either "they can't" or "they wouldn't dare," despite Japan's earlier surprise attack on Russia in 1904 and a US simulation of an air attack on Pearl Harbor. From https://www.worldwariiaviation.org/u-s-navy-exercise-simulated-pearl-harbor-attack-18-months-before-it-happened: "Adm. James O. Richardson, Commander in Chief of the U.S. Fleet, strongly objected, saying the Pacific Fleet would be the prime target of an attack by Japan. He was relieved of command in February 1941, for saying so, but the devastating Pearl Harbor attack ten months later proved him right."

The QD article also points out something I never thought of before. If all AIs think alike, they are subject to groupthink. This is why I will rarely trust an AI to do more than find online references I can read myself. One lawyer made the mistake of trusting an AI to research cases for him, and presented its citations to a judge as precedents, only to find out that the AI had made them up.

 



© 2025 Quality Digest. Copyright on content held by Quality Digest or by individual authors. Contact Quality Digest for reprint information.
“Quality Digest” is a trademark owned by Quality Circle Institute Inc.
