
On Diversity as a Cybernetic Necessity

Machines and societies must maintain openness, embrace difference, and preserve the friction that keeps life viable

Photo credit: Eric Prouzet / Unsplash

Harish Jose

Harish’s Notebook

Wed, 12/03/2025 - 12:02

In this article, I want to explore an idea that is often framed in moral terms but is actually a cybernetic imperative: the necessity of diversity for viable systems. Whether we’re talking about societies, organizations, or even artificial intelligence systems, the principle remains consistent. A system that suppresses differences suppresses the very disturbances that give it life.


This insight comes from cybernetics, and it helps us understand why diversity matters beyond moral arguments.

The cybernetic case for diversity

A society’s resilience, and therefore viability, emerges more from difference than agreement. When I think about what makes communities sustainable over time, I keep returning to this basic insight from cybernetics: Without variation, a system can’t absorb disturbance. This is, of course, a simpler rephrasing of Ashby’s Law of Requisite Variety. Without challenge, a system can’t correct itself. Without friction, a system can’t renew its distinctions.
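
Ashby’s law says, in effect, that only variety can absorb variety: a regulator can hold a system steady only if it commands at least as many distinct responses as the environment has distinct disturbances. A toy simulation can make this concrete. The sketch below is purely illustrative and rests on an assumed setup of my own choosing (ten possible disturbances, each neutralized only by its one matching response); it is not Ashby’s formalism, but it shows the consequence of a variety shortfall.

import random

# Illustrative (assumed) setup: each disturbance d is neutralized only by the
# matching response r == d. A regulator whose repertoire is smaller than the
# set of disturbances must therefore leave some disturbances uncorrected.

DISTURBANCES = list(range(10))  # ten distinct ways the environment can push the system

def survival_rate(repertoire_size: int, trials: int = 10_000) -> float:
    """Fraction of random disturbances the regulator absorbs, given its repertoire size."""
    repertoire = set(range(repertoire_size))  # responses the regulator can actually produce
    absorbed = 0
    for _ in range(trials):
        d = random.choice(DISTURBANCES)
        if d in repertoire:  # only the matching response cancels the disturbance
            absorbed += 1
    return absorbed / trials

for size in (3, 7, 10):
    print(f"regulator variety {size:>2} of 10 -> ~{survival_rate(size):.0%} of disturbances absorbed")

Running it with repertoires of 3, 7, and 10 responses yields absorption rates of roughly 30%, 70%, and 100%: the shortfall in variety shows up directly as uncorrected disturbance.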

 …


Comments

Submitted by William A. Levinson on Wed, 12/03/2025 - 11:17

Danger of Groupthink

General Patton, as I recall, wrote that if everybody is thinking the same way, nobody is thinking. This reinforces the article's point about the need for diverse viewpoints.

https://www.rhodeshouse.ox.ac.uk/unlikeminded/neurodiversity-and-the-perils-of-groupthink/ : "Groupthink. An invisible force that blew up the Challenger space shuttle, sapped billions of dollars from The Coca-Cola Company as it stumbled through its “New Coke” disaster in the 1980’s, bankrupted Swissair in 2002, and nearly plunged the world into nuclear armageddon in the wake of the Bay of Pigs disaster in 1961."

Another reference cited Pearl Harbor, to the effect that US planners thought either "they can't" or "they wouldn't dare," despite Japan's earlier surprise attack on Russia in 1904 and a US exercise that simulated an air attack on Pearl Harbor. https://www.worldwariiaviation.org/u-s-navy-exercise-simulated-pearl-harbor-attack-18-months-before-it-happened : "Adm. James O. Richardson, Commander in Chief of the U.S. Fleet, strongly objected, saying the Pacific Fleet would be the prime target of an attack by Japan. He was relieved of command in February 1941, for saying so, but the devastating Pearl Harbor attack ten months later proved him right."

The QD article also points out something I never thought of before: If all AIs think alike, they are subject to groupthink. This is why I will rarely trust an AI to do more than find online references I can read myself. One lawyer made the mistake of trusting an AI to research cases for him and presented them to a judge as precedents, only to find out that the AI had made them up.

 



© 2026 Quality Digest. Copyright on content held by Quality Digest or by individual authors. Contact Quality Digest for reprint information.
"Quality Digest" is a trademark owned by Quality Circle Institute Inc.
