{domain:"www.qualitydigest.com",server:"169.47.211.87"} Skip to main content

User account menu
Main navigation
  • Topics
    • Customer Care
    • FDA Compliance
    • Healthcare
    • Innovation
    • Lean
    • Management
    • Metrology
    • Operations
    • Risk Management
    • Six Sigma
    • Standards
    • Statistics
    • Supply Chain
    • Sustainability
    • Training
  • Videos/Webinars
    • All videos
    • Product Demos
    • Webinars
  • Advertise
    • Advertise
    • Submit B2B Press Release
    • Write for us
  • Metrology Hub
  • Training
  • Subscribe
  • Log in
Mobile Menu
  • Home
  • Topics
    • 3D Metrology-CMSC
    • Customer Care
    • FDA Compliance
    • Healthcare
    • Innovation
    • Lean
    • Management
    • Metrology
    • Operations
    • Risk Management
    • Six Sigma
    • Standards
    • Statistics
    • Supply Chain
    • Sustainability
    • Training
  • Login / Subscribe
  • More...
    • All Features
    • All News
    • All Videos
    • Contact
    • Training

How to Rein in the AI Threat?

Set the lawyers loose

Gleb Tsipursky
Wed, 05/03/2023 - 12:03

According to a recent Monmouth University poll, 55 percent of Americans are worried by the threat of artificial intelligence (AI) to the future of humanity. In an era when technological advancements are accelerating at breakneck speed, it's crucial to ensure that AI development is appropriately monitored. As AI-powered chatbots like ChatGPT become integrated into our daily lives, it's high time we addressed the potential legal and ethical implications of the technology.

Some have done so. A recent letter signed by Elon Musk, who co-founded OpenAI; Steve Wozniak, the co-founder of Apple; and more than 1,000 other AI experts and funders calls for a six-month pause in training new models. In turn, Time published an article by Eliezer Yudkowsky, the founder of the field of AI alignment, calling for a much more hard-line solution: a permanent global ban and international sanctions on any country pursuing AI research.

However, the problem with these proposals is that they require coordinating numerous stakeholders across a wide range of companies and government figures. Let me share a more modest proposal that's much more in line with our existing methods of reining in potentially threatening developments: legal liability.

 …


Comments

Submitted by Anonymous (not verified) on Wed, 05/03/2023 - 09:38

Is AI an entity, or a tool?

If I take the viewpoint that ChatGPT (as a primary example of a larger whole) is a tool and not a specific entity, then I don't put blame on the tool for the job it does; I would blame the controlling entity. Would that entity be the AI developer? One would not sue Craftsman for a screwdriver improperly setting a nail.

So holding AI developers responsible for hate speech (again, as a prime example of the whole) seems the wrong way, or at least not the most correct way, to proceed.

As for your example of loud music bothering neighbors: we don't sue the musician who recorded the album, nor the record maker or the jukebox manufacturer (perhaps I'm out of date, eh?). If volume is the problem, the responsibility sits with whoever controls the knob.

I can see the need for establishing some robust governance, though, before the big suits start coming in. I don't know that slowing down AI development will get that establishment rolling in advance. More likely, the establishment of governance will slow as well and end up in the same trouble, only with slower tech advancement.

Submitted by William A. Levinson on Thu, 05/04/2023 - 10:55

The user should be ultimately responsible

I think the user should be ultimately responsible for what the AI does. If, for example, the AI writes something false about a competitor's product in response to a prompt to write a comparison between one's own product and the competitor's, the user should have read what the AI generated and removed the false information. I would not sign my name to something written by another person (or a computer) without reading it.

I've seen the excuse "the computer did it" applied to moderation of social media sites as well; e.g., the software removed somebody's post because it determined, falsely, that it violated the platform's policies. This is not an excuse. If one programs a computer to do something, one is responsible for what it does.


