Artificial Intelligence: Rewards, Risks, and Regulation

Building the future of AI is all about asking the right questions

Published: Monday, September 11, 2023 - 12:02

In everyday life, the most common conversation about artificial intelligence (AI) goes along the lines of, “I used ChatGPT, and it did x.” Corporate leaders, governments, and international organizations, however, are having a very different conversation. Theirs is about how AI’s benefits can be realized in ways that outweigh its risks.

Some argue that we urgently need to regulate AI. Others compare AI to the nuclear movement, and some even warn that it will end the world. At the same time, many consultants and startups would have us believe that AI is the cure-all for our commercial and personal ills, including love, life, and lethality.

It’s too early to draw conclusions, but it’s important that the right people are having the right conversations. Only then can this groundbreaking technology support and empower humankind.

Asking the right questions

There are many conversations about AI that we should be having but aren’t. These include the broader societal implications of accelerating inequality and of reducing people to data points, to the point where they can be dismissed as redundant or no longer valuable. Every development in science throughout history has carried both benefits and risks. In fact, historical failures can teach us lessons that help us avoid repeating the same mistakes. Although distinct in some ways, AI poses many of the same potential pitfalls as previous paradigm shifts. Overpromising, underplaying risk, and commercial interests swaying the conversation aren’t new. So what is new? And why should we care?

Much of what we’re talking about is old. Language models have been around since the 1960s, when Joseph Weizenbaum created one of the first chatbots, Eliza, and wrote about the magical thinking that surrounds such systems. More recently, data science communities themselves began raising concerns about some proposed uses of GPT-2, including automated sentencing, potentially to the death penalty, without human intervention. Although the tech is now supercharged on larger datasets, many of the old issues remain. What is new is the speed and scale of these models, and where their data are coming from.

Governance

The good news is, a whole governance toolbox exists already. This includes international and national legislation around intellectual property, corporate behavior, human rights, discrimination, contracts, and privacy—just to name a few. Many experts around the world, such as Edward Santow, have long advocated for the upskilling of lawyers so they can understand and apply both existing legislation and new technologies within their profession.

In parallel with legislation, however, more regulation should also be considered. Some regulatory frameworks are already in place, such as the recently formulated EU AI Act, the National Institute of Standards and Technology’s AI Risk Management Framework, and China’s new policy on AI. But some need updating or revision, and there are gaps. And where there are gaps, we should regulate.

Mitigating risk, maximizing reward

The fact is that we can’t think about AI risks along conventional lines. Andrew Maynard, professor at Arizona State University and a longstanding expert on risk, stands firm on this: Traditional thinking just doesn’t “get us to where we need to be.”

International Standards, like those developed by ISO/IEC JTC 1/SC 42 on AI management, will help to bridge these gaps in regulation. They empower decision makers and policymakers to create consistent data and processes in a way that is auditable. This will add long-term value to businesses in many ways, including environmental reporting, operability, and credibility with stakeholders. This approach will ensure that rewards outweigh risks, in line with regulations and other governance tools.

Data ethics also has a role to play. Applied properly, it can help foster a desire, from leadership decisions to everyday tasks, to “do things not just because you can, but because you should.”

But most important, International Standards can ensure that the right conversations are being had by the right people using a shared language. It may take time to build the regulatory tools and the culture we need. But International Standards can help ensure we strike the right balance of risk and reward.

Published Aug. 21, 2023, in ISO News.

About The Authors

Kobi Leins

Kobi Leins is a global expert in AI, international law, and governance. A researcher in digital ethics, prolific speaker, and author of numerous publications, she has played a pivotal role in advancing AI understanding. Her work bridges innovation and real-world applications, making complex concepts accessible to a diverse audience.

ISO

The International Organization for Standardization (ISO) is the world’s largest developer and publisher of international standards. ISO is a network of the national standards institutes of 162 countries, one member per country, with a Central Secretariat in Geneva, Switzerland, that coordinates the system. ISO is a nongovernmental organization that forms a bridge between the public and private sectors. ISO enables a consensus to be reached on solutions that meet both the requirements of business and the broader needs of society. View the ISO Standards list.