There’s More to AI Bias Than Biased Data, NIST Reports

Rooting out bias in artificial intelligence will require addressing human and systemic biases, too

Published: Wednesday, March 30, 2022 - 12:02

As a step toward improving our ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend widening the scope of where we look for the source of these biases—beyond the machine learning processes and data used to train AI software to the broader societal factors that influence how technology is developed.

The recommendation is a core message of a revised NIST publication, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” which reflects public comments the agency received on its draft version released in summer 2021. As part of a larger effort to support the development of trustworthy and responsible AI, the document offers guidance connected to the AI Risk Management Framework NIST is developing.

According to NIST’s Reva Schwartz, the main distinction between the draft and final versions of the publication is the new emphasis on how bias manifests itself not only in AI algorithms and the data used to train them, but also in the societal context in which AI systems are used.

“Context is everything,” says Schwartz, principal investigator for AI bias and one of the report’s authors. “AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point.”

NIST contributes to the research, standards, and data required to realize the full promise of AI as an enabler of U.S. innovation across industry and economic sectors. Working with the AI community, NIST seeks to identify the technical requirements needed to cultivate trust that AI systems are accurate and reliable, safe and secure, explainable, and free from bias.

There’s more to bias than meets the eye

Bias in AI can harm humans. AI can make decisions that affect whether a person is admitted into a school, authorized for a bank loan, or accepted as a rental applicant. It’s relatively common knowledge that AI systems can exhibit biases that stem from their programming and data sources; for example, machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group. The revised NIST publication acknowledges that while these computational and statistical sources of bias remain highly important, they don’t represent the full picture.
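The report itself is policy guidance and contains no code, but a minimal sketch of the kind of computational bias check described above might look like the following. The dataset, group names, and the simple demographic-parity metric are all hypothetical illustrations, not anything prescribed by NIST.

```python
# Minimal sketch (not from the NIST report): checking a training set for
# group under-representation and for a gap in positive-outcome rates.
# All group names and records below are hypothetical.
from collections import Counter

# Hypothetical training records: (group label, decision; 1 = approved)
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_a", 1), ("group_a", 0), ("group_b", 0), ("group_b", 1),
]

# 1. Representation: how much of the data each group contributes.
counts = Counter(group for group, _ in records)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n / total:.0%} of training data")

# 2. Demographic parity difference: gap in positive-outcome rates
#    between the two groups.
def positive_rate(group):
    outcomes = [decision for g, decision in records if g == group]
    return sum(outcomes) / len(outcomes)

gap = abs(positive_rate("group_a") - positive_rate("group_b"))
print(f"Demographic parity difference: {gap:.2f}")
```

Checks like these address only the computational and statistical sources of bias; as the report argues, they say nothing about the human and systemic biases discussed next.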

A more complete understanding of bias must take into account human and systemic biases, which figure significantly in the new version. Systemic biases result from institutions operating in ways that disadvantage certain social groups, such as discriminating against individuals based on their race. Human biases can relate to how people use data to fill in missing information, such as when a person’s neighborhood of residence influences how likely authorities are to consider that person a crime suspect. When human, systemic, and computational biases combine, they can form a pernicious mixture—especially when explicit guidance is lacking for addressing the risks associated with using AI systems.

“If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology.”
—Reva Schwartz, principal investigator for AI bias

To address these issues, NIST authors make the case for a “socio-technical” approach to mitigating bias in AI. This approach involves a recognition that AI operates in a larger social context—and that purely technically based efforts to solve the problem of bias will come up short.

“Organizations often default to overly technical solutions for AI bias issues,” Schwartz says. “But these approaches do not adequately capture the societal impact of AI systems. The expansion of AI into many aspects of public life requires extending our view to consider AI within the larger social system in which it operates.”

Socio-technical approaches in AI are an emerging area, Schwartz says, and identifying measurement techniques to take these factors into consideration will require a broad set of disciplines and stakeholders.

“It’s important to bring in experts from various fields—not just engineering—and to listen to other organizations and communities about the impact of AI,” she says.

Over the next few months, NIST is planning a series of public workshops aimed at drafting a technical report for addressing AI bias and connecting the report with the AI Risk Management Framework. For more information and to register, visit the AI RMF workshop page.

First published March 16, 2022, on NIST News.


About The Author


NIST

Founded in 1901, the National Institute of Standards and Technology (NIST) is a nonregulatory federal agency within the U.S. Department of Commerce. Headquartered in Gaithersburg, Maryland, NIST’s mission is to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.