
Christopher Dancy


‘AI Bill of Rights’: Five Principles to Make Artificial Intelligence Safer, Less Discriminatory

Not binding, not enforceable, but a step in the right direction

Published: Tuesday, November 22, 2022 - 12:02

Despite the important and ever-increasing role of artificial intelligence in many parts of modern society, there is very little policy or regulation governing the development and use of AI systems in the United States. Tech companies have largely been left to regulate themselves in this arena, potentially leading to decisions and situations that have garnered criticism.

Google fired an employee who publicly raised concerns over how a certain type of AI can contribute to environmental and social problems. Other AI companies have developed products used by organizations such as the Los Angeles Police Department, where the tools have been shown to bolster existing racially biased policing practices.

There are some government recommendations and guidance regarding AI use. But in early October 2022, the White House Office of Science and Technology Policy added to federal guidance in a big way by releasing the Blueprint for an AI Bill of Rights.

The Office of Science and Technology Policy says that the protections outlined in the document should be applied to all automated systems. The blueprint spells out “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” The hope is that this document can act as a guide to help prevent AI systems from limiting the rights of U.S. residents.

As a computer scientist who studies the ways people interact with AI systems—and in particular how anti-Blackness mediates those interactions—I find this guide a step in the right direction, even though it has some holes and is not enforceable.

Improving systems for all

The first two principles aim to address the safety and effectiveness of AI systems as well as the major risk of AI furthering discrimination.

To improve the safety and effectiveness of AI, the first principle suggests that AI systems should be developed not only by experts but also with direct input from the people and communities who will use and be affected by the systems. Exploited and marginalized communities are often left to deal with the consequences of AI systems without having much say in their development. Research has shown that direct and genuine community involvement in the development process is important for deploying technologies that have a positive and lasting effect on those communities.

The second principle focuses on the known problem of algorithmic discrimination within AI systems. A well-known example of this problem is how mortgage approval algorithms discriminate against minorities. The document asks companies to develop AI systems that don’t treat people differently based on their race, sex, or other protected class status. It suggests companies employ tools such as equity assessments that can help assess how an AI system may affect members of exploited and marginalized communities.

These first two principles address big issues of bias and fairness found in AI development and use.

Privacy, transparency, and control

The final three principles outline ways to give people more control when interacting with AI systems.

The third principle is on data privacy. It seeks to ensure that people have more say over how their data are used and are protected from abusive data practices. This section aims to address situations where, for example, companies use deceptive design to manipulate users into giving away their data. The blueprint calls for practices like not taking a person’s data unless they consent to it, and asking in a way that is understandable to that person.

The next principle focuses on “notice and explanation.” It highlights the importance of transparency: People should know how an AI system is being used, as well as the ways in which AI contributes to outcomes that might affect them. Take, for example, the New York City Administration for Children’s Services. Research has shown that the agency uses outsourced AI systems to predict child maltreatment, systems that most people don’t realize are being used, even when they are being investigated.

In this example the AI Bill of Rights provides a guideline that people in New York who are affected by the AI systems in use should be notified that AI was involved and have access to an explanation of what the AI did. Research has shown that building transparency into AI systems can reduce the risk of errors or misuse.

The last principle of the AI Bill of Rights outlines a framework for human alternatives, consideration, and feedback. The section specifies that people should be able to opt out of the use of AI or other automated systems in favor of a human alternative where reasonable.

As an example of how these last two principles might work together, take the case of someone applying for a mortgage. They would be informed if an AI algorithm was used to evaluate their application, and could opt out of that AI review in favor of an actual person.

Smart guidelines, no enforceability

The five principles laid out in the AI Bill of Rights address many of the issues scholars have raised about the design and use of AI. Nonetheless, this is a nonbinding document and not currently enforceable.

It may be too much to hope that industry and government agencies will put these ideas to use in the exact ways the White House urges. If the ongoing regulatory battle over data privacy offers any guidance, tech companies will continue to push for self-regulation.

One other issue that I see within the AI Bill of Rights is that it fails to directly call out systems of oppression—like racism or sexism—and how they can influence the use and development of AI. For example, studies have shown that inaccurate assumptions built into AI algorithms used in healthcare have led to worse care for Black patients. I have argued that anti-Black racism should be directly addressed when developing AI systems. While the AI Bill of Rights addresses ideas of bias and fairness, the lack of focus on systems of oppression is a notable hole and a known issue within AI development.

Despite these shortcomings, this blueprint could be a positive step toward better AI systems and maybe the first step toward regulation. A document such as this one, even if not policy, can be a powerful reference for people advocating for changes in the way an organization develops and uses AI systems.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


About The Author

Christopher Dancy

Christopher Dancy is an associate professor of industrial and manufacturing engineering, and computer science and engineering, at Pennsylvania State University.