Clifton B. Parker

Exploring the Human Side of Artificial Intelligence

‘We need a deeper debate about which tasks need to stay in the hands of the public, and out of the market.’

Published: Thursday, December 12, 2019 - 13:02

An underlying theme emerged from the Stanford Institute for Human-Centered Artificial Intelligence’s (HAI) fall conference: Artificial intelligence (AI) must be truly beneficial for humanity and not undermine people in a cold calculus of efficiency.

Titled “AI Ethics, Policy, and Governance,” the event brought together more than 900 people from academia, industry, civil society, and government to discuss the future of AI (that is, automated computer systems able to perform tasks that normally require human intelligence).

Discussions at the conference highlighted how companies, governments, and people around the world are grappling with AI’s ethical, policy, and governance implications.

Expanding human experience

Susan Athey, the Economics of Technology Professor at Stanford Graduate School of Business and faculty associate director at Stanford HAI, spoke about AI’s impact on the economy. It’s critical, she said, that AI create shared prosperity and expand, rather than replace, the human experience in life and at work. Humans, after all, understand things in ways that may be difficult to codify in AI. How we organize and think about the future of work for both people and machines matters, she added, because the two are interconnected.

“The real benefits of AI come when we dive into the applications and understand the entire vertical, everything through implementation, including the ethics and the feelings of people adopting it,” Athey said.

Erik Brynjolfsson, director of the Initiative on the Digital Economy at MIT, said companies building AI need to focus on the human side in addition to the eye-popping technology. “We need to understand first what our values are so we can understand how best to use these technologies.” He said it’s necessary to rethink entire organizational and business processes in terms of how AI fits in with human culture.

Other panelists discussed the roles of public entities and private enterprise when it comes to regulating AI.

Eric Schmidt, the former CEO of Google and technical advisor to Alphabet Inc., spoke with Marietje Schaake, a Dutch member of the European Parliament who played a role in the European Union’s regulation of big tech and is the Stanford Cyber Policy Center’s international policy director.

Schmidt noted that ethics matter in how a human decision is combined with an AI decision and said that “liberal, Western values” are important to support at a time when countries like China are using AI technology to repress and surveil their own people. “We want to make sure the systems we’re building are built on our values, human values,” he said.

Schaake urged policymakers worldwide to take a citizen-oriented approach to AI policies and regulations rather than follow a more corporate, user-oriented framework. She advocated greater regulation of how tech companies use big data and stronger privacy protections for individuals, and argued that AI regulation should happen sooner rather than later.

“We need a deeper debate about which tasks need to stay in the hands of the public and out of the market,” she said.

Ethics, geopolitics, and diversity

Reid Hoffman, co-founder of LinkedIn, talked about his concept of “blitzscaling,” a set of techniques learned at Silicon Valley companies for developing innovations quickly. In AI, Hoffman said, that kind of rapid scaling must be paired with a sense of ethics and responsibility.

For example, when fast-growing companies plan for the future and quickly build up their engineering or sales capabilities, they also need to anticipate risk and what could go wrong on the road ahead. This means hiring people who understand risk and ethics, and developing a company risk framework grounded in those values.

In the area of healthcare and disease, DJ Patil, the head of technology for Devoted Health, noted how AI holds tremendous promise for treating people and saving lives: “We need to go at maximum warp speed to help those people.” The challenge is how to bring those cures and treatments to market quickly while also adhering to the necessary healthcare safeguards and ethical sensibilities.

Patil also called for more cooperation on data sharing around the world. “We have climate change, the potential for pandemics. What we need is better international frameworks, treaty mechanisms to share data across regional lines so that we can actually work on human problems.”

AI and national security

In an AI and geopolitics breakout session, led by Amy Zegart, a senior fellow at the Freeman Spogli Institute for International Studies and at the Hoover Institution, panelists analyzed the nature of artificial intelligence; its role in national security, intelligence, and safety systems; and how it may affect strategic stability—or instability.

On the latter, Colin H. Kahl, co-director of Stanford’s Center for International Security and Cooperation, raised concerns about whether AI would increase economic tensions among the world’s most powerful nations and alter the global military balance of power if some countries move ahead quickly on AI while others fall behind. Another concern he mentioned was the possibility of someone using AI-enabled cyberweapons against nuclear command-and-control centers.

Zegart added that machine learning can help lighten the cognitive load when intelligence specialists are analyzing and sifting through data, which today is being produced at an accelerated rate. The challenge is organizational, as bureaucracies are slow to adopt game-changing technology.

First published Nov. 7, 2019, on Insights by Stanford Business.

About The Author

Clifton B. Parker

Clifton Blue Parker is a writer, content strategist, and senior consultant at Clifton B. Parker Media in San Carlos, California. Parker served five years as director of communications at Stanford University and 12 years as the senior public information representative at UC Davis.