Exploring the Human Side of Artificial Intelligence
Clifton B. Parker
Published: Thursday, December 12, 2019 - 13:02

An underlying theme emerged from the Stanford Institute for Human-Centered Artificial Intelligence’s fall conference: Artificial intelligence (AI) must be truly beneficial for humanity and not undermine people in a cold calculus of efficiency.

Titled “AI Ethics, Policy, and Governance,” the event brought together more than 900 people from academia, industry, civil society, and government to discuss the future of AI, or automated computer systems able to perform tasks that normally require human intelligence. Discussions at the conference highlighted how companies, governments, and people around the world are grappling with AI’s ethical, policy, and governance implications.

Expanding human experience

Susan Athey, the Economics of Technology Professor at Stanford Graduate School of Business and faculty associate director at Stanford HAI, spoke about AI’s impact on the economy. It’s critical, she said, that AI creates shared prosperity and expands, rather than replaces, the human experience in life and at work. Humans, after all, understand things in a way that may be difficult to codify in AI. How we organize and think about the future of work for people as well as machines is important, as it is all interconnected, she added.

“The real benefits of AI come when we dive into the applications and understand the entire vertical, everything through implementation, including the ethics and the feelings of people adopting it,” Athey said.

Erik Brynjolfsson, director of the Initiative on the Digital Economy at MIT, said companies building AI need to focus on the human side in addition to the eye-popping technology. “We need to understand first what our values are so we can understand how best to use these technologies.” He said it’s necessary to rethink the whole organizational and business process in terms of how AI fits in with the human culture.

Ethics, geopolitics, and diversity

Other panelists discussed the roles of public entities and private enterprise when it comes to regulating AI. Eric Schmidt, the former CEO of Google and technical advisor to Alphabet Inc., spoke with Marietje Schaake, a Dutch member of the European Parliament who played a role in the European Union’s regulation of big tech and is the Stanford Cyber Policy Center’s international policy director.

Schmidt noted that ethics matter in how a human decision is combined with an AI decision, and said that “liberal, Western values” are important to support at a time when countries like China are using AI technology to repress and surveil their own people. “We want to make sure the systems we’re building are built on our values, human values,” he said.

Schaake urged policymakers worldwide to take a citizen-oriented approach to AI policies and regulations rather than follow a more corporate, user-oriented framework. She advocated greater regulation of how tech companies use big data, stronger privacy protections for individuals, and regulation that happens sooner rather than later in the case of AI. “We need a deeper debate about which tasks need to stay in the hands of the public and out of the market,” she said.

Reid Hoffman, co-founder of LinkedIn, talked about his concept of “blitzscaling,” a set of techniques learned at Silicon Valley companies to develop innovations quickly. Hoffman said this should happen in AI simultaneously with a sense of ethics and responsibility.
For example, when fast-growing companies plan for the future and quickly build up their engineering or sales capabilities, they also need to anticipate risk and what could go wrong on the road ahead. This means hiring people who understand risk and ethics, and developing a risk framework for the company that is combined with a sense of ethics.

In the area of healthcare and disease, DJ Patil, the head of technology for Devoted Health, noted how AI holds tremendous promise for treating people and saving lives: “We need to go at maximum warp speed to help those people.” The challenge is how to bring those cures and treatments to market quickly while also adhering to the necessary healthcare safeguards and ethical sensibilities.

Patil also called for more cooperation on data sharing around the world. “We have climate change, the potential for pandemics. What we need is better international frameworks, treaty mechanisms to share data across regional lines so that we can actually work on human problems.”

AI and national security

In an AI and geopolitics breakout session, led by Amy Zegart, a senior fellow at the Freeman Spogli Institute for International Studies and at the Hoover Institution, panelists analyzed the nature of artificial intelligence; its role in national security, intelligence, and safety systems; and how it may affect strategic stability, or instability.

On the latter point, Colin H. Kahl, co-director of Stanford’s Center for International Security and Cooperation, raised concerns about whether AI would increase economic tensions among the world’s most powerful nations and alter the global military balance of power if some countries move ahead quickly on AI while others fall behind. Another concern he mentioned was the possibility of someone using AI-enabled cyberweapons against nuclear command-and-control centers.

Zegart added that machine learning can help lighten the cognitive load when intelligence specialists are analyzing and sifting through data, which today is being produced at an accelerated rate. The challenge is organizational, as bureaucracies are slow to adopt game-changing technology.

First published Nov. 7, 2019, on Insights by Stanford Business.
About The Author
Clifton B. Parker

Clifton Blue Parker is a writer, content strategist, and senior consultant at Clifton B. Parker Media in San Carlos, California. Parker served 5 years as director of communications at Stanford University and 12 years as the senior public information representative at UC Davis.
© 2023 Quality Digest. Copyright on content held by Quality Digest or by individual authors. Contact Quality Digest for reprint information.
“Quality Digest” is a trademark owned by Quality Circle Institute, Inc.