AI: A World of New Opportunity and Risk

A new toolkit for C-suite execs to responsibly adopt artificial intelligence

Published: Wednesday, February 9, 2022 - 13:01

This isn’t a new story: A novel technology disrupts society, bringing with it many benefits but also major risks and costs. We saw it during the Industrial Revolution, which vastly improved the average living standard but also led to poor labor conditions and environmental degradation, all within a timeline that was difficult to foresee.

And here we are now at the dawn of the AI revolution. This time, cloud computing, abundant processing power, cheap storage, and new algorithms, along with new product and service innovations, are poised to bring about driverless cars, virtual reality, AI-enabled medical diagnostics, and predictive machine maintenance.

In tandem with the positive technological breakthroughs, however, we also see some negative, often unintended consequences of these technologies. They run the gamut from fake news and algorithms that favor the incendiary and divisive over the factual, to major privacy breaches and AI models that discriminate against minority groups or even cost human lives.

AI is a powerful tool, and it’s never been more important for C-suite executives to understand both how to leverage it for growth and innovation, and how to do so responsibly and ethically. They must understand the long-term impact—both positive and negative—of the algorithms they build and deploy. It’s by no means a charted path; success is as much about asking the right questions, keeping an open mind, and being aware of the key issues at stake, as it is about finding the “right” answers.

The World Economic Forum (WEF), with supporting research from INSEAD’s Hoffmann Global Institute for Business and Society, has created a guide for C-suite executives who are committed to adopting AI technologies effectively and responsibly. This guide consists of questions that executives should be asking themselves as they build their AI capabilities. It also offers some possible answers to these complex issues.

Building an effective AI capability

Building an AI capability that delivers business value is a challenge in its own right. Gartner predicts that 80 percent of analytic insights will fail to deliver business value at scale in 2022. It’s tempting for executives to believe that AI will magically deliver new revenue streams or efficiency gains, but the truth is that AI initiatives should undergo the same rigorous business planning as any other project.

An AI initiative should, first and foremost, align with the organization’s key strategic goals and directly contribute to moving the key performance indicators (KPIs) that buttress this strategy. In other words, executives must “know the why” for AI initiatives. An iterative development that starts with simple, explainable models is recommended. Investments in more complex solutions that may deliver marginal accuracy gains but are much harder to interpret or deploy at scale should be avoided.

The best rules of thumb to mine business value from AI are “Don’t get caught in the hype” and “Start simple and test value.” As Eugene Yan, a data scientist at Amazon, famously said, “The first rule of machine learning: Start without machine learning.”
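
To make the “start simple” principle concrete, here is a minimal sketch (not from the WEF toolkit; the dataset and the one-point accuracy threshold are illustrative assumptions) of benchmarking an interpretable baseline before escalating to a more complex model:

```python
# A minimal sketch of "start simple and test value": benchmark an
# interpretable baseline before investing in a more complex model.
# The dataset and the 1-point accuracy threshold are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Simple, explainable baseline: coefficients map directly to input features.
baseline = LogisticRegression(max_iter=5000)
baseline_acc = cross_val_score(baseline, X, y, cv=5).mean()

# More complex candidate: typically harder to interpret and deploy at scale.
candidate = GradientBoostingClassifier()
candidate_acc = cross_val_score(candidate, X, y, cv=5).mean()

print(f"baseline accuracy:  {baseline_acc:.3f}")
print(f"candidate accuracy: {candidate_acc:.3f}")

# Escalate only if the accuracy gain justifies the interpretability cost.
if candidate_acc - baseline_acc < 0.01:
    print("Marginal gain -- keep the explainable baseline.")
```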

It’s helpful for executives to possess a broad understanding of the key stages of an AI initiative as well as the technical risks at each of these stages. For instance, executives typically underestimate the amount of data cleansing and preparation that is required to build viable algorithms. Data scientists, on the other hand, are likely to focus on building the most accurate model possible using the latest techniques, without understanding the business context and its many trade-offs.
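
As a hedged illustration of what that unglamorous preparation work looks like in practice, the sketch below shows a few typical cleaning steps; the column names and cleaning rules are hypothetical:

```python
# A sketch of the data-preparation work that often dominates an AI
# initiative. Column names and cleaning rules are hypothetical.
import pandas as pd

def prepare(df: pd.DataFrame) -> pd.DataFrame:
    # Remove exact duplicate records.
    df = df.drop_duplicates()
    # Drop rows missing the target label; impute a numeric feature.
    df = df.dropna(subset=["churned"])
    df["monthly_spend"] = df["monthly_spend"].fillna(df["monthly_spend"].median())
    # Normalize an inconsistently coded categorical field.
    df["region"] = df["region"].str.strip().str.lower()
    # Clip implausible outliers rather than silently learning from them.
    df["age"] = df["age"].clip(lower=18, upper=100)
    return df
```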

The most successful AI initiatives are close collaborations between data talent, business stakeholders and sponsors, engineers, and end users who can test the solution and send feedback. Building the right team and upskilling the organization to enable this kind of collaboration is essential for success.

One sure-fire sign that an organization has evolved along the data and AI maturity scale is the shift from a defensive data capability—i.e. primarily focused on reporting on and understanding the past—to an offensive one that’s focused on how data and AI can be used to set strategy, deliver profit, and support innovation.

Another key sign is that the data talent becomes increasingly specialized, moving from generalists to technical specialists (e.g. data analysts, data scientists, machine learning engineers) and also business partner specialists (i.e. data scientists gain expertise in a specific commercial area, such as marketing attribution or pricing analytics). No matter the degree of specialization, the most effective AI teams won’t have a fixed structure; their structure evolves with the changing needs of the business.

Managing AI risks

Running an effective AI capability, however, is more than simply leveraging these technologies to realize earnings before interest and tax (EBIT) and market share gains. Now more than ever, executives must have a keen understanding of the new business risks involved in developing algorithms. They must ensure that their organizations are proactively mitigating risks, and that they comply with upcoming regulations.

The list of potential risks can appear daunting: algorithms that hand down more severe prison sentences to minority defendants because of biases in the training data; job losses and “winner takes all” economic models driven by increased automation; even risks to democracy itself, as the algorithms used to promote content on social media polarize users and create an unsafe online space.

AI also raises questions of accountability. Who is responsible:
• When a driverless car crashes?
• In a lawsuit claiming unfair hiring informed by AI algorithms?
• When the wrong medical treatment is prescribed because an AI diagnostic system contained errors?
• For a large financial loss incurred by an algorithmic trading platform?

The organizations that mitigate these risks best are those that build their own ethical standards and gateways into the AI lifecycle, from how they collect and prepare data to how they build, test, and deploy models. They will adopt new data and AI risk management practices, processes, and tools to both comply with upcoming regulations and ensure customer trust.

For example, one major North American bank used various techniques to de-bias its data to ensure that its credit-scoring algorithm would automatically grant credit to all eligible applicants and not exclude minority groups, who were potentially less represented in the underlying data. Although the technical team carried out the de-biasing techniques on the ground, it was the executive team’s commitment to investing in ethical AI that ensured the robustness of this process.
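
The article does not disclose the bank’s specific techniques. As one hedged illustration, a common gateway check before deployment is comparing approval rates across demographic groups (a demographic parity test); the data, names, and threshold below are hypothetical:

```python
# A hedged sketch of a pre-deployment fairness gateway: compare approval
# rates across groups (demographic parity). Data and names are hypothetical;
# this is one common check, not the bank's actual method.
import pandas as pd

def demographic_parity_gap(decisions: pd.DataFrame,
                           group_col: str = "group",
                           approved_col: str = "approved") -> float:
    """Return the largest difference in approval rates across groups."""
    rates = decisions.groupby(group_col)[approved_col].mean()
    return float(rates.max() - rates.min())

decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(decisions)
print(f"approval-rate gap: {gap:.2f}")
# Flag the model for review if the gap exceeds an agreed threshold, e.g. 0.10.
```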

The opportunity to use AI to grow and innovate has never been greater, but neither have the risks. It’s a long road to leverage these technologies profitably, ethically, safely, and at scale. The new WEF toolkit is a starting point for engaging in the right debates to ensure that executives consider the salient issues in their decision making and their organization’s ways of working. Of all the skills an AI-ready executive team must possess, asking the right questions is probably the most important.

First published Jan. 17, 2022, on INSEAD’s Knowledge blog.


About The Authors

Theodoros Evgeniou

Theodoros Evgeniou is a professor of decision sciences and technology management at INSEAD, and academic director of INSEAD elab, the school’s research and analytics center, which focuses on data analytics for business.

Caroline Zimmerman

Caroline Zimmerman is a research associate at INSEAD, researching data leadership and how organizations can better link their data capabilities to business outcomes.