Better Human-AI Collaboration May Depend on Workflow Design

Improving how humans work with algorithms could simply be a matter of redesigning workflow

Published: Thursday, September 1, 2022 - 11:02

How should humans collaborate with artificial intelligence? This is a question of increasing urgency as AI becomes pervasive in the workplace. From screening job applications and chatting with customers to assessing investment portfolios, algorithms are working alongside us in myriad roles and organizational setups. But whether this collaboration is designed in ways that lead to trust and satisfaction—for us humans at least—is another story.

Respecting, rather than ignoring, human concerns about working with AI is not only consistent with humanistic values, as we noted in an earlier article, but also good for business. That’s why we ran the “Bionic Readiness Survey” to investigate what configurations of collaboration with AI algorithms humans are more or less likely to trust.

Based on responses from 257 participants so far (predominantly rank-and-file employees at large organizations in India), we found that people’s trust in AI and clarity about their own work role varied according to how their work is set up.

Not all configurations are equal

Respondents to our survey were randomly assigned to one of six different workflow configurations, illustrated in the graphic below. The scenarios are permutations of whether employees work in parallel or in sequence, whether they specialize in different tasks, and whether the final decision is made by the human or the algorithm.

Some respondents were shown the same scenarios but with a human co-worker instead of an algorithm. All respondents were asked to rate their clarity about their own role and trust in the co-worker (human or AI).
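The design space described above can be sketched as a small data structure. This is an illustrative reconstruction, not the authors' instrument: the field names are our own, and since the article lists three binary dimensions (yielding eight permutations) but uses six scenarios, we simply enumerate the full space without guessing which two were dropped.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Configuration:
    """One human-AI workflow setup from the survey's design space."""
    parallel: bool        # True: human and AI work in parallel; False: in sequence
    specialized: bool     # True: each handles a different task; False: the same task
    human_decides: bool   # True: human makes the final call; False: the algorithm does

# The full 2 x 2 x 2 design space has eight permutations; the survey
# used six of them (the article does not say which two were excluded,
# so all eight are enumerated here).
design_space = [Configuration(p, s, h)
                for p, s, h in product([True, False], repeat=3)]

for cfg in design_space:
    print(cfg)
```

Framing each scenario as a point in this space makes it easy to ask which dimension (parallelism, specialization, or decision rights) drives differences in trust and role clarity.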

The survey turned up three main findings:

First, respondents indicated that they would be clearest about their own role if they worked in parallel with the algorithm and specialized in a different task (see Configuration 2 above). That’s not very surprising: Role clarity tends to be directly related to how much independence a human employee has at work.

Second, respondents reported trusting the algorithm to almost the same degree across all configurations except in Configuration 3, in which human and AI work in sequence doing more or less the same thing, but with the human making the final decision. Unfortunately, this trust-sapping setup is quite common in real-life applications. Just putting the human in the role of second-guessing the AI seems to lower human trust in the AI.

Third, how the work is set up appears to affect employee trust as much as, or in some cases more than, whom they work with. In Configuration 3, for instance, our respondents indicated they would trust an AI and a human colleague equally. In other words, if we compared Configuration 3 with one of the other configurations, we might erroneously conclude that humans trust AI less than they trust human colleagues, when the real driver is the workflow design.
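The comparison behind these findings amounts to grouping ratings by configuration and comparing the means. A minimal sketch of that analysis follows; the ratings here are invented placeholders on an assumed 1-7 scale, not the survey's actual data.

```python
from collections import defaultdict
from statistics import mean

# Placeholder responses: (configuration id, trust rating).
# These values are illustrative only, not the survey's data.
responses = [
    (1, 5), (1, 6), (2, 6), (2, 6),
    (3, 3), (3, 4), (4, 5), (5, 6), (6, 5),
]

by_config = defaultdict(list)
for config_id, trust in responses:
    by_config[config_id].append(trust)

# Mean trust per configuration; a real analysis would also test
# whether the differences are statistically significant.
mean_trust = {cid: mean(vals) for cid, vals in sorted(by_config.items())}
print(mean_trust)
```

With real survey data, a dip in the mean for one configuration (as the article reports for Configuration 3) would show up directly in this per-group summary.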

Get the configuration right

Put simply, our results to date show that human trust in AI as a colleague may be too broad a line of inquiry. A more fruitful one might be the optimal setup for human-AI collaboration. Whereas past research on human-AI collaboration tends to focus on the reliability of the algorithm, ours is perhaps the first to examine the role of workflow. And it appears that human distrust of AI at work may be due to workflow design rather than human vs. machine rivalry.

First published Aug. 8, 2022, on INSEAD Knowledge

About The Authors


Phanish Puranam

Phanish Puranam is the Roland Berger chaired professor of strategy and organization design at INSEAD. He is also the academic director of INSEAD’s Ph.D. program.

 


Ruchika Mehra

Ruchika Mehra is a research associate at INSEAD.