Maggie Overfelt

OK, Computer: Building Trust to Make Machine Learning More Effective

Asking users about their needs and concerns can open up the ‘black box’ of new workplace technologies

Published: Tuesday, March 8, 2022 - 13:03

Machine learning has the potential to drastically improve efficiency and the quality of care in hospitals by tackling hard-to-predict problems like ICU occupancy or which patients are likely to be readmitted.

Yet a big barrier to any technology working optimally is getting full buy-in from its users, particularly when they’re busy medical workers who rely on good information to make split-second decisions that affect their patients’ health. Machine learning tools, which use artificial intelligence to improve the accuracy of their analysis, may be met with skepticism.

“Machine learning is a technology that’s not well understood and thus not well trusted,” says Sara Singer, a professor of organizational behavior (by courtesy) at Stanford Graduate School of Business, and a professor of medicine at Stanford University School of Medicine. “People describe it as a black box—they feel like they don’t have input into how it’s used. It has an ability to add value, but only if we can create trust.”

Westchester Medical Center Health Network in suburban New York has found a way to smoothly roll out machine learning tools, which Singer and her co-authors document in a recent paper in Health Care Management Review. By deploying a collaborative back-and-forth process between tech developers and hospital staff, two of the network’s hospitals recently built highly accurate, predictive tools that feature input—and acceptance—from both sides.

“This work shows improvement to the design of the machine learning tools based on user input in real-world settings, which just hasn’t been done before,” Singer says.

Seeking user input

Typically, technology development is one-directional: Developers create a tool, and users must adapt to it. A developer-driven, user-unfriendly product may fail to be integrated seamlessly into the workplace because users don’t use it effectively or just don’t use it at all. In showing how Westchester Medical Center fostered two-way conversations between software engineers and hospital staff, Singer and her colleagues offer an approach that might be applied in a wide range of settings.

Singer’s team—which included Katherine Kellogg of the MIT Sloan School of Management; Ari Galper, a Ph.D. student at Columbia University; and Deborah Viola, vice president of data management and analytics at Westchester Medical Center—conducted interviews with developers tasked with building two apps that used machine learning: the Low Bed Tool, which aims to improve the accuracy and efficiency of predicting bed availability in ICUs and other units; and the Readmission Risk Tool, which helps clinical staff identify patients most at risk of returning to the hospital soon after discharge.
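The paper doesn’t describe the Readmission Risk Tool’s internals, but the kind of signal such a tool surfaces can be illustrated with the LACE index, a widely used readmission-risk heuristic combining length of stay, acuity of admission, comorbidity, and recent emergency-department visits. A minimal sketch—illustrative only, not Westchester’s model:

```python
def lace_score(los_days, acute_admission, charlson_index, ed_visits_6mo):
    """LACE-style readmission risk score (illustrative only --
    not the Westchester tool, whose model isn't described in the paper).
    Higher scores flag patients at greater risk of early readmission."""
    # L: length of stay, in days
    if los_days < 1:
        l = 0
    elif los_days <= 3:
        l = los_days
    elif los_days <= 6:
        l = 4
    elif los_days <= 13:
        l = 5
    else:
        l = 7
    # A: acuity of admission (emergent vs. elective)
    a = 3 if acute_admission else 0
    # C: Charlson comorbidity index, capped at 5 points
    c = charlson_index if charlson_index <= 3 else 5
    # E: emergency department visits in the prior 6 months, capped at 4
    e = min(ed_visits_6mo, 4)
    return l + a + c + e

# A 5-day emergent stay, Charlson index 2, one recent ED visit:
print(lace_score(5, True, 2, 1))  # 4 + 3 + 2 + 1 = 10
```

Scores of 10 or more are conventionally treated as high risk; in practice a clinical tool would combine many more inputs and learn its weights from data rather than use fixed point values.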

Over a year, the researchers conducted 37 half-hour interviews with six developers and seven hospital staffers who used the new tools. They identified granular examples of developers soliciting specific feedback from hospital staff and then using that information to address users’ needs and concerns.

For example, when it became clear that the first iteration of the Low Bed Tool was unable to predict sudden patient influxes in the emergency department, the developers pivoted and found a new user base within the hospital. “We were stymied—emergency department surges are usually associated with unanticipated emergencies or disasters, and there was no way we could predict those,” a developer told the researchers.

The next version of the Low Bed Tool targeted clinicians in clinical care management and utilization management. At first, they weren’t convinced that the data from the new version of the app were accurate. “It still seemed a little off; I couldn’t put my finger on it,” one hospital staffer reported. “After years of doing what I do, I just have a gut feel. So the developers went back and looked into some things, explained the data issues that were coming up between different departments, and we made decisions about what we wanted and worked it all out.”

Trust exercises

Today, the Low Bed Tool helps clinicians manage patient flow by predicting the likelihood of bed availability in four hospital units. It provides a three- and five-day forecast of capacity and occupancy, and updates daily to adjust for changes in patient trendlines, seasonality, and other patterns. Since it was adopted, users have reported shorter wait times and improved capacity. Pediatric units using the tool, one hospital executive said, “never have issues [with capacity] except at flu season.”
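As an illustration of this kind of seasonality-aware forecasting—not the actual Low Bed Tool, whose model the article doesn’t detail—a day-of-week average is about the simplest way to project occupancy three and five days out:

```python
from statistics import mean

def forecast_occupancy(history, horizons=(3, 5)):
    """Forecast unit occupancy for the given horizons (days ahead)
    using day-of-week averages -- a crude stand-in for the seasonality
    and trend adjustments a production tool would make.
    `history` is a list of daily occupancy counts, oldest first,
    assumed (for this sketch) to start on a Monday."""
    n = len(history)
    # Group observed counts by day of week (0 = Monday)
    by_dow = {d: [] for d in range(7)}
    for i, occ in enumerate(history):
        by_dow[i % 7].append(occ)
    forecasts = {}
    for h in horizons:
        dow = (n + h - 1) % 7  # day of week of the forecast date
        vals = by_dow[dow] or history  # fall back to overall history
        forecasts[h] = mean(vals)
    return forecasts

# Four weeks of synthetic daily occupancy (weekends lower):
history = [30, 32, 33, 31, 29, 20, 18] * 4
print(forecast_occupancy(history))  # {3: 33, 5: 29}
```

Re-running this daily with the newest counts appended is what “updates daily” amounts to in the simplest case; the real tool presumably also models trendlines and other patterns beyond weekly seasonality.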

To get the tool to this point, the developers not only had to tweak their initial product but also adjust the way they communicated with nontechnical colleagues. “In any kind of group where you have people with very different professional backgrounds, it can be hard for one side to hear from someone sitting across the table about what they’ve done wrong, or what they haven’t considered,” Singer says. “Trust came over time. If a developer listened to what the user was saying, the user gained trust in what developers gave back to them. It was remarkable in the way that it enabled users to then say, ‘I need to consider this.’”

This collaborative framework could be extended to other hospital settings, Singer notes. For example, machine learning could be used to help determine surgical room capacity or predict which patients might be at greatest risk for hospitalization from Covid-19.

There are also many business and civic settings where a better way to develop and implement machine learning technology would be valuable. “It’s important to realize that technology isn’t static,” Singer says. “User input is critical for anyone developing new technology, as is the idea that both developers and users should understand the benefits of working together.”

First published October 26, 2021, on the Stanford Graduate School of Business “Insights” blog.

About The Author

Maggie Overfelt

Maggie Overfelt is a New York-based freelance journalist who covers technology, small business, personal finance, and a wide range of other business and social themes. Her work has appeared on CNBC.com and in Crain’s New York Business, TheStreet, Fortune Small Business magazine, and PC magazine.