Klaus Wertenbroch

Customer Care

How to Appease Your Customers After Your Algorithm Rejects Them

As algorithms increasingly become gatekeepers, where should rejected customers turn for an explanation?

Published: Tuesday, December 1, 2020 - 13:02

From a customer’s perspective, the only thing more frustrating than being denied a product or service is a denial that comes without a satisfactory explanation. As humans, we cope with disappointment far better when we understand why it happened. Without an acceptable rationale, we’re apt to assume the worst: deliberate disrespect and blind prejudice.

This aspect of consumer psychology may create problems for companies relying on decision-making algorithms for vetting purposes, fraud prevention, and general customer service. We’re seeing widening adoption of AI in fields such as marketing and financial services. On balance, this is great news, allowing companies to serve customers with unprecedented speed and predictive precision. However, while bots beat humans hands down at making accurate decisions at scale, their communication skills (so far, anyway) leave much to be desired. As algorithms assume a more prominent role as gatekeepers, where will rejected customers turn for an adequate explanation? And how can companies provide one without revealing too much about their proprietary algorithms—which are, very often, essential IP?

Too many firms haven’t yet thought seriously about these questions, but policymakers have. Articles 13 to 15 of the EU’s General Data Protection Regulation require that companies using automated decision making supply customers with “meaningful information about the logic involved.” Determining what qualifies as “meaningful information” is slippery enough for commonplace decision-tree algorithms. As more sophisticated tools such as “deep learning” neural networks gain wider business application, the byzantine processes of the algorithms themselves may defy explanation.

Our recent working paper (co-authored with Hisham Abdulhalim of Ben-Gurion University of the Negev) suggests that companies can, and should, be more transparent with users, both when they don’t want to reveal how an algorithm operates (for commercial or legal reasons) and when they can’t, because the algorithm is too complex to explain to laypeople. Based on one of the few field experiments ever conducted into the explainability of algorithms as well as several lab studies, we find that information about the purpose or goal of an algorithm (which researchers call a “teleological” explanation) can be just as meaningful to rejected consumers as knowing how it works (a so-called “mechanistic” explanation).

Explanations and e-commerce

We partnered with an e-commerce platform that uses algorithms to decide whether transactions should be completed. In particular, we focused on an algorithm that decides whether buyers have sufficient funds in their account. So-called “elite users,” whom the algorithm deems highly trustworthy based on past purchase data, may be permitted to proceed on the presumption that they will promptly top up.

For every seventh denied purchase out of a sample of 16,399 declined transactions (average amount: approximately $164), we enriched the uninformative standard message provided to customers (“Company has blocked this purchase. Company blocked the purchase due to customer-related issues.”) by adding: “Company blocks such purchases to ensure the financial well-being of our customers.”
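To make the setup concrete, here is a minimal sketch of how such an every-seventh assignment and message enrichment might be implemented. The function and constant names are hypothetical illustrations for this article, not the platform’s actual code.

```python
# Hypothetical sketch of the treatment assignment described above: every
# seventh declined transaction receives the enriched, purpose-based message.
BASELINE = ("Company has blocked this purchase. "
            "Company blocked the purchase due to customer-related issues.")
TELEOLOGICAL = ("Company blocks such purchases to ensure the financial "
                "well-being of our customers.")

def decline_message(decline_index: int) -> str:
    """Return the message shown for the n-th declined transaction (1-based)."""
    if decline_index % 7 == 0:   # treatment group: append the teleological explanation
        return f"{BASELINE} {TELEOLOGICAL}"
    return BASELINE              # control group: uninformative baseline message only

# Declines 1-6 receive the baseline message; decline 7 receives the enriched one.
for i in range(1, 8):
    print(i, decline_message(i))
```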

Our aim in adding this simple teleological explanation was to assess its impact on customer behavior. We reasoned that, without an explanation, users’ immediate remedy for the sting of rejection would be to raise an inquiry with customer support. In fact, every single one of the rejected customers who received the baseline message did so. In contrast, those who were told the purpose of the decision were 7.4 percent less likely to complain to customer support—our first indication that such an explanation made rejection easier to accept.

Beyond that, the average resolution time for the resulting customer service inquiries (i.e., the total time elapsed until an inquiry was closed) was nearly two hours shorter for the group that was told the aim of the decision. This suggests that our brief explanatory statement of purpose was effective at reducing the rejectees’ negative emotional responses to more manageable levels without increasing the expected workload for customer support. What’s more, we also found that purchase-completion rates didn’t drop among those who received this explanation, even though they were less likely to contact customer support.

It’s surprising how a simple, cost-free intervention—explaining the purpose behind a decision even in a nonspecific way—can impact customer behavior to the benefit of both customers and company.

Second chances

Mechanistic explanations (related to how a decision is made) have one big advantage over teleological ones, though: They give rejected consumers a clearer clue as to what they can do differently next time. In a subsequent online experiment, we found that when participants were told instantaneously (presumably by an algorithm) where they went wrong in a visual perception test and were given an opportunity to redo the test, they not only were more likely to use their second chance but also found the experience more satisfying—compared to those given no explanation or a general teleological explanation. However, in the absence of a second chance, participants found both types of explanations equally satisfying, not to mention preferable to no explanation at all.

Digging deeper

Next, we investigated why these two very different varieties of explanation are equally psychologically satisfying when consumers can’t remedy a service denial. Our hypothesis was that users tend to perceive them as equally fair. Using the same visual perception test setup, we added a surprise set of questions to the end of the experiment, framed as extra work, and accompanied it with one of three messages: no explanation for the inconvenience, a neutral teleological explanation referring to our scientific aims, or an unfair explanation stating that certain participants had been singled out so we could take further advantage of their labor without additional pay.

Unsurprisingly, the neutral explanation was seen as more satisfying than the unfair one. The more counterintuitive finding was that even the unfair one was preferable to none at all.

In a fourth and final experiment, we varied the explanations for the extra questions. All three conditions included a teleological “why” for the extra work, paired with either a straightforward mechanistic explanation of how the algorithm selected some users over others, an opaque one mentioning “a complicated black-box algorithm which cannot be explained,” or no mechanistic explanation at all. Participants found the black-box explanation the least satisfying and least fair of the three. Interestingly, the teleological-only and straightforward mechanistic explanations were rated as equally fair and satisfactory—despite the highly specific content of the latter and the relative vacuity of the former.

Ethical ambiguities

We are aware that our research raises potential ethical questions. Our findings suggest that companies need not explain how their algorithms work in detail to satisfy rejected customers—an explanation focused on the goal of the algorithm seems to suffice. This might offer less forthcoming firms a transparency workaround. However, it could also be interpreted as providing more flexible ways to achieve transparency.

After all, the finding that comes out most strongly from our studies is that offering an explanation that conveys a sense of purpose and fairness about the algorithm’s decision is better than giving no explanation at all. Sometimes, it’s as effective as explaining the details of how an algorithm works. This should reassure companies that their users are responsive to communications that honor the need for fairness, even after being rejected by an algorithm. Using a black-box, unexplainable algorithm, therefore, is no excuse to ignore customers’ need for an explanation. As ever, the human touch is all important. And as our research shows, this comes at no cost to companies.

First published Nov. 3, 2020, on INSEAD’s Knowledge blog.


About The Author


Klaus Wertenbroch

Klaus Wertenbroch is the Novartis Chaired Professor of Management and the Environment, and a professor of marketing at INSEAD. He is the launching editor-in-chief of the European Marketing Academy’s (EMAC) Journal of Marketing Behavior and directs the INSEAD Strategic Marketing Program.