Ben Brumfield

Customer Care

People Think Robots Are Pretty Incompetent and Not Funny, New Study Says

People think robots bomb as nannies and comedians. OK for package deliverers and tour guides.

Published: Tuesday, July 21, 2020 - 11:02

Dang robots are crummy at so many jobs, and they tell lousy jokes to boot. In two new studies, these were common biases human participants held toward robots.

The studies were originally intended to test for gender bias, that is, whether people thought a robot believed to be female might be less competent at some jobs than a robot believed to be male, and vice versa. The studies’ titles even included the words “gender,” “stereotypes,” and “preference,” but researchers at the Georgia Institute of Technology found no significant sexism against the machines.

“This did surprise us,” says Ayanna Howard, the principal investigator in both studies. “There was only a very slight difference in a couple of jobs but not significant. There was, for example, a small preference for a male robot over a female robot as a package deliverer.” Howard is a professor in and the chair of Georgia Tech’s School of Interactive Computing.

Although robots are not sentient, as people increasingly interface with them, we begin to humanize the machines. Howard studies what goes right as we integrate robots into society and what goes wrong, and much of both has to do with how humans feel about robots.

I hate robots

“Surveillance robots are not socially engaging, but when we see them, we still may act like we would when we see a police officer, maybe not jaywalking and being very conscientious of our behavior,” says Howard, who is also the Linda J. and Mark C. Smith Chair and Professor in Bioengineering in Georgia Tech’s School of Electrical and Computer Engineering.

“Then there are emotionally engaging robots designed to tap into our feelings and work with our behavior. If you look at these examples, they lead us to treat these robots as if they were fellow intelligent beings.”

It’s a good thing robots don’t have feelings because what study participants lacked in gender bias they more than made up for in judgments against the humanoid robots’ competence. That predisposition was so strong that Howard wondered if it may have overridden any potential gender biases against robots; after all, social science studies have shown that gender biases are still prevalent with respect to human jobs, even if implicit.

In questionnaires, humanoid robots introduced themselves via video to randomly recruited online survey respondents, who ranged in age from their 20s to their 70s and were mostly college educated. The humans ranked robots’ career competencies compared to human abilities, and only trusted the machines to competently perform a handful of simple jobs.

Pass the scalpel

“The results baffled us because the things that people thought robots were less able to do were things that they do well,” Howard says. “One was the profession of surgeon. There are Da Vinci robots that are pervasive in surgical suites, but respondents didn’t think robots were competent enough. Security guards—people didn’t think robots were competent at that, and there are companies that specialize in great robot security.”

Cumulatively, the 200 participants across the two studies thought robots would also fail as nannies, therapists, nurses, and firefighters, and would totally bomb as comedians. But they felt confident bots would make fantastic package deliverers and receptionists, pretty good servers, and solid tour guides.

The researchers could not say where the competence biases originate. Howard could only speculate that some of the bad rap may have come from media stories of robots doing things like falling into swimming pools or injuring people.

It’s a boy

Despite the lack of gender bias, participants readily assigned genders to the humanoid robots. For example, people accepted gender prompts by robots introducing themselves in videos.

If a robot said, “Hi, my name is James,” in a male-sounding voice, people mostly identified the robot as male. If it said, “Hi, my name is Mary,” in a female voice, people mostly said it was female.

Some robots greeted people by saying “Hi” in a neutral-sounding voice, and still, most participants assigned the robot a gender. The most common choice was male, followed by neutral, then female. For Howard, this was an important takeaway from the study for robot developers.

“Developers should not force gender on robots,” Howard says. “People are going to gender according to their own experiences. Give the user that right. Don’t reinforce gender stereotypes.”

Social is good

Some in the field advocate for not building robots in humanoid form at all to discourage any kind of humanization, but the Georgia Tech team takes a less stringent approach.

“There is no single one-size-fits-all answer on whether it is appropriate to design robots to look like human beings,” says Jason Borenstein, a co-principal investigator on one of the papers and an ethics researcher in Georgia Tech’s School of Public Policy. “It depends on a variety of ethical considerations and other factors, including whether people might trust a robot too much if it has a human-like appearance.”

“Robots can be good for social interaction,” says Howard, who also defended robots’ comedic talent, provided they are programmed for that. “If you ever go to an amusement park, there are animatronics that tell really good jokes. They could be very helpful in elder care facilities to keep people company. They might also make better nannies than letting the TV babysit the kids.”

Read the studies

The two studies were submitted to conferences that were canceled due to Covid-19.

“Why Should We Gender? The Effect of Robot Gendering and Occupational Stereotypes on Human Trust and Perceived Competency” was published in Proceedings of the 2020 ACM Conference on Human-Robot Interaction (HRI ’20), which appeared in March 2020. “Robot Gendering: Influences on Trust, Occupational Competency, and Preference of Robot Over Human” appeared in CHI 2020 Extended Abstracts (computer-human interaction, DOI: 10.1145/3334480.3382930).

The papers’ co-authors were De’Aira Bryant, Kantwon Rogers, and Jason Borenstein of Georgia Tech. The research was funded by the National Science Foundation (grant 1849101) and the Alfred P. Sloan Foundation (grant G-2019-11435). Any findings, conclusions, or recommendations are those of the authors and not necessarily of the sponsors.

First published, May 19, 2020, in the Georgia Tech News Center.


About The Author

Ben Brumfield

Ben Brumfield is the senior science writer at Georgia Institute of Technology, and vice chair of the Georgia Cyber Academy.