Use a Scorecard to Evaluate People More Fairly
A written framework is an easy way to hold everyone to the same standard
Dave Gilson
Published: Wednesday, October 26, 2022 - 11:02

Like most of us, lawyers think they can be impartial when they rate other people's work. "They say, 'Who writes a brief doesn't matter. A brief is a brief; it stands on its own merit,'" explains Lori Nishiura Mackenzie, the lead strategist for diversity, equity, and inclusion at Stanford Graduate School of Business.

She cites an experiment in which 60 law firm partners were given a legal memo peppered with errors. All were told that a young lawyer had drafted it. Half were told that the writer was white; the other half were told he was Black. When the partners' evaluations of the memo came back, the imaginary "white" lawyer received an average score of 4.1 out of 5 and was judged a "generally good writer." The "Black" lawyer got a 3.2 and was deemed "average at best."

Even when we think we're being objective, biases can creep in. So how can we be more consistent and fair when we evaluate candidates and co-workers? Mackenzie offered some ideas in "The Myths and Rituals of Inclusion," a talk she gave last spring.

A starting point, she says, is to be aware of how we shift our criteria for people based on irrelevant assumptions. "If you start by thinking carefully about how you're going to evaluate someone before you do, you're less likely to shift."

Those shifts may be subtle, but they can skew outcomes. "Sadly, this happened to me once at work," Mackenzie recalls. A strong applicant for a position was penalized for misspellings in their cover letter. "I didn't say, 'Did you equally check for spelling mistakes in all the candidates?' Because if we had, I'm sure we would have found a similar number."

An easy way to hold everyone to the same standard is to use a written framework or rubric for assessment. "If you in your work are making decisions about people without some sort of scorecard, likely you are making these shifts," Mackenzie says. "While bias thrives in ambiguity, consistency has a chance in blocking biases in decision making—and that is good for everyone."

First published Oct. 4, 2022, on Insights by Stanford Graduate School of Business.
About The Author
Dave Gilson is a senior editor at Stanford Graduate School of Business, where he edits coverage of faculty research, profiles of alums and professors, and Stanford Business magazine. Previously, he was the deputy editor at Mother Jones and taught an introductory reporting class at the University of California-Berkeley Graduate School of Journalism.
Comments
Trying to rate people instead of the system?
Didn't Deming teach us that rating people rather than the system they work within is destructive?
He said again and again that the annual evaluation should be abolished, yet here is a quality publication promoting better methods for doing something that is a bad idea. Why?
We should focus on improving the system.
Trying to rate people instead of the system?
People represent the system. It's not what you measure in the system, but what you are going to do with the measurements.