
Jeffrey Hirsch


Worker-Protection Laws Aren’t Ready for an Automated Future

Without adequate regulation, technology can increase discrimination, surveillance, and personal analytics

Published: Wednesday, September 25, 2019 - 12:02

Science fiction has long imagined a future in which humans constantly interact with robots and intelligent machines. This future is already happening in warehouses and manufacturing businesses. Other workers use virtual or augmented reality as part of their employment training, to assist them in performing their jobs or to interact with clients. And lots of workers are under automated surveillance from their employers.

All that automation yields data that can be used to analyze workers’ performance. Those analyses, whether done by humans or software programs, may affect who is hired, fired, promoted, and given raises. Some artificial intelligence (AI) programs can mine and manipulate the data to predict future actions, such as who is likely to quit their job, or to diagnose medical conditions.

If your job doesn’t currently involve these types of technologies, it likely will in the very near future. This worries me, a labor and employment law scholar who researches the role of technology in the workplace, because unless significant changes are made to U.S. workplace laws, these sorts of surveillance and privacy invasions will be perfectly legal.

New technology disrupting old workplace laws

The United States’ approach to workplace regulation has long made it an outlier among much of the world. Especially for private, nonunionized workers, the United States largely allows companies and workers to figure out the terms and conditions of work on their own.

In general, for all but the most in-demand workers or those at the highest corporate levels, the lack of regulation means companies can behave however they want—although they are subject to laws preventing discrimination, setting minimum wages, requiring overtime pay, and ensuring worker safety.

But most of those laws are decades old and are rarely updated. They certainly haven’t kept up with technological advances, the increase in temporary or “gig” work, and other changes in the economy. Faced with these new challenges, the old laws leave many workers without adequate protections against workplace abuses, or even totally exclude some workers from any protections at all. For instance, two Trump administration agencies have recently declared that Uber drivers are not employees, and therefore not entitled to minimum wage, overtime, or the right to engage in collective action such as joining a union.

Emerging technologies like AI, robotics, virtual reality, and advanced monitoring systems have already begun altering workplaces in fundamental ways that may soon become impossible to ignore. That progress highlights the need for meaningful changes to employment laws.

Consider Uber drivers

Like other companies in what has been called the “gig economy,” Uber has spent considerable amounts of money and time litigating and lobbying to protect regulations classifying its drivers as independent contractors rather than employees. Uber set its fifth annual federal lobbying record in 2018, spending $2.3 million on issues including keeping its drivers from being classified as employees.

The distinction is a crucial one. Uber does not have to pay employment taxes—or unemployment insurance premiums—on independent contractors. In addition, nonemployees are completely excluded from any workplace protection laws. These workers are not entitled to a minimum wage or overtime; they can be discriminated against based on their race, sex, religion, color, national origin, age, disability, and military status; they lack the right to unionize; and they are not entitled to a safe working environment.

Companies have tried to classify workers as independent contractors ever since there have been workplace laws, but technology has greatly expanded companies’ ability to hire labor that blurs the lines between employees and independent contractors.

Employees aren’t protected, either

Even for workers who are considered employees, technology allows employers to take advantage of the gaps in workplace laws like never before. Many workers already use computers, smartphones, and other equipment that allows employers to monitor their activity and location, even when off duty.

And emerging technology permits far greater privacy intrusions. For instance, some employers already have badges that track and monitor workers’ movements and conversations. Japanese employers use technology to monitor workers’ eyelid movements and lower the room temperature if the system identifies signs of drowsiness.

Another company implanted RFID chips into the arms of employee “volunteers.” The purpose was to make it easier for workers to open doors, log in to their computers, and purchase items from a break room, but a person with an RFID implant can be tracked 24 hours a day. Also, RFID chips are susceptible to unauthorized access or “skimming” by thieves who are merely physically close to the chip.

No privacy protections for workers

The monitoring that’s possible now will seem simplistic compared to what’s coming: a future in which robotics and other technologies capture huge amounts of personal information to feed artificial intelligence software that learns which metrics are associated with things such as workers’ moods and energy levels, or even diseases like depression.

One healthcare analytic firm, whose clients include some of the biggest employers in the country, already uses workers’ internet search histories and medical insurance claims to predict who is at risk of getting diabetes or considering becoming pregnant. The company says it provides only summary information to clients, such as the number of women in a workplace who are trying to have children, but in most instances it can probably legally identify specific workers.

Except for some narrow exceptions—like in bathrooms and other specific areas where workers can expect to be in relative privacy—private-sector employees have virtually no way, nor any legal right, to opt out of this sort of monitoring. They may not even be informed that it is occurring. Public-sector employees have more protection, thanks to the Fourth Amendment’s prohibition against unreasonable searches, but in government workplaces, the scope of that prohibition is quite narrow.

AI discrimination

In contrast to the almost total lack of privacy laws protecting workers, employment discrimination laws, while far from perfect, can provide some important protections for employees. But those laws have already faced criticism for their overly simplistic and limited view of what constitutes discrimination, which makes it difficult for victims to file and win lawsuits or obtain meaningful settlements. Emerging technology, particularly AI, will exacerbate this problem.

AI software programs used in the hiring process are marketed as eliminating or reducing biased human decision-making. In fact, they can create more bias because these systems depend on large collections of data, which can be biased themselves.

For instance, Amazon recently abandoned a multiyear project to develop an AI hiring program because it kept discriminating against women. Apparently, the AI program learned from Amazon’s male-dominated workforce that being a man was associated with being a good worker. To its credit, Amazon never used the program for actual hiring decisions, but what about employers who lack the resources, knowledge, or desire to identify biased AI?

The laws about discrimination based on computer algorithms are unclear, just as other technologies stretch employment laws and regulations well beyond their clear applications. Without an update to the rules, more workers will continue to fall outside traditional worker protections—and may even be unaware how vulnerable they really are.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


About The Author


Jeffrey Hirsch

Jeffrey Hirsch is the Geneva Yeargan Rand Distinguished Professor of Law at the University of North Carolina at Chapel Hill.