‘Adjunct Managers’
Working people can benefit from union representation in negotiations over how technology is developed, deployed, and used on the job. To better understand the role unions can play in improving worker engagement with AI, let’s examine the use of AI in call centers.
The pandemic accelerated the adoption of AI and other digital technologies in call centers as many companies closed traditional in-person facilities and had employees do the work from home. AI now plays two important roles in these remote call center setups and has often been deployed without workers’ consent.
First, AI systems like chatbots and virtual agents now provide automated customer support, route calls, and perform other tasks previously handled by staffers, work that let them develop new skills and move into roles with added responsibilities. When a company displaces workers by deploying AI tools without their feedback or consent, the people whose responsibilities are taken over by AI may be demoralized because they’re left with fewer career options and less agency.
Second, employers are using AI as a monitoring and assessment tool, and these deployments can infringe on worker privacy and perpetuate bias. As call center workers moved out of office spaces, away from their managers, and into their homes, employers expanded their use of surveillance technologies and monitoring systems for both remote workers and those still in offices. These tools often measured productivity against ambiguous metrics and goals.
A New York Times article about the use of AI monitoring tools at a MetLife call center in Rhode Island described AI as a kind of “adjunct manager” that monitored employees and determined whether they talked too quickly, sounded sufficiently energetic, and conveyed empathy during interactions with customers. Such systems of appraisal raise concerns about loss of worker privacy and the perpetuation of racial and gender biases. Determining how algorithms reach their conclusions can be difficult, opening the possibility that assessments of worker performance are inaccurate. For example, independent evaluations have shown that AI-based speech recognition tools perform worse for people of color and for women of all backgrounds than they do for white men, increasing the likelihood that monitoring tools built on speech recognition will misjudge those workers’ performance.
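To see how opaque this kind of appraisal can be, consider a deliberately simplified sketch, in Python, of one metric an “adjunct manager” tool might compute. Nothing here comes from MetLife’s actual system or any real vendor’s product; the timestamped-utterance format and the 160-words-per-minute cutoff are assumptions invented purely for illustration.

```python
# Toy illustration (NOT any vendor's real system) of an "adjunct manager"
# style metric: flagging an agent as "talking too quickly" from a
# timestamped call transcript.

from dataclasses import dataclass

@dataclass
class Utterance:
    start_sec: float  # when the agent began speaking
    end_sec: float    # when the agent stopped speaking
    text: str         # what the agent said

WPM_THRESHOLD = 160.0  # hypothetical cutoff; real tools rarely disclose theirs

def speaking_rate_wpm(utterances: list[Utterance]) -> float:
    """Average words per minute across an agent's speaking time."""
    total_words = sum(len(u.text.split()) for u in utterances)
    total_minutes = sum(u.end_sec - u.start_sec for u in utterances) / 60.0
    return total_words / total_minutes if total_minutes > 0 else 0.0

def flags_too_fast(utterances: list[Utterance]) -> bool:
    """Mimics an opaque pass/fail judgment: one number, one threshold."""
    return speaking_rate_wpm(utterances) > WPM_THRESHOLD

# Nine words in three seconds works out to 180 words per minute.
call = [Utterance(0.0, 3.0, "Thanks for calling how can I help you today")]
print(speaking_rate_wpm(call), flags_too_fast(call))  # 180.0 True
```

Even this toy version makes the concern concrete: a single hardcoded threshold the worker never sees decides whether a call gets flagged, and nothing in the score captures context, such as a customer who asked the agent to hurry.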
Bias isn’t an inevitable outcome of AI. But AI systems can reinforce or replicate biases that have influenced personnel decisions in the past when they’re trained on data samples that don’t reflect the true demographic makeup of the workforce and underrepresent certain populations, including people of color, women of all backgrounds, workers from low-income backgrounds, and people who are learning English. When those groups are left out of AI’s training data, the technology fails to account for the diversity of our world. Speech recognition is a clear example: a system trained exclusively on speech samples from white people may misinterpret or misunderstand the speech patterns, vernacular usages, dialects, and accents of members of other populations.
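The disparity that independent evaluations measure is typically expressed as a gap in word error rate (WER) between speaker groups. Below is a minimal sketch of such an audit using jiwer, a real open-source Python library for computing WER; the two speaker groups and their transcripts are invented placeholders, and a genuine audit would use a large recorded test set rather than four lines.

```python
from collections import defaultdict

import jiwer  # open-source library for computing word error rate (WER)

# Hypothetical evaluation set: (speaker group, human reference transcript,
# recognizer output). The data below is invented for illustration.
samples = [
    ("group_a", "i need to update my billing address", "i need to update my billing address"),
    ("group_a", "please cancel my order", "please cancel my order"),
    ("group_b", "i need to update my billing address", "i need to update my bill in address"),
    ("group_b", "please cancel my order", "please can sell my order"),
]

# Collect reference and hypothesis transcripts separately for each group.
by_group = defaultdict(lambda: ([], []))
for group, reference, hypothesis in samples:
    by_group[group][0].append(reference)
    by_group[group][1].append(hypothesis)

# A recognizer trained mostly on one group's speech typically shows a higher
# WER for everyone else -- the disparity that makes such monitors unreliable.
for group, (refs, hyps) in sorted(by_group.items()):
    print(f"{group}: WER = {jiwer.wer(refs, hyps):.0%}")
```

On this invented data the recognizer transcribes group_a perfectly while garbling group_b, so any downstream score for “energy” or “empathy” built on those transcripts inherits the gap.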
At some organizations, AI systems may even have the power to hire and fire workers. To avoid the possibility that these systems would make personnel decisions without any human intervention, U.S. Senators Bob Casey of Pennsylvania and Brian Schatz of Hawai‘i introduced a bill called the No Robot Bosses Act of 2023 to curb the rise of AI-driven decision-making systems. The AFL-CIO and its affiliate the Communications Workers of America (CWA), which represents unionized call center workers, both endorse the No Robot Bosses Act.
Labor unions have the power to use contract negotiations with employers to set parameters around when and how workers engage with AI systems, and they’re invested in advocating for federal AI protections.
For example, the New York Times article cited earlier told the story of call center workers harmed by AI-driven performance monitoring tools and automated systems deployed in their workplaces without their consent. Unions can advocate for better working conditions in situations like that. According to a 2022 study facilitated by Cornell University, call center workers represented by unions reported lower stress levels than their nonunion counterparts. Union-represented workers also reported that their union helped address scheduling predictability, the fairness of performance monitoring, and the quality and quantity of training.
Including workers in decisions about how AI and other automated tools are implemented will do more than reduce the risks that come with new technologies. Worker input could also increase productivity: when labor and management work together, they can develop practical plans for using technology to help people work more efficiently.