Researchers say claims that AI can boost diversity in the workplace are ‘false and dangerous’


Recent years have seen the emergence of AI tools marketed as a response to the lack of diversity in the workforce, from the use of chatbots and résumé scrapers to triage potential candidates, to analysis software for video interviews.

Those behind the technology say it nullifies human biases against gender and ethnicity when recruiting, instead using algorithms that read vocabulary, speech patterns and even facial micro-expressions to assess huge pools of job candidates for the right personality type and “culture fit”.

However, in a new report published in Philosophy & Technology, researchers at Cambridge’s Centre for Gender Studies argue that these claims make some uses of AI in hiring little better than “automated pseudoscience” reminiscent of physiognomy or phrenology: the discredited beliefs that personality can be inferred from facial features or the shape of the skull.

They say it’s a dangerous example of “technosolutionism”: turning to technology to provide quick fixes to deep-rooted discrimination issues that require investment and change in corporate culture.

In fact, the researchers worked with a team of Cambridge computer science students to demystify these new hiring techniques by building an AI tool modeled on the technology, available at: https://personal-ambiguator-frontend.vercel.app/

The ‘Personality Machine’ demonstrates how arbitrary changes in facial expression, clothing, lighting and background can give drastically different personality readings – and therefore could be the difference between rejection and progression for a generation of job seekers vying for graduate positions.

The Cambridge team says using AI to narrow candidate pools may ultimately increase uniformity rather than diversity in the workforce, as the technology is calibrated to seek out the employer’s fantasy of the “perfect candidate”.

This could see those with the right training and background “win over the algorithms” by replicating the behaviors the AI is programmed to identify and carrying those attitudes into the workplace, the researchers say.

Additionally, as the algorithms are refined using past data, they claim that the candidates deemed the best fit will likely end up being those most similar to the current workforce.

“We are concerned that some sellers are wrapping ‘snake oil’ products in shiny packaging and selling them to unsuspecting customers,” said co-author Dr. Eleanor Drage.

“By claiming that racism, sexism and other forms of discrimination can be eliminated from the hiring process using artificial intelligence, these companies reduce race and gender to insignificant data points, rather than systems of power that shape how we move through the world.”

The researchers point out that these AI recruiting tools are often proprietary — or “black box” — so how they work is a mystery.

“While the companies may not be acting in bad faith, there is little accountability for how these products are built or tested,” Drage said. “As such, this technology and the way it is marketed could become dangerous sources of misinformation about how recruitment can be ‘unbiased’ and made fairer.”

Despite some backlash – the proposed EU AI Act classifies AI-based recruiting software as “high risk”, for example – the researchers say tools made by companies such as Retorio and HireVue are deployed with little regulation, and they point to surveys suggesting the use of AI in hiring is snowballing.

A 2020 study of 500 organizations across various industries in five countries found that 24% of companies had implemented AI for recruitment purposes and that 56% of hiring managers planned to adopt it within the next year.

Another survey of 334 HR leaders, conducted in April 2020 as the pandemic took hold, found that 86% of organizations were integrating new virtual technologies into their hiring practices.

“This trend was already in place at the start of the pandemic, and the accelerated shift to online working caused by COVID-19 is likely to see greater deployment of AI tools by HR departments in the future,” said co-author Dr. Kerry Mackereth, who co-hosts The Good Robot podcast with Drage, on which the duo explore the ethics of technology.

COVID-19 isn’t the only factor, according to the human resources officers the researchers interviewed. “Volume recruiting is increasingly untenable for HR teams desperate for software to reduce costs as well as the number of candidates requiring personal attention,” Mackereth said.

Drage and Mackereth say many companies now use AI to analyze candidate videos, interpreting personality by evaluating regions of the face – similar to lie-detection AI – and scoring for the “big five” personality traits: extroversion, agreeableness, openness, conscientiousness and neuroticism.
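To make concrete what such a scorer might look like, here is a minimal Python sketch: a few crude image statistics are extracted from a video frame and mapped to five trait scores. Everything here – the features, the weights, the `ToyPersonalityScorer` class – is invented for illustration; the vendors’ actual models are proprietary.

```python
import numpy as np

BIG_FIVE = ["extroversion", "agreeableness", "openness",
            "conscientiousness", "neuroticism"]

def crude_features(face: np.ndarray) -> np.ndarray:
    """Toy image features from a grayscale face crop:
    mean brightness, contrast, and left/right asymmetry."""
    brightness = face.mean()
    contrast = face.std()
    h, w = face.shape
    asymmetry = np.abs(face[:, : w // 2] - face[:, w // 2 :][:, ::-1]).mean()
    return np.array([brightness, contrast, asymmetry])

class ToyPersonalityScorer:
    """Linear map from image features to five trait scores in [0, 1].
    The random weights stand in for whatever a vendor fits to past data."""
    def __init__(self, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(5, 3))
        self.b = rng.normal(size=5)

    def score(self, face: np.ndarray) -> dict:
        z = self.W @ crude_features(face) + self.b
        probs = 1 / (1 + np.exp(-z))  # squash each trait to [0, 1]
        return dict(zip(BIG_FIVE, probs.round(3)))

# A random 64x64 grayscale array stands in for a video frame.
frame = np.random.default_rng(42).random((64, 64))
print(ToyPersonalityScorer().score(frame))
```

The point is not that any vendor uses these exact features, but that a model of this shape will happily convert incidental image properties into “personality”.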

The undergrads behind the “Personality Machine,” which uses a similar technique to expose its flaws, say that while their tool won’t help users beat the algorithm, it will give job seekers an idea of the kinds of AI scrutiny they might be under – perhaps even without their knowledge.

“Too often the hiring process is oblique and confusing,” said Euan Ong, one of the student developers. “We want to give people a visceral demonstration of the kinds of judgments that are now automatically made about them.

“These tools are trained to predict personality based on common patterns in images of people they’ve seen before, and often end up finding spurious correlations between personality and seemingly unrelated properties of the image, like brightness. We have created a toy version of the kinds of models that we think will be used in practice, in order to experience them ourselves,” Ong said.
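Ong’s brightness example is easy to reproduce with an even smaller toy. The hypothetical `toy_extroversion` function below deliberately bakes in the kind of spurious correlation he describes: its score depends only on mean pixel brightness, so the same face filmed under different lighting receives different readings.

```python
import numpy as np

def toy_extroversion(face: np.ndarray) -> float:
    """Contrived stand-in model: the 'extroversion' score is,
    in effect, just a function of mean pixel brightness."""
    return float(1 / (1 + np.exp(-(4 * face.mean() - 2))))

face = np.random.default_rng(7).random((64, 64))  # the same candidate...
dim = np.clip(face * 0.6, 0.0, 1.0)               # ...filmed in poor lighting
bright = np.clip(face * 1.4, 0.0, 1.0)            # ...and in good lighting

print(f"dim:    {toy_extroversion(dim):.2f}")     # ~0.31
print(f"bright: {toy_extroversion(bright):.2f}")  # ~0.64
```

Nothing about the candidate changes between the two calls; only the lighting does, yet the “personality” reading shifts substantially.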
