Is the selection of job candidates fair? Efficiency brings a risk of standardization

by Andrea

According to the World Economic Forum, more than 90% of employers use automated systems to filter or rank job applications, and 88% of companies already employ some form of AI for the initial screening of candidates. Consumer goods giant Unilever, for example, uses AI-based tools from HireVue to evaluate early-career candidates, saving 50,000 hours and more than $1 million in the process.

Most companies, when considering AI evaluation tools, focus on the efficiency and quality gains these tools bring. What they do not consider is how AI assessment can change candidates' behavior during the process.

New research, which analyzed over 13,000 participants across 12 studies, reveals that this is a crucial blind spot. The studies simulated various evaluation situations both in the laboratory and in the field, and were conducted in collaboration with Equalture, a startup that offers game-based hiring solutions.


The results show that candidates consistently emphasized analytical traits when they believed they were being evaluated by AI, while downplaying distinctly human qualities such as empathy, creativity, and intuition, which often distinguish exceptional employees from the merely competent.

This leads candidates to present a different and potentially more homogeneous version of themselves, changing who is likely to succeed in an AI-enabled hiring process, with implications for organizations that use AI in hiring, promotion, or admission decisions.

Why does this matter to your organization?


The implications of these findings go beyond individual hiring decisions. When candidates systematically distort how they present themselves, organizations face several critical challenges:

Talent Pool Distortion

Although AI is sometimes guilty of biased hiring decisions (for example, discrimination against women in the selection process), the research suggests that the mere fact of being evaluated by AI also generates bias in candidates, leading them to believe they should prioritize their analytical capabilities.

As a result, companies may be screening out exactly the candidates they need: the innovative thinker or emotionally intelligent leader you are looking for may present themselves as a rule-following analyst because they believe that is what the AI wants to see.


Compromised validity

Evaluation tools are only as good as the data they collect. When candidates strategically alter their answers, the fundamental validity of the evaluation process can be compromised. Organizations may no longer be measuring authentic capabilities; instead, they may be measuring what candidates think the AI values most.

Unintentional homogenization

If most candidates believe that AI favors analytical traits, the talent pipeline can become increasingly uniform, potentially undermining diversity initiatives and limiting the variety of perspectives in organizations. Companies like IBM and Hilton, which use these tools in both hiring and internal promotion, now need to grapple with whether they push employees toward standardized self-presentation.

New transparency regulations, such as the European Union's AI Act, which require organizations to disclose the use of AI in high-impact decisions, make these effects even more likely. When candidates know that an AI is evaluating them, they are more likely to change their behavior.


What leaders can do

Based on our findings, organizations can take several concrete measures to address the effect of AI evaluation:

Radical transparency

It is not enough to disclose that AI evaluation is taking place; be explicit about what it actually evaluates. Clearly communicate that your AI can and does value a range of traits, including creativity, emotional intelligence, and intuitive problem solving. This may include providing examples of successful candidates who demonstrated strong intuitive or creative abilities.


Today, few companies seem to be transparent about what exactly their AI evaluates; at the very least, this information is not easily accessible on the career pages of many large companies. Meanwhile, candidates discuss and share their hunches on blogs and in videos, which can be counterproductive because those hunches may or may not align with actual practice. We recommend that companies not leave their candidates to speculate.

Regular behavioral audits

Implement systematic reviews of AI evaluation results. For example, New York City passed Local Law 144, which requires employers to conduct annual bias audits of AI-based hiring tools. In response, HireVue, one of the leaders in the AI hiring market, publishes its recent audits for race and gender bias across different roles and use cases.

Beyond examining demographic biases, we suggest using these audits to look for patterns that indicate behavioral adaptation: Are candidates' answers becoming more homogeneous over time? Are you seeing a shift toward analytical self-presentation at the expense of other valuable traits?
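
As a purely illustrative aside, not something the researchers describe, one way to quantify that kind of homogenization is to measure how similar candidates' free-text answers are within each hiring cohort and watch the trend across cohorts. The sketch below assumes Python with pandas and scikit-learn, hypothetical per-cohort CSV files, and an "answer" column; all names are placeholders.

# Hypothetical sketch: track whether candidate answers grow more similar over time.
# Assumes one CSV per cohort with a free-text "answer" column; names are illustrative.
from itertools import combinations

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def mean_pairwise_similarity(answers: list[str]) -> float:
    """Average cosine similarity between all pairs of answers in one cohort."""
    vectors = TfidfVectorizer().fit_transform(answers)
    sims = cosine_similarity(vectors)
    pairs = list(combinations(range(len(answers)), 2))
    return sum(sims[i, j] for i, j in pairs) / len(pairs)


# Compare cohorts (e.g., one file per quarter); a rising score hints at homogenization.
for cohort_file in ["answers_2023_q4.csv", "answers_2024_q4.csv"]:
    answers = pd.read_csv(cohort_file)["answer"].dropna().tolist()
    print(cohort_file, round(mean_pairwise_similarity(answers), 3))

A rising average similarity score across cohorts would be one signal worth investigating further, not proof of behavioral adaptation on its own.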


Hybrid evaluation

Some organizations combine human and AI evaluations. Salesforce, for example, notes that a human reviews applications in addition to the technology. Nvidia and Philip Morris guarantee that the evaluation and final decision rest with a human. One of our studies shows that while this hybrid assessment reduces candidates' tendency to highlight analytical capabilities, it does not eliminate it. Closing this gap requires training human recruiters to compensate for the AI effect.

The way forward

As AI becomes increasingly embedded in organizational decisions, we must recognize that these tools do not just change processes; they change people. The efficiency gains of AI assessment can come at the cost of candidates' authentic self-presentation and, ultimately, the human diversity that makes organizations innovative and resilient. The irony is striking: in our quest to eliminate human bias in hiring, we may have created a system in which AI introduces a new form of bias.

The solution is not to abandon AI, but to design evaluation systems that account for and counter these behavioral changes. Only by keeping humans, not just metrics, at the center of our evaluation strategies can we build hiring systems that truly identify and cultivate the diverse talent our organizations need.
