
AI now decides who gets the job

Artificial intelligence models are trained on historical data, and if that data reflects past discriminatory hiring practices, the AI learns these biases as successful patterns

The use of artificial intelligence (AI) to automate the screening and selection of candidates has become the new operational standard for modern business. An estimated 99% of Fortune 500 companies now use some form of automation in their hiring process.

This adoption is accelerating at a breakneck pace, with 2024 seeing a 68.1% increase in the use of AI recruitment tools compared to 2023. A 2024 survey of chief human resources officers by BCG revealed that among companies experimenting with AI or Generative AI, 70% are doing so within the human resources department, and the single most common use case is talent acquisition.

Recruiters report that AI’s primary benefit is the time it saves (cited by 67%), and 86.1% say it makes the hiring process faster. The vast majority of firms (92%) report that they are already seeing productivity gains from its implementation.

To understand who gets hired, one must first understand the new technological gatekeepers that stand between an applicant and a human reviewer. The AI hiring market is not a single entity, but rather a fragmented, modular “gauntlet” of specialised tools that manage different stages of the recruitment funnel.

Artificial intelligence-powered tools like Fetcher and HireEZ automate the outreach process, acting as digital scouts that scan social media, professional networks, and talent pools to identify passive candidates who have not even applied for a job. These systems use AI to build a talent pipeline before a job description is even posted.

Once applications are received or candidates are engaged, the next layer of automation takes over. This is the domain of conversational AI assistants, dominated by platforms like Paradox and its chatbot “Olivia.” This AI handles the high-volume, repetitive tasks that consume recruiter time. It provides instant, 24/7 responses to candidate FAQs, pre-screens applicants based on initial criteria, and automates the complex logistics of interview scheduling.

Organisations using such conversational AI have seen a 3x improvement in application completion rates and a 25% rise in candidate satisfaction scores, as the process feels more responsive and “high-touch,” even at high volume.

The most critical and controversial stage is the AI-powered assessment. Here, two platforms in particular define the market. The first is HireVue, the leader in AI-powered video interviewing. In this system, candidates do not speak to a human.

Instead, they record their answers to a series of preset questions on camera. HireVue’s AI then analyses these recordings, using Natural Language Processing (NLP) to evaluate the content of the candidate’s answers, while also analysing verbal cues (like tone, tempo, and clarity) to infer confidence.

Most controversially, some systems claim to analyse non-verbal cues, such as facial expressions and body language, to generate “personality insights” based on psychological models like the “Big Five.” The AI then generates a “suitability” score, ranking the candidate for human review.
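
Stripped of the machine-learning machinery, a score of this kind reduces to a weighted blend of feature scores. The sketch below is a hypothetical, simplified illustration, not HireVue’s actual model; the features, weights, and scales are all assumptions made for the example.

# Hypothetical sketch of a video-interview "suitability" score: a weighted
# blend of an NLP content score and verbal-cue scores. Every feature and
# weight here is an illustrative assumption, not any vendor's real model.
from dataclasses import dataclass

@dataclass
class InterviewFeatures:
    answer_relevance: float  # NLP: how well the transcript matches the rubric (0-1)
    clarity: float           # verbal cue: articulation quality (0-1)
    tempo: float             # verbal cue: speaking pace, normalised to 0-1
    tone_confidence: float   # verbal cue: confidence inferred from tone (0-1)

# Illustrative weights; a production system would learn these from training
# data, which is exactly where historical bias can creep in.
WEIGHTS = {"answer_relevance": 0.5, "clarity": 0.2, "tempo": 0.1, "tone_confidence": 0.2}

def suitability_score(f: InterviewFeatures) -> float:
    """Collapse the feature vector into a single 0-100 score for ranking."""
    raw = (WEIGHTS["answer_relevance"] * f.answer_relevance
           + WEIGHTS["clarity"] * f.clarity
           + WEIGHTS["tempo"] * f.tempo
           + WEIGHTS["tone_confidence"] * f.tone_confidence)
    return round(100 * raw, 1)

print(suitability_score(InterviewFeatures(0.8, 0.7, 0.6, 0.9)))  # -> 78.0

That single number is precisely what makes the system opaque: the weighting that produced it is invisible to both the candidate and the recruiter.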

The second platform is Pymetrics, which bypasses the resume entirely. It operates on the premise that resumes are poor, biased proxies for skill. Instead, candidates play a series of neuroscience-based “games” designed to assess cognitive and emotional strengths like risk aversion, focus, and memory. This “soft skills” profile is then compared to a “golden” profile, which the AI has built by assessing the company’s own top-performing employees.
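
A simplified way to picture the matching step is to represent each profile as a vector of trait scores and measure how closely the candidate’s vector points in the same direction as the “golden” one. The trait names and numbers below are invented for illustration; the platform’s real features and method are proprietary.

# Simplified sketch of "golden profile" matching via cosine similarity.
# Trait scores are invented; real platforms use proprietary features.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

# "Golden" profile: average trait vector of the company's current top
# performers, e.g. [risk_tolerance, focus, memory, planning].
golden = [0.62, 0.81, 0.70, 0.55]
candidate = [0.58, 0.77, 0.64, 0.60]

print(f"fit score: {cosine_similarity(candidate, golden):.3f}")

Note the structural risk in this design: if the firm’s current top performers share a demographic skew, the golden profile quietly inherits it.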

Finally, after the games and video auditions, matching and ranking platforms like Klearskill and X0PA AI use AI-powered CV analysis and predictive analytics to “rank” all remaining candidates, producing a final shortlist of those deemed the “best” for the role.

Such systems, however, conceal a set of deep and legally perilous flaws. The root of the problem is the “black box” nature of these systems. Recruiters can see the input (a resume, a video) and the output (a score, a “no” decision), but the internal decision-making process, the “why” behind the AI’s judgment, is a complete mystery. These deep-learning models are often so complex that even their own creators cannot fully explain how they arrived at a specific outcome.

The absence of clarity creates a cascading chain of knock-on effects. The black box leads directly to a lack of transparency: candidates who are rejected have no way of knowing why, which undermines trust and creates perceptions of unfairness. This lack of transparency, in turn, makes “explainability,” the ability to articulate the reasoning behind a decision, impossible.

Such opacity is a significant legal challenge: regulations like the European Union’s (EU) General Data Protection Regulation (GDPR) give individuals rights over decisions made solely by automated means. Without transparency or explainability, there can be no accountability; when an AI system makes a mistake or a biased decision, it is nearly impossible to identify who is at fault.

In turn, this environment is the perfect breeding ground for algorithmic bias, which often operates by “laundering” existing human biases at an industrial scale. These models are trained on historical data, and if that data reflects past discriminatory hiring practices, the AI learns those biases as successful patterns. It then applies them with perfect, ruthless consistency.

And it has been proven repeatedly. In 2018, Amazon was forced to scrap an internal AI recruiting tool after discovering it had taught itself to be biased against women. Because the system was trained on 10 years of company resumes, which were dominated by men, it learnt to penalise any resume that contained the word “women’s,” such as “captain of the women’s chess club.”
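
The mechanism is easy to reproduce in miniature. The toy sketch below, which trains a scikit-learn classifier on a synthetic four-resume dataset, is not Amazon’s system or data; it simply shows how a model trained on biased historical labels learns a negative weight on a gendered token.

# Toy illustration of bias laundering: NOT Amazon's system. The four
# "historical" resumes and their hire/reject labels are synthetic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain chess club python engineer",           # hired
    "python engineer led robotics team",            # hired
    "captain women's chess club python engineer",   # rejected
    "women's coding society lead python engineer",  # rejected
]
hired = [1, 1, 0, 0]

# Keep apostrophes so "women's" survives tokenisation.
vec = CountVectorizer(token_pattern=r"[\w']+")
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight on "women's" is negative: the model has encoded the
# historical rejections as if the word itself signalled a weaker candidate.
idx = vec.vocabulary_["women's"]
print("weight on 'women's':", model.coef_[0][idx])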

A devastating study from the University of Washington in October 2024 tested three state-of-the-art Large Language Models (LLMs) from major companies (Mistral AI, Salesforce, and Contextual AI) on their ability to rank resumes. The study found “significant racial, gender, and intersectional bias.”

The results were stark: the systems preferred resumes with white-associated names 85% of the time, compared to 9% for black-associated names. The bias was also deeply intersectional. The systems never preferred names perceived as black male to names perceived as white male, revealing what researchers called a “really unique harm against black men” that was not visible when only looking at race or gender in isolation.
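
Audits of this kind typically work by holding the resume text constant and varying only the name. The sketch below illustrates that protocol in general terms; the names, resume text, and stub ranking function are placeholders, not the University of Washington study’s actual code or data.

# Sketch of a paired-name resume audit. `rank_resumes` is a stand-in for the
# model under audit (e.g. an LLM prompted to order resumes); this stub sorts
# alphabetically purely so the example runs end to end.
def rank_resumes(resumes: list[str]) -> list[str]:
    return sorted(resumes)

RESUME_BODY = "Software engineer, five years' experience, B.S. in Computer Science."

# Hypothetical name pairs; real audits draw on validated name-perception lists.
NAME_PAIRS = [
    ("Emily Walsh", "Lakisha Washington"),
    ("Greg Baker", "Jamal Robinson"),
]

def audit(pairs: list[tuple[str, str]], body: str) -> dict[str, int]:
    tally = {"white_associated": 0, "black_associated": 0}
    for white_name, black_name in pairs:
        top = rank_resumes([f"{white_name}. {body}", f"{black_name}. {body}"])[0]
        key = "white_associated" if white_name in top else "black_associated"
        tally[key] += 1
    return tally

print(audit(NAME_PAIRS, RESUME_BODY))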

Beyond bias, the very premise of some of these tools rests on a foundation of pseudoscience. Many experts and peer-reviewed studies warn that “emotion AI,” the practice of using facial expressions and tone to assess personality and “cultural fit,” is scientifically “unjustified.”

The European Union’s AI Act, adopted in 2024 after being proposed by the European Commission, identifies such emotion-recognition practices in the workplace, including recruiting, as posing an “unacceptable risk,” effectively labelling them a form of high-tech phrenology that violates human rights.

A data-hungry infrastructure also creates a massive privacy crisis. These tools require “vast amounts of personal data” to function. That information is often collected without clear or informed consent, or “scraped” from public-facing social media profiles.

The danger is not just the collection of data, but its inference. Artificial intelligence systems can analyse seemingly innocuous data points to infer highly sensitive, protected characteristics. For example, a 2024 legal case alleges that a company’s AI-powered personality tests were designed to screen out candidates with mental health disorders, such as anxiety and depression. This creates a new, undetectable vector for discrimination and opens employers to massive legal liability under privacy laws like the GDPR and Illinois’ Biometric Information Privacy Act (BIPA).

Thriving in the human-AI hybrid

The simplistic narrative of a “robot apocalypse” is being decisively refuted by macroeconomic data. The future of work is not one of joblessness, but of profound job transformation. The key findings from major economic bodies converge on a neutral-to-positive outlook for net employment.

The World Economic Forum’s (WEF) Future of Jobs Report 2025 projects that while 92 million roles will be displaced by 2030, a staggering 170 million new jobs will be created by macro trends like technology and the green transition.

This results in a net employment increase of 78 million jobs. This conclusion is mirrored by Gartner, which predicts AI’s impact on global jobs will remain “neutral” through 2026 and that by 2028, AI will create more jobs than it destroys.

Academic research from the Brookings Institution further supports this, finding that, contrary to common fears, AI adoption at the firm level is actually associated with firm growth and an increase in the workforce, not a reduction.

The real threat, therefore, is not mass unemployment but mass obsolescence. The work itself is being fundamentally re-architected. McKinsey analysis projects that up to 30% of current hours worked could be automated by 2030, a trend accelerated by Generative AI.

The change will necessitate “massive occupational transitions,” with an estimated 12 million workers in the United States alone needing to change jobs. Gartner describes the process as a “workforce transformation.”

Artificial intelligence will make some skills, such as summarisation and information retrieval, far less important, while creating an urgent need for “entirely new skills.”

The shift is different from past technological waves. For decades, technological change was “skill-biased”: it automated routine manual labour while rewarding cognitive work.

Generative AI, however, upends this paradigm by becoming adept at automating routine cognitive and creative tasks, disrupting white-collar work in an unprecedented way.

The disruption is causing a fundamental economic shift in what constitutes a “valuable” skill. As AI becomes commoditised and drives the cost of technical answers and routine analysis toward zero, economic value migrates to the human abilities that AI cannot replicate. These are the skills of judgment, creativity, and social intelligence.

A critical academic framework for understanding this shift comes from Harvard Business School. Research by Letian Zhang on “Nested Human Capital” argues that skills are “nested” in a cumulative and sequential structure.

At the foundation of this structure are the fundamental “soft” skills: communication, critical thinking, reading comprehension, and teamwork. More advanced, specific technical skills (like coding in a certain language or running a specific analysis) are “nested” on top of this foundation.

They cannot be built or sustained without it. The research’s most startling finding was that nearly 80% of the wage premium commanded by a specific technical skill was dependent on the employee’s mastery of those underlying foundational “soft” skills.

The implication for employers is stark. In an age of rapid AI disruption, “upskilling” with only technical skills is a failed strategy. To remain competitive, organisations must first invest in the fundamental human skills that serve as the foundation for all ongoing learning.

This is not just an academic theory. It is reflected in market demand. McKinsey’s 2030 labour model shows that as demand for physical and manual skills stabilises, demand for two categories will rise in tandem: technological skills and social and emotional skills.

Surveyed executives reinforce this, reporting that their most significant skill shortages are not just in data analytics, but in critical thinking, creativity, and the ability to teach and train others. The WEF’s 2025 report on the top 10 fastest-growing skills needed by 2030 confirms this dual-track future.

The list is a perfect blend of the technical and the human. On one hand, it includes “AI and big data,” “networks and cybersecurity,” and “technological literacy.” On the other, it is dominated by “creative thinking,” “resilience, flexibility and agility,” “curiosity and lifelong learning,” and “leadership and social influence.”

The future of work, therefore, is not one of humans versus AI, but humans with AI. The most valuable worker will be the one who can successfully operate in this new hybrid model. As Harvard Business School professor Karim Lakhani puts it, “AI won’t replace humans—but humans with AI will replace humans without AI.” This is the core of the “AI-First Leadership” mindset.

However, this human-AI collaboration is far more nuanced than simply “adding AI” to a workflow. A landmark meta-analysis of 370 results by researchers at MIT Sloan produced a surprising and critical finding. For decision-making tasks (like forecasting demand or diagnosing medical issues), the human-AI combination often performed worse than the best of either alone.

The researchers hypothesised that this is because humans are poor judges of when to trust an algorithm. They either blindly accept flawed artificial intelligence suggestions or, conversely, override correct AI suggestions with their own flawed intuition.

However, the same study found that the human-AI combination showed “promising synergy” and performed best in creative tasks. This suggests a new, more effective model for collaboration. The workflow must be redesigned, not just augmented.

Artificial intelligence systems should be leveraged for the subtasks they excel at, such as those that are repetitive, high-volume, and data-driven. Humans, in turn, must focus on the subtasks they excel at, including those requiring contextual understanding, complex social strategy, and emotional intelligence. This new model of partnership, dubbed “Superagency” by some researchers, is the key to unlocking the estimated $4.4 trillion in productivity that generative AI promises. It is not about automation, but about amplifying human agency.
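
One way to picture that redesigned workflow is as an explicit routing rule: the AI takes a subtask only when the work is repetitive and the model is demonstrably confident, and everything else stays with a human. The task fields and threshold in the sketch below are illustrative assumptions, not a prescription from the research.

# Minimal sketch of human-AI task routing. The fields and the confidence
# threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Subtask:
    name: str
    repetitive: bool          # high-volume, data-driven work suits the AI
    model_confidence: float   # calibrated confidence in the AI's output (0-1)

CONFIDENCE_THRESHOLD = 0.9    # assumed cutoff; tuned per task in practice

def route(task: Subtask) -> str:
    if task.repetitive and task.model_confidence >= CONFIDENCE_THRESHOLD:
        return "AI"
    return "human"  # contextual judgment, social strategy, emotional intelligence

for t in (Subtask("screen 5,000 applications", True, 0.95),
          Subtask("final-round culture interview", False, 0.40)):
    print(f"{t.name} -> {route(t)}")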
