A new ILO working paper on the use of artificial intelligence in human resource management finds that many AI tools deployed by organizations rest on unclear objectives, incomplete or biased data, and opaque programming processes. According to the authors, these weaknesses can distort HR decision-making, reinforce existing inequalities, and expose employers to significant legal and ethical risks.
The analysis highlights a structural challenge within HR management: the long-standing belief that quantification guarantees objectivity. As AI becomes more common in areas such as hiring, pay decisions, scheduling, and performance assessment, the paper warns that overreliance on data-driven systems without proper safeguards can lead to the uncritical deployment of technologies ill suited to managing people. This risk is compounded when organizations do not fully understand the limitations of the tools they adopt.
The paper stresses the need for a human-centred approach to AI adoption. It argues that the effectiveness of AI in HR depends heavily on the quality of data and clarity of goals guiding these systems. Without careful oversight, AI may unintentionally undermine fairness, transparency, and trust in the workplace.
In response, the publication proposes a framework for evaluating AI in HR and emphasizes the importance of stronger worker involvement, clearer governance structures, and more transparent design and implementation practices. It highlights the role of social dialogue in ensuring that AI technologies support decent work and protect fundamental labour rights.
Overall, the findings add to the ILO’s broader work on digital transformation, offering useful guidance for governments, employers, and workers aiming to embrace technological innovation while maintaining responsible and fair labour standards.