Artificial intelligence and discrimination have been widely discussed in recent years, yet incidents involving bias in AI systems continue to emerge. These biases can relate to race, age, gender, ethnicity, religion, nationality, disability, culture, socio-economic status, and geographical location. Rather than presenting a scientific analysis, this discussion reflects on the responsibilities surrounding AI systems within a human rights framework, drawing on findings from studies and articles that examine the relationship between AI technologies and social bias.
One example of AI bias comes from a 2023 study in the United States that examined how large language models generate job recommendation letters. Researchers asked two AI models to create reference letters for male and female candidates. The results revealed clear gender bias in the language used. Letters written for men often included terms associated with leadership, expertise, and professionalism, while letters written for women focused more on personality traits, appearance, or emotional characteristics. This contrast demonstrates how existing gender stereotypes can be reflected and reinforced by AI systems.
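One way such differences become measurable is through simple lexical audits of the generated text. The sketch below counts terms loosely associated with leadership and expertise versus terms associated with warmth and personality in two hypothetical letters; the word lists and letter texts are invented for illustration and do not come from the study described above.

```python
from collections import Counter
import re

# Illustrative word lists: terms loosely associated with leadership/expertise
# versus warmth/personality. These lists are examples only and are not taken
# from the study described above.
ABILITY_TERMS = {"leader", "expert", "professional", "skilled", "accomplished"}
WARMTH_TERMS = {"warm", "pleasant", "delightful", "caring", "cheerful"}

def term_counts(letter: str) -> dict:
    """Count how often each category of term appears in one letter."""
    words = Counter(re.findall(r"[a-z]+", letter.lower()))
    return {
        "ability": sum(words[w] for w in ABILITY_TERMS),
        "warmth": sum(words[w] for w in WARMTH_TERMS),
    }

# Hypothetical letters generated for two otherwise identical candidates.
letter_for_male = "He is a skilled professional and a natural leader."
letter_for_female = "She is a warm, pleasant and caring colleague."

for label, letter in [("male", letter_for_male), ("female", letter_for_female)]:
    print(label, term_counts(letter))
```

In practice, audits of this kind are run over many generated letters per group rather than a single pair, so that the contrast can be assessed statistically.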
Another example can be seen in the healthcare sector, where some AI models rely on datasets that represent only limited populations. In many cases, health data primarily reflects certain regions or demographic groups while excluding communities from other parts of the world. The lack of diversity among AI researchers and developers compounds the problem, since it can lead to biased data collection and analysis. When people from marginalized communities or disadvantaged socio-economic backgrounds are underrepresented in research and development teams, the resulting AI systems may fail to account for a wide range of perspectives and needs.
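A basic way to surface this kind of gap is to audit how well each region or demographic group is represented in the training data before a model is built. The sketch below uses a hypothetical patient table and an arbitrary 10% threshold to flag underrepresented groups; both the data and the threshold are illustrative assumptions.

```python
import pandas as pd

# Hypothetical patient records; a real audit would use the actual training data
# and documented demographic categories.
records = pd.DataFrame({
    "region": ["North America"] * 70 + ["Europe"] * 25 + ["Sub-Saharan Africa"] * 5,
    "outcome": [1, 0] * 50,
})

# Share of each region in the data.
shares = records["region"].value_counts(normalize=True)
print(shares)

# Flag any group falling below an arbitrary, illustrative 10% threshold.
underrepresented = shares[shares < 0.10]
print("Underrepresented groups:", list(underrepresented.index))
```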
The increasing use of AI in professional and everyday contexts raises concerns that these technologies could reinforce or amplify existing forms of discrimination if they are applied without critical analysis. AI systems are built from code and data, and the reliability of their outcomes rests heavily on the quality of those inputs. Ethical considerations, diversity, and inclusion are therefore essential components of responsible AI development.
Another factor influencing AI bias is the limited representation of women and gender-diverse individuals in technical roles such as data science, engineering, and machine learning. When development teams lack diversity, the perspectives shaping algorithms and datasets may be narrow, increasing the likelihood of biased outcomes. Building diverse teams with varied experiences and viewpoints is therefore crucial to ensuring that AI systems are designed in a more inclusive and balanced way.
Monitoring and documenting AI-related incidents is an important step toward identifying patterns of bias and developing strategies to address them. Databases that track these incidents allow researchers and policymakers to assess the risks and harms associated with AI systems. Such documentation can inform public policy decisions and guide the design of future technologies to minimize discrimination and social harm.
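As a rough illustration of what such documentation can enable, the sketch below defines a deliberately simplified incident record and aggregates a couple of invented entries by harm type; real registries, such as the AI Incident Database, use much richer taxonomies and review processes.

```python
from dataclasses import dataclass
from datetime import date

# A deliberately simplified, hypothetical incident record; the entries below
# are invented examples, not documented cases.
@dataclass
class Incident:
    reported_on: date
    system: str
    affected_group: str
    harm_type: str
    summary: str

incidents = [
    Incident(date(2023, 5, 2), "hiring-screener", "women", "allocation",
             "Qualified applicants ranked lower due to gendered language."),
    Incident(date(2023, 9, 14), "triage-model", "rural patients", "quality-of-service",
             "Risk scores less accurate for groups missing from the training data."),
]

# Aggregate by harm type to surface recurring patterns for researchers and policymakers.
by_harm: dict[str, int] = {}
for incident in incidents:
    by_harm[incident.harm_type] = by_harm.get(incident.harm_type, 0) + 1
print(by_harm)
```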
Continuous monitoring throughout the entire lifecycle of AI systems—from design and development to deployment—is necessary to ensure that diversity, inclusion, and human rights principles are consistently integrated. As AI technologies become more embedded in society, they present new challenges in the field of human rights, requiring careful oversight and responsible innovation to prevent unintended social consequences.
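In practice, this kind of oversight often includes routine quantitative checks on deployed decisions. The sketch below assumes the system logs each decision together with a group label and computes a demographic parity gap for each batch; the group labels, the metric, and the 0.1 alert threshold are all illustrative choices rather than requirements.

```python
# A minimal monitoring sketch: it assumes the deployed system logs each decision
# with a group label, and that a demographic parity gap is a useful check.
# The metric and the 0.1 threshold are illustrative assumptions.

def positive_rate(decisions: list[tuple[str, int]], group: str) -> float:
    """Fraction of positive decisions (1) received by one group."""
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def parity_gap(decisions: list[tuple[str, int]], group_a: str, group_b: str) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    return abs(positive_rate(decisions, group_a) - positive_rate(decisions, group_b))

# Hypothetical batch of logged decisions: (group label, decision).
batch = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

gap = parity_gap(batch, "A", "B")
if gap > 0.1:  # illustrative alert threshold
    print(f"Fairness alert: demographic parity gap = {gap:.2f}")
```

Automated checks of this kind complement, rather than replace, the broader human rights review described above.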




