
How Artificial Intelligence Influences Gender Discrimination

Dated: March 6, 2026

Artificial intelligence and discrimination have been widely discussed in recent years, yet incidents involving bias in AI systems continue to emerge. These biases can relate to race, age, gender, ethnicity, religion, nationality, disability, culture, socio-economic status, and geographical location. Rather than presenting a scientific analysis, this article reflects on the responsibilities surrounding AI systems within a human rights framework, drawing on findings from studies and articles that examine the relationship between AI technologies and social bias.

One example of AI bias comes from a 2023 study in the United States that examined how large language models generate job recommendation letters. Researchers asked two AI models to create reference letters for male and female candidates. The results revealed clear gender bias in the language used. Letters written for men often included terms associated with leadership, expertise, and professionalism, while letters written for women focused more on personality traits, appearance, or emotional characteristics. This contrast demonstrates how existing gender stereotypes can be reflected and reinforced by AI systems.
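The kind of audit described above can be sketched in a few lines: generate letters for otherwise identical prompts that differ only by gender, then count stereotypically "agentic" versus "communal" trait words in each. The word lists and sample letters below are illustrative placeholders, not the study's actual lexicon or data.

```python
import re
from collections import Counter

# Illustrative word lists -- stand-ins for the validated lexicons
# used in bias-audit research, not the study's actual vocabulary.
AGENTIC = {"leader", "expert", "professional", "ambitious", "decisive"}
COMMUNAL = {"warm", "kind", "pleasant", "delightful", "caring"}

def trait_counts(letter: str) -> Counter:
    """Count agentic vs. communal trait words in one letter."""
    words = re.findall(r"[a-z]+", letter.lower())
    counts = Counter()
    for w in words:
        if w in AGENTIC:
            counts["agentic"] += 1
        elif w in COMMUNAL:
            counts["communal"] += 1
    return counts

# Hypothetical model outputs for two otherwise identical prompts.
letter_m = "He is a decisive leader and a true expert in his field."
letter_f = "She is warm, kind, and a delightful presence in the office."

print(trait_counts(letter_m))  # agentic terms dominate
print(trait_counts(letter_f))  # communal terms dominate
```

A systematic audit would of course use many prompts, validated lexicons, and statistical tests, but even this toy comparison shows how lexical skew between paired letters can be made measurable.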

Another example can be seen in the healthcare sector, where some AI models rely on datasets that represent only limited populations. In many cases, health data primarily reflects certain regions or demographic groups, excluding communities from other parts of the world. Additionally, the lack of diversity among AI researchers and developers can lead to biased data collection and analysis. When individuals from marginalized or socioeconomically diverse backgrounds are underrepresented in research and development teams, the resulting AI systems may fail to account for a wide range of perspectives and needs.
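One practical first step against the dataset gap described above is simply measuring how groups are represented before a model is trained. The sketch below assumes a list of records with a self-reported demographic field; the field name and sample data are hypothetical.

```python
from collections import Counter

# Hypothetical health records; in practice these would come from
# the training dataset under review.
records = [
    {"region": "north_america", "outcome": 1},
    {"region": "north_america", "outcome": 0},
    {"region": "north_america", "outcome": 1},
    {"region": "sub_saharan_africa", "outcome": 0},
]

def representation(records, field):
    """Share of records per group -- a first-pass coverage check."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

shares = representation(records, "region")
for group, share in sorted(shares.items()):
    print(f"{group}: {share:.0%}")
# A heavily skewed distribution signals that model performance
# should be evaluated separately for underrepresented groups.
```

Such a check does not fix bias by itself, but it makes the skew visible early enough to inform data collection and evaluation plans.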

The increasing use of AI in professional and everyday contexts raises concerns that these technologies could reinforce or amplify existing forms of discrimination if they are applied without critical analysis. AI systems are built from code and trained on datasets, and the reliability of their outcomes depends heavily on the quality of these inputs. Ethical considerations, diversity, and inclusion are therefore essential components of responsible AI development.

Another factor influencing AI bias is the limited representation of women and gender-diverse individuals in technical roles such as data science, engineering, and machine learning. When development teams lack diversity, the perspectives shaping algorithms and datasets may be narrow, increasing the likelihood of biased outcomes. Building diverse teams with varied experiences and viewpoints is therefore crucial to ensuring that AI systems are designed in a more inclusive and balanced way.

Monitoring and documenting AI-related incidents is an important step toward identifying patterns of bias and developing strategies to address them. Databases that track these incidents allow researchers and policymakers to assess the risks and harms associated with AI systems. Such documentation can inform public policy decisions and guide the design of future technologies to minimize discrimination and social harm.

Continuous monitoring throughout the entire lifecycle of AI systems—from design and development to deployment—is necessary to ensure that diversity, inclusion, and human rights principles are consistently integrated. As AI technologies become more embedded in society, they present new challenges in the field of human rights, requiring careful oversight and responsible innovation to prevent unintended social consequences.

Related Posts

  • AI for Gender Equality: Addressing Disparities and Biases in Development
  • Smart Home Technologies for Family Safety and Convenience
  • AI-Powered Early Warning Systems for Natural Disasters
  • A Project on "Building AI Systems to Fight Racial and Gender Discrimination"

© NGOs.AI. All rights reserved.

Grants Management And Research Pte. Ltd., 21 Merchant Road #04-01 Singapore 058267