NGOs.AI


How Artificial Intelligence Influences Gender Discrimination

Dated: March 6, 2026

Artificial intelligence and discrimination have been widely discussed in recent years, yet incidents involving bias in AI systems continue to emerge. These biases can relate to race, age, gender, ethnicity, religion, nationality, disability, culture, socio-economic status, and geographical location. This article does not present a scientific analysis; instead, it reflects on the responsibilities that come with AI systems within a human rights framework, drawing on findings from various studies and articles that examine the relationship between AI technologies and social bias.

One example of AI bias comes from a 2023 study in the United States that examined how large language models generate job recommendation letters. Researchers asked two AI models to create reference letters for male and female candidates. The results revealed clear gender bias in the language used. Letters written for men often included terms associated with leadership, expertise, and professionalism, while letters written for women focused more on personality traits, appearance, or emotional characteristics. This contrast demonstrates how existing gender stereotypes can be reflected and reinforced by AI systems.
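The kind of disparity such studies measure can be approximated with a simple lexicon-based audit: counting "agentic" terms (competence, leadership) against "communal" terms (warmth, personality) in each generated letter. The word lists and sample letters below are illustrative assumptions for a minimal sketch, not the actual methodology or data of the 2023 study.

```python
from collections import Counter
import re

# Illustrative word lists (assumptions, not the study's actual lexicons).
AGENTIC = {"leader", "expert", "professional", "skilled", "ambitious"}
COMMUNAL = {"warm", "pleasant", "kind", "delightful", "caring"}

def audit_letter(text: str) -> dict:
    """Count agentic vs. communal terms in a reference letter."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return {
        "agentic": sum(words[w] for w in AGENTIC),
        "communal": sum(words[w] for w in COMMUNAL),
    }

# Hypothetical model outputs, shortened for illustration.
letter_m = "He is a skilled leader and a true expert in his field."
letter_f = "She is a warm, pleasant and caring colleague."

print(audit_letter(letter_m))  # {'agentic': 3, 'communal': 0}
print(audit_letter(letter_f))  # {'agentic': 0, 'communal': 3}
```

A systematic skew like this across many generated letters, rather than any single pair, is what signals stereotype reinforcement.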

Another example can be seen in the healthcare sector, where some AI models rely on datasets that represent only limited populations. In many cases, health data primarily reflects certain regions or demographic groups, excluding communities from other parts of the world. Additionally, the lack of diversity among AI researchers and developers can lead to biased data collection and analysis. When individuals from marginalized or less privileged socio-economic backgrounds are underrepresented in research and development teams, the resulting AI systems may fail to account for a wide range of perspectives and needs.

The increasing use of AI in professional and everyday contexts raises concerns that these technologies could reinforce or amplify existing forms of discrimination if they are applied without critical analysis. AI systems are built from code and trained on data, and the reliability of their outcomes depends heavily on the quality of these inputs. Ethical considerations, diversity, and inclusion are therefore essential components of responsible AI development.

Another factor influencing AI bias is the limited representation of women and gender-diverse individuals in technical roles such as data science, engineering, and machine learning. When development teams lack diversity, the perspectives shaping algorithms and datasets may be narrow, increasing the likelihood of biased outcomes. Building diverse teams with varied experiences and viewpoints is therefore crucial to ensuring that AI systems are designed in a more inclusive and balanced way.

Monitoring and documenting AI-related incidents is an important step toward identifying patterns of bias and developing strategies to address them. Databases that track these incidents allow researchers and policymakers to assess the risks and harms associated with AI systems. Such documentation can inform public policy decisions and guide the design of future technologies to minimize discrimination and social harm.

Continuous monitoring throughout the entire lifecycle of AI systems—from design and development to deployment—is necessary to ensure that diversity, inclusion, and human rights principles are consistently integrated. As AI technologies become more embedded in society, they present new challenges in the field of human rights, requiring careful oversight and responsible innovation to prevent unintended social consequences.



© NGOs.AI. All rights reserved.

Grants Management And Research Pte. Ltd., 21 Merchant Road #04-01 Singapore 058267
