
NGOs.AI

AI in Action


How Artificial Intelligence Influences Gender Discrimination

Dated: March 6, 2026

Artificial intelligence and discrimination have been widely discussed in recent years, yet incidents of bias in AI systems continue to emerge. These biases can relate to race, age, gender, ethnicity, religion, nationality, disability, culture, socio-economic status, and geographical location. Rather than offering a scientific analysis, this article reflects on the responsibilities surrounding AI systems within a human rights framework, drawing on findings from studies and articles that examine the relationship between AI technologies and social bias.

One example of AI bias comes from a 2023 study in the United States that examined how large language models generate job recommendation letters. Researchers asked two AI models to create reference letters for male and female candidates. The results revealed clear gender bias in the language used. Letters written for men often included terms associated with leadership, expertise, and professionalism, while letters written for women focused more on personality traits, appearance, or emotional characteristics. This contrast demonstrates how existing gender stereotypes can be reflected and reinforced by AI systems.
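Studies of this kind typically compare the language of generated letters against lexicons of stereotypically "agentic" (leadership-oriented) versus "communal" (personality-oriented) words. The following is a minimal, hypothetical sketch of that counting step; the word lists and example letters are illustrative inventions, not the study's actual data or lexicon.

```python
# Hypothetical word lists for two stereotype categories (illustrative only).
AGENTIC = {"leader", "expert", "professional", "ambitious", "analytical"}
COMMUNAL = {"warm", "kind", "pleasant", "cheerful", "delightful"}

def category_counts(letter: str) -> dict:
    """Count how many words in a letter fall into each stereotype category."""
    words = [w.strip(".,;:").lower() for w in letter.split()]
    return {
        "agentic": sum(w in AGENTIC for w in words),
        "communal": sum(w in COMMUNAL for w in words),
    }

# Invented example letters showing the contrast the study describes.
letter_for_man = "John is a natural leader and a true expert, highly professional."
letter_for_woman = "Mary is a warm, kind and cheerful colleague, always pleasant."

print(category_counts(letter_for_man))    # {'agentic': 3, 'communal': 0}
print(category_counts(letter_for_woman))  # {'agentic': 0, 'communal': 4}
```

A skewed ratio of agentic to communal words across many generated letters is one simple, measurable signal of the kind of gender bias the researchers reported.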

Another example can be seen in the healthcare sector, where some AI models rely on datasets that represent only limited populations. In many cases, health data primarily reflects certain regions or demographic groups, excluding communities from other parts of the world. Additionally, the lack of diversity among AI researchers and developers can lead to biased data collection and analysis. When individuals from marginalized or socioeconomically diverse backgrounds are underrepresented in research and development teams, the resulting AI systems may fail to account for a wide range of perspectives and needs.

The increasing use of AI in professional and everyday contexts raises concerns that these technologies could reinforce or amplify existing forms of discrimination if applied without critical analysis. AI systems are built from code and training data, and the reliability of their outcomes rests heavily on the quality of those inputs. Ethical considerations, diversity, and inclusion are therefore essential components of responsible AI development.

Another factor influencing AI bias is the limited representation of women and gender-diverse individuals in technical roles such as data science, engineering, and machine learning. When development teams lack diversity, the perspectives shaping algorithms and datasets may be narrow, increasing the likelihood of biased outcomes. Building diverse teams with varied experiences and viewpoints is therefore crucial to ensuring that AI systems are designed in a more inclusive and balanced way.

Monitoring and documenting AI-related incidents is an important step toward identifying patterns of bias and developing strategies to address them. Databases that track these incidents allow researchers and policymakers to assess the risks and harms associated with AI systems. Such documentation can inform public policy decisions and guide the design of future technologies to minimize discrimination and social harm.
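To make this concrete, a sketch of the kind of record such a database might store and the aggregation that surfaces recurring patterns is shown below. The field names and example entries are hypothetical illustrations, not the schema of any real incident database.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AIIncident:
    """A single documented AI harm (fields are illustrative)."""
    system: str      # the AI system involved
    harm_type: str   # e.g. "gender bias", "racial bias"
    sector: str      # e.g. "hiring", "healthcare"
    year: int

def bias_pattern(incidents: list[AIIncident]) -> Counter:
    """Aggregate incidents by harm type to reveal recurring patterns."""
    return Counter(i.harm_type for i in incidents)

# Invented log entries for illustration.
log = [
    AIIncident("resume screener", "gender bias", "hiring", 2023),
    AIIncident("diagnostic model", "racial bias", "healthcare", 2024),
    AIIncident("letter generator", "gender bias", "hiring", 2023),
]
print(bias_pattern(log))  # Counter({'gender bias': 2, 'racial bias': 1})
```

Even a simple tally like this lets researchers and policymakers see which harm types recur most often and in which sectors, which is exactly the kind of evidence that can inform public policy.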

Continuous monitoring throughout the entire lifecycle of AI systems—from design and development to deployment—is necessary to ensure that diversity, inclusion, and human rights principles are consistently integrated. As AI technologies become more embedded in society, they present new challenges in the field of human rights, requiring careful oversight and responsible innovation to prevent unintended social consequences.

Related Posts

  • AI for Gender Equality: Addressing Disparities and Biases in Development
  • Smart Home Technologies for Family Safety and Convenience
  • AI-Powered Early Warning Systems for Natural Disasters
  • A Project on “Building AI Systems to Fight Racial and Gender Discrimination”


© NGOs.AI. All rights reserved.

Grants Management And Research Pte. Ltd., 21 Merchant Road #04-01 Singapore 058267
