New research from the University of East Anglia has raised concerns about the growing use of AI-generated imagery in NGO communications, suggesting that it may unintentionally undermine the very causes organisations are trying to promote. The study examined 171 AI-generated images used by 17 major organisations, including Amnesty International, Plan International, and WWF, along with more than 400 public comments those images received. Fewer than one in five comments engaged with the humanitarian issue being highlighted; most instead debated whether the images were real or picked apart flaws in their technical quality, leaving the core message overshadowed.
The research argues that trust, which is central to the NGO sector, may be at risk when organisations rely on AI-generated visuals. Although such imagery is fast, flexible, and increasingly affordable, it can quietly erode public confidence. Even transparency did not fully solve the issue. Despite 85% of the images in the study being clearly labelled as AI-generated, audiences still reacted with scepticism, often scrutinising the images for inaccuracies and questioning the ethics behind their creation rather than responding to the underlying cause.
The study also highlights the risk of a mismatch between an organisation’s values and the tools it uses to communicate. WWF Denmark, for example, faced backlash after using energy-intensive AI tools in a sustainability campaign, with supporters arguing that the method contradicted the organisation’s environmental mission. This kind of “message-medium misalignment” can damage credibility, especially when AI-generated visuals are seen as inconsistent with an NGO’s ethical, social, or environmental commitments. Critics have also pointed out that such tools may threaten the livelihoods of local photographers and filmmakers, and that AI-generated films in particular can demand significantly more energy than still images.
At the same time, the research acknowledges that AI-generated imagery can be valuable in ethically sensitive contexts, particularly when working with survivors of conflict, abuse, or displacement, where traditional photography or filming may risk harm, retraumatisation, or privacy violations. In such cases, synthetic visuals may offer a safer alternative. Even here, however, the study notes a tension: some donors and audiences still value “authentic” imagery more highly than participant privacy, so organisations must weigh carefully how such choices affect supporter trust and emotional connection.
Rather than calling for a ban on AI in NGO communications, the study encourages more thoughtful and responsible use. It recommends that organisations develop clear policies outlining when AI-generated imagery is appropriate, how it should be reviewed, and how it must be disclosed. It also stresses the need to train communications teams so they understand the ethical implications of choices around representation, including skin tone, clothing, cultural markers, and setting, all of which shape how communities are perceived.
The findings further suggest that NGOs should avoid highly photorealistic AI visuals, as these tend to attract the most scrutiny and backlash. Instead, more stylised, illustrative, or clearly non-photographic visuals may be better received by audiences. The study also emphasises the importance of involving the communities being represented in the creative process, allowing them to help shape prompts, review outputs, and approve final images so that the resulting visuals reflect lived realities rather than assumptions from outside.
Finally, the research urges NGOs to move beyond the narrow, repetitive charity tropes that AI tends to reproduce, such as images of poverty, crisis, and vulnerable children, and instead tell broader, more nuanced stories that reflect the diversity, resilience, and complexity of the communities they serve. At a time when public trust in institutions is already fragile and audiences are increasingly quick to spot synthetic content, the study concludes that AI is not inherently harmful to humanitarian storytelling. Using it as a shortcut to emotional engagement, however, carries significant reputational risks that organisations can no longer afford to ignore.