Humanitarian Alternatives is preparing its 32nd issue, scheduled for July 2026, and is issuing a call for papers for the edition’s provisional focus, “Artificial intelligence: uses, tensions and issues.” Researchers, practitioners, and observers in the fields of humanitarian action and international solidarity are invited to submit a summary of their proposed article, a provisional structure, and a short biography of the author(s) by 30 January 2026. Selected contributors will be asked to submit full articles in French or English by 26 May 2026, with an expected length of approximately 15,000 characters, or roughly 2,400 words in French and 2,200 words in English. Seven to nine articles will be chosen to form the focus section of this issue, while additional submissions on other themes in humanitarian action and solidarity are welcome for publication in other sections of the review.
Artificial intelligence (AI) is now an integral part of the humanitarian sector, influencing both office operations and field practices. Its rapid adoption has outpaced our collective understanding of it, making critical examination necessary. While AI offers significant efficiency gains, it also transforms operational contexts, decision-making processes, and the foundational relationships between aid organizations and the people they serve. The aim of this edition is therefore to assess this ongoing transformation critically, neither embracing uncritical technophilia nor rejecting change out of hand.
AI is reshaping operational contexts, particularly in conflict and disaster scenarios. Its use in modern warfare, such as targeting systems and autonomous drones, raises pressing questions about the applicability of international humanitarian law and the protection of civilians. Similarly, in disaster response, AI offers predictive tools for mapping damage and analyzing data in real time, but reliance on these models carries risks of hasty decisions, bias, and marginalization of local knowledge. Papers are sought that explore how AI is altering the rules of engagement, humanitarian access, and responsibilities in these complex environments, drawing on recent experiences and lessons learned.
Beyond contexts, AI is significantly affecting humanitarian jobs, professional hierarchies, and skill sets. Automation can free workers from repetitive tasks, yet it also redefines the role of human assessors, fundraisers, and logisticians. Questions arise about the redistribution of skills, the ethical interpretation of algorithmic outputs, and the potential for technological advantages to favor organizations already proficient in AI tools. Submissions should analyze how organizations manage these transitions, the skills that are emerging or becoming obsolete, and the training or policies needed to support the workforce in this evolving landscape.
The interdisciplinary challenges posed by AI extend to perceptions of reality, decision-making, inequality, and resilience. AI can produce convincing simulations of reality, affecting trust and credibility in humanitarian work. It may also influence moral and ethical decisions, yet it cannot bear responsibility for them, making safeguards and accountability essential. Furthermore, AI could exacerbate inequalities between organizations, regions, and languages. Contributors are invited to examine these risks and explore strategies for inclusive, resilient, and ethically grounded AI use that preserves humanitarian expertise and sovereignty.
Humanitarian Alternatives encourages diverse approaches in submissions, combining analytical rigor with grounded experience. Cross-disciplinary perspectives integrating international law, ethics, political science, organizational sociology, and technology studies are particularly welcome. Papers documenting real-life experiments with AI, including successes, failures, and ethical dilemmas, are highly valued. Analyses of regulatory frameworks, governance gaps, and future-oriented reflections on AI in humanitarian work are also encouraged, provided they are informed by practical knowledge and avoid hasty generalizations. The goal is to foster a nuanced understanding of AI’s impact on the sector, emphasizing lessons learned, limitations, and opportunities for responsible use.