Explainable AI and Algorithmic Diplomacy: The EL-MDS Framework for Transparent and Equitable Multilateral Decision-Making
Abstract
The escalating integration of Artificial Intelligence into high-stakes global governance contexts, from
refugee allocation to international trade agreements, creates profound opacity that erodes stakeholder
trust and perpetuates epistemic inequalities and data colonialism, with particular impact on the Global
South. This study introduces the Explainability Layers and Legibility of Multilateral Decision Systems
(EL-MDS) framework, a novel socio-technical architecture designed to enhance transparency,
accountability, and legitimacy in AI-driven multilateral decision-making. Employing a convergent
parallel mixed-methods design, which integrates quantitative simulation experiments with qualitative
analysis of secondary data and expert insights, the research evaluates the impact of EL-MDS on
stakeholder trust and policy utility. Results reveal that EL-MDS significantly improves the perceived
legitimacy and practical applicability of AI outputs, demonstrating its capacity to sustain higher trust
levels even at elevated degrees of transparency. Crucially, ethical AI practices enhance trust only when
mediated by robust
organizational capabilities. EL-MDS offers a transferable governance blueprint for the ethical
integration of AI and operationalizes "algorithmic diplomacy," fostering equitable global AI policy and
combating the Digital Cantillon Effect.
License
Copyright (c) 2026 Ahmed Jajere

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.