Explainable AI and Algorithmic Diplomacy: The EL-MDS Framework for Transparent and Equitable Multilateral Decision-Making

Authors

  • Ahmed Jajere

Abstract

The escalating integration of Artificial Intelligence into high-stakes global governance contexts, from
refugee allocation to international trade agreements, engenders profound opacity, eroding stakeholder
trust and perpetuating epistemic inequalities and data colonialism, with particular impact on the Global
South. This study introduces the Explainability Layers and Legibility of Multilateral Decision Systems
(EL-MDS) framework, a novel socio-technical architecture designed to enhance transparency,
accountability, and legitimacy in AI-driven multilateral decision-making. Employing a convergent
parallel mixed-methods design that integrates quantitative simulation experiments with qualitative
analysis of secondary data and expert insights, the research evaluates the impact of EL-MDS on
stakeholder trust and policy utility. Results reveal that EL-MDS significantly improves the perceived
legitimacy and practical applicability of AI outputs, demonstrating its capacity to sustain higher trust
even at elevated levels of transparency. Crucially, ethical AI practices enhance trust only when mediated
by robust organizational capabilities. EL-MDS offers a transferable governance blueprint for the ethical
integration of AI and operationalizes "algorithmic diplomacy," fostering equitable global AI policy and
combating the Digital Cantillon Effect.

Published

2026-01-13

How to Cite

Jajere, A. (2026). Explainable AI and Algorithmic Diplomacy: The EL-MDS Framework for Transparent and Equitable Multilateral Decision-Making. Global Journal of Business and Integral Security, 8(2). Retrieved from http://gbis.ch/index.php/gbis/article/view/936

Section

Articles