Unveiling the Black Box: Explainable AI and the Diffusion of Machine Learning in Algorithmic Trading
Abstract
Machine learning (ML) is increasingly integrated into algorithmic trading, offering significant gains in predictive accuracy and real-time adaptability. However, widespread adoption is constrained by critical barriers, including model complexity, the “black-box” problem of limited transparency, and regulatory concerns. This paper investigates how emerging paradigms such as Explainable AI (XAI) can address these challenges and shape future adoption trends. Employing a qualitative case study methodology guided by the Diffusion of Innovations (DOI) theory, this research analyses the diffusion of these technologies through the dimensions of relative advantage, compatibility, complexity, trialability, and observability. Findings reveal that while the existing literature recognizes ML’s capacity to enhance decision-making accuracy, few studies analyse whether transparency-driven ML structures can accelerate systemic adoption. This study presents a framework connecting advanced AI capabilities with the practical requirements of user trust and regulatory compliance. The results highlight the pivotal role of XAI not only in mitigating the “black-box” barrier but also in aligning AI-driven trading strategies with risk management and regulatory requirements.
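To make the “black-box” barrier concrete, the sketch below is a minimal, hypothetical illustration rather than an implementation from the study: a tree-based classifier produces a trading signal, and a post-hoc XAI method (SHAP) attributes each prediction to its input features. The use of the `shap` and `scikit-learn` libraries, the feature names, and the synthetic data are all assumptions introduced for illustration.

```python
# Hypothetical sketch: explaining a "black-box" trading signal with SHAP.
# Data, features, and model choice are illustrative, not from the study.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

# Synthetic feature matrix: momentum, volatility, volume z-score, spread.
feature_names = ["momentum", "volatility", "volume_z", "spread"]
X = rng.normal(size=(500, 4))
# Synthetic label: 1 = "buy" signal, driven mostly by momentum here.
y = (X[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# SHAP decomposes each opaque prediction into per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Print a human-readable rationale for the first prediction.
for name, contrib in zip(feature_names, shap_values[0]):
    print(f"{name:>10}: {contrib:+.3f}")
```

Output of this kind (a signed contribution per feature for each trade signal) is the sort of artefact that can be shown to traders, risk managers, and regulators, which is how XAI connects predictive capability to the trust and compliance requirements discussed above.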