Explainable Artificial Intelligence in Decision-Critical Systems: A Computational and Ethical Perspective
Keywords:
Explainable AI, Artificial Intelligence, Machine Learning, Transparency, Ethical AI, Decision Support Systems

Abstract
Artificial intelligence has become an inseparable part of contemporary computing systems and now shapes decision-making across many areas of life, including healthcare, finance, transportation, education, and governance. Despite spectacular progress in predictive accuracy and automation, most modern AI systems, especially deep learning models, remain opaque, "black-box" in nature, offering only partial insight into how their decisions are reached. This opacity poses serious technical, ethical, and societal challenges, particularly in applications where accountability, fairness, and trust are paramount. This paper explores the concept of Explainable Artificial Intelligence (XAI) through a computer science lens, covering its theoretical foundations, computational principles, and applications. It discusses why explainability is necessary, how it can be achieved in a computationally sound manner, and what the limitations of existing XAI methods are. By synthesizing algorithmic approaches and theoretical insights from the academic literature, the paper argues for explainability as a key technical imperative for the sustainable deployment of AI systems.
License
Copyright (c) 2026 Stanzaleaf International Journal of Multidisciplinary Studies

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.