Explainable AI for Decision-Making: A Hybrid Approach to Trustworthy Computing

Authors

  • Bakkiyaraj Kanthimathi Malamuthu, Vice President, Department of Cybersecurity, Risk and Resilience, Morgan Stanley Services Inc., 1633 Broadway, New York, NY 10019
  • Thripthi P Balakrishnan, Assistant Professor, Department of Computer Science & Engineering, Madanapalle Institute of Technology & Science, Angallu (V), Madanapalle-517325, Annamayya District, Andhra Pradesh, India
  • R. Kumar, Professor, Department of Computer Science, Kristu Jayanti College, Bangalore
  • Naveenkumar P, Assistant Professor, Artificial Intelligence and Data Science, S.A. Engineering College
  • B. Venkataramanaiah, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India
  • V. Malathy, Associate Professor, Department of ECE, SR University, Warangal

DOI:

https://doi.org/10.22399/ijcesen.1684

Abstract

In the evolving landscape of intelligent systems, ensuring transparency, fairness, and trust in artificial intelligence (AI) decision-making is paramount. This study presents a hybrid Explainable AI (XAI) framework that integrates rule-based models with deep learning techniques to enhance interpretability and trustworthiness in critical computing environments. The proposed system employs Layer-Wise Relevance Propagation (LRP) and SHAP (SHapley Additive exPlanations) for local and global interpretability, respectively, while leveraging a Convolutional Neural Network (CNN) backbone for accurate decision-making across diverse domains, including healthcare, finance, and cybersecurity. The hybrid model achieved an average accuracy of 94.3%, a precision of 91.8%, and an F1-score of 93.6%, while maintaining a computational overhead of only 6.7% compared to standard deep learning models. The trustworthiness index, computed from interpretability, robustness, and fairness metrics, reached 92.1%, demonstrating significant improvement over traditional black-box models. This work underscores the importance of explainability in AI-driven decision-making and provides a scalable, domain-agnostic solution for trustworthy computing. The results confirm that integrating explainability mechanisms does not compromise performance and can enhance user confidence, regulatory compliance, and ethical AI deployment.
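
A minimal, self-contained sketch of the pipeline described above is given below. It is not the authors' implementation: it wires a toy PyTorch CNN (standing in for the paper's backbone) to Captum's LRP attributor for local relevance and to SHAP's GradientExplainer for approximate Shapley-value attributions, then computes an illustrative trustworthiness index. The architecture, the synthetic data, and the index's component scores and weights are assumptions made only for this example.

    # Sketch only: a toy stand-in for the hybrid CNN + LRP + SHAP pipeline.
    import torch
    import torch.nn as nn
    import shap                    # pip install shap
    from captum.attr import LRP    # pip install captum


    class SmallCNN(nn.Module):
        """Toy CNN classifier standing in for the paper's backbone."""
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1)
            self.relu1 = nn.ReLU()
            self.conv2 = nn.Conv2d(8, 16, kernel_size=3, padding=1)
            self.relu2 = nn.ReLU()
            self.fc = nn.Linear(16 * 14 * 14, n_classes)

        def forward(self, x):
            x = self.relu1(self.conv1(x))   # 1x28x28 -> 8x14x14
            x = self.relu2(self.conv2(x))   # 8x14x14 -> 16x14x14
            return self.fc(x.flatten(1))    # class logits


    model = SmallCNN().eval()

    # Synthetic 28x28 single-channel inputs stand in for real domain data.
    background = torch.randn(16, 1, 28, 28)   # SHAP reference set
    sample = torch.randn(1, 1, 28, 28)        # instance to explain

    # Local explanation: Layer-Wise Relevance Propagation for the predicted class.
    pred_class = model(sample).argmax(dim=1).item()
    lrp_relevance = LRP(model).attribute(sample, target=pred_class)

    # Global explanation: SHAP values via the gradient-based approximation.
    explainer = shap.GradientExplainer(model, background)
    shap_values = explainer.shap_values(sample)

    # Illustrative trustworthiness index: a weighted mix of interpretability,
    # robustness, and fairness scores. The scores and weights below are
    # placeholders, not the paper's definition or reported results.
    scores = {"interpretability": 0.93, "robustness": 0.91, "fairness": 0.92}
    weights = {"interpretability": 0.4, "robustness": 0.3, "fairness": 0.3}
    trust_index = sum(weights[k] * scores[k] for k in scores)
    print(f"predicted class: {pred_class}  trustworthiness index: {trust_index:.3f}")

In practice, the LRP relevance map would be inspected per decision while the SHAP values, aggregated over a test set, would summarize global feature importance; both feed whatever interpretability score the trustworthiness index uses.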

Published

2025-05-09

How to Cite

Bakkiyaraj Kanthimathi Malamuthu, Thripthi P Balakrishnan, R. Kumar, P, N., B. Venkataramanaiah, & V. Malathy. (2025). Explainable AI for Decision-Making: A Hybrid Approach to Trustworthy Computing. International Journal of Computational and Experimental Science and Engineering, 11(2). https://doi.org/10.22399/ijcesen.1684

Issue

Vol. 11 No. 2 (2025)

Section

Research Article
