
Bridging the Gap: How Neuro-Evolutionary Methods Enhance Explainable AI

EasyChair Preprint 14329, version 2

9 pages · Date: September 1, 2024

Abstract

In the evolving landscape of artificial intelligence (AI), the need for explainable AI (XAI) has become increasingly critical, particularly in high-stakes domains where decisions must be transparent and interpretable. This paper explores the intersection of neuro-evolutionary methods and XAI, highlighting how the former can bridge the gap between complex AI models and human comprehensibility. Neuro-evolutionary algorithms, which simulate natural selection to optimize neural networks, offer a distinctive route to more explainable AI systems: by evolving neural architectures that are inherently more interpretable, they can produce models that are not only accurate but also understandable to human stakeholders. The paper examines the mechanisms by which neuro-evolutionary techniques contribute to XAI, presenting case studies and examples from a range of applications. It then discusses the benefits, challenges, and future directions of integrating neuro-evolutionary approaches into the development of explainable AI, with the ultimate aim of fostering greater trust in, and adoption of, AI technologies across sectors.
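To make the core idea concrete, the following is a minimal neuro-evolution sketch in Python. It is an illustration, not the paper's actual method: the toy task, function names, population sizes, and the complexity-penalty coefficient are all assumptions. It evolves the weights of small one-hidden-layer networks and subtracts a size penalty from the fitness, so selection favors compact architectures of the kind that are easier to inspect and explain:

import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: label is 1 when x0 + x1 > 1.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1.0)

def n_params(hidden, n_in=2):
    # W1 (n_in x hidden) + b1 (hidden) + W2 (hidden) + b2 (1)
    return n_in * hidden + hidden + hidden + 1

def predict(w, hidden, X):
    """Forward pass of a one-hidden-layer tanh network, weights flat in w."""
    n_in = X.shape[1]
    i = n_in * hidden
    W1 = w[:i].reshape(n_in, hidden)
    b1 = w[i:i + hidden]
    W2 = w[i + hidden:i + 2 * hidden]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2) > 0.0

def fitness(w, hidden):
    """Accuracy minus a size penalty, so selection favors interpretable nets."""
    acc = np.mean(predict(w, hidden, X) == y)
    return acc - 0.02 * hidden  # hypothetical penalty coefficient

# Initial population: random architectures (1-5 hidden units) and weights.
pop = [(int(h), rng.normal(0.0, 1.0, n_params(int(h))))
       for h in rng.integers(1, 6, size=30)]

for gen in range(50):
    pop.sort(key=lambda g: fitness(g[1], g[0]), reverse=True)
    parents = pop[:10]                          # truncation selection
    pop = [(h, w.copy()) for h, w in parents]   # keep elites unchanged
    while len(pop) < 30:
        h, w = parents[rng.integers(len(parents))]
        pop.append((h, w + rng.normal(0.0, 0.1, w.shape)))  # Gaussian mutation

best_h, best_w = max(pop, key=lambda g: fitness(g[1], g[0]))
print(f"best architecture: {best_h} hidden unit(s), "
      f"accuracy: {np.mean(predict(best_w, best_h, X) == y):.2f}")

Because mutation here only perturbs weights, architectural diversity comes entirely from the initial population; a fuller neuro-evolutionary system, such as NEAT, would also mutate network topology during the search.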

Keyphrases: AI Interpretability, AI Transparency, AI Explainability, Artificial Intelligence, Evolutionary Algorithms, Explainable AI (XAI), Glass-Box Models, Model Interpretation, Neuro-Evolutionary Methods, Black-Box Models, Machine Learning, Neural Networks

BibTeX entry
BibTeX has no entry type for preprints, so the following entry uses a workaround to produce the correct reference:
@booklet{EasyChair:14329,
  author    = {Adeoye Ibrahim},
  title     = {Bridging the Gap: How Neuro-Evolutionary Methods Enhance Explainable AI},
  howpublished = {EasyChair Preprint 14329},
  year      = {EasyChair, 2024}}