
LLM for Explainable AI

EasyChair Preprint 15189

5 pages · Date: October 3, 2024

Abstract

Explainability for large language models (LLMs) is a critical area in natural language processing because it helps users understand model behavior and eases error analysis, especially as these models are deployed in a wide range of applications. The "black-box" nature of AI models raises transparency and ethical challenges because we cannot see or understand how a model processes information to generate its output. Traditional methods, such as attention mechanisms, have enhanced explainability by improving model focus and accuracy, but at the cost of increased complexity. In particular, they rely on gradient-based tools (e.g., Grad-CAM), making them less accessible to non-expert users. We instead employ in-context learning and prompt-refinement techniques, focusing on the pre-trained Transformer-based large language model BART. This approach simplifies model interaction by letting users guide the model through natural-language prompts, reducing the need for technical expertise. We validate the method on the real-life StudentLife dataset, collected from 48 college students over 10 weeks. Our results point toward using LLMs for XAI to make data mining accessible to everyone.
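The abstract contrasts gradient-based tooling with guiding a model through natural-language prompts. A minimal sketch of how such a few-shot in-context-learning prompt might be assembled is shown below; the feature names, values, and explanation wording are hypothetical illustrations in the spirit of the StudentLife sensing data, not taken from the paper:

```python
def build_icl_prompt(examples, query):
    """Assemble a few-shot in-context-learning prompt.

    Each example pairs raw feature values with a plain-language
    explanation; the unanswered query is appended last, so the
    model completes the final 'Explanation:' slot.
    """
    parts = []
    for features, explanation in examples:
        parts.append(f"Data: {features}\nExplanation: {explanation}")
    parts.append(f"Data: {query}\nExplanation:")
    return "\n\n".join(parts)


# Hypothetical behavioral features, loosely styled after StudentLife
examples = [
    ("sleep=4h, conversations=2, activity=low",
     "Short sleep and low activity suggest elevated stress."),
    ("sleep=8h, conversations=10, activity=high",
     "Regular sleep and frequent social contact suggest low stress."),
]
prompt = build_icl_prompt(
    examples, "sleep=5h, conversations=1, activity=low"
)
```

A prompt built this way could then be passed to a seq2seq model such as BART for completion; refining the wording of the examples and the query is where prompt-refinement techniques would come in.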

Keyphrases: Data Mining, Explainability, Explainable AI (XAI), In-Context Learning, Large Language Models (LLMs), Prompt Engineering, Feature Importance, Language Model, Refined Specific Prompt

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:15189,
  author    = {Ahsan Bilal and Beiyu Lin},
  title     = {LLM for Explainable AI},
  howpublished = {EasyChair Preprint 15189},
  year      = {EasyChair, 2024}}