Explainable Generative AI (GenXAI): a survey, conceptualization, and research agenda

Reference

Schneider, J. (2024). Explainable Generative AI (GenXAI): a survey, conceptualization, and research agenda. Artificial Intelligence Review, 57, Article 289.

Publication type

Article in Scientific Journal

Abstract

Generative AI (GenAI) represents a shift from AI’s ability to “recognize” to its ability to “generate” solutions for a wide range of tasks. As generated solutions and applications grow more complex and multi-faceted, new needs, objectives, and possibilities for explainability (XAI) emerge. This work elaborates on why XAI has gained importance with the rise of GenAI and on the challenges GenAI poses for explainability research. We also highlight new and emerging criteria that explanations should meet, such as verifiability, interactivity, security, and cost considerations. To this end, we survey the existing literature and provide a taxonomy of relevant dimensions to better characterize existing XAI mechanisms and methods for GenAI. We explore various approaches to ensuring XAI, ranging from training data to prompting. Our paper offers a concise technical background on GenAI for non-technical readers, focusing on text and images, to help them understand new or adapted XAI techniques for GenAI. However, given the extensive body of work on GenAI, we do not delve into detailed aspects of XAI related to the evaluation and use of explanations. Consequently, the manuscript appeals both to technical experts and to professionals from other fields, such as social scientists and information systems researchers. Our research roadmap outlines more than ten directions for future investigation.

Persons

  • Schneider, J.

Organizational Units

  • Liechtenstein Business School

DOI

https://doi.org/10.1007/s10462-024-10916-x