Type and Duration
FFF-Förderprojekt, November 2024 until October 2026

Coordinator
Data Science & Artificial Intelligence

Main Research
Humanities, Cultural and Social Studies

Description
This project seeks to address the significant limitations of deep learning (DL) models in causal explainability and robustness in out-of-distribution contexts by exploring the integration of Generative AI (GenAI) into Causal Machine Learning (CML). The project focuses on leveraging GenAI to identify high-level, causally relevant variables and formulate causal hypotheses, addressing challenges that currently require expert intervention or costly experimental procedures. Through a series of targeted work packages, the project will develop and evaluate a scalable CML pipeline incorporating GenAI, with applications in diverse domains such as healthcare, policy-making, and finance. The anticipated outcomes include advancements in causal inference methodologies and contributions to both academic literature and practical applications.

Practical Application
This project addresses key limitations of current deep learning architectures, particularly in achieving causal explainability and maintaining robustness in out-of-distribution scenarios, i.e., contexts where the data distribution differs from that on which a model was originally trained. By integrating Generative AI (GenAI) into a scalable Causal Machine Learning (CML) pipeline, this research aims to automate the identification of causally relevant variables and the formulation of causal hypotheses, which currently rely on costly experiments or expert intervention, while preserving the predictive accuracy of current deep learning architectures.

This approach has the potential to enhance not only the interpretability of AI models but also their adaptability across diverse application areas. The project is particularly impactful in high-stakes fields such as healthcare, policy-making, and finance, where reliable, causally informed AI can support sound decision-making, optimize interventions, and increase trust in AI-driven outcomes. For example, in healthcare, this approach could help clinicians better understand why certain treatments are more effective for specific patient groups, supporting more personalized and effective medical care. In finance, it could refine risk assessment processes by identifying causal factors that drive market behavior, contributing to enhanced financial stability. In public policy, such a framework could elucidate the causal impact of interventions, like the effects of new educational policies, enabling policymakers to make evidence-based decisions that benefit society.
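To illustrate the kind of causal reasoning such a pipeline builds on, the minimal synthetic sketch below contrasts a naive correlational estimate with a backdoor-adjusted causal estimate. The variables, numbers, and setup are purely illustrative assumptions for this sketch, not the project's actual methods or data; the point is only that a model relying on raw correlations (as standard DL does) overstates the effect when a confounder is present, while adjusting for the identified causal variable recovers it.

```python
import random

random.seed(0)

# Synthetic setup (illustrative only): a confounder Z influences both the
# treatment T and the outcome Y. The true causal effect of T on Y is 2.0.
n = 50_000
data = []
for _ in range(n):
    z = random.random() < 0.5                       # binary confounder
    t = random.random() < (0.8 if z else 0.2)       # treatment depends on Z
    y = 2.0 * t + 3.0 * z + random.gauss(0.0, 0.1)  # true effect of T is 2.0
    data.append((z, t, y))

def mean(vals):
    return sum(vals) / len(vals)

# Naive (confounded) estimate: E[Y | T=1] - E[Y | T=0].
# Biased upward, because treated units are more likely to have Z = 1.
naive = (mean([y for z, t, y in data if t])
         - mean([y for z, t, y in data if not t]))

# Backdoor adjustment: average the within-stratum contrasts over P(Z).
adjusted = 0.0
for zval in (True, False):
    pz = mean([1.0 if z == zval else 0.0 for z, t, y in data])
    e1 = mean([y for z, t, y in data if z == zval and t])
    e0 = mean([y for z, t, y in data if z == zval and not t])
    adjusted += pz * (e1 - e0)

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

In this sketch the naive contrast comes out near 3.8 while the adjusted estimate recovers the true effect of about 2.0; the pipeline's GenAI component would be responsible for proposing which variables (here, Z) are the causally relevant ones to adjust for, a step that today requires domain experts or experiments.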
Reference to Liechtenstein
This project aligns with the University of Liechtenstein's research strategy by addressing critical and increasingly relevant challenges in the fields of data science and artificial intelligence, specifically focusing on causal explainability and model robustness. By leveraging the emerging capabilities of Generative AI, the project aims to contribute to these fields while addressing real-world problems in high-impact sectors such as healthcare, policy-making, and finance. For Liechtenstein, in particular, this research holds significant relevance as it supports the country's growing emphasis on fostering innovation in AI-driven industries. Furthermore, the project will actively engage with the local community through workshops, facilitating academic exchange, and strengthening regional industrial collaborations. This approach not only advances cutting-edge research but also ensures that the outcomes bring tangible benefits to both the academic community and the local economy, aligning with the university's mission to drive forward research that has practical, localized impact.

Keywords
Generative AI, AI Transparency