About me
- Head of the XplaiNLP Research Group at the Department of Electrical Engineering and Computer Science, Quality and Usability Lab, Technische Universität Berlin
- Guest Researcher at the German Research Center for Artificial Intelligence (DFKI), Speech and Language Technology (SLT) group
Research Interest of the XplaiNLP Research Group: Advancing Transparent and Trustworthy AI for Decision Support in High-Stakes Domains
At the XplaiNLP research group, we are shaping the future of Intelligent Decision Support Systems (IDSS) by developing AI that is explainable, trustworthy, and human-centered. Our research spans the entire IDSS pipeline, integrating advances in natural language processing (NLP), large language models (LLMs), explainable AI (XAI), evaluation, legal frameworks, and human-computer interaction (HCI) to ensure that AI-driven decision-making aligns with ethical and societal values.
We focus on high-risk AI applications where human oversight is critical, including disinformation detection, social media analysis, medical data processing, and legal AI systems. Our interdisciplinary research tackles these challenges on two fronts: the methods we develop and the applications we target.
Our Methods: Advancing AI for Responsible Decision Support
We develop and refine AI methodologies that improve decision-making under uncertainty, including:
- Retrieval-Augmented Generation (RAG) & Knowledge Retrieval: Enhancing the factual accuracy and reliability of AI-generated content by integrating structured and unstructured knowledge sources (a minimal sketch of the retrieve-then-generate idea follows this list).
- Natural Language Processing (NLP) & Large Language Models (LLMs): Developing specialized language models for domain-specific tasks, with a focus on robustness, fairness, and generalization.
- Explainable AI (XAI) for NLP: Creating methods that enhance model interpretability and user trust, ensuring that AI explanations are meaningful, especially in high-stakes environments.
- Human-Computer Interaction (HCI) & Legal-AI Alignment: Designing AI systems that are usable, legally compliant, and human-centered, optimizing decision workflows for expert and non-expert users.
- Evaluation & Safety of AI Models: Establishing rigorous assessment frameworks to measure the bias, reliability, and long-term impact of AI systems in real-world applications.
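To make the RAG item above concrete, here is a minimal, self-contained sketch of the retrieve-then-generate pattern. It is illustrative only: the corpus, function names, and word-overlap retriever are hypothetical stand-ins rather than code from our projects, and a real system would use a neural retriever and an actual LLM for the generation step.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve evidence,
# then condition the answer on it. All names here are illustrative.
from collections import Counter

tiny_corpus = [
    "The EU AI Act classifies some decision-support systems as high-risk.",
    "Retrieval-augmented generation grounds model output in retrieved documents.",
    "Explainability methods help users judge whether to trust a model's answer.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by simple word overlap with the query (stand-in for a real retriever)."""
    q = Counter(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: sum((q & Counter(d.lower().split())).values()),
        reverse=True,
    )
    return scored[:k]

def answer_with_context(query: str, docs: list[str]) -> str:
    """Assemble a grounded prompt; a real system would pass this to an LLM."""
    context = "\n".join(retrieve(query, docs))
    return (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer (grounded in the context above): ..."
    )

print(answer_with_context("Why is retrieval used with generation?", tiny_corpus))
```

Even in this toy form, the key property is visible: the answer is conditioned on retrieved evidence, which is what makes the output easier to verify and explain in high-stakes settings.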
Our Applications: Tackling High-Risk AI Challenges
We apply our AI advancements to critical, real-world decision-making scenarios, including:
- Disinformation Detection & Social Media Analysis: Investigating misinformation, hate speech, and propaganda using advanced NLP and XAI methods. We analyze how AI-driven detection changes over time and how human perception of AI explanations evolves.
- Medical Data Processing & Trustworthy AI in Healthcare: Developing AI tools that simplify access to medical information, improve faithfulness and factual consistency in medical text generation, and support clinicians in interpreting AI-generated recommendations.
- Legal & Ethical AI for High-Stakes Domains: Ensuring AI decision support complies with regulatory standards, enhances explainability in legal contexts, and aligns with ethical AI principles.
Through interdisciplinary collaboration, hands-on research, and mentorship, XplaiNLP is at the forefront of shaping AI that is not only powerful but also transparent, fair, and accountable. Our goal is to set new standards for AI-driven decision support, ensuring that these technologies serve society responsibly and effectively.