BIFOLD Project: FakeXplain

The FakeXplain project examines different approaches to generating explanations of AI systems used in human-AI collaboration for disinformation detection. Established explainability approaches such as attention-based explanations, attribution-based explanations, and computational argumentation will be examined throughout the project, as will more recent approaches such as Chain-of-Thought prompting and mechanistic interpretability, with regard to their meaningfulness and understandability for human users.

Grant Duration: 2024-04 to 2027-03

URL: https://www.bifold.berlin/

Project partners

Research: Prof. Konrad Rieck, TU Berlin; Prof. Wojciech Samek, Fraunhofer Heinrich Hertz Institute; Prof. Meyer, Tel Aviv University

Funding:

Berlin Institute for the Foundations of Learning and Data - BIFOLD (Berlin, DE)

Funding scheme: Agility Projects