Posts by Collection

portfolio

BMBF Project: news-polygraph

Together with 10 partners from industry, research, and media outlets, we are developing a multimodal disinformation-detection platform for a range of journalistic use cases.

projects

BMBF Project: VERANDA

Project Duration: from 2024-03 until 2026-09

Funding Amount: EUR 321,598.51 (total funding: EUR 1,998,825)

Trustworthy anonymization of sensitive patient data for remote consultations.

BMBF Project: news-polygraph

Project Duration: from 2023-05 until 2026-04

Funding Amount: EUR 1,492,547 (total funding: EUR 13,216,784.80)

news-polygraph - Multimodal Orchestration for Media-Content Verification

BIFOLD Project: FakeXplain

Project Duration: from 2024-05 until 2027-04

Funding Amount: EUR 269.100,36

FakeXplain – Development of transparent and meaningful explanations for disinformation detection

publications

Implications of Regulations on Large Generative AI Models in the Super-Election Year and the Impact on Disinformation

Published in Vera Schmitt, Jakob Tesch, Eva Lopez, Tim Polzehl, Aljoscha Burchardt, Konstanze Neumann, Salar Mohtaj, and Sebastian Möller (2024). Implications of Regulations on Large Generative AI Models in the Super-Election Year and the Impact on Disinformation. In Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024, pages 28–38, Torino, Italia. ELRA and ICCL.

Access paper here

Evaluating Human-Centered AI Explanations: Introduction of an XAI Evaluation Framework for Fact-Checking

Published in Vera Schmitt, Balázs P. Csomor, Joachim Meyer, Luis-Felipe Villa-Areas, Charlott Jakob, Tim Polzehl, and Sebastian Möller (2024). Evaluating Human-Centered AI Explanations: Introduction of an XAI Evaluation Framework for Fact-Checking. In Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation (MAD24). Association for Computing Machinery, New York, NY, USA, 91–100.

Access paper here

The Role of Explainability in Collaborative Human-AI Disinformation Detection

Published in Vera Schmitt, Luis-Felipe Villa-Arenas, Nils Feldhus, Joachim Meyer, Robert P. Spang, and Sebastian Möller (2024). The Role of Explainability in Collaborative Human-AI Disinformation Detection. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT24). Association for Computing Machinery, New York, NY, USA, 2157–2174.

Access paper here

Research concerning NLP

Published in VDE Verband der Elektrotechnik Elektronik Informationstechnik e. V., 2024

ITG Position Paper:
Prof. Rainer Martin, Prof. Stefan Brüggenwirth, Dr. Aljoscha Burchardt, Prof. Tim Fingscheidt, Prof. Holger Hoos, Prof. Klaus Illgner, Dr. Henrik Junklewitz, Prof. André Kaup, Dr. Katharina von Knop, Dr. Joachim Köhler, Prof. Gitta Kutyniok, Prof. Dorothea Kolossa, Prof. Sebastian Möller, Dr. Ralf Schlüter, Dr. David Thulke, Dr. Vera Schmitt, Prof. Ingo Siegert, and Dr. Volker Ziegler (2024). “Large Language Models are Transformers in Artificial Intelligence, Industry, Education, and Society”, VDE Verband der Elektrotechnik Elektronik Informationstechnik e. V.
Access paper here

talks

Panel Discussion: Big Data – Chance oder Risiko


A panel discussion on responsibility in the age of digitalization, “Big Data – Chance oder Risiko” (Big Data – Opportunity or Risk), with Konstantin von Notz, digital-policy spokesperson of the GRÜNEN; Nikolaus Blome, deputy editor-in-chief of the BILD newspaper; Vera Schmitt, who conducts data analysis for charitable purposes at CorrelAid; and Denny Vorbrücken, managing director of the Bund Deutscher Kriminalbeamter, at the Bayreuther Dialoge. In addition, Vera Schmitt gave a seminar on “Data for Good: how we can use data literacy for charitable purposes”.

Panel Discussion: Usable Security and Privacy Day 2021


The panel discussion was organized by Prof. Möller and Vera Schmitt from TU Berlin. Panelists were Prof. Ahmad-Reza Sadeghi from TU Darmstadt, Prof. Simone Fischer-Hübner from Karlstad University, Prof. Joachim Meyer from Tel Aviv University, Dr. Aljoscha Burchardt, senior researcher at DFKI, and CTO Philipp Berger, co-founder of neXenio. More Information.

Invited Talk: Do explanations make a difference? The influence of human-centered explanations on collaborative human-AI disinformation detection


teaching

Teaching Summer Term

Undergraduate course, Technische Universität Berlin, Quality and Usability Lab

Click on the title to get an overview of the teaching activities during Summer Term.

Teaching Winter Term

Undergraduate and graduate courses, Technische Universität Berlin, Quality and Usability Lab

Click on the title to get an overview of the teaching activities during Winter Term.