Trust in Explanations of Artificial Intelligence

Type of work: Theoretical (Analytical), Practical (Implementation), Literature Work

Overview and Problem Statement

Artificial Intelligence (AI) has become ubiquitous in our modern lives: machine learning (ML) algorithms curate social networks, power smart assistants, recommend music and movies to watch, and even steer cars. Consequently, an increasing need for explanations is becoming apparent: Why do the algorithms behave the way they do? Are they trustworthy, biased, fair, and/or accountable? Explainable Artificial Intelligence (XAI) methods attempt to answer these questions by providing text or visualizations that explain the inner workings of ML models.

However, it is not clear whether the provided explanations are actually helpful. Do they truly explain the inner workings of ML models, or do they merely produce output that "sounds good" but is not faithful to the model's decision-making processes?

The aim of this project is to develop a framework for the fast and easy creation of study setups that can evaluate questions from this space.
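To make the idea of a "study setup" concrete, the following is a minimal, purely illustrative sketch of how such a framework might represent a comparative study: independent variables (here, hypothetical explanation style and model accuracy) are crossed into experimental conditions. All names are assumptions for illustration, not part of any existing framework.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Condition:
    """One experimental condition of a comparative XAI trust study."""
    explanation: str  # e.g. "saliency map", "textual rationale", "none"
    accuracy: str     # e.g. "high" / "low" -- manipulated model quality

def build_conditions(explanations, accuracies):
    """Full factorial crossing of the independent variables."""
    return [Condition(e, a) for e, a in product(explanations, accuracies)]

conditions = build_conditions(
    ["saliency map", "textual rationale", "no explanation"],
    ["high", "low"],
)
# 3 explanation styles x 2 accuracy levels -> 6 conditions
```

A framework built along these lines could then generate the corresponding study materials and counterbalance the presentation order across participants.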

Problem Statement

  • Do XAI explanations faithfully reflect a model's decision-making process, or do they merely appear plausible?
  • How can user trust in such explanations be measured and compared in controlled studies?


Tasks

  • Review current evaluations of trust in artificial intelligence
  • Develop a framework that supports comparative in-lab studies
  • Identify interesting variables from the XAI evaluation design space and conduct comparative evaluations


Requirements

  • Interest in the evaluation of XAI
  • Previous knowledge of (cognitive/perceptual/...) biases is a plus, but not required


Organizational

  • Scope: Bachelor/Master
  • Duration: 6-month project + 3-month thesis (Bachelor) / 6-month thesis (Master)
  • Start: immediately