Overview and Problem Statement
Artificial Intelligence (AI) has become ubiquitous in our modern lives: machine learning (ML) algorithms control social networks and smart assistants, recommend music and movies, and even steer cars. Consequently, an increasing need for explanations is becoming apparent. Why do the algorithms behave the way they do? Are they trustworthy, biased, fair, and/or accountable? Explainable Artificial Intelligence (XAI) methods attempt to answer these questions by providing text or visualizations explaining the inner workings of ML models.
However, it is not clear whether the provided explanations are actually helpful. Do they truly explain the inner workings of ML models, or do they merely produce output that "sounds good" but is not faithful to the model's decision-making process?
The aim of this project is to develop a framework for the fast and easy creation of study setups that can evaluate questions from this space.
- Scope: Bachelor/Master
- Duration: 6-month project, 3-month thesis (Bachelor) / 6-month thesis (Master)
- Start: immediately