Artificial Intelligence is one of the most prominent terms in today's industry, especially since Deep Learning achieved state-of-the-art results in tasks such as object detection and speech recognition. However, there is currently no perfect way to interpret or explain Deep Neural Networks. Some approaches try to explain the decisions of neural networks, e.g. LRP, and a few are model-agnostic, e.g. LIME. Finding good visualizations is key to supporting the results these methods produce and to presenting them to a user.
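To give a flavor of what such explanation methods compute, below is a minimal NumPy sketch of gradient-based saliency (in the spirit of Simonyan et al.) on a toy two-layer network. The network, its random weights, and the `saliency` helper are illustrative assumptions for this posting, not part of any cited method's reference implementation.

```python
import numpy as np

# Toy two-layer network: x -> ReLU(W1 x) -> w2 . h  (scalar class score)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))   # hidden x input weights
w2 = rng.normal(size=8)         # output weights

def saliency(x):
    """Gradient of the score w.r.t. the input (gradient-based saliency)."""
    z = W1 @ x                  # pre-activations
    h = np.maximum(z, 0.0)      # ReLU
    score = w2 @ h              # scalar class score
    # Backward pass: d score / d x
    dh = w2                     # d score / d h
    dz = dh * (z > 0)           # ReLU gate
    dx = W1.T @ dz              # chain rule back to the input
    return score, np.abs(dx)    # saliency = |gradient| per input dimension

x = rng.normal(size=16)
score, sal = saliency(x)
# The highest-saliency inputs are those the score is most sensitive to
top = np.argsort(sal)[::-1][:3]
```

For images, the same per-pixel gradient magnitudes are what is rendered as a saliency map; a visual interface for this thesis would present such maps interactively.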
Possible tasks toward finding good explanations include:
- Finding visualizations for different explanation methods
- Comparing explanation methods or combining them visually
- Getting familiar with current explanation approaches
- Designing and implementing a visual interface that provides
  clues about the decision-making process of neural networks
  (preferably with PyTorch or Keras/TensorFlow, and D3 or WebGL)
- Good knowledge of neural networks
  (preferred but not strictly necessary)
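To illustrate the model-agnostic side mentioned above, a LIME-style local surrogate can be sketched in a few lines: perturb an instance, query the black box, and fit a locally weighted linear model. The `black_box` function, kernel width, and sample count here are hypothetical choices for illustration, not the actual LIME implementation.

```python
import numpy as np

# A hypothetical black-box classifier standing in for a neural network:
# any function mapping a batch of inputs to scores works.
def black_box(X):
    return np.tanh(2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1])

rng = np.random.default_rng(1)
x0 = np.array([0.3, -0.7])             # instance to explain

# 1. Perturb the instance with local Gaussian noise
X = x0 + rng.normal(scale=0.1, size=(500, 2))
y = black_box(X)

# 2. Weight samples by proximity to x0 (Gaussian kernel)
w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * 0.1 ** 2))

# 3. Fit a weighted linear surrogate around x0
A = np.hstack([X, np.ones((500, 1))])  # features plus intercept
Wsqrt = np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(Wsqrt * A, Wsqrt[:, 0] * y, rcond=None)
# coef[:2] are the local feature importances for x0
```

The surrogate's coefficients approximate the black box's local gradient at `x0`, which is exactly the kind of per-feature signal the visual interface would display.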
- Scope: Bachelor/Master
- Duration: 6-month project, 3-month thesis (Bachelor) / 6-month thesis (Master)
- Start: immediately
References:
- Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., & Samek, W. (2015).
  On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2016).
  "Why Should I Trust You?": Explaining the Predictions of Any Classifier.
- Kindermans, P.-J., Schütt, K. T., Alber, M., Müller, K.-R., Erhan, D., Kim, B., & Dähne, S. (2017).
  Learning how to explain neural networks: PatternNet and PatternAttribution.
- Simonyan, K., Vedaldi, A., & Zisserman, A. (2013).
  Deep inside convolutional networks: Visualising image classification models and saliency maps.
- Erhan, D., Bengio, Y., Courville, A., & Vincent, P. (2009).
  Visualizing higher-layer features of a deep network.