Explainable AI: Decision Visualization and Guidance for Seq2Seq Models

Theoretical (Analytical):

Practical (Implementation):

Literature Work:


While feedforward neural networks can only process inputs of a fixed size, sequence-to-sequence (seq2seq) models can translate sequences of varying length from one domain to another. This makes them suitable for tasks such as natural language processing (NLP) or the processing of time series.

Like other neural network architectures, seq2seq models output a probability distribution over the possible predictions. A given input sequence is fed into the model step by step; at each step, the model emits the most likely prediction and appends it to the output sequence.
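This stepwise (greedy) decoding loop can be sketched as follows; `model` is a hypothetical stand-in that maps the input sequence and the partial output to one score per vocabulary token, not a fixed interface:

```python
def greedy_decode(model, src, bos, eos, max_len=50):
    """Feed the input step by step, appending the most likely token each time."""
    out = [bos]
    for _ in range(max_len):
        scores = model(src, out)  # one score per vocabulary token
        next_tok = max(range(len(scores)), key=scores.__getitem__)
        out.append(next_tok)      # append the most likely prediction
        if next_tok == eos:       # stop once end-of-sequence is emitted
            break
    return out

# Toy stand-in model: prefers token 3 for two steps, then the end token 0.
def toy_model(src, out):
    if len(out) >= 3:
        return [1.0, 0.0, 0.0, 0.0, 0.0]  # favour eos (token 0)
    return [0.0, 0.0, 0.0, 1.0, 0.0]      # favour token 3
```

A real seq2seq model would replace `toy_model` with an encoder-decoder network, but the outer loop stays the same.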

At some steps, the network may be unsure about a prediction. This is reflected in the probability distribution over all possible outcomes: two or more options may have a similar likelihood. Visualizing the decision process of a seq2seq model can give insights into such situations and allow users to interactively steer the network output according to their needs.
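Such ambiguous steps can be detected directly from the output distribution, for example by comparing the two highest probabilities; a small margin between them marks a step worth visualizing. A minimal sketch with purely illustrative numbers and thresholds:

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def is_ambiguous(logits, margin=0.1):
    """Flag a step where the top two options have a similar likelihood."""
    probs = sorted(softmax(logits), reverse=True)
    return probs[0] - probs[1] < margin

print(is_ambiguous([2.0, 1.95, 0.1]))  # near-tie between two options: True
print(is_ambiguous([4.0, 1.0, 0.5]))   # one clear winner: False
```

The flagged steps are natural anchor points for a visualization that shows the competing candidates to the user.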


  • Get familiar with common seq2seq models for NLP
  • Build a network which can predict the next word for a given text
  • Come up with a visualization to explain the decision process at each step
  • Add user interaction to allow a manipulation of the decision process
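The last two tasks meet in the decoding loop: at an ambiguous step, the user inspects the visualization and may override the model's top choice. A minimal sketch, where `choose` is a hypothetical user hook and `toy_probs` an illustrative stand-in for the trained network:

```python
def steered_decode(model, src, bos, eos, choose, max_len=50):
    """Decoding loop with a user hook: `choose(probs)` may override the
    model's top prediction (e.g. after inspecting a visualization)."""
    out = [bos]
    for _ in range(max_len):
        probs = model(src, out)
        next_tok = choose(probs)  # user-controlled choice
        out.append(next_tok)
        if next_tok == eos:
            break
    return out

def argmax(probs):
    return max(range(len(probs)), key=probs.__getitem__)

# Illustrative distribution: tokens 1 and 2 are close for two steps,
# then the end token 0 dominates.
def toy_probs(src, out):
    if len(out) >= 3:
        return [0.9, 0.05, 0.05]
    return [0.1, 0.5, 0.4]

auto = steered_decode(toy_probs, [7], bos=3, eos=0, choose=argmax)
# Steering: the user prefers token 2 whenever it is a close runner-up.
forced = steered_decode(toy_probs, [7], bos=3, eos=0,
                        choose=lambda p: 2 if p[2] > 0.3 else argmax(p))
print(auto)    # [3, 1, 1, 0]
print(forced)  # [3, 2, 2, 0]
```

In the actual project, `choose` would be driven by the interactive visualization rather than a hard-coded rule.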


  • Programming skills in Python and JavaScript
    (preferably also PyTorch or TensorFlow, and D3)
  • Basic knowledge of neural networks 
    (preferably RNNs)


  • Scope: Bachelor/Master
  • Duration: 6-month project, 3-month thesis (Bachelor) / 6-month thesis (Master)
  • Start: immediately



  1. Strobelt, H., Gehrmann, S., Behrisch, M., Perer, A., Pfister, H., & Rush, A. M. (2019). Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models. IEEE Transactions on Visualization and Computer Graphics, 25(1), 353-363.