Verbalization and Visualization for Explainable AI

Overview and Problem Statement

A central challenge of Artificial Intelligence is that the outcomes of Machine Learning algorithms often cannot be easily interpreted by end users. Explanations of the decisions made by a model are therefore desirable, as they provide more transparency and increase the user's trust in the system. In addition to visualization, verbalization is one possible way to explain a model's decisions: it takes structured data as input and converts it into a natural language representation. It can thus be used to summarize a model's decisions by turning the model's parameters and generated rules into a written narrative.
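
As an illustration only, the following TypeScript sketch shows one template-based way such a verbalization could work for decision rules (for instance, root-to-leaf paths of a decision tree). The Rule structure and all names are invented for this example and are not part of the project:

    // A decision rule extracted from a model, e.g. one root-to-leaf
    // path of a decision tree. The structure is a hypothetical example.
    interface Rule {
      conditions: string[]; // e.g. ["age > 40", "income <= 30000"]
      outcome: string;      // predicted class label
      coverage: number;     // fraction of instances the rule applies to
    }

    // Turn each rule into one sentence and join them into a short report.
    function verbalize(rules: Rule[]): string {
      return rules
        .map(r => {
          const premise = r.conditions.join(" and ");
          const pct = (r.coverage * 100).toFixed(1);
          return `If ${premise}, the model predicts "${r.outcome}" ` +
                 `(covering ${pct}% of the data).`;
        })
        .join(" ");
    }

    // Example with two invented rules:
    console.log(verbalize([
      { conditions: ["age > 40", "income <= 30000"], outcome: "reject", coverage: 0.12 },
      { conditions: ["age <= 40"], outcome: "accept", coverage: 0.55 },
    ]));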

Tasks

  • Develop an algorithm/model that automatically generates reports in natural language summarizing the decisions made by a machine learning algorithm (a sketch of such a verbalization is given above).
  • Handle machine learning models of different complexity appropriately (e.g., a decision tree vs. a neural network).
  • Create a framework that presents the results of different algorithms/models visually (see the sketch after this list).
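
As an illustration of the visual part, the following sketch uses D3 (named in the requirements below) to draw per-feature importance scores as a horizontal bar chart; the input format and function name are invented for this example:

    import * as d3 from "d3";

    // Hypothetical input format: one importance score per feature.
    interface FeatureImportance { feature: string; importance: number; }

    // Render the scores as a horizontal bar chart inside the element
    // matched by `selector` (e.g. "#chart").
    function renderImportances(data: FeatureImportance[], selector: string): void {
      const width = 400;
      const barHeight = 22;
      const x = d3.scaleLinear()
        .domain([0, d3.max(data, d => d.importance) ?? 1])
        .range([0, width - 120]);

      const svg = d3.select(selector)
        .append("svg")
        .attr("width", width)
        .attr("height", data.length * barHeight);

      // One group per feature, stacked vertically.
      const row = svg.selectAll("g")
        .data(data)
        .join("g")
        .attr("transform", (_, i) => `translate(0, ${i * barHeight})`);

      row.append("rect")
        .attr("x", 110)
        .attr("height", barHeight - 4)
        .attr("width", d => x(d.importance))
        .attr("fill", "steelblue");

      row.append("text")
        .attr("x", 105)
        .attr("y", barHeight / 2)
        .attr("text-anchor", "end")
        .attr("dominant-baseline", "middle")
        .text(d => d.feature);
    }

Called, for example, as renderImportances([{ feature: "age", importance: 0.6 }], "#chart") in a page that loads D3.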

Requirements

  • Good knowledge of information visualization and natural language processing.
  • Good programming skills in Java and JavaScript/D3.

Scope/Duration/Start

  • Scope: Master
  • 6-month project, 6-month thesis
  • Start: immediately

Contact

  • Rita Sevastjanova