• Explainable AI for Time Series

    Visual Explainable AI for Time Series makes deep learning models on time series data more transparent. It combines attribution methods, which highlight the parts of the input that influence a prediction, with counterfactuals, which show how the input would have to change for the model to predict a different outcome. The project covers the extraction, evaluation, and communication of explanations from time series models, supported by interactive visual tools such as ICFTS and DAVOTS. By moving beyond simple heatmaps, it offers clearer, more actionable insights that bridge the gap between AI complexity and human understanding. More Information: Website or GitHub
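
    The sketch below illustrates the two explanation styles named above, not the project's own ICFTS/DAVOTS implementations: a gradient-saliency attribution and a simple gradient-based counterfactual search for a time series classifier. The model architecture, series length, optimizer settings, and loss weights are illustrative assumptions.

    # Minimal sketch of attribution + counterfactual explanations for a
    # time series classifier (illustrative only; not the project's code).
    import torch
    import torch.nn as nn

    # Hypothetical 1-D CNN classifier for univariate series of length 128.
    model = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Linear(16, 2),
    )
    model.eval()

    x = torch.randn(1, 1, 128, requires_grad=True)  # one example series

    # Attribution via gradient saliency: per-time-step importance scores
    # for the predicted class.
    logits = model(x)
    pred = logits.argmax(dim=1).item()
    logits[0, pred].backward()
    saliency = x.grad.abs().squeeze()  # shape: (128,)

    # Counterfactual via gradient descent on the input: perturb the series
    # toward a different target class while staying close to the original.
    target = torch.tensor([1 - pred])
    cf = x.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([cf], lr=0.05)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(200):
        optimizer.zero_grad()
        out = model(cf)
        # Classification loss toward the target class plus an L1 penalty
        # that keeps the counterfactual close to the original series.
        loss = loss_fn(out, target) + 0.1 * (cf - x.detach()).abs().mean()
        loss.backward()
        optimizer.step()

    Tools like ICFTS and DAVOTS then focus on evaluating and visually communicating such explanations, which is where this project goes beyond raw saliency maps.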