4th MIAI Deeptails Seminar on October 20th, from 2PM CET

We are pleased to announce the fourth MIAI Deeptails Seminar, given by Meike Nauta (University of Twente in Enschede, the Netherlands, and the Institute of AI in Medicine in Essen, Germany).

Towards Interpretable Computer Vision and Thorough Evaluation of Explanations

ABSTRACT

Explainable AI methods aim to explain black-box machine learning models and thereby give users insight into a model's decision-making process. But what makes a good explanation, and how do we know whether an explanation is correct and complete? And wouldn't it be better to build interpretability directly into the predictive model? In this talk, Meike Nauta will first present 12 desired properties of a good explanation, together with an extensive collection of quantitative evaluation methods for explainable AI. Second, she will present ProtoTree: an interpretable image classifier in which prototypical parts learned by a neural network are combined in an interpretable decision tree.
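To give a flavour of the ProtoTree idea, the sketch below shows the inference-time logic only: each internal node of a binary decision tree tests how strongly one learned prototypical part is present in the image's latent feature map, and that similarity score softly routes the image towards leaves holding class distributions. This is a toy illustration with random stand-ins, not ProtoTree's actual implementation; the dimensions, the depth-2 tree, and the exp(-distance²) similarity are simplifying assumptions (the real model learns prototypes jointly with a CNN backbone and prunes the tree).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: D-dim latent patches on an H x W grid, P prototypes,
# a full binary tree of depth 2 (3 internal nodes, 4 leaves), 2 classes.
D, P, H, W, NUM_CLASSES = 8, 3, 4, 4, 2

# A CNN backbone would produce this latent feature map; here it is random.
feature_map = rng.normal(size=(H, W, D))
prototypes = rng.normal(size=(P, D))  # stand-ins for learned prototypical parts

def similarity(fmap, proto):
    """Max over spatial positions of exp(-||z - p||^2): a score in (0, 1]
    answering 'is this prototype present anywhere in the image?'."""
    d2 = ((fmap - proto) ** 2).sum(axis=-1)  # squared distance at each position
    return np.exp(-d2).max()

sims = np.array([similarity(feature_map, p) for p in prototypes])

# Each leaf holds a class distribution (random stand-ins for learned ones).
leaves = rng.dirichlet(np.ones(NUM_CLASSES), size=4)

def prototree_predict(sims, leaves):
    """Soft routing: each node's similarity is the probability of taking the
    right ('prototype present') branch; the prediction is the path-probability-
    weighted mixture of leaf class distributions."""
    r0, r1, r2 = sims  # right-branch probabilities at the 3 internal nodes
    path = np.array([
        (1 - r0) * (1 - r1),  # left, left
        (1 - r0) * r1,        # left, right
        r0 * (1 - r2),        # right, left
        r0 * r2,              # right, right
    ])
    return path @ leaves  # shape (NUM_CLASSES,), sums to 1

probs = prototree_predict(sims, leaves)
```

Because the path probabilities sum to one, the output is a valid class distribution, and the prediction can be read off the tree: following the highest-probability path gives a sequence of "this part is present / absent" decisions.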

Meike Nauta


Meike Nauta is finalizing her PhD on explainable AI and interpretable computer vision. She is affiliated with the University of Twente in Enschede, the Netherlands, and the Institute of AI in Medicine in Essen, Germany. She received her Master's degree with distinction in Computer Science from the University of Twente, and her Master's thesis was awarded best computer science thesis in the Netherlands. Her view on explainable AI is that the representational power of neural networks should be used to move from post-hoc explainability to interpretability by design.


Published on September 27, 2022