The principles and methodology of causal modeling can significantly strengthen the robustness, theoretical foundations, and interpretability of the AI cycle (from data to models, from models to decisions, and from decisions to new data). Within this framework, our work will focus on three key aspects.

First, we will prioritize the development of causal representations. Current approaches often learn unsupervised representations that reduce dimensionality but sacrifice the interpretability of the variables. We aim to address this issue by working with interpretable meta-variables.

Second, we will investigate the relationship between feature selection in statistics and causal inference. For instance, we plan to compare the features selected by the Lasso estimator with the Markov blanket of the target variable in a causal graph. Can the two be combined? Which is preferable for robust prediction?

Finally, we intend to analyze robustness under out-of-distribution scenarios in depth, seeking theoretical control of the generalization error on a test set that differs substantially from the training set.
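As a toy illustration of the Lasso-versus-Markov-blanket comparison mentioned above, the sketch below simulates data from a small hypothetical linear structural causal model and contrasts the two feature sets. The graph, coefficients, and regularization strength are illustrative assumptions chosen for this example, not the project's actual setup.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 2000

# Hypothetical linear SCM: X1 -> Y <- X2, Y -> X3, X4 independent of everything.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x4 = rng.normal(size=n)
y = 1.5 * x1 - 1.0 * x2 + 0.5 * rng.normal(size=n)
x3 = 2.0 * y + 0.5 * rng.normal(size=n)

X = np.column_stack([x1, x2, x3, x4])
names = np.array(["X1", "X2", "X3", "X4"])

# Markov blanket of Y in this graph: its parents {X1, X2} and its child {X3}.
markov_blanket = {"X1", "X2", "X3"}

# Lasso feature selection: keep variables with a nonzero coefficient.
lasso = Lasso(alpha=0.05).fit(X, y)
lasso_selected = set(names[np.abs(lasso.coef_) > 1e-6])

print("Lasso selected:", sorted(lasso_selected))
print("Markov blanket:", sorted(markov_blanket))
```

In this simple example the two notions tend to agree: the child X3 is highly predictive of Y, so Lasso picks it up alongside the parents, while the independent X4 is shrunk to zero. Under distribution shift (e.g. an intervention on X3), however, a predictor relying on the child can break, which is one motivation for the comparison.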


1 postdoc (Daria Bystrova) and 1 PhD student (Théotime Le Goff) starting in November 2023

A visit to the Copenhagen Causality Lab in spring 2024

Published on January 9, 2024
Updated on January 9, 2024