Towards More Data Efficiency in Machine Learning

DESCRIPTION

IN SHORT

The grand challenge of this chair is to build theoretical and methodological foundations for deep learning models and to improve their robustness, their generalization capabilities, and their ability to learn discriminative representations without supervision.

SCIENTIFIC OBJECTIVES AND CONTEXT

Training deep neural networks is challenging when the amount of annotated data is small or when adversarial perturbations are present. In particular, for convolutional neural networks, it is possible to engineer visually imperceptible perturbations that lead to arbitrarily different model predictions. This robustness issue is related to the problem of regularization and to the ability to generalize from few training examples. Our objective is to develop theoretically grounded approaches that address the data-efficiency issues of such high-dimensional models.
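
As an illustration of this robustness issue, the following is a minimal sketch, assuming PyTorch and torchvision are available (the model, input, and perturbation budget are illustrative choices, not the chair's setup): it crafts a fast gradient sign method (FGSM) perturbation and compares the network's predictions on the clean and perturbed images.

    # Minimal FGSM sketch (illustrative, assuming PyTorch/torchvision): a small,
    # visually imperceptible perturbation can change a CNN's prediction.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18()   # illustrative model; a pretrained network would be used in practice
    model.eval()

    x = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder image with pixels in [0, 1]
    y = torch.tensor([0])                                # placeholder class label

    # Compute the loss on the clean input and back-propagate to the image.
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    # FGSM step: move each pixel by epsilon in the direction that increases the loss.
    epsilon = 2.0 / 255.0   # a small budget keeps the perturbation imperceptible
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    with torch.no_grad():
        print("clean prediction:    ", model(x).argmax(dim=1).item())
        print("perturbed prediction:", model(x_adv).argmax(dim=1).item())

Larger perturbation budgets, or iterated variants such as projected gradient descent, make the effect more pronounced.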

OUR RESEARCH DIRECTIONS WILL BE ORGANIZED INTO FOUR MAIN AXES

  1. regularization of deep networks from a functional point of view;
  2. new simplicity principles for unsupervised learning of deep models;
  3. transfer learning and neural architecture search;
  4. multidisciplinary collaborations with other scientific fields.

ACTIVITIES

The chair currently collaborates with the French companies Criteo and Valeo and with the international company Facebook through several CIFRE contracts (seven in total). Several collaborations have also been established with the PRAIRIE institute, in particular with Jean Ponce and Alexandre d’Aspremont, and with other MIAI chairs (Diane Larlus, Anatoli Juditsky).

CHAIR EVENTS

  • PAISS summer school, July 2021, which attracted 300 students and researchers from all over the world.
  • Talk given at the MalGA Seminar (online), University of Genoa, 2020.
  • Keynote at ICT Innovations (online), Skopje, 2020.
  • Talk given at the DataSig Seminar, Oxford, 2020.

SELECTED LIST OF PUBLICATIONS 

  • Grégoire Mialon, Dexiong Chen, Alexandre d'Aspremont, Julien Mairal. An Optimal Transport Kernel for Feature Aggregation. International Conference on Learning Representations (ICLR), 2021.

  • B. Lecouat, J. Ponce and J. Mairal. Lucas-Kanade Reloaded: End-to-End Super-Resolution from Raw Image Bursts. International Conference on Computer Vision (ICCV). 2021.

  • M. Caron, H. Touvron, I. Misra, H. Jégou, J. Mairal, P. Bojanowski and A. Joulin. Emerging Properties in Self-Supervised Vision Transformers. International Conference on Computer Vision (ICCV). 2021.

  • G. Beugnot, J. Mairal, and A. Rudi. Beyond Tikhonov: Faster Learning with Self-Concordant Losses via Iterative Regularization. Adv. Neural Information Processing Systems (NeurIPS). 2021.

  • T. Bodrito, A. Zouaoui, J. Chanussot and J. Mairal. A Trainable Spectral-Spatial Sparse Coding Model for Hyperspectral Image Restoration. Adv. Neural Information Processing Systems (NeurIPS). 2021.

  • A. Betlei, E. Diemert, M.-R. Amini. Uplift Modeling with Generalization Guarantees. ACM KDD. 2021.

  • Grégoire Mialon, Alexandre d'Aspremont, Julien Mairal. Screening Data Points in Empirical Risk Minimization via Ellipsoidal Regions and Safe Loss Functions. AISTATS 2020 - 23rd International Conference on Artificial Intelligence and Statistics, Jun 2020, Palermo / Virtual, Italy.

  • Dexiong Chen, Laurent Jacob, Julien Mairal. Convolutional Kernel Networks for Graph-Structured Data. ICML 2020 - Thirty-seventh International Conference on Machine Learning, Jul 2020, Vienna, Austria.

  • Bruno Lecouat, Jean Ponce, Julien Mairal. A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding. Conference on Neural Information Processing Systems (NeurIPS), Dec 2020, Virtual-only.

  • Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski et al. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments. Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS), Dec 2020, Virtual-only.

  • Nikita Dvornik, Cordelia Schmid, Julien Mairal. Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification. ECCV 2020 - European Conference on Computer Vision.

  • Bruno Lecouat, Jean Ponce, Julien Mairal. Fully Trainable and Interpretable Non-Local Sparse Models for Image Restoration. ECCV 2020 - European Conference on Computer Vision.

  • Andrei Kulunchakov, Julien Mairal. Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise. Journal of Machine Learning Research, 2020, 21 (155), pp.1-52.

  • Andrei Kulunchakov, Julien Mairal. A Generic Acceleration Framework for Stochastic Composite Optimization. NeurIPS 2019 - Thirty-third Conference on Neural Information Processing Systems, Dec 2019, Vancouver, Canada.

  • Alberto Bietti, Julien Mairal. On the Inductive Bias of Neural Tangent Kernels. NeurIPS 2019 - Thirty-third Conference on Neural Information Processing Systems, Dec 2019, Vancouver, Canada.

  • Dexiong Chen, Laurent Jacob, Julien Mairal. Recurrent Kernel Networks. NeurIPS 2019 - Thirty-third Conference on Neural Information Processing Systems, Dec 2019, Vancouver, Canada.