Towards More Data Efficiency in Machine Learning

Description

In short

The grand challenge of this chair is to build theoretical and methodological foundations for deep learning models, and improve their robustness, their generalization capabilities, and their ability to learn discriminative representations without supervision.

Scientific objectives and context

Training deep neural networks is challenging when the amount of annotated data is small or in the presence of adversarial perturbations. In particular, for convolutional neural networks, it is possible to engineer visually imperceptible perturbations of the input that lead to arbitrarily different model predictions. This robustness issue is related to the problem of regularization and to the ability to generalize from few training examples. Our objective is to develop theoretically grounded approaches that address the data-efficiency issues of such high-dimensional models.
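The adversarial-perturbation phenomenon can be illustrated on a toy linear model: the fast gradient sign method (FGSM) perturbs an input by a small step in the sign of the loss gradient with respect to the input. The sketch below (a minimal illustration with made-up weights and epsilon, not drawn from the chair's work) shows how such a bounded perturbation can flip the prediction of a logistic classifier:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method for a logistic classifier.

    Loss: -log sigmoid(y * (w.x + b)) with label y in {-1, +1}.
    The gradient w.r.t. x is -y * sigmoid(-y * (w.x + b)) * w;
    FGSM moves each coordinate of x by eps in the gradient's sign.
    """
    margin = y * (np.dot(w, x) + b)
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w   # d(loss)/dx
    return x + eps * np.sign(grad)

# Toy example: a point correctly classified with a small margin
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.3, 0.1])           # w.x + b = 0.1 > 0, predicted +1
x_adv = fgsm_perturb(x, w, b, y=+1, eps=0.2)

print(np.dot(w, x) + b)            # positive: original prediction is +1
print(np.dot(w, x_adv) + b)        # negative: prediction flipped
```

For deep networks the same attack is computed via backpropagation through the model, and the resulting perturbation can remain visually imperceptible, which motivates the regularization viewpoint pursued in this chair.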

Our research directions will be organized into four main axes:

  1. regularization of deep networks from a functional point of view;
  2. developing new simplicity principles for unsupervised learning of deep models;
  3. transfer learning and neural architecture search;
  4. pluri-disciplinary collaborations with different scientific fields.

Activities

The chair currently collaborates with the French companies Criteo and Valeo and with the international company Facebook, through seven CIFRE contracts. Several collaborations have also been established with the Prairie institute, with Jean Ponce and Alexandre d'Aspremont, and with other MIAI chairs (Diane Larlus, Anatoli Juditsky).

Chair events

  • The PAISS international summer school, planned for July 2020, was canceled due to COVID-19.
  • Talk at the MalGA Seminar (online), University of Genova, 2020.
  • Keynote at ICT Innovations (online), Skopje, 2020.
  • Talk at the DataSig Seminar, Oxford, 2020.

Scientific publications

  • B. Lecouat, J. Ponce and J. Mairal. Designing and Learning Trainable Priors with Non-Cooperative Games. Adv. Neural Information Processing Systems (NeurIPS). 2020.
     
  • M. Caron, I. Misra, J. Mairal, P. Goyal, P. Bojanowski, A. Joulin. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments. Adv. Neural Information Processing Systems (NeurIPS). 2020.
     
  • A. Kulunchakov and J. Mairal. Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise. Journal of Machine Learning Research (JMLR) 21(155), pages 1–52, 2020.
     
  • B. Lecouat, J. Ponce and J. Mairal. Fully Trainable and Interpretable Non-Local Sparse Models for Image Restoration. European Conference on Computer Vision (ECCV). 2020.
     
  • N. Dvornik, C. Schmid and J. Mairal. Selecting Relevant Features from a Multi-Domain Representation for Few-shot Classification. European Conference on Computer Vision (ECCV). 2020.
     
  • D. Chen, L. Jacob and J. Mairal. Convolutional Kernel Networks for Graph-Structured Data. International Conference on Machine Learning (ICML). 2020.
     
  • G. Mialon, A. d'Aspremont and J. Mairal. Screening Data Points in Empirical Risk Minimization via Ellipsoidal Regions and Safe Loss Functions. International Conference on Artificial Intelligence and Statistics (AISTATS). 2020.
     
  • D. Chen, L. Jacob and J. Mairal. Recurrent Kernel Networks. Adv. Neural Information Processing Systems (NeurIPS). 2019.
     
  • A. Kulunchakov and J. Mairal. A Generic Acceleration Framework for Stochastic Composite Optimization. Adv. Neural Information Processing Systems (NeurIPS). 2019.
     
  • A. Bietti and J. Mairal. On the Inductive Bias of Neural Tangent Kernels. Adv. Neural Information Processing Systems (NeurIPS). 2019.