Axis 2: Embedded and distributed AI and hardware architecture for AI

Future AI must move off the cloud in order to get closer to its users and to overcome problems of communication overhead and data privacy. Hardware architectures for AI (e.g. Neural Processing Units, NPUs) are a key topic for addressing new applications embedded in low-power, low-latency devices (cars, wearable healthcare devices, or even smart sensors). A specific programme will address the related research topics.
At the same time, a new IT paradigm, mixing Edge/Fog/Cloud computing and the IoT, requires advanced resource management. Distributed intelligence is an emerging topic that will make it possible to optimise distributed applications, including distributed learning.
Building upon the scientific effort presented in axis 1, both research topics will make use of online, unsupervised, incremental, and “under constraint” learning in order to provide adaptivity to the environment, customisation to the users, and system efficiency.
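To make the notion of online, incremental learning concrete, the following toy sketch (our own illustration; the learning rate, target function, and update rule are illustrative assumptions, not the chairs' methods) fits a linear model one sample at a time with stochastic gradient descent, so the system adapts from a data stream without storing the full dataset:

```python
# Toy sketch of online (incremental) learning: a linear model y = w*x + b
# is updated from one streaming observation at a time, never storing data.
# The learning rate and target function below are illustrative assumptions.

def sgd_step(w, b, x, y, lr=0.1):
    """Update the two parameters from a single (x, y) observation."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

w, b = 0.0, 0.0
for t in range(2000):
    x = (t % 10) / 10.0          # streaming input
    y = 3.0 * x + 1.0            # hidden target law: y = 3x + 1
    w, b = sgd_step(w, b, x, y)

print(round(w, 2), round(b, 2))  # converges toward w = 3, b = 1
```

The same per-sample update pattern is what makes such methods suitable for resource-constrained embedded devices: memory and compute per step stay constant regardless of how long the stream runs.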

2.1. Neuro-processing units

Two chairs explore variants of NPUs. All the local expertise in semiconductors, non-volatile memory technologies, spike-based or digital design, and innovative circuit and architecture design will be used to find the best trade-off in terms of software flexibility, hardware scalability, power efficiency, throughput, and latency.

  • Hardware for spike-coded neural networks exploiting hybrid CMOS non-volatile technologies - Lorena Anghel & Alexandre Valentian
  • Digital Hardware AI Architectures - Frédéric Pétrot
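As a brief illustration of the computation that spike-coded hardware implements natively (a sketch of our own; the weight, leak, and threshold constants are illustrative assumptions, not hardware parameters), a leaky integrate-and-fire neuron accumulates weighted input spikes into a membrane potential that decays over time and fires when it crosses a threshold:

```python
# Toy sketch of leaky integrate-and-fire (LIF) dynamics, the basic unit of
# spike-coded neural networks: the membrane potential leaks, integrates
# weighted input spikes, and emits an output spike on threshold crossing.
# All constants here are illustrative assumptions, not hardware parameters.

def lif_run(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Return the 0/1 output spike train for a 0/1 input spike train."""
    v = 0.0
    out = []
    for s in input_spikes:
        v = leak * v + weight * s   # leak, then integrate weighted input
        if v >= threshold:          # fire and reset
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

print(lif_run([1, 1, 1, 0, 1, 1, 1]))  # → [0, 1, 0, 0, 1, 0, 1]
```

Because neurons are silent between spikes, such event-driven computation is what allows spike-based circuits to target very low power budgets.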

2.2. Distributed intelligence

This programme will support the implementation of AI applications through optimised algorithms and suitable orchestration software for managing the resources of distributed platforms. Its main objective is to process the huge amounts of data produced by local data analytics and distributed AI.

  • Edge intelligence - Denis Trystram
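One widely studied pattern for distributed learning at the edge, sketched below as our own illustration (the data, model, and round count are illustrative assumptions, not the chair's method), is federated averaging: each node trains a local model on its own data, and only model parameters, never raw data, are sent for aggregation:

```python
# Toy sketch of federated averaging, one pattern for distributed learning:
# each edge node fits a local model y = w*x on its private data, and a
# coordinator averages the parameters. Raw data never leaves the nodes.
# The datasets, model, and hyperparameters are illustrative assumptions.

def local_fit(data, w, lr=0.1, epochs=20):
    """Local SGD on y = w*x over one node's private (x, y) pairs."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x
    return w

def federated_round(node_datasets, w_global):
    """One round: broadcast the global model, train locally, average."""
    local_ws = [local_fit(data, w_global) for data in node_datasets]
    return sum(local_ws) / len(local_ws)

# Three edge nodes, each holding samples of the same law y = 2x.
nodes = [[(1.0, 2.0)], [(2.0, 4.0)], [(0.5, 1.0)]]
w = 0.0
for _ in range(5):
    w = federated_round(nodes, w)
print(round(w, 2))  # approaches 2.0
```

The communication cost per round is one parameter vector per node, which is exactly the kind of bandwidth/privacy trade-off that orchestration software for edge platforms must manage.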