Digital Hardware AI Architectures


The Digital Hardware AI Architectures chair focuses on the integration of highly energy-efficient hardware/software architectures that implement AI tasks in general, and deep neural networks in particular.

The main challenges concern the tight integration of AI accelerators in software-intensive systems while meeting a wealth of non-functional requirements: low to very low power consumption, easy system-level co-processor usage, reproducibility of results, real-time and low-latency computation, virtualization of AI functions for deployment on diverse execution platforms, compatibility with academic and industrial machine learning frameworks, etc.


A cooperation between the University of Salerno (Italy), STMicroelectronics (Agrate, Italy), and the Chair addresses tiny binary neural networks.
A CIFRE PhD has started in cooperation with STMicroelectronics, Crolles, on an ultra-low-power TPU.
The idea of a TCAM-based ternary neural network accelerator is under investigation, with the aim of building a high-efficiency, very low-power matrix multiplication engine.
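The appeal of ternary weights is that matrix multiplication degenerates into additions and subtractions, which a content-addressable structure can realize by pattern matching instead of arithmetic. The sketch below is purely illustrative (it is not the Chair's actual accelerator design) and shows, in NumPy, why a ternary matrix-vector product needs no multipliers:

```python
import numpy as np

def ternary_matvec(W, x):
    """Multiplier-free matrix-vector product with ternary weights.

    W entries are restricted to {-1, 0, +1}, so each output element is
    just a sum of the inputs selected by +1 weights minus the inputs
    selected by -1 weights -- the kind of select-and-accumulate a
    TCAM-style engine can implement without multipliers.
    """
    assert set(np.unique(W)).issubset({-1, 0, 1})
    # Accumulate inputs where the weight is +1, subtract where it is -1.
    return (x * (W == 1)).sum(axis=1) - (x * (W == -1)).sum(axis=1)

W = np.array([[1, 0, -1],
              [-1, 1, 0]])
x = np.array([3.0, 2.0, 1.0])
print(ternary_matvec(W, x))  # identical to W @ x, computed without multiplies
```

In hardware, the `(W == 1)` / `(W == -1)` masks become wiring and the sums become adder trees, which is why ternary networks map so efficiently to low-power logic.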
The compression of table-based implementations of complex functions (exponentials, logarithms, trigonometric functions, etc.) on FPGAs is being studied, so as to leave as much of the FPGA as possible available for computing convolutions and activation functions.
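One simple way to shrink such a table (a sketch only, not the compression scheme under study) is to store far fewer samples and interpolate between them, trading memory for a bounded approximation error:

```python
import numpy as np

def build_table(f, lo, hi, entries):
    """Sample f at `entries` evenly spaced points on [lo, hi]."""
    xs = np.linspace(lo, hi, entries)
    return xs, f(xs)

def table_lookup(xs, ys, x):
    """Evaluate the tabulated function at x by linear interpolation.

    Storing only a coarse table and interpolating is a basic form of
    table compression: fewer memory words on the FPGA, at the cost of
    an interpolation error that shrinks with the table size.
    """
    return np.interp(x, xs, ys)

# 64-entry table for exp on [0, 4] instead of a dense lookup table.
xs, ys = build_table(np.exp, 0.0, 4.0, 64)
approx = table_lookup(xs, ys, 1.0)
print(abs(approx - np.exp(1.0)))  # small interpolation error
```

In an FPGA implementation the table would sit in block RAM and the interpolation would reduce to one subtraction, one multiplication, and one addition per evaluation.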
Finally, an FPGA backend for PyTorch is being developed to ease hardware design-space exploration of network architectures.

Chair events

Invited speech at the Applied Machine Learning Days, Lausanne, Switzerland, January 25-29, 2020.

Scientific publications

  • A. De Vita et al.: Low Power Tiny Binary Neural Network with improved accuracy in Human Recognition Systems. DSD 2020: 309-315. Best paper candidate.
  • O. Muller et al.: Efficient Decompression of Binary Encoded Balanced Ternary Sequences. IEEE Trans. Very Large Scale Integr. Syst. 27(8): 1962-1966 (2019)
  • A. Prost-Boucle et al.: High-Efficiency Convolutional Ternary Neural Networks with Custom Adder Trees and Weight Compression. ACM Trans. Reconfigurable Technol. Syst. 11(3): 15:1-15:24 (2018)
  • L. Andrade et al.: Overview of the state of the art in embedded machine learning. DATE 2018: 1033-1038
  • H. Alemdar et al.: Ternary neural networks for resource-efficient AI applications. IJCNN 2017: 2547-2554