Deep artificial neural networks have become the flagship algorithms of artificial intelligence, with achievements now routinely highlighted even in the general press. In the first part of the talk, we will discuss the basic principles that allow these algorithms to perform so well. We will also emphasize their limitations, and the fundamental reason why conventional computers are extremely energy inefficient at running them. We will then see that implementing artificial neural networks in hardware, exploiting the physics of electron nanodevices, can overcome this inefficiency, although major limitations remain when it comes to learning. We will then present several promising directions for highly bioinspired neural networks that can be implemented naturally through device physics, and that learn in a way much closer to the brain than to traditional artificial neural networks. These results pave the way towards low-energy artificial intelligence, but also raise new questions about how to exploit device physics in electronics.
Centre de Nanosciences et Nanotechnologies
Centre National de la Recherche Scientifique, Orsay