Artificial neurons are a million times faster than our neurons


Producing artificial neurons that are more efficient than human neurons… The idea is not new, but researchers at MIT have taken it to a whole new level: they claim to have created an artificial neural network capable of running a million times faster than biological neurons.

This feat was reportedly achieved using an "analog" neural network. But what is the point of creating artificial neurons in the first place? To understand, we must return to the concept of a "neural network". According to the Brain Research Consortium's definition, neurons can be considered the "basic working units" of the brain. They are specialized cells that transmit information to other neurons, depending on their area of specialization. They generally consist of:

  • the dendrites, which receive the nerve signal;
  • the soma, the cell body, which decodes it;
  • the axon, which transmits it.

These neurons are connected to one another by synapses, which link the axon of one neuron to the dendrites of another. They communicate via electrical signals called "action potentials", which trigger the release of neurotransmitters. The latter are "chemical messengers" that cross the synapses to pass information along. This, in outline, is a biological neural network.

An artificial neural network belongs to the field commonly referred to as "artificial intelligence". It is a system that is "fed" a large amount of data in order to "learn" and extract logical connections in pursuit of a particular goal. These learning methods are inspired by the workings of biological neurons, which is why we speak of an "artificial neural network".
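The analogy above can be sketched in a few lines of code. The following is a minimal, illustrative model of a single artificial neuron (not the MIT team's system): the weighted sum plays the role of the dendrites and soma, and the activation output plays the role of the axon. The weight and bias values are made up for the example.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs (the
    "dendrites"), combined in the cell body (the "soma"), then passed
    through an activation function whose output (the "axon") would feed
    the next layer."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation, in (0, 1)

# Example: two inputs with hand-picked (illustrative) weights
out = artificial_neuron([1.0, 0.5], [0.4, -0.2], 0.1)
```

Real networks chain thousands or millions of such units into layers; the principle per unit stays the same.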

A learning system inspired by biological neurons

In practice, the data is propagated through an artificial "network" of neurons, which are generally virtual: they are points in the network linked together by computer code (interconnected, in a sense). This network receives incoming information and training data, and produces outgoing information.

In both cases, we find the phenomenon of "learning", which involves data processing. In our biological brain, the connections between neurons, the synapses, are strengthened or weakened through experience and learning. In an artificial neural network, the principle is similar: the connections between network points are weighted as large amounts of data are processed. This is the idea behind deep learning.

The novelty the scientists introduced here is a neural network that performs these calculations very quickly and with low power requirements. To achieve this, they explained, they relied not on a digital neural network but on an analog one. So let us return to the difference between analog and digital.

Analog and digital are two different modes of operation. Both allow data to be transmitted and stored: sound, images, video… The analog system dates from the beginnings of electricity, whereas digital appeared with the computer. In an analog system, the basic principle is to reproduce the signal to be recorded in an analogous form.

Digital and analog

For example, analog television worked on this principle. The image to be transmitted was converted into electrical signals, called a "video signal", characterized by their frequency, i.e. the number of oscillations per second. These electrical signals were then retransmitted via an electromagnetic wave shaped to follow the same amplitude as the original signal. The transmitted signal was thus a kind of "reproduction" of the original.

In a digital system, the signal to be recorded is converted into a sequence of 0s and 1s: the amplitudes are no longer reproduced, but are encoded and then decoded on arrival. This is what changed with the switch to digital television.
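The encoding step described above can be sketched as a quantizer: each continuous amplitude is snapped to the nearest of a fixed set of discrete levels. This is a simplified illustration (real digitization also involves sampling in time); the signal values are made up.

```python
def quantize(signal, levels=2):
    """Digitize an 'analog' signal: map each continuous sample (assumed
    normalized to [0, 1]) to the nearest of a fixed number of discrete
    amplitude levels."""
    step = 1.0 / (levels - 1)
    return [round(s / step) * step for s in signal]

analog = [0.0, 0.2, 0.4, 0.7, 1.0]    # continuous amplitudes
digital = quantize(analog, levels=2)  # two levels only, like 0 and 1
```

With only two levels, all the intermediate amplitudes are lost, which is exactly the trade-off between the "infinite" amplitudes of analog and the two values of digital discussed below.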

In a digital system, the signal thus takes only two amplitude values, instead of the infinity of values available in analog. Until now, artificial neural networks have mostly worked on the digital principle: the network weights are programmed using learning algorithms, and the calculations are performed on sequences of 0s and 1s. By applying an analog system instead, the MIT scientists were able to create a neural network that is, according to them, faster and more efficient than biological neurons. A million times faster, to be exact.

In an analog deep learning system, it is therefore not a matter of transferring data as 0s and 1s, but of "increasing and decreasing the electrical conductance of protonic resistors", which enables machine learning, reads the MIT statement. Conductance is defined as the ability to let current flow (the inverse of resistance). "The conductance is controlled by the movement of protons. To increase the conductance, more protons are pushed into a channel in the resistor, while to decrease the conductance, protons are removed. This is accomplished by using an electrolyte (similar to that of a battery) that conducts protons but blocks electrons."

Electrical resistance is a physical property of a material that limits the flow of electric current in a circuit; a component with this property, a resistor, is used to limit the passage of electrons. In the present case, it is a key element, since it is what regulates the movement of the protons.
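The relationship between resistance, conductance, and current is just Ohm's law, which is worth spelling out since it is what lets a device "store" a weight as a conductance. The sketch below uses made-up resistance values purely for illustration.

```python
def current_through(resistance_ohm, voltage_v):
    """Ohm's law: conductance G = 1/R, and current I = G * V.
    In a protonic resistor, moving protons into or out of the channel
    changes R (and therefore G), which is how a weight value is stored
    and read out as a current."""
    g = 1.0 / resistance_ohm   # conductance in siemens
    return g * voltage_v       # current in amperes

# Pushing protons in lowers resistance -> higher conductance -> more current
i_low_g  = current_through(resistance_ohm=1000.0, voltage_v=1.0)  # 0.001 A
i_high_g = current_through(resistance_ohm=100.0,  voltage_v=1.0)  # 0.01 A
```

Reading a weight is then simply measuring a current: no binary encoding or decoding is involved.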

A strong resistance to electrical pulses

Why does this process make the neural network run faster? "First, the computation is performed in memory, so enormous loads of data are not transferred back and forth from memory to a processor," the scientists explain. "Analog processors also perform operations in parallel. If the matrix size expands, an analog processor does not need more time to complete the new operations, because all the computation occurs simultaneously."
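The parallel operation described in the quote can be modeled as a crossbar array: each conductance stores one entry of the weight matrix, and applying the input voltages produces every output current at once (in the physical device, by Kirchhoff's current law). The sketch below simulates that matrix-vector product with ordinary arithmetic; the conductance and voltage values are invented for the example.

```python
def crossbar_output(conductances, voltages):
    """Toy model of an analog crossbar: each row of conductances encodes
    one row of the weight matrix. Applying the input voltages yields all
    output currents 'simultaneously' in hardware; here we simulate the
    same matrix-vector product I = G @ V with plain loops."""
    return [sum(g * v for g, v in zip(row, voltages)) for row in conductances]

G = [[1.0, 2.0],
     [0.5, 0.5]]           # weight matrix encoded as conductances
V = [1.0, 2.0]             # input vector encoded as voltages
I = crossbar_output(G, V)  # output currents: [5.0, 1.5]
```

In software this loop takes more steps as the matrix grows; in the analog device, a larger matrix just means more resistors switching at the same instant, which is the source of the speed advantage.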

Operating speeds thus reach the nanosecond scale. This is possible in part because the scientists used a specific material: inorganic phosphosilicate glass (PSG), similar to the material found in desiccant bags. It is a very good proton conductor, because it contains many nanometer-sized pores that allow protons to pass, while also being able to withstand strong pulsed electric voltages. This robustness was essential, according to the scientists, because it is what allows a higher voltage to be applied, and thus such high speeds to be reached.

"The action potential in biological cells rises and falls with a timescale of milliseconds, since the voltage difference of about 0.1 V is constrained by the stability of water," says Ju Li, Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. "Here, we apply up to 10 V across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices."
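The "million times faster" headline follows directly from the quoted timescales, which a quick calculation confirms:

```python
# Back-of-the-envelope check of the headline figure:
# biological action potentials evolve on ~millisecond timescales,
# while the protonic resistors are modulated on ~nanosecond timescales.
action_potential_s = 1e-3   # ~1 millisecond, in seconds
protonic_pulse_s = 1e-9     # ~1 nanosecond, in seconds
speedup = action_potential_s / protonic_pulse_s  # ~1,000,000
```

The 100-fold increase in voltage (0.1 V to 10 V) is what makes such short switching times possible without destroying the material.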

The scientists hope to re-engineer this system so that it is suitable for mass manufacturing. They have high hopes for this advance: "Once you have an analog processor, you will no longer be training the networks everyone else is working on, but networks of unprecedented complexity that no one else can afford, surpassing anything that was previously possible. In other words, this is not a faster car, this is a spacecraft," adds Murat Onen, lead author and postdoctoral fellow at MIT.

Source: Science
