The Nobel Committee for Physics took everyone by surprise this year. By recognizing, on Tuesday, October 8, two pioneers of "artificial neural networks," the American John Hopfield (91) and the British Geoffrey Hinton (76), it acknowledged the current wave of artificial intelligence, a field more readily associated with computer science.
“It’s a recognition that one branch of physics, statistical physics, has made the effort to reach out to other fields. It’s good news,” said Rémi Monasson, a CNRS (French National Center for Scientific Research) researcher at the ENS Physics Laboratory in Paris. Stéphane Mallat, professor at the Collège de France, hailed the prize as “surprising” and noted that, in return, artificial intelligence has been helping physicists a great deal these days, through imaging, modeling and simulations.
It’s hard to discern the physics behind the words written by ChatGPT, the images created by Midjourney, the videos generated by Sora or the brilliant Go moves of AlphaGo. The fact that one of the two winners, Hinton, is a computer scientist and neuroscientist − not a physicist − hasn’t helped either. And yet…
Turning a network into a memory
The most talked-about artificial intelligence systems at the moment belong to the category of machine learning, and more precisely to the sub-category built on the mathematical model of artificial neural networks: a digital assembly of active and inactive neurons, linked together with varying strengths. In the 1980s, Hopfield, then at Caltech (California), and Hinton, at Carnegie Mellon University (Pennsylvania), demonstrated independently that this technology, loosely modeled on the human brain, could do surprising things, even ones generally thought to be confined to that organ: memorizing, learning, recognizing patterns and more. "It's an illustration of what, in our field, we call emergence: The whole is greater than the sum of its parts," said Marc Mézard, professor at Bocconi University in Milan, by way of summary. Physicists had already demonstrated this power in their own field. A simple grid of magnetic needles, each pointing up or down and interacting with its neighbors on a checkerboard, can reproduce the properties of a magnetic material. Physicist Giorgio Parisi, Nobel laureate in 2021, an expert in statistical mechanics, the science that explains macroscopic phenomena based on microscopic behaviors, developed this theory for more complex materials.
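The "needles on a checkerboard" picture is the classic Ising model of statistical physics. A minimal sketch, assuming ferromagnetic nearest-neighbor coupling on a periodic grid (the function name, grid size, and coupling constant here are illustrative, not from the article):

```python
import numpy as np

def ising_energy(spins, J=1.0):
    """Energy of a grid of spins (+1 = needle up, -1 = needle down).

    Each spin interacts with its right and down neighbors (periodic
    edges). With J > 0, aligned neighbors lower the energy, so the
    ordered, magnetized state is favored - a macroscopic property
    emerging from purely local rules.
    """
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -J * np.sum(spins * right + spins * down)

aligned = np.ones((4, 4))                          # all needles up
checker = np.indices((4, 4)).sum(axis=0) % 2 * 2 - 1  # alternating up/down
```

On this toy grid, `ising_energy(aligned)` comes out lower than `ising_energy(checker)`: the "magnet" is the low-energy configuration.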
Hopfield, who trained in statistical and solid-state physics at Cornell University and Bell Laboratories, but also pursued biology and neuroscience, built another type of network in 1982, in which binary neurons, either active or inactive, were connected in pairs with interactions of varying strength. He studied the evolution of this network over time: At each step, a neuron changed state according to its links with its neighbors. He discovered that, in the end, there were several stable configurations, and realized that this property could turn the network into a memory: The stable configurations would be the elements to remember. He then found a way to choose the initial interactions so that they give rise to the desired configurations. Finally, he tested the robustness of his system: Even when disturbed, the network "corrected" errors and recovered the memorized configuration.
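The steps above can be sketched in a few lines. This is a minimal Hopfield-style network, assuming ±1 neurons, the classic Hebbian rule for choosing the interactions, and simple sequential updates (function names and sizes are illustrative):

```python
import numpy as np

def train(patterns):
    """Choose the interactions (Hebbian rule): links between neurons
    that co-activate in a stored pattern are reinforced."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no neuron interacts with itself
    return W / n

def recall(W, state, steps=10):
    """Let the network evolve: each neuron updates its state from its
    links with its neighbors until the configuration settles."""
    state = state.copy()
    for _ in range(steps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Memorize one pattern, disturb it by flipping two neurons, and watch
# the network "correct" the errors and recover the stored configuration.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1
noisy[3] *= -1
restored = recall(W, noisy)     # converges back to `pattern`
```

The stored pattern is a stable configuration of the dynamics, so the perturbed state slides back to it: exactly the error-correcting memory Hopfield described.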