In typical artificial neural networks, neurons are adjusted according to global calculations performed by a central processor, whereas in the brain neurons and synapses self-adjust based on local information. A man-made self-adjusting (distributed) system capable of solving machine-learning problems would have substantial scaling advantages over typical computational neural networks in power consumption, speed, and robustness to damage. Furthermore, such a system would allow us to study physical learning without the added complexity of biology. Here we unveil the second-generation design of such a system: a transistor-based self-adjusting analog network that trains itself to perform a wide variety of tasks. We demonstrate basic features of the system, including the ability to monitor all of its internal states. The platform is already faster than a simulation of itself, making it an exciting testbed for the investigation of physical learning.