Artificial neural networks are computer programs that simulate how a collection of representative neurons (or, more accurately, groups of neurons) can interact to perform a computational task. Neural networks have been trained to recognize faces, read handwriting, complete partially presented patterns, and continue temporal sequences. In short, they perform many of the tasks that our brains handle with ease but that computers programmed with conventional rules-based routines cannot.
In the ’80s and ’90s, the Department of Defense funded most academic research on artificial neural networks. I speculate the military hoped for a means to imbue machines with cognition. Perhaps so they could make themselves some Terminators or whatever. These days, interest in artificial neural networks has waned: many cognitive scientists contend that these models do not adequately capture the more intricate physiological mechanisms underlying cognition, and consequently deem them unworthy of study.
Few would dispute that neural networks are very crude models of how the brain could perform a cognitive task. It is the physicist in me that finds them fascinating and worthy of study. Like the subatomic particles that make up matter, neural networks are more than just the sum of their parts. Their smallest elements, units, are very simple: each takes input from, and sends output to, the other units in the network. The connections between units vary in strength, but each unit simply adds together all of the input it receives from the other units and conveys this sum with its output. That’s all. To first order, this is what physiological neurons do as well.
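To make this concrete, here is a minimal sketch in Python of a unit as just described. The function and variable names are mine, chosen for illustration only; real implementations differ, but the arithmetic is the same: weight each incoming signal by its connection strength and sum.

```python
def unit_output(inputs, weights):
    """Output of a single unit: the sum of incoming activity,
    each input scaled by the strength of its connection."""
    return sum(w * x for w, x in zip(weights, inputs))

# A unit receiving output from three other units:
incoming = [0.5, -1.0, 2.0]    # activity of the other units
strengths = [0.8, 0.3, -0.5]   # connection strengths onto this unit
print(unit_output(incoming, strengths))  # roughly -0.9
```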
The magic of artificial neural networks lies in the connections between units. A connection dictates the extent to which one unit’s output activity affects another unit. It is how these connection strengths are adjusted that differentiates a network that, say, recognizes faces from one that generates a sine wave when cued. Though these examples perform very different tasks, their underlying functional element, the unit, is identical. Given enough units and the processing power to appropriately adjust the connections between them, artificial neural networks can be trained on a great variety of complex cognitive tasks.
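As a toy illustration of what adjusting the connections means, the sketch below lets a single unit learn to double its input using a simple error-driven update rule. This is one common approach, not the specific procedure behind the face-recognition or sine-wave examples, and the names and learning rate are assumptions made for the example.

```python
weight = 0.0          # initial connection strength
learning_rate = 0.1   # how aggressively to adjust the connection

# (input, desired output) pairs: the target behavior is "output twice the input"
training_pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for _ in range(50):
    for x, target in training_pairs:
        output = weight * x                  # the unit's summed, weighted input
        error = target - output              # how far off the output is
        weight += learning_rate * error * x  # nudge the connection to shrink the error

print(weight)  # converges toward 2.0: the unit itself never changed, only its connection did
```

The unit here is the same trivial adder throughout; only the connection strength moves, yet the network's behavior changes from doing nothing to reproducing the target pattern.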
Could neural networks ever embody an artificial intelligence? Probably not. But my materialist philosophy leads me to believe they are on the right track.