Neurocognitive models
George Miller is a well-known psychologist, famous for his paper on the size of the short-term memory span, i.e. the “magical” number seven (Miller, 1994). He recalled that 1956 was a critical year for the cognitive revolution: McCarthy, Minsky, Shannon and Rochester organized the meeting that coined the term "artificial intelligence" (Miller, 2003). Many other famous researchers, such as Chomsky, Newell and Simon, presented at a related symposium at MIT that same year. They realized that “it was becoming clear in several disciplines that the solution to some of their problems depended crucially on solving problems traditionally allocated to other disciplines”, and this insight set off the cognitive revolution.
Since then, neurocognitive models have been developed, such as the perceptron, which is still used in AI research (Rosenblatt, 1958). One of my favorite works from the early days of cognitive science presents Grossberg’s (1978) thoughts on the formal features of neural networks that aim to simulate the human brain. For instance, he realized that neurons must have a maximum and a minimum firing rate. Moreover, a neuron may receive only a single input and must still have a chance to become activated; during neurocognitive development, however, a nearly unlimited number of input neurons can dock onto the same neuron. As a consequence of these assumptions, the activation change of a neuron must be a nonlinear, saturating function of its input. Though the functional form can vary, inventions such as the softmax function paved the way for well-working artificial neural networks.
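To make this concrete, here is a minimal Python sketch (my own illustration, not Grossberg’s original equations): a saturating nonlinearity such as the sigmoid squashes the summed input into a fixed range, so the activation stays bounded whether one input or “nearly unlimited” inputs dock onto the neuron.

```python
import numpy as np

def sigmoid(net_input):
    """Saturating nonlinearity: squashes any net input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-net_input))

rng = np.random.default_rng(0)

# Whether one input or ten thousand inputs dock onto the neuron,
# its activation never leaves the (0, 1) range.
for n_inputs in (1, 100, 10_000):
    weights = rng.normal(size=n_inputs)
    inputs = rng.normal(size=n_inputs)
    activation = sigmoid(weights @ inputs)
    print(f"{n_inputs:>6} inputs -> activation {activation:.4f}")
```

With many unscaled inputs the sigmoid simply saturates near 0 or 1, which is exactly Grossberg’s point: the firing rate has a built-in maximum and minimum.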
Though modern-day AI systems such as transformers compute activation changes in part as a linear function of the input, they circumvent the problem of maximum and minimum activation through softmax-normalized all-to-all connections between neurons, which result in a well-defined maximum activation.
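Here is a tiny NumPy sketch of that idea (a toy illustration under my own assumptions, not the implementation of any particular transformer): queries, keys and values are linear functions of the input, but the softmax over the all-to-all connections yields weights that sum to one per neuron, so each output is a convex combination of the values and is therefore bounded.

```python
import numpy as np

rng = np.random.default_rng(7)

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy single-head self-attention: 5 "neurons" (tokens), 8 features, head size 4.
n_tokens, d_model, d_k = 5, 8, 4
X = rng.normal(size=(n_tokens, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv      # linear functions of the input
A = softmax(Q @ K.T / np.sqrt(d_k))   # all-to-all weights; each row sums to 1
out = A @ V                           # convex combination of the value rows

print(A.sum(axis=1))                  # -> all ones
# A convex combination can never exceed the largest value entry
# (nor undercut the smallest), so the activation is well bounded.
assert V.min() - 1e-9 <= out.min() and out.max() <= V.max() + 1e-9
```

The assert at the end verifies the boundedness claim: because the attention weights sum to one, no output can exceed the largest value entry.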
For some time it was hard to motivate my students to engage with these theoretical considerations, but now everybody can see that the decades of connectionist research were worthwhile: “neural network” models now have practical value. They not only perform useful tasks, but they also give us the chance to become a real natural science, i.e. one that makes well-working predictions of human performance. Feynman, for instance, held that social “science” is not a "real" natural science: “What I cannot create, I do not understand”. Now these days are over, and we have reached the critical milestone of becoming a real natural science!
When Hinton and Hopfield received the Nobel Prize in Physics in 2024 -- that gave me goosebumps. Hardly anyone seemed to care about these pioneers of AI for half a century, but finally they received the recognition they deserved. This demonstrates that you should always pursue what you think is worth your working time, even when the world doesn't care.
Wanna join the revolution?
-> Join the CNM!
Author: Markus J. Hofmann
Further reading (in German):
- An introduction to neurocognitive models:
- My historical perspective on neurocognitive models (Chapter 1):
References:
- Grossberg, S. (1978). A theory of visual coding, memory, and development. In E. L. J. Leeuwenberg & H. F. J. M. Buffart (Eds.), Formal theories of visual perception (pp. 7–26).
- Miller, G. A. (1994). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 101(2), 343–352.
- Miller, G. A. (2003). The cognitive revolution: A historical perspective. Trends in Cognitive Sciences, 7(3), 141–144.
- Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408.