Center for Neurocognitive Modeling

Connectionist models

Connectionist models are a class of computational models that have been used for building computationally concrete theories in the social and computational sciences since the 1950s (e.g., Rosenblatt, 1958). The basic idea is to have simple units that are connected to each other, and to capture formal features of the human brain within such a framework. In early psychology, these were often purely theoretical ideas, illustrated, for instance, by 200 units and their connectivity.
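
As an illustration of this basic idea, here is a minimal sketch in Python of a single perceptron unit in the spirit of Rosenblatt (1958); the toy task (logical AND) and all parameter values are hypothetical choices for this example, not part of any cited model:

    import numpy as np

    # One unit: weighted inputs plus a bias pass through a hard
    # threshold to yield a binary response.
    def output(x, w, b):
        return 1 if x @ w + b > 0 else 0

    # Perceptron learning rule: nudge the connections toward the
    # correct response whenever the unit errs.
    def update(x, w, b, target, lr=0.1):
        err = target - output(x, w, b)
        return w + lr * err * x, b + lr * err

    # Hypothetical toy task: learn the logical AND of two inputs.
    w, b = np.zeros(2), 0.0
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    for _ in range(20):  # a few sweeps over the data suffice here
        for x, t in data:
            w, b = update(np.array(x), w, b, t)
    print([output(np.array(x), w, b) for x, _ in data])  # [0, 0, 0, 1]

The learning rule only ever adjusts the connections, so after training the unit’s “knowledge” of AND lies entirely in the two weights and the bias.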

There are two basic types of connectionist models (cf., e.g., Hofmann & Jacobs, 2014). Parallel Distributed Processing (PDP) models consist of interconnected so-called “hidden units”. These units do not refer to any specific thing, and it can be an arduous task to find out what they actually stand for. Such models, however, can learn novel information, which is stored in the connections between the units. There is broad agreement that this architecture basically reflects the way in which the human brain learns novel information; we say it is neurobiologically plausible (O’Reilly, 1998). That is why the units in this architecture are often referred to as “neurons”. In psychology, these early models are often called “toy models”: They illustrate the processes going on in the brain, but they are too simple to perform a task as well as humans do. Modern machine learning models often provide a neural architecture as well, and now that powerful computers are available, upscaled versions of these models are capable of relatively intelligent behavior; for instance, transformer models such as those underlying ChatGPT perform well in routine tasks (Vaswani et al., 2017; LeCun et al., 2015).
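
The following sketch, again only a hypothetical illustration, shows what hidden units buy: a tiny network with one hidden layer learns the XOR task, which no single unit can solve, and everything it learns ends up in the connection weights. Backpropagation serves here as a generic learning procedure; the layer sizes, learning rate, and number of training steps are arbitrary choices:

    import numpy as np

    # Tiny PDP-style network: 2 input units, 4 hidden units, 1 output
    # unit. The learned knowledge is stored entirely in the connection
    # weights W1 and W2.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    W1 = rng.normal(size=(2, 4))  # input -> hidden connections
    W2 = rng.normal(size=(4, 1))  # hidden -> output connections
    b1, b2 = np.zeros(4), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass: activation spreads from input to output.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: the output error is propagated into small
        # changes of each connection weight.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2))  # should approach [[0], [1], [1], [0]]

After training, no single weight is interpretable on its own; the solution is distributed across W1 and W2, which is exactly why hidden units are hard to interpret.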

The other type of connectionist model is the so-called localist connectionist model (e.g., Page, 2000). In this type of model, each unit refers to a specific symbol. In the prototype of a localist connectionist model, the interactive activation model, there are units referring to visual features (see Figure 1). Specific features activate a letter unit, and several letters taken together then activate a visual word unit (McClelland & Rumelhart, 1981). In the Associative Read-Out Model (Fig. 1), we added a semantic layer, using a simple symbolic language model to define associative and semantic connections between word units (Hofmann & Jacobs, 2014; Hofmann et al., 2011; Roelke et al., 2018). Recently, decision mechanisms have also been added, using leaky, noisy evidence accumulators (Sokolovic & Hofmann, 2024).
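
Below is a minimal sketch of such localist dynamics: every unit stands for one word, letter input excites the word units containing those letters, and the word units compete through lateral inhibition. The vocabulary, connection scheme, and parameter values are hypothetical illustrations, not those of the models cited above:

    import numpy as np

    # Localist sketch: each unit names one word; the network is fully
    # interpretable because every unit refers to a specific symbol.
    words = ["cat", "car", "can"]
    letters = "acnrt"

    # Excitatory letter -> word connections: 1 if the word contains
    # the letter, 0 otherwise.
    W = np.array([[1.0 if ch in w else 0.0 for ch in letters]
                  for w in words])

    def settle(letter_input, steps=50, decay=0.1, excite=0.2,
               inhibit=0.3):
        a = np.zeros(len(words))  # word-unit activations
        for _ in range(steps):
            bottom_up = W @ letter_input           # support from letters
            competition = inhibit * (a.sum() - a)  # lateral inhibition
            a = np.clip(a + excite * bottom_up - competition
                        - decay * a, 0.0, 1.0)
        return a

    # Present the letters c, a, t: the unit for "cat" wins the
    # competition, while its neighbours are suppressed.
    inp = np.array([1.0 if ch in "cat" else 0.0 for ch in letters])
    for word, act in zip(words, settle(inp)):
        print(f"{word}: {act:.2f}")

Because each unit names a specific word, the final activations can be read off directly; in a fuller model, the winning unit’s activation could then drive a leaky, noisy accumulator toward a decision threshold.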

Though localist connectionist models are not used in modern AI systems, they provide the unique opportunity to understand everything the network does. We therefore see this framework as a chance to obtain fully explainable AI systems. Typically, however, these models do not learn. They reflect the cognitive operations of the human mind rather than the human brain, and are in effect a simulation of the mind’s phenomenology during cognitive processing.

Author: Markus Hofmann

 

References

  • Hofmann, M. J., & Jacobs, A. M. (2014). Interactive activation and competition models and semantic context: From behavioral to brain data. Neuroscience and Biobehavioral Reviews, 46, 85–104.
  • Hofmann, M. J., Kuchinke, L., Biemann, C., Tamm, S., & Jacobs, A. M. (2011). Remembering words in context as predicted by an associative read-out model. Frontiers in Psychology, 2(252), 1–11.
  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436–444. 
  • McClelland, J. L., & Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: Part 1. An account of basic findings. Psychological Review, 88(5), 375–407.
  • O’Reilly, R. C. (1998). Six principles for biologically based computational models of cortical cognition. Trends in Cognitive Sciences, 2(11), 455–462.
  • Page, M. (2000). Connectionist modelling in psychology: A localist manifesto. Behavioral and Brain Sciences, 23, 443–512.
  • Roelke, A., Franke, N., Biemann, C., Radach, R., Jacobs, A. M., & Hofmann, M. J. (2018). A novel co-occurrence-based approach to predict pure associative and semantic priming. Psychonomic Bulletin and Review, 25(4), 1488–1493.
  • Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408.
  • Sokolovic, L., & Hofmann, M. J. (2024). How to say ‘no’ to a false memory: Leaky and noisy evidence accumulation during associative read-out. Computational Brain & Behavior, 7, 357–377.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Proceedings of the 31st Conference on Neural Information Processing Systems (pp. 6000–6010). Long Beach, CA, USA.
