Gabriel Poesia

Brain computation by assemblies of neurons (@ PNAS 2020)

Christos H. Papadimitriou, Santosh S. Vempala, Daniel Mitropolsky, Michael Collins, Wolfgang Maass


It's still a mystery how the low-level operations carried out by the brain (neurons firing, synapses strengthening) bring about the higher-level cognition that humans and animals exhibit (e.g. perception, reasoning, language). The models we have of how neurons work do not yet translate into anything that can perform interesting high-level tasks in simulations. On the other hand, the models we do have that can perform high-level tasks are not biologically plausible. These include symbolic models (e.g. rational rules, from cognitive science), as well as artificial neural networks, whose structure and learning dynamics are completely unlike those of real neurons (e.g., fully connected layers, with backpropagation updating all parameters at every step).

This paper proposes an intermediate model: a relatively simple calculus describing how assemblies of neurons operate, which is both biologically plausible and capable of performing small but interesting tasks in simulations. Their Assembly Calculus is computationally universal: you can "compile" any Turing machine into a program over assemblies of neurons. At the same time, it's capable of learning, and they show a simple experiment in which assemblies are trained for language generation.
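To make the calculus concrete, here is a minimal NumPy sketch of its central operation, projection, as I understand the paper's model: a brain area is n neurons connected by a random graph with edge probability p, inhibition keeps only the k most-excited neurons firing (a "k-cap"), and Hebbian plasticity multiplies the weight of a synapse between two co-firing neurons by (1 + beta). Repeatedly projecting a stimulus into an area makes the winner set converge to a stable assembly. All parameter values and variable names below are my own illustrative choices, not the paper's simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p, beta, steps = 1000, 31, 0.05, 0.10, 25  # k ~ sqrt(n), as in the paper

# Random synaptic connectivity: stimulus -> area, and recurrent area -> area.
# W[i, j] is the weight of the synapse from presynaptic i to postsynaptic j.
W_stim = (rng.random((n, n)) < p).astype(float)
W_rec = (rng.random((n, n)) < p).astype(float)

# A fixed stimulus: k neurons firing in an upstream area.
stimulus = np.zeros(n)
stimulus[rng.choice(n, k, replace=False)] = 1.0

prev_winners = np.zeros(n)
for t in range(steps):
    # Total synaptic input to each neuron in the target area.
    inputs = W_stim.T @ stimulus + W_rec.T @ prev_winners
    # k-winners-take-all: only the k most-excited neurons fire.
    winners = np.zeros(n)
    winners[np.argsort(inputs)[-k:]] = 1.0
    # Hebbian plasticity: scale synapses from firing presynaptic neurons
    # to firing postsynaptic neurons by (1 + beta).
    W_stim *= 1 + beta * np.outer(stimulus, winners)
    W_rec *= 1 + beta * np.outer(prev_winners, winners)
    overlap = int(winners @ prev_winners)
    prev_winners = winners

# Once the winner set stops changing, an assembly has formed.
print(f"overlap with previous step after {steps} rounds: {overlap}/{k}")
```

With the plasticity term removed (beta = 0), the winner set keeps wandering; with it, the overlap quickly climbs to k, which is the convergence behavior the paper proves and simulates. The other operations (association, merge, pattern completion) are variations on this same project-and-cap loop across multiple areas.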

This model is, of course, still far from being an alternative to neural networks for the purpose of "building programs that work" on real tasks. However, it's quite an interesting intermediate between what we know from biology and what we observe in human cognition.