New Theory of Intelligence May Redefine AI and Neuroscience

The human brain has been a great source of inspiration for recent technological advancements, especially in the fields of deep learning and artificial intelligence.

Moreover, neuroscience as a discipline depends on how experts understand intelligence and the brain as a whole. In other words, these technologies, as they currently stand, rest on what we have perceived intelligence to be. A new theory, however, suggests, with evidence, that scientists may have misconceived the whole concept of intelligence and how the brain functions.

Why This Concept Raises Eyebrows


It is true that, to this day, there has been great debate and disagreement among neuroscientists about what exactly intelligence is. This new concept adds to that uncertainty, calling into question the direction of fields like neuroscience and artificial intelligence, as well as the milestones achieved in them.

Take deep learning, a key branch of AI: most of its models are based on what experts call layered processing, a concept fully inspired by the way neurons work in a biological brain, modeled in this case by artificial neural networks.
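The layered processing mentioned above can be illustrated with a minimal sketch: a feedforward network where each layer transforms the previous layer's output. The layer sizes and random weights below are purely illustrative, not taken from any specific model.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied after each layer.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Pass the input through successive layers: out = relu(W @ in + b).
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(0)
# Hypothetical layer sizes: 4 inputs -> 8 -> 8 -> 2 outputs.
sizes = [4, 8, 8, 2]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(4)
y = forward(x, layers)
print(y.shape)  # (2,)
```

Each layer only sees the output of the layer below it, which is exactly the hierarchical picture the new theory pushes back against.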

Now, according to the Numenta team's view of how the neocortex operates, which they call The Thousand Brains Theory of Intelligence, all the progress in these two disciplines might need rethinking. In other words, the theory could disrupt both neuroscience and AI completely.

What The Thousand Brains Theory of Intelligence Proposes


To be precise, the neocortex is a special region of the human brain that handles higher-order functions, such as spatial reasoning, the generation of motor commands, conscious thought, language, and sensory perception. The researchers at Numenta believe that every region of the neocortex functions independently and learns complete models of concepts and objects.

Their take is that grid cell-like neurons exist throughout the neocortex, and that there is a previously undiscovered type of neuron, which they call the displacement cell. These neurons act as a complement to the grid cells, which likewise exist throughout the neocortex.

Grid cells, in particular, are place-modulated neurons that help represent position. With that in mind, the researchers claim that each cortical column builds models of objects by first learning sensory input, then combining it with a grid cell-derived location, and finally integrating both over movements.
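The pair-a-feature-with-a-location idea can be sketched as a toy model. To be clear, this is not Numenta's actual HTM implementation; the class, coordinates, and features below are hypothetical, chosen only to show how movement updates a location estimate and how (location, feature) pairs accumulate into an object model.

```python
# Toy sketch of a single cortical column, under the assumptions stated above.
class ToyColumn:
    def __init__(self):
        self.location = (0, 0)  # grid-cell-like location estimate
        self.model = {}         # learned object model: location -> feature

    def move(self, dx, dy):
        # Movement (a displacement) updates the location estimate.
        x, y = self.location
        self.location = (x + dx, y + dy)

    def sense(self, feature):
        # Pair the currently sensed feature with the current location.
        self.model[self.location] = feature

# "Learn" a coffee cup by moving a sensor over it.
col = ToyColumn()
col.sense("rim")
col.move(0, -3)
col.sense("handle")
col.move(0, -3)
col.sense("base")
print(col.model)  # {(0, 0): 'rim', (0, -3): 'handle', (0, -6): 'base'}
```

The key point the sketch captures is that the model is anchored to locations on the object itself, not to a position in a processing hierarchy.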

The Concept in Real-Life Example

Explaining their concept, the team gives the example of a coffee cup. When our eyes see and our hands touch a coffee mug, the visual and somatosensory systems observe different parts of the cup simultaneously.

That is, each model of the cup is different and is learned using independent subsets of sensory arrays. This understanding is completely contrary to the commonly held theory, in which sensory input was believed to always be processed through a hierarchy of cortical regions.

The new approach holds that these connections are not hierarchical in nature. Instead, the non-hierarchical mechanism could involve connections across modalities and between the brain hemispheres, making non-hierarchical connections the most reasonable explanation for how sensory input is integrated with movement.