The illustration shows how the brain’s V1 and V2 areas might use information about edges and textures to represent objects. Source: Salk Institute

Programming computers to perform tasks such as recognizing objects or driving a car is technically challenging, in large part because scientists don’t fully understand how the human brain accomplishes them.

However, researchers at the Salk Institute have now analyzed how neurons in a key visual area of the brain, called V2, respond to natural scenes. The work could provide a better understanding of vision processing and how to translate it into artificial intelligence and robotics.

“Understanding how the brain recognizes visual objects is important not only for the sake of vision, but also because it provides a window on how the brain works in general,” said Tatyana Sharpee, an associate professor in Salk’s Computational Neurobiology Laboratory. “Much of our brain is composed of a repeated computational unit, called a cortical column. In vision especially, we can control inputs to the brain with exquisite precision, which makes it possible to quantitatively analyze how signals are transformed in the brain.”

Researchers say vision arises from a series of complex mathematical transformations that computers cannot yet reproduce. Visual perception begins in the eye, which registers patterns of light and dark pixels; those signals are sent to an area at the back of the brain called V1, where they are transformed to correspond to edges in the visual scene.
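As a rough illustration of that V1-like step, the sketch below convolves an image with oriented Gabor filters, a standard textbook model of V1 edge selectivity. The filter parameters and the random "scene" are invented for the example and are not taken from the study.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, wavelength=6.0, sigma=3.0, theta=0.0):
    """Oriented Gabor filter: a sinusoidal grating under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the filter prefers edges at angle `theta`.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + y_t ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

# Toy "scene": random pixels standing in for a natural image.
image = np.random.rand(64, 64)

# Filter responses at four orientations; large values mark oriented edges.
orientations = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
edge_maps = [convolve2d(image, gabor_kernel(theta=t), mode="same") for t in orientations]
```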

From these transformations we are able to recognize faces, cars and other objects, and to tell whether they are moving. How this happens has remained a mystery, because the neurons that encode such objects respond in complicated ways.

The Salk team has developed a statistical method that takes these complex responses and makes them interpretable, which could help translate them into computer models of vision. The researchers used publicly available data on the brain responses of primates watching movies of natural scenes and applied the statistical technique to determine which features in the movies caused V2 neurons to change their responses.
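The article does not spell out the statistical technique, so the sketch below stands in for the general idea with a simple spike-triggered average/covariance analysis on a synthetic "neuron." The data, the hidden feature and the quadratic firing rule are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pixels = 5000, 100                      # toy "movie": 5000 frames of 100 pixels
stimuli = rng.standard_normal((n_frames, n_pixels))

# Hypothetical neuron: it fires when a hidden feature is strongly present,
# regardless of sign -- a quadratic (second-order) dependence, loosely like V2.
hidden_feature = rng.standard_normal(n_pixels)
drive = (stimuli @ hidden_feature) ** 2
spikes = (drive > np.quantile(drive, 0.8)).astype(float)

# Spike-triggered average: near zero here, because no single linear feature explains firing.
sta = (spikes @ stimuli) / spikes.sum()

# Spike-triggered covariance: its leading eigenvector approximately recovers the
# hidden feature, exposing the second-order selectivity that drives the responses.
spike_frames = stimuli[spikes > 0]
stc = np.cov(spike_frames, rowvar=False) - np.cov(stimuli, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(stc)
recovered_feature = eigvecs[:, -1]                  # eigenvector with the largest eigenvalue
```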

“Interestingly, we found that V2 neurons were responding to combinations of edges,” said Ryan Rowekamp, a postdoctoral research associate at Salk.

The Salk team found that V2 neurons process visual information according to three principles. First, they combine edges that have similar orientations, making perception more robust to small changes in the curves that form object boundaries. Second, if a neuron is activated by an edge of a particular orientation and position, then an edge rotated 90 degrees at the same location suppresses the response, a phenomenon called “cross-orientation suppression.” Lastly, relevant patterns are repeated in space in ways that help the brain perceive textured surfaces such as trees or water, and the boundaries between them, as in impressionist paintings.
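To make the second principle concrete, here is a toy numerical illustration (not taken from the study) of a quadratic neuron model whose response grows with edge energy at its preferred orientation and shrinks with edge energy at the orthogonal orientation; the weights are invented.

```python
def quadratic_response(energy_preferred, energy_orthogonal,
                       w_excite=1.0, w_suppress=0.6):
    """Squared filter outputs enter with opposite signs: cross-orientation suppression."""
    return max(0.0, w_excite * energy_preferred ** 2
                    - w_suppress * energy_orthogonal ** 2)

print(quadratic_response(1.0, 0.0))  # preferred edge alone -> strong response (1.0)
print(quadratic_response(1.0, 1.0))  # orthogonal edge added at the same spot -> suppressed (0.4)
print(quadratic_response(0.0, 1.0))  # orthogonal edge alone -> no response (0.0)
```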

Based on these findings, the Salk researchers created what they call the Quadratic Convolutional model, which can be applied to other sets of experimental data.
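The article gives no details of that model, but a minimal sketch of what a quadratic convolutional response could look like is shown below: the same quadratic combination of local edge-filter outputs is applied at every spatial position (weight sharing), echoing the repetition-in-space principle. The filters and weights here are hypothetical.

```python
import numpy as np
from scipy.signal import convolve2d

def quadratic_convolutional_response(image, filters, quad_weights, lin_weights):
    """Pool linear and squared filter maps over all spatial positions."""
    response = 0.0
    for f, w_q, w_l in zip(filters, quad_weights, lin_weights):
        fmap = convolve2d(image, f, mode="valid")   # the same filter is applied everywhere
        response += np.sum(w_q * fmap ** 2 + w_l * fmap)
    return response

# Hypothetical filters: small horizontal- and vertical-edge detectors.
horiz = np.array([[1.0, 1.0], [-1.0, -1.0]])
vert = horiz.T
image = np.random.rand(32, 32)

# Horizontal edge energy excites this toy unit; vertical edge energy suppresses it.
print(quadratic_convolutional_response(image, [horiz, vert],
                                       quad_weights=[1.0, -0.5],
                                       lin_weights=[0.0, 0.0]))
```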

“Models I had worked on before this weren’t entirely compatible with the data, or weren’t cleanly compatible,” Rowekamp said. “So it was really satisfying when the idea of combining edge recognition with sensitivity to texture started to pay off as a tool to analyze and understand complex visual data.”

Researchers believe this method may be able to improve object-recognition algorithms for self-driving cars or other robotic devices.

The full research report can be found in the journal Nature Communications.

To contact the author of this article, email GlobalSpeceditors@globalspec.com