This post is authored by Jasmin Leveille and Gennady Livitz, two Neuromorphics Lab researchers working on the development of the MoNETA brain. The goal of the MOdular Neural Exploring Traveling Agent (MoNETA; Versace and Chandler, 2010) project is to develop an animat, or virtual agent, that can intelligently interact with and learn to navigate a virtual world, making decisions aimed at increasing reward while avoiding danger. The animat is designed to be modular: a whole-brain system, an artificial nervous system comprising many of the cortical and subcortical areas found in mammalian brains, is progressively refined with more complex and adaptive modules and tested in increasingly challenging environments. This post discusses the development of a key component of the visual system.
Unsupervised learning of orientation selectivity maps in a realistic virtual environment
A substantial amount of research has shown that oriented receptive field maps can be learned without supervision from exposure to natural images. In typical scenarios, learning occurs over an extended period of presentation of random patches extracted from natural images. In this work we sought to test whether receptive fields would develop in an animat wandering in a 3D virtual world. Our neural network consists of roughly three stages: retinal cones, LGN cells, and V1 cells. The output of the first stage (retinal cones) was produced by filtering the RGB image received by the animat with filters whose spectral characteristics correspond to those of L, M, and S cones. This arrangement allowed us to examine learning across multiple achromatic (black, white) and chromatic (red, green, blue, yellow) channels; note that most of the self-organization work to date has been conducted on a grayscale channel only. For the LGN center-surround filters, we used self-normalizing, distance-dependent shunting equations (Grossberg, 1982) rather than the usual difference-of-Gaussians. For the chromatic channels we used cells characterized by dual spatial and cone opponency. For learning we relied on the BCM rule (Bienenstock, Cooper and Munro, 1982), combined with center-surround competition within each cortical hypercolumn.
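To make the LGN stage concrete, here is a minimal sketch of a shunting center-surround (ON) cell at equilibrium, in the spirit of Grossberg (1982). The function name, parameter values, and Gaussian widths are illustrative assumptions on our part, not the ones used in MoNETA; the point is the divisive term in the denominator, which self-normalizes the response to the local input level.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shunting_on_cell(image, A=1.0, B=1.0, D=1.0, sigma_c=1.0, sigma_s=3.0):
    """Equilibrium response of a shunting ON-cell with a narrow excitatory
    center and a broad inhibitory surround (illustrative parameters)."""
    C = gaussian_filter(image, sigma_c)   # center: narrow Gaussian average
    S = gaussian_filter(image, sigma_s)   # surround: broad Gaussian average
    # Equilibrium of the shunting equation dx/dt = -A*x + (B - x)*C - (x + D)*S:
    # excitation drives x toward B, inhibition toward -D, and the denominator
    # divisively normalizes the response (self-normalization).
    return (B * C - D * S) / (A + C + S)
```

With B = D, a uniform image yields zero response at any intensity, while local contrast (e.g. a bright spot on a dark background) produces a positive ON response.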
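The BCM rule itself is simple to state: a synapse potentiates when postsynaptic activity exceeds a sliding threshold and depresses below it, with the threshold tracking a running average of the squared activity. The sketch below is a single-neuron toy version under assumptions of our own (rectified linear activation, uniform random input in place of LGN output, illustrative learning rates), and it omits the hypercolumn competition used in the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 64                        # e.g. pixels in one LGN input patch
w = rng.normal(0.0, 0.1, n_inputs)   # synaptic weights
theta = 1.0                          # sliding modification threshold
eta = 1e-3                           # learning rate (illustrative)
tau = 0.01                           # rate at which theta tracks E[y^2]

def bcm_step(x, w, theta):
    """One BCM update: potentiate when y > theta, depress when 0 < y < theta."""
    y = max(float(w @ x), 0.0)                 # rectified postsynaptic activity
    w = w + eta * y * (y - theta) * x          # BCM weight change
    theta = theta + tau * (y ** 2 - theta)     # sliding threshold ~ E[y^2]
    return w, theta

for _ in range(10_000):
    x = rng.random(n_inputs)                   # stand-in for an LGN patch
    w, theta = bcm_step(x, w, theta)
```

In the full network the same rule operates on LGN outputs driven by the virtual world rather than on noise, which is what allows oriented structure to emerge in the weights.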
The animat experienced the virtual environment as shown for example in the movies below.
Example receptive fields learned from various hypercolumns are shown below.
Figure 1. Learned receptive fields. For each channel, receptive fields are grouped per spatial location (many receptive fields are developed at each spatial location). a) Black channel. b) White channel. c) Red channel. d) Green channel. e) Blue channel.
Oriented receptive fields develop in all channels within a few hours of simulation (on a workstation equipped with NVIDIA GTX 295 GPUs). Upon visual inspection, the receptive fields appear less finely tuned in some of the chromatic channels than in the achromatic ones, although more extensive measurements of their tuning properties are needed to substantiate that claim. In any case, this experiment serves as a good stepping stone toward our next modeling experiment, in which we aim to learn to control saccadic eye movements in addition to learning receptive fields.
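One common way to make such tuning measurements quantitative is to probe each learned receptive field with sinusoidal gratings at several orientations and summarize the resulting tuning curve with its circular variance (near 0 for a sharply tuned cell, approaching 1 for an untuned one). The sketch below is an illustrative take, not part of the model: the function names, the numbers of orientations and phases, and the spatial frequency are all assumptions.

```python
import numpy as np

def orientation_tuning(rf, n_orients=16, n_phases=8, sf=0.15):
    """Response of a linear receptive field to gratings at each orientation,
    taking the best phase per orientation (illustrative probe stimulus)."""
    h, w = rf.shape
    yy, xx = np.mgrid[0:h, 0:w]
    responses = []
    for theta in np.linspace(0.0, np.pi, n_orients, endpoint=False):
        best = 0.0
        for phase in np.linspace(0.0, 2 * np.pi, n_phases, endpoint=False):
            grating = np.cos(2 * np.pi * sf * (xx * np.cos(theta)
                                               + yy * np.sin(theta)) + phase)
            best = max(best, float(np.sum(rf * grating)))
        responses.append(best)
    return np.array(responses)

def circular_variance(responses):
    """1 - |sum_k r_k e^{2i theta_k}| / sum_k r_k, over orientations in [0, pi)."""
    thetas = np.linspace(0.0, np.pi, len(responses), endpoint=False)
    return 1.0 - np.abs(np.sum(responses * np.exp(2j * thetas))) / np.sum(responses)
```

An oriented Gabor-like field yields a low circular variance, while an isotropic blob yields a value near 1, which would let the achromatic-versus-chromatic comparison above be stated numerically.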
For more info, please visit this page.
Bienenstock, E.L., Cooper, L. and Munro, P. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. The Journal of Neuroscience, 2, 32-48.
Grossberg S. (1982). Why do cells compete? Some examples from visual perception. The UMAP Journal, 3, 103-121.