Graphics processing units: beyond video games

By Massimiliano Versace | October 22, 2010

A recent poster by Los Alamos National Laboratory researchers led by Steven Brumby, titled "Visual Cortex on a Chip: Large-scale, real-time functional models of mammalian visual cortex on a GPGPU", shows another interesting application of graphics processing units (GPUs) to computational neuroscience. What is GPGPU?

The term GPGPU stands for General-Purpose computation on Graphics Processing Units. GPUs, chips whose main technological push comes from the huge revenues of the gaming market, are in reality high-performance many-core processors that can be used to accelerate a wide range of applications, ranging from physics and chemistry to computer vision and neuroscience.
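
To make the idea concrete, here is a minimal CUDA sketch (the kernel and variable names are mine, purely illustrative): it adds two large vectors with one GPU thread per element, the same data-parallel pattern that neural simulations exploit.

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // Each GPU thread handles one array element -- the data-parallel
    // pattern that GPGPU exploits to accelerate large simulations.
    __global__ void addVectors(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                      // one million elements
        size_t bytes = n * sizeof(float);

        // Allocate and fill host arrays
        float* ha = (float*)malloc(bytes);
        float* hb = (float*)malloc(bytes);
        float* hc = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Allocate device arrays and copy inputs to the GPU
        float *da, *db, *dc;
        cudaMalloc((void**)&da, bytes);
        cudaMalloc((void**)&db, bytes);
        cudaMalloc((void**)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch one thread per element, 256 threads per block
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        addVectors<<<blocks, threads>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("hc[0] = %f\n", hc[0]);              // expect 3.000000
        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

The interesting part is not the arithmetic but the launch: instead of looping over a million elements on the CPU, the GPU spreads the work across thousands of lightweight threads at once.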

The poster by Brumby et al. is remarkable for being an attempt to implement, in a Neocognitron-type hierarchical model, brain areas of the ventral pathway of visual processing (V1, V2, V4, and inferotemporal cortex, or IT). This pathway is the one believed to be responsible for object recognition.

Brumby et al. conclude that "GPGPU acceleration may be the key enabling technology for this type of application, as exploiting the GPGPU enables a better than order-of-magnitude speed-up of execution of the model on workstations, enabling learning of better models and faster execution of the final model."

Having played with GPGPU before the introduction of CUDA (harder times, but good ones nevertheless: 10X to 100X speedups on spiking-neuron simulations were quite achievable...) and having witnessed the progressive expansion of GPGPU into more and more application domains, I will not be surprised to see larger brain areas simulated on GPUs in the near future.
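
For readers wondering what a spiking-neuron simulation looks like on a GPU, here is a minimal CUDA sketch, assuming a simple leaky integrate-and-fire model integrated with forward Euler; the kernel name and parameters are illustrative, not taken from any particular simulator.

    // Illustrative leaky integrate-and-fire update: one GPU thread per neuron.
    // The kernel name and the parameters (dt, tau, v_thresh, v_reset) are
    // assumptions for this sketch, not taken from any particular simulator.
    __global__ void lifStep(float* v, const float* input, int* spiked,
                            int n, float dt, float tau,
                            float v_thresh, float v_reset) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        // Forward-Euler step of dv/dt = (-v + input) / tau
        float vi = v[i] + dt * (-v[i] + input[i]) / tau;

        // Threshold crossing: record a spike and reset the membrane potential
        if (vi >= v_thresh) {
            spiked[i] = 1;
            vi = v_reset;
        } else {
            spiked[i] = 0;
        }
        v[i] = vi;
    }

Each simulated time step becomes a single kernel launch over all neurons, which is where the order-of-magnitude speedups over a serial CPU loop come from.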

And yes, when you buy your next video game, remember: you are indirectly enabling better computational neuroscience simulations!

About Massimiliano Versace

Massimiliano Versace is co-founder and CEO of Neurala Inc. and founding Director of the Boston University Neuromorphics Lab. He is a pioneer in researching and bringing to market large-scale, deep learning neural models that allow robots to interact and learn in real time in complex environments. He has authored approximately forty journal articles, book chapters, and conference papers, holds several patents, and has been an invited speaker at dozens of academic and business meetings, research and national labs, and companies, including NASA, Los Alamos National Laboratory, Air Force Research Labs, Hewlett-Packard, iRobot, Qualcomm, Ericsson, BAE Systems, Mitsubishi, and Accenture, among others. His work has been featured in over thirty articles, news programs, and documentaries, including IEEE Spectrum, New Scientist, Geek Magazine, CNN, MSNBC, and others. Massimiliano is a Fulbright scholar and holds two Ph.D.s: one in Experimental Psychology from the University of Trieste, Italy, and one in Cognitive and Neural Systems from Boston University, USA. He obtained his BS from the University of Trieste, Italy.

2 Responses to Graphics processing units: beyond video games

  1. Jeff Markowitz says:

    I’m feeling devilish today, so let me be the Dark One’s advocate. This is an impressive programming feat for sure, but why do we bother to call it even a functional model of the visual system? From what I can gather, not being an object recognition expert, it’s effectively a multi-scale filter bank with a few non-linearities. Will this help me figure out what V4 encodes? Do we now understand the visual system better than we did before the simulations were run?

  2. Well… you are sort of an expert! I do not think we know more about the visual system thanks to that work; I would agree with you. The core of the work is coming up with good, scalable implementations.
