Bringing together Biologically Inspired Vision and BMI on Mobile Robots

By Varsha Shankar | August 25, 2012

God and the human brain work in mysterious ways. Imagine if we could harness the power of the brain to move objects around in the world, like a telekinetic superpower. Better still, imagine giving a patient with locked-in syndrome those superpowers, paired with an object recognition system designed to look and learn the way we do! Enter CogEye, Unlock, and the topic of this post.

Here we describe a project at the Neuromorphics Lab that has ventured into this application. The idea is simple. Using the techniques developed in Unlock, a patient with locked-in syndrome wearing an EEG headset is able to navigate a grid of objects on a screen and make a selection using his or her brain. Given the selected object, CogEye (an artificial visual system developed by the Neuromorphics Lab) searches the image seen by the mobile robotic platform to identify that object.

CogEye is trained on a natural object scene to recognize four objects.

An iRobot Create is then set up to move around an arena containing these objects. The patient, wearing the EEG headset, selects one of the four objects, and that selection is sent across to CogEye.
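
To make the hand-off concrete, here is a minimal sketch of forwarding the patient's selection to the vision system. The message format, host, port, and object labels are all assumptions for illustration; the post does not describe the actual Unlock/CogEye interface.

```python
# Hypothetical sketch: forwarding the EEG-based selection to CogEye.
import json
import socket

OBJECT_LABELS = ["object_0", "object_1", "object_2", "object_3"]  # placeholder names

def send_selection(selection_index, host="localhost", port=9000):
    """Send the label of the object the patient selected on the on-screen grid."""
    message = json.dumps({"selected_object": OBJECT_LABELS[selection_index]})
    with socket.create_connection((host, port)) as conn:
        conn.sendall(message.encode("utf-8"))

# Example: the patient selected the third object in the grid.
# send_selection(2)
```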

CogEye does not work like most object recognition algorithms. It autonomously explores its environment, deciding where to look based on saliency parameters and when to shift its attention to different locations in the scene. CogEye is also written in CogExMachina (aka Cog), a framework created in collaboration between HP and the Neuromorphics Lab that runs all of its computation in parallel across multiple GPUs. This makes a seemingly easy task much more complicated. However, CogEye is designed to function in a way similar to the mammalian visual system and, in time, to learn to see much like you or I would.
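
As a toy illustration of saliency-driven fixation (not CogEye's actual model), one can compute a crude center-surround contrast map and foveate on its maximum. The filter scales below are arbitrary assumptions.

```python
# Toy saliency sketch: pick the next fixation as the peak of a
# center-surround contrast map.
import numpy as np
from scipy.ndimage import gaussian_filter

def next_fixation(frame_gray):
    """Return (row, col) of the most salient point in a grayscale frame."""
    center = gaussian_filter(frame_gray.astype(float), sigma=2)
    surround = gaussian_filter(frame_gray.astype(float), sigma=10)
    saliency = np.abs(center - surround)   # crude center-surround contrast
    return np.unravel_index(np.argmax(saliency), saliency.shape)

# frame = ...  # e.g., a grayscale image from the Create's camera
# row, col = next_fixation(frame)
```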

In this phase of the project, we concentrate on establishing a firm foundation on which further advancements can be built. We have shown that CogEye is indeed capable of handling objects in the real world without involved parameter-tuning exercises during training. With an unchanging scene, CogEye is able to recognize objects according to the categories it learned previously (see video below).

When CogEye is presented with the first camera image from the Create, it finds the most salient point to foveate on within a set field of view (200x200 pixels here). CogEye constantly generates yaw and pitch values in pixels (which translate into eye movements in a mammal) and head yaw and head pitch values in pixels (which translate into head movements in a mammal) to indicate which point in its working memory it is currently foveating on. These yaw, pitch, head yaw, and head pitch values are translated into real movements of the robot's wheels and the camera's pan and tilt. Since the robot moves in accordance with these values, a new image is acquired with every movement CogEye generates. A working memory is built up over time from these continuously streaming camera images together with the yaw, pitch, head yaw, and head pitch values.
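
A minimal sketch of that mapping is below, with made-up numbers: the pixel-valued eye and head movements are converted into camera pan/tilt angles and a base turn for the Create. The field-of-view constant and the choice of which value drives which actuator are assumptions, not the project's actual calibration.

```python
# Hypothetical mapping from CogEye's pixel-valued movements to robot actuators.
FOV_DEG_PER_PIXEL = 60.0 / 640.0   # assume a 60-degree horizontal FOV over a 640-px frame

def pixels_to_degrees(pixel_offset):
    return pixel_offset * FOV_DEG_PER_PIXEL

def motor_commands(yaw_px, pitch_px, head_yaw_px, head_pitch_px):
    """Map eye/head movements (in pixels) onto camera pan/tilt and wheel turn."""
    return {
        "camera_pan_deg": pixels_to_degrees(yaw_px),        # "eye" yaw -> camera pan
        "camera_tilt_deg": pixels_to_degrees(pitch_px),     # "eye" pitch -> camera tilt
        "base_turn_deg": pixels_to_degrees(head_yaw_px),    # "head" yaw -> wheel turn
        # Head pitch has no wheel analogue; fold it into extra camera tilt.
        "extra_tilt_deg": pixels_to_degrees(head_pitch_px),
    }

# Example: shift gaze 50 px right while swinging the "head" 120 px left.
# print(motor_commands(50, 0, -120, 0))
```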

CogEye currently works within the boundaries of a static scene. For this project, however, we required a dynamically changing scene that is built up over time as the robot (and CogEye) explores its surroundings. We now have the capability to build this dynamic scene (see video below).
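
One rough way to picture this dynamic scene building, under the assumptions of the sketch above, is to paste each new camera frame into a larger canvas at an offset derived from the accumulated head yaw and pitch, so the working memory grows as the robot looks around. The canvas size and offsets here are illustrative only, and offsets are assumed to stay within the canvas.

```python
# Illustrative sketch: grow a working-memory canvas from streaming frames.
import numpy as np

CANVAS = np.zeros((2000, 2000), dtype=np.uint8)   # working-memory canvas

def add_frame(frame, head_yaw_px, head_pitch_px):
    """Paste a grayscale frame into the canvas at the current head offset."""
    top = CANVAS.shape[0] // 2 + head_pitch_px
    left = CANVAS.shape[1] // 2 + head_yaw_px
    h, w = frame.shape
    CANVAS[top:top + h, left:left + w] = frame    # newest view overwrites older pixels

# for frame, yaw, pitch in camera_stream():      # hypothetical acquisition loop
#     add_frame(frame, yaw, pitch)
```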

In the near future we plan on extending CogEye to classify objects in this dynamically changing scene.

Even at this early stage of the project, we are excited to see that CogEye (never before tested in the real world) can indeed look around an arena, control the Create, and, separately, see and understand objects in a scene.

We have big plans, and we hope soon to see CogEye recognizing objects while simultaneously looking around the arena and controlling the Create. We are also working on having the Create move towards the selected object and grasp it.

We envision that this system will function like a faithful psychic pet, fetching an object that the patient thinks about.

One Response to Bringing together Biologically Inspired Vision and BMI on Mobile Robots

  1. zeev says:

Read this… and imagine how little we know about neurons. Experiments like this, looking at how neurons respond physically and electrically, demonstrate how much more we have to learn. We know almost nothing about neurons when you really start looking at the big picture. We are just learning now.

    http://phys.org/news/2012-09-focusing-phenotype-genetic-external-feedback.html
