After running through the Businessweek article posted by Max, I am equally excited and nervous. Anyone has to be excited over the prospect of a new computing paradigm, though honestly I'm not sure what that would look like yet. These sorts of articles claim that computers will come to look more like brains, which is all well and good, because brains tend to dominate the "competition", i.e. computers, at messy things like object recognition and speech recognition. Conversely (and obviously), computers tend to dominate tasks amenable to decomposition into easily formalizable sequential steps, e.g. chess or even eye surgery. So, maybe we know what Deep Blue looks like, but what on Earth would a computer expert in messy things, a messy computer if you'll excuse the phrase, even look like? We all agree that computers stink at these messy things, and that if they didn't it would be a huge boon to, well, humankind. So let's make the computers more like brains so they can do what brains do so well! But how do we make computers, both in terms of hardware and software, more like brains?
The software end of things, especially for a huge problem like object recognition, makes me a bit uneasy.
We computational neuroscientists have devised a whole riot of models of the brain, some of which can help with things like remote sensing. The practice has about fifty years of history (starting with Frank Rosenblatt's Perceptron in the late 1950s) and a somewhat agreed-upon approach to things. Let me be clear: these are algorithms, not hardware. They are programs that do cool things, not hardware implementations; all of these models run on conventional computer hardware, which is specialized for the things computers already do well. This, to a certain degree, might explain why simpler, more computer-like, i.e. symbolic, algorithms (e.g. support vector machines and decision trees) are popular in industry and among computer scientists. So, as a 0th-order approximation, it seems that more brain-like hardware would favor brain-like software, and this coupling could help to do brain-like things. All fine so far.
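To make the "algorithms, not hardware" point concrete, here is a minimal sketch of Rosenblatt's perceptron learning rule: a perfectly ordinary program running on perfectly ordinary, sequential hardware. The toy dataset and the learning-rate and epoch choices below are my own illustrative picks, not from any particular model in the literature.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Rosenblatt's perceptron learning rule on 2-D points.

    samples: list of ((x1, x2), label) pairs with label in {-1, +1}.
    Returns the learned weights [w1, w2] and bias b.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            # Predict with a hard threshold; nudge the weights
            # toward the true label only on a mistake.
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

# A linearly separable toy problem (labels roughly follow x1 + x2 > 1).
data = [((0, 0), -1), ((1, 0), -1), ((0, 1), -1),
        ((1, 1), +1), ((2, 1), +1), ((1, 2), +1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
         for (x1, x2), _ in data]
print(preds)  # matches the training labels once the rule converges
```

Because the data are linearly separable, the rule converges after a handful of epochs; the point is simply that nothing here is "brain-like hardware" — it's a loop and some multiply-adds.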
Moreover, the discovery of the memristor may help us build more brain-like hardware. Exciting news indeed! But, but, BUT, before we can even hope to do something like object recognition with this unwieldy, messy hardware, we need to sort out the software end of things. A great deal of basic research is still necessary to figure out how the brain does object recognition in the first place, let alone to formalize that understanding in equations and come up with algorithms to implement in this brain-like hardware. We can't get brain-like hardware to do brain-like things if we don't know how the brain does them! It feels strange to say this as if it were something new, but sometimes computational neuroscience, as a field, gets caught up in Sci-Fi dreams and makes Sci-Fi-like promises. I don't want to be the party pooper, because this research is insanely promising, with the potential for an enormous payoff; I just hope we all perform our due diligence (me included!). For the paranoid, take comfort in the fact that Skynet won't go live for at least another 20 years.
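For a sense of why the memristor gets people thinking about synapses, here is a toy simulation of the linear dopant-drift model that HP's announcement was based on (Strukov et al., 2008): resistance depends on the charge that has flowed through the device, so it "remembers" its history. Every parameter value below is an illustrative placeholder, not fitted to any real device.

```python
import math

def simulate_memristor(steps=2000, dt=1e-4, freq=10.0, v0=1.0):
    """Toy linear-drift memristor under a sinusoidal drive.

    State w in [0, 1] is the normalized width of the doped region;
    memristance interpolates between R_on (fully doped) and R_off.
    Returns a list of (voltage, current, state) samples.
    """
    R_on, R_off = 100.0, 16000.0  # ohms; placeholder values
    drift = 5e3                   # lumped drift coefficient, arbitrary units
    w = 0.1                       # initial state
    history = []
    for k in range(steps):
        v = v0 * math.sin(2 * math.pi * freq * k * dt)
        R = R_on * w + R_off * (1 - w)  # current memristance
        i = v / R
        # The state drifts with the current; clamp to the physical range.
        w = min(1.0, max(0.0, w + drift * R_on * i * dt))
        history.append((v, i, w))
    return history

hist = simulate_memristor()
states = [s for _, _, s in hist]
# The state wanders with the applied charge, so the same voltage can
# produce different currents at different times -- the device remembers.
```

The synapse analogy is loose but suggestive: a two-terminal element whose conductance is shaped by its activity history, which is roughly what a synaptic weight is supposed to be.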