This is the second post in a series of responses to comments from our IEEE Spectrum article. You can read the announcement here or the first response here.
In response to "Who will win the artificial brain race?", fellow editor Jeff Markowitz raised a particularly challenging point:
We’ve all assumed that an artificial brain of sorts can be useful for industrial or military applications, e.g. things like spatial navigation and object recognition. Why do we think that? Because humans/rats/mammals are so incredibly good at these things? Must we take for granted that replicating biological intelligence is as good as we can do?
as well as this question:
The why is what concerns me the most. Is this somewhat akin to the international space station? That is, because the engineering problem is so hard and the solution is so cool, people assume that its utility is self-evident. I mean, it’s a freaking SPACE STATION!
Our mission is intelligent machines that are suitable for real-world applications. We want to build an airplane, not a highly accurate reproduction of a bird wing. In this context the utility of neuromorphic computing isn't at all self-evident, but a number of converging trends suggest that biology may have a whole lot to teach us about computation. Ultimately, the driving force is energy.
Intelligence is hard to build on conventional machines for two essential reasons. First, it probably requires a large amount of active memory, and second, moving data around with charged particles (electrons over a wire, for instance) consumes a lot of power. A "large amount of active memory" implies that a high percentage of the data stored in RAM will need to be touched to advance the computation. Biologically, this principle follows from the fact that every synapse has state. Non-biological machine learning and AI work generally reaches the same conclusion, however.
If you need to work with a large percentage of the data in RAM at each step, it follows that a large amount of data will need to cross from the RAM to the processor. This data transfer requires a large amount of power, proportional to the length of the wires required. Thus, if you can shorten the average distance information needs to travel from five centimeters to half a millimeter, you can increase your power efficiency by a factor of a hundred. The power draw is also proportional to the signalling rate, so by dropping the clock from a gigahertz to a megahertz you can gain another factor of 1,000.
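The two scaling claims above can be captured in a few lines. This is back-of-envelope arithmetic only, assuming power scales linearly with both wire length and signalling rate; the reference point (5 cm wires clocked at 1 GHz) and the function name are illustrative, not from the article.

```python
# Back-of-envelope sketch of the wire-energy argument. Assumes power
# draw scales linearly with wire length and with signalling rate;
# absolute numbers are illustrative, not measurements.

def relative_power(wire_length_m, clock_hz,
                   ref_length_m=0.05, ref_clock_hz=1e9):
    """Power draw relative to the 5 cm / 1 GHz reference point."""
    return (wire_length_m / ref_length_m) * (clock_hz / ref_clock_hz)

# Shortening wires from 5 cm to 0.5 mm: about a 100x reduction.
print(relative_power(0.0005, 1e9))   # ~0.01
# Dropping the clock from 1 GHz to 1 MHz as well: another 1000x.
print(relative_power(0.0005, 1e6))   # ~1e-05
```

Multiplied together, the two effects give roughly a hundred-thousand-fold reduction in power for moving the same information.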
Taken together, these two principles predict that a successful intelligent machine must have tightly coupled processing and memory, and that each processing element must be small and relatively slow. Real-world intelligence will require very large numbers of these relatively simple processing elements. Unfortunately, very, very few conventional algorithms can effectively utilize such a high degree of parallelism. This is where insights gleaned from the study of biological systems can play a pivotal role. Brains use more or less the same "hardware" strategy (many slow processing elements, each with state) to minimize the amount of metabolic energy required.
Jumping back to Jeff's question, we know we need a new kind of massively parallel hardware if we ever want to embed substantial intelligence in power-limited platforms like robots. We also know that a whole new class of algorithms will be needed to fully utilize this new kind of hardware. Biology is our best point of reference for how these problems might be solved. Animals are certainly good at tasks like spatial navigation and object recognition, but they absolutely excel at operating in real time under an extremely tight energy budget.
The IEEE article lists the following hardware goals of the DARPA SyNAPSE project:
10^6 “neurons”/cm^2
10^10 “synapses”/cm^2
100 mW/cm^2
total power = 1 kW
1. How far have we advanced on these goals?
2. Is it likely that all or some of them will be achieved by 2016?
We wouldn’t be working on this project if it didn’t look possible. Progress has been good, but it’s high-risk research and a lot can happen (both good and bad) in six years.
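For a rough sense of what those per-area goals imply as a complete system, we can scale them up to the 1 kW budget. This sketch takes the neuron density as 10^6/cm^2 and assumes the per-area figures hold uniformly across the whole machine:

```python
# Scaling the per-cm^2 SyNAPSE goals up to the 1 kW total power budget.
# Assumes the per-area figures hold uniformly across the whole system.
NEURONS_PER_CM2 = 1e6        # taking the stated density as 10^6/cm^2
SYNAPSES_PER_CM2 = 1e10
POWER_PER_CM2_W = 0.1        # 100 mW/cm^2
TOTAL_POWER_W = 1000.0       # 1 kW

area_cm2 = TOTAL_POWER_W / POWER_PER_CM2_W
print(area_cm2)                       # ~10,000 cm^2 of silicon
print(NEURONS_PER_CM2 * area_cm2)     # ~1e10 "neurons"
print(SYNAPSES_PER_CM2 * area_cm2)    # ~1e14 "synapses"
```

Those totals, about 10^10 neurons and 10^14 synapses, are on the same order as the neuron and synapse counts of a human brain.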
Folks, thanks for being so informative and responsive here, and for “digging in” to address several salient points in AI research, including the (unjustifiably) taboo talk about consciousness. As a biology / social science guy, I’d suggest that consciousness is both a very important part of evolution and very likely a mechanistic product of a long and sloppy evolutionary process, and that there is little reason to suspect it can’t be created in silicon or other substrates. Put another way – to maintain that one cannot create a conscious artificial intelligence is very close to suggesting that human evolution has components that are not mechanistic – an argument that is weak and scientifically unsupportable. If, as people like Kurzweil suggest, conscious machines will practice recursive self-improvement, things should get very interesting very fast.
The key problem in neuromorphic computing (and indeed in real neuroscience) is that at very high densities some degree of “crosstalk” between synapse updates is inevitable, and this can completely destroy the ability to learn from the higher-order statistics in the input data. Very little work is being done in this fundamental area, mainly because researchers don’t want to admit their paradigm is flawed.
Put bluntly, the problem is that learning is very hard, and hardware imperfections can make it impossible. The traditional view that neuromorphic approaches are error-resistant is plainly wrong (no free lunches even in the brain), and to hope otherwise is naive at best.
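A toy illustration of the crosstalk worry (my own sketch, not from the comment): Hebbian-style updates in which a fixed fraction of each synaptic update leaks onto a neighboring synapse. The leak fraction, network size, and input statistics are all arbitrary choices; the point is only that spillover can blur learned correlational structure.

```python
# Toy model of update crosstalk: each Hebbian update to a synapse
# leaks a fraction of itself onto a neighboring synapse in the array.
import random

random.seed(0)
N = 4                      # four input lines
LEAK = 0.4                 # fraction of each update spilled to a neighbor
w_clean = [[0.0] * N for _ in range(N)]
w_noisy = [[0.0] * N for _ in range(N)]

for _ in range(5000):
    # inputs 0 and 1 are perfectly correlated; 2 and 3 are independent
    a = random.choice([-1.0, 1.0])
    x = [a, a, random.choice([-1.0, 1.0]), random.choice([-1.0, 1.0])]
    for i in range(N):
        for j in range(N):
            dw = x[i] * x[j]                   # Hebbian update
            w_clean[i][j] += dw
            w_noisy[i][j] += (1 - LEAK) * dw
            # crosstalk: part of the update lands on a neighboring synapse
            w_noisy[i][(j + 1) % N] += LEAK * dw

# contrast between a truly correlated pair (0,1) and an independent one (0,2)
contrast_clean = abs(w_clean[0][1]) - abs(w_clean[0][2])
contrast_noisy = abs(w_noisy[0][1]) - abs(w_noisy[0][2])
```

With crosstalk, the independent pair (0, 2) picks up a large phantom weight leaked from its correlated neighbor, shrinking the contrast a downstream learner would rely on.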
@Joe Hunkins:
Unfortunately, “consciousness is mechanistic” is a sloppy assertion. There’s really no scientific evidence on which to base such a claim – quite a bit of speculation, but no proof whatsoever. Building better models, however, is a step in the right direction (…which is exactly what neuromorphic systems are: models – better ones – but still models).
Neuromorphic computation, like the whole (strong) AI program, falls into the homunculus fallacy. While the brain is a machine, and produces consciousness by a way we don’t yet clearly understand, a computational process (massively parallel or not) is not a machine, but an abstract, mathematical process, which manipulates representations (symbols, syntax) and which can use a machine to be implemented. Thus, the intelligence does not reside in the system, but only in the eye of the observer/designer/programmer who interprets the program. As the philosopher of mind John Searle has argued, as long as the neuromorphic computation program is stated this way (as a massively parallel information-processing system), it will never succeed in replicating the power of the brain. Certainly, we could perform a simulation of the brain, but we do not yet have the conceptual tools to understand exactly how it works. As presented and understood, the program is wrong from the beginning.