In response to "Who will win the artificial brain race?", fellow editor Jeff Markowitz raised a particularly challenging point:
We’ve all assumed that an artificial brain of sorts can be useful for industrial or military applications, e.g. things like spatial navigation and object recognition. Why do we think that? Because humans/rats/mammals are so incredibly good at these things? Must we take for granted that replicating biological intelligence is as good as we can do?
as well as this question:
The why is what concerns me the most. Is this somewhat akin to the international space station? That is, because the engineering problem is so hard and the solution is so cool, people assume that its utility is self-evident. I mean, it’s a freaking SPACE STATION!
Our mission is to build intelligent machines suitable for real-world applications. We want to build an airplane, not a highly accurate reproduction of a bird’s wing. In this context, the utility of neuromorphic computing isn’t at all self-evident, but a number of converging trends suggest that biology may have a great deal to teach us about computation. Ultimately, the driving force is energy.