First the facts: SyNAPSE is a project supported by the Defense Advanced Research Projects Agency (DARPA). DARPA has awarded funds to three prime contractors: HP, HRL and IBM. The Department of Cognitive and Neural Systems at Boston University, from which the Neurdons hail, is a subcontractor to both HP and HRL. The project launched in early 2009 and will wrap up in 2016 or when the prime contractors stop making significant progress, whichever comes first. 'SyNAPSE' is a backronym and stands for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. The stated purpose is to "investigate innovative approaches that enable revolutionary advances in neuromorphic electronic devices that are scalable to biological levels."

SyNAPSE is a complex, multi-faceted project, but it traces its roots to two fundamental problems. First, traditional algorithms perform poorly in the complex, real-world environments in which biological agents thrive. Biological computation, in contrast, is highly distributed and deeply data-intensive. Second, traditional microprocessors are extremely inefficient at executing highly distributed, data-intensive algorithms. SyNAPSE seeks both to advance the state of the art in biological algorithms and to develop a new generation of nanotechnology necessary for the efficient implementation of those algorithms.

As a field, the study of biological algorithms has produced very little consensus. Practitioners still disagree on many fundamental questions. At least one relevant fact is clear, however: biology makes no distinction between memory and computation. Virtually every synapse of every neuron simultaneously stores information and uses that information to compute. Standard computers, in contrast, separate memory and processing into two nice, neat boxes. Biological computation assumes these boxes are the same thing. Understanding why this assumption is such a problem requires stepping back to the core design principles of digital computers.
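The point can be made concrete with a toy sketch (everything here, including the weights and threshold, is illustrative and not part of the SyNAPSE program): in even the simplest neuron model, the synaptic weights are the stored memory, and the very same weights are the operands of the computation.

```python
# Toy illustration: a neuron's synaptic weights ARE its memory, and the
# same weights are read in the act of computing (a weighted sum).
# All values are illustrative.

def neuron_output(weights, inputs, threshold=1.0):
    """Threshold neuron: fire (1) if the weighted sum of inputs exceeds
    the threshold. Storage and computation happen in the same place."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation > threshold else 0

weights = [0.9, 0.2, 0.7]                  # stored information, one value per synapse
print(neuron_output(weights, [1, 0, 1]))   # -> 1 (0.9 + 0.7 = 1.6 > 1.0)
print(neuron_output(weights, [0, 1, 0]))   # -> 0 (0.2 < 1.0)
```

There is no bus to cross here: "reading memory" and "computing" are the same multiply-accumulate.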

The vast majority of current-generation computing devices are based on the Von Neumann architecture. This core architecture is wonderfully generic and multi-purpose, attributes which enabled the information age. Von Neumann architecture comes with a deep, fundamental limit, however. A Von Neumann processor can execute an arbitrary sequence of instructions on arbitrary data, enabling reprogrammability, but the instructions and data must flow over a limited-capacity bus connecting the processor and main memory. Thus, the processor cannot execute a program faster than it can fetch instructions and data from memory. This limit is known as the "Von Neumann bottleneck."
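A back-of-the-envelope calculation makes the bottleneck tangible. The bandwidth and throughput figures below are illustrative assumptions, not measurements of any real chip:

```python
# Rough, roofline-style estimate of the Von Neumann bottleneck.
# Assumed hardware numbers -- NOT measurements of a specific processor.
peak_flops = 100e9      # processor could execute 100 GFLOP/s
mem_bandwidth = 10e9    # bus moves 10 GB/s between memory and processor

# A data-intensive kernel: one multiply-add (2 ops) per 8-byte value fetched.
ops_per_byte = 2 / 8.0  # arithmetic intensity: 0.25 ops/byte

# The memory bus, not the processor, sets the achievable rate:
achievable = min(peak_flops, mem_bandwidth * ops_per_byte)
print(achievable / 1e9)  # -> 2.5, i.e. 2.5 GFLOP/s: 2.5% of peak
```

Under these assumed numbers, a data-intensive algorithm leaves 97.5% of the processor's arithmetic capacity idle, waiting on the bus.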

In the last thirty years, the semiconductor industry has been very successful at avoiding this bottleneck by exponentially increasing clock speed and transistor density, as well as by adding clever features like cache memory, branch prediction, out-of-order execution and multi-core architecture. The exponential increase in clock speed allowed chips to grow exponentially faster without addressing the Von Neumann bottleneck at all. From the user perspective, it doesn't matter if data is flowing over a limited-capacity bus if that bus is ten times faster than that in a machine two years old. As anyone who has purchased a computer in the last few years can attest, though, this exponential growth has already stopped. Beyond a clock speed of a few gigahertz, processors dissipate too much power to use economically.
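The power wall behind that stall can be sketched with the standard first-order model of dynamic power in CMOS, P = C·V²·f. The capacitance, voltage, and frequency values below are illustrative, not data for any real part:

```python
# Why clock scaling stopped: dynamic CMOS power scales roughly as
# P = C * V^2 * f, and historically raising f also required raising V,
# so power grew much faster than linearly with clock speed.
# All numbers are illustrative.

def dynamic_power(capacitance, voltage, frequency):
    return capacitance * voltage**2 * frequency

base       = dynamic_power(1.0e-9, 1.0, 3.0e9)  # a ~3 GHz baseline
doubled_f  = dynamic_power(1.0e-9, 1.0, 6.0e9)  # double f at fixed V
doubled_fv = dynamic_power(1.0e-9, 1.2, 6.0e9)  # double f, V up 20%

print(doubled_f / base)             # -> 2.0 (power doubles even at fixed voltage)
print(round(doubled_fv / base, 2))  # -> 2.88 (worse once V must rise too)
```

Roughly tripling power for a doubling of clock speed is the economics the paragraph above describes: past a few gigahertz, the heat is not worth the cycles.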

Cache memory, branch prediction and out-of-order execution more directly mitigate the Von Neumann bottleneck by holding frequently-accessed or soon-to-be-needed data and instructions as close to the processor as possible. The exponential growth in transistor density (colloquially known as Moore's Law) allowed processor designers to convert extra transistors directly into better performance by building bigger caches and more intelligent branch predictors or re-ordering engines. A look at the processor die for the Core i7, or the block diagram of the Nehalem microarchitecture on which Core i7 is based, reveals the extent to which this is done in modern processors.
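A tiny direct-mapped cache model shows why caches rescue some access patterns and not others. The cache geometry here (64 lines of 16 words) is an illustrative assumption, not any real processor's:

```python
# Minimal direct-mapped cache model: compare hit rates for sequential
# versus strided access. Geometry is illustrative, not a real design.

def hit_rate(addresses, num_lines=64, line_size=16):
    cache = [None] * num_lines        # each slot holds one cached line tag
    hits = 0
    for addr in addresses:
        line = addr // line_size      # which memory line the address lives in
        index = line % num_lines      # direct-mapped: exactly one possible slot
        if cache[index] == line:
            hits += 1
        else:
            cache[index] = line       # miss: the line is fetched over the bus
    return hits / len(addresses)

sequential = list(range(4096))                    # walk memory in order
strided = [i * 1024 % 4096 for i in range(4096)]  # jump 1024 words each access
print(hit_rate(sequential))  # -> 0.9375 (15 of every 16 accesses hit)
print(hit_rate(strided))     # -> 0.0    (every access evicts the next line needed)
```

This is the caveat at the end of the next paragraph in miniature: caches convert transistors into performance only when the access pattern cooperates, and highly distributed, data-intensive algorithms often do not.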

Multi-core and massively multi-core architectures are harder to place, but still fit within the same general theme. Extra transistors are traded for higher performance. Rather than relying on automatic mechanisms alone, though, multi-core chips give programmers much more direct control of the hardware. This works beautifully for many classes of algorithms, but not all, and certainly not for data-intensive bus-limited ones.

Unfortunately, the exponential transistor density growth curve cannot continue forever without hitting basic physical limits. At this point, Von Neumann processors will cease to grow appreciably faster and users won't need to keep upgrading their computers every couple of years to stave off obsolescence. Semiconductor giants will be left with only two basic options: find new high-growth markets or build new technology. If they fail at both of these, the semiconductor industry will cease to exist in its present, rapidly-evolving form and migrate towards commoditization. Incidentally, the American economy tends to excel at innovation-heavy industries and lag other nations in commodity industries. A new generation of microprocessor technology means preserving American leadership of a major industry. Enter DARPA and SyNAPSE.

Given the history and socioeconomics, the "Background and Description" section from the SyNAPSE Broad Agency Announcement is much easier to unpack:

Over six decades, modern electronics has evolved through a series of major developments (e.g., transistors, integrated circuits, memories, microprocessors) leading to the programmable electronic machines that are ubiquitous today. Owing both to limitations in hardware and architecture, these machines are of limited utility in complex, real-world environments, which demand an intelligence that has not yet been captured in an algorithmic-computational paradigm. As compared to biological systems for example, today’s programmable machines are less efficient by a factor of one million to one billion in complex, real-world environments. The SyNAPSE program seeks to break the programmable machine paradigm and define a new path forward for creating useful, intelligent machines.

The vision for the anticipated DARPA SyNAPSE program is the enabling of electronic neuromorphic machine technology that is scalable to biological levels. Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications—but useful and practical implementations do not yet exist.

SyNAPSE seeks not just to build brain-like chips, but to define a fundamentally distinct form of computational device. These new devices will excel at the kinds of distributed, data-intensive algorithms that complex, real-world environments require: precisely the kinds of algorithms that suffer immensely at the hands of the Von Neumann bottleneck.

6 Responses to About SyNAPSE

  1. zeev kirsh says:

    it seems like bu, stanford, and a handful of other labs will all be the ‘common’ subcontractors to hp, hrl and ibm. perhaps darpa thought better to bestow 3 awards and distribute the risks of having too many scientists get tripped up on the same single problem.

    one hopes however, that whatever results are gained by each of groups, can later be incorporated into the final ‘winners’ design.

    Also, the truly exciting result here is that neuroscience, in its enduring and very long branching path, may be unfurling another important leaf on its branch of computation.
the synapse program, whether it succeeds in its official goals or not, will be hugely successful in that it will bring together top talent from various fields to work out a concept that eventually will bear (somewhat unpredictable) fruit in one form or another.

    good luck.

  2. Dear Zeev,

    thanks for your encouragements! We really need it 😉


  3. Blaise Mouttet says:

    I’m curious as to whether any of the attempts of IBM, Hewlett Packard, or HRL are similar to that of Bernard Widrow’s ADALINE circuitry which was based on circuit elements called “memistors”. There was a company called the Memistor Corporation which commercialized some neural circuit architectures based on memistors in the 1960’s but this attempt could not compete with the miniaturization available from the development of integrated circuitry occurring around the same time. I have an online article I am working on discussing Widrow’s memistor and the similarity to the “memristors” which have recently been popularized by HP available at http://knol.google.com/k/anonymous/memistors-memristors-and-the-rise-of/23zgknsxnlchu/7#

  4. Paul Adams says:

The major problem with hardware implementation of formal neural networks is likely to be the same major problem that afflicts the brain, which is that reading out synapses and writing to synapses are not fully compatible when the synapses are at very high density. However, this problem can in principle be overcome, by both direct and indirect strategies.
A rather similar read/write dilemma also underlies the other major form of biological computation: evolution mediated by DNA replication. Here the indirect strategy involves a “proofreading” step, and perhaps a similar process underpins the brain and would be necessary for implementing high-dimensional neuromorphic computing. However, unless one can “grow” physically new connections, such an approach might be impossible.

  5. Dr. Ronald J. Swallow says:


    I do not understand why the neo-cortex is a mystery to everyone. Its neuron net circuit is repeated throughout the cortex. It consists of excitatory and inhibitory neurons whose functions, each, have been known for decades. The neuron net circuit is repeated over layers whose axonal outputs feed on as inputs to other layers. The neurons of each layer, each receive axonal inputs from one or more sending layers and all that they can do is correlate the axonal input stimulus pattern with their axonal connection pattern from those inputs and produce an output frequency related to the resultant psps. Axonal growth toward a neuron is definitely the mechanism for permanent memory formation and it is just what is needed to implement conditioned reflex learning. This axonal growth must be under the control of the glial cells and must be a function of the signals surrounding the neurons.

The cortex is known to be able to do pattern recognition, and the correlation between an axonal input stimulus and an axonal input connection pattern is just what is needed to do pattern recognition. However, pattern recognition needs normalized correlations and a means to compare these correlations so that the largest correlation is recognized by the neurons. Without normalization, the psps' relative values would not be bounded properly and could not be used to determine the best pattern match. In order to get psps to be compared so that the maximum psp neuron would fire, the inhibitory neuron is needed. By having a group of excitatory neurons feed an inhibitory neuron that feeds back inhibitory axonal signals to those excitatory neurons, one is able to have the psps of the excitatory neurons compared, with the neuron with the largest psps firing before the others do as the inhibitory signal decays after each excitatory stimulus, thus inhibiting the other excitatory neurons with the smaller psps. This inhibitory neuron is needed in order to achieve psp comparisons, no question about it. For a meaningful comparison, the psps must be normalized. As unlikely as it may seem, it turns out that the inhibitory connections, growing by the same rules as excitatory connections, grow to a value which accomplishes the normalization. That is, as the excitatory axon pattern grows via conditioned reflex rules, the inhibitory axon to each excitatory neuron grows to a value equal to the square root of the sum of the squares of the excitatory connections. This can be shown by a mathematical analysis of a group of mutually inhibiting neurons under conditioned reflex learning. This normalization does not require the neurons to behave differently from what has been known for decades, but rather requires that they interact with an inhibitory neuron as described.

Thus, by simply having the inhibitory neurons receive from neighboring excitatory neurons with large connection strengths, where if an excitatory neuron fires the inhibitory neuron fires, and by allowing the inhibitory axonal signals to be included with the excitatory axonal input signals to the inputs of those excitatory neurons, the neo-cortex is able to do normalized conditioned reflex pattern recognition as its basic function.

    If one thinks about it, layers of mutually inhibiting groups of neurons are all that are needed to explain the neo-cortex functions. The layers of neurons are able to exhibit conditioned reflex behavior between sub-patterns, generating new learned behaviors as observed by the human. With layer to layer feedback, multi-stable behavior of layers of neurons results, forming short term memory patterns that become part of the stimulus to other neurons. With normalized correlations, there is always an axonal input stimulus pattern that will excite every excitatory neuron.

The only way to prove this cortex model is to build a simulator, modeling large nets of neurons and observing human behaviors. Most certainly we will never be able to measure the neuron nets of the cortex due to their small sizes. This means that projects must be formed that do these simulations and do not waste R&D efforts trying to measure properties of the cortex as the main means to understand the cortex. Certainly the area-to-area connection scheme is needed, but it likely can be varied, still with intelligence being exhibited. Trials will be needed to determine the initial connection strengths when initiating the simulator. These connections will need to be simple, such as non-zero between corresponding neurons of the mutually inhibiting groups.

    Axon growth toward pulsing neurons is the likely mechanism for memory alteration. Having neuron axons back away from neurons has no physical basis and it is well known that the number of axons increases with age in the human. Certainly axon connection strengths never become proportional to axon pulsing frequencies, otherwise the nets of neurons will never exhibit permanent past memories, but rather be a function of recent events, only. Glial cells are likely participants to axonal growth control. It is likely that they will inhibit axonal growth physically, unless a chemical falls below a concentration. In particular, this would be when the excitatory stimulus (chemically emitted to a neuron by axons to that neuron) to a cell, falls below a critical level, where the correlation between stimulus and connection pattern falls below a limit. The result of such a rule is that learning would only occur if stimulus patterns are new and don’t match the connection patterns on neurons. The psychological effect would be a curiosity behavior, observed in humans. Also, it would result in old age reduction of ability to learn, also observed in humans.

    Progress in understanding how the brain works has been basically non-existent over the last 40 years due to limits in measurement. Progress requires simulation to work out the missing details. I predict that simulation will dominate the future efforts of researchers.
    Also, I predict that special purpose hardware will dominate the approach. Using conventional computers to simulate nets of neurons in real-time will go out of style very soon.

    Simulation permits an evolution process to arrive upon a successful brain understanding. If a logical conclusion is wrong, simulation will eliminate it. If it is right, simulation will verify it.

    One can gain a factor of 60 or so in simulation costs by simulating a group of “frequency” neurons rather than pulsing neurons. In this case a frequency neuron’s output is an analogue level rather than a group of pulses and represents the effect of 60 pulsing neurons whose combined effect on a receiving neuron is a level (equal to the frequency of pulsing by 60 neurons). Obviously, the psp in a receiving neuron would better recognize an input pattern which is not pulsing and acting noisy, but which is based on many pulsing neurons whose statistical effect averages away the pulse effects.

I have investigated most if not all of the brain institutes and have found essentially no one seeking how the brain works. Rather, they are conducting experiments that have no use in creating a neuron net model that can be simulated. In essence, the vast percentage of research money is not contributing significantly to the knowledge that permits the construction of a brain simulator. The closest research project attempting to simulate a full brain is the Blue Brain project in Europe, conducted by a consortium and using an IBM machine for simulation. It has been in existence for over 6 years, has not come to any useful conclusions about the structure of the neo-cortex, and has not employed the assumption that the neo-cortex consists of mutually inhibiting neurons. Generally, all of its simulations are worthless without the proper structure of the neuron nets of the cortex.
    I have tried to communicate with them and have had no responses. Maybe they have listened to my recommendations, but I have no way to know. What I do know is that they are wasting a lot of money simulating with a computer, rather than special purpose hardware.

Now it may be that I appear crazy in my assumptions for my neo-cortex model. At least someone could inform me where I have gone wrong, so that I can fix the assumptions. However, I hear nothing, which means that they may not be reading my email or don't understand it, both of which are not very professional.

    Dr. Ronald J. Swallow
    610 704 0914

  6. Dr. Ronald Joseph Swallow says:

I have spent over 35 years studying how the human cortex should work. I discovered over 30 years ago that placing normal excitatory neurons in groups sharing an inhibitory neuron results in the excitatory neurons' synapses remaining normalized over all time, i.e. the peak psp in all the neurons of the groups remains equal as synapses grow by a conditioned reflex rule. I have been designing a hardware simulator to simulate the mutually inhibiting nets of neurons, and am seeking funding to build the simulator. My phone number is 610 751 5274.

This normalization guarantees the nets will forever function without an imbalance in the connection strengths occurring where neurons become biased to over excite or to under excite. The model favors that synapses only grow, but it may be possible otherwise for the normalization to remain present. The only way to verify the value of this normalization is to build a simulator, it being impossible to determine the value of the design otherwise.

That neurons can keep the effective sum of squares constant, while both excitatory and inhibitory connections only increase, is so unlikely to anticipate in a brain model, yet so easily obtained by mutually inhibiting groups, that the likelihood of this process being employed is very large.

    I continue to seek partners to pool together the financing and effort to verify the model or its possible variations.


    Ron Swallow
