João Ranhel – Universidade Federal de Pernambuco (UFPE) Recife, Brazil.
The idea is pretty simple, though it is remarkable: “neurons represent information and compute as they form cell assemblies”. This notion is quite old, going back to the early and mid-twentieth century. The first evidence probably came from observations of muscle activity, since increasing or decreasing the number of active motor units changes the amount of force produced by a muscle.
In 1949, Donald O. Hebb suggested that the co-activation of ‘cell assemblies’ can be responsible for representing concepts. Thus, the concept is old, and the neuroscience literature offers plenty of examples of neural cell assemblies.
One neuron alone can be thought of as a dynamical system that behaves as an unstable and noisy computational unit. When neurons fire in groups, such ‘weaknesses’ disappear. In this sense, ‘sparse coding’ is a well-accepted concept in which neurons ‘encode’ external and internal states of the world by firing in coalitions.
But the question is: how do cell assemblies represent, memorize, and compute such information in order to control behavior?
This is what the Neural Assembly Computing (NAC) approach tries to explain. In a brief overview, the concept can be summarized as follows:
1. Spikes do not propagate instantaneously along the axons, so there are delays that must be considered in spiking neural networks;
2. The conjunction of propagation delays, synaptic weights, and network interconnections (the topology) causes a single spike to spread and reach many other neurons at different instants;
3. As spikes reach other neurons at different instants and with different strengths, sets of neurons naturally fire together. They can fire as synfire chains (synchronously) or as polychronous groups (time-locked, but not synchronous). Note that the cell assembly is an ephemeral phenomenon.
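Points 1–3 above can be sketched in a few lines of code. This is a minimal illustration (not the author's Matlab code): threshold units on a discrete time grid, with a per-synapse axonal delay; the weights, delays, and threshold are invented for the example.

```python
# Minimal sketch of delayed spike propagation. Each synapse is
# (pre, post, weight, delay_in_steps); all numbers are illustrative.
THRESHOLD = 1.0
synapses = [(0, 1, 1.2, 2), (0, 2, 1.2, 5)]

def run(n_neurons, synapses, initial_spikes, t_max):
    """Return {neuron: [spike times]} for a delay network."""
    potentials = [0.0] * n_neurons
    arrivals = {}                              # time -> list of (post, weight)
    spikes = {i: [] for i in range(n_neurons)}
    for n, t in initial_spikes:                # seed spikes, schedule arrivals
        spikes[n].append(t)
        for pre, post, w, d in synapses:
            if pre == n:
                arrivals.setdefault(t + d, []).append((post, w))
    for t in range(t_max):
        for post, w in arrivals.pop(t, []):    # deliver delayed spikes
            potentials[post] += w
            if potentials[post] >= THRESHOLD:  # neuron fires and resets
                potentials[post] = 0.0
                spikes[post].append(t)
                for pre, p2, w2, d in synapses:
                    if pre == post:
                        arrivals.setdefault(t + d, []).append((p2, w2))
    return spikes

print(run(3, synapses, [(0, 0)], 10))
# neuron 1 fires at t=2 and neuron 2 at t=5: time-locked, not synchronous
```

Because the two synapses carry different delays, the single seed spike reaches its targets at different instants, so the downstream neurons fire time-locked rather than simultaneously, which is the essence of a polychronous pattern.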
Based on these principles, previously theorized by Izhikevich and Hoppensteadt in their work on Polychronous Wavefront Computations, the NAC framework proposes that:
4. As such neural coalitions occur, they interact with other coalitions, and logical functions are performed:
4.1 A single assembly is able to trigger another assembly, or a single assembly can trigger more than one assembly, creating parallel processes (this is called branching);
4.2 Sometimes an assembly A or an assembly B can independently trigger a third assembly C. This means that A OR B (or both) can trigger C, which is equivalent to the logical OR function (we have used the Boolean notation C=A+B, read as ‘C is caused by spikes from A or from B’);
4.3 In other situations, the spikes from an assembly A are not strong enough to trigger C alone, and the same may occur with the spikes from an assembly B. However, when spikes from A AND B arrive in coincidence, they trigger C. This interaction performs the logical AND function (in Boolean notation C=A.B, read as ‘C is caused by simultaneous spikes from A and B’).
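Points 4.2 and 4.3 can be illustrated by reducing the downstream assembly C to a single threshold unit; the weights below are invented for the example and are not taken from the NAC paper.

```python
# OR and AND gates as threshold arithmetic on coincident spike volleys.
THRESHOLD = 1.0

def fires(inputs, weights):
    """True if the summed coincident input crosses the threshold."""
    return sum(w for active, w in zip(inputs, weights) if active) >= THRESHOLD

# OR: each assembly alone is strong enough (C = A + B)
w_or = (1.2, 1.2)
assert fires((True, False), w_or)        # A alone triggers C
assert fires((False, True), w_or)        # B alone triggers C

# AND: only the coincidence of A and B crosses the threshold (C = A.B)
w_and = (0.6, 0.6)
assert not fires((True, False), w_and)   # A alone is too weak
assert not fires((False, True), w_and)   # B alone is too weak
assert fires((True, True), w_and)        # A and B together trigger C
```

The design choice is the same in both gates; only the weight-to-threshold ratio decides whether a single assembly suffices (OR) or a coincidence is required (AND).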
Considering that assemblies stand for something, i.e. they ‘represent’ external or internal objects or states, nervous systems would not let such ephemeral events disappear. Once assemblies represent some (important) information, how could nervous systems let these events simply die out?
Hence, cell assemblies must interact with other assemblies in order to retain important representations, events, states, etc.
5. Thus, assemblies reverberate with other assemblies, creating memory loops. This means that one bit of information can be retained by a chain of assemblies with feedback: A triggers B, which triggers C, which triggers assembly A back. We call these reverberating loops Bistable Neural Assemblies (BNAs). Note that such loops have nothing to do with plasticity mechanisms; it is not necessary to change synaptic weights to instantiate this kind of memory. In theory, such a loop would keep firing indefinitely.
6. Therefore, it becomes necessary to dismantle branches and established BNAs, and inhibitory assemblies do this job. Note that the role of such inhibitory assemblies is similar to the logical NOT function: when an assembly D inhibits a branch or a BNA, we say that D is executing the NOT function, dismantling the established branch or BNA.
7. It is possible that two assemblies (A AND B) together execute the inhibition of a branch or a BNA. This means that neither A nor B alone can inhibit the assembly C, but together they can. In that case they perform the logical NAND function (an AND associated with a NOT). On the other hand, an assembly A OR an assembly B may be capable of performing the inhibition independently, in which case they perform the logical NOR function.
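Point 5 can be sketched as a toy ring of three assemblies with one-step delays; the topology and step count are invented for the example.

```python
# A toy Bistable Neural Assembly: A -> B -> C -> A. A single seed spike in A
# keeps reverberating with no synaptic plasticity: the memory is the loop.
def run_bna(seed, steps):
    ring = {0: 1, 1: 2, 2: 0}                # A->B, B->C, C->A
    active = {seed}
    history = []
    for _ in range(steps):
        history.append(sorted(active))
        active = {ring[n] for n in active}   # spikes propagate one hop/step
    return history

print(run_bna(0, 6))
# [[0], [1], [2], [0], [1], [2]] -- the bit 'A was triggered' persists
```

No weight is ever changed: the loop itself stores the bit, which is the point made above about memory without plasticity.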
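Points 6 and 7 reduce to the same threshold arithmetic once inhibitory assemblies are modeled as negative weights; all the numbers below are invented for the example.

```python
# NOT, NAND, and NOR as inhibition of a reverberating drive.
THRESHOLD = 1.0

def fires(contribs):
    """The loop keeps firing only if net input stays at/above threshold."""
    return sum(contribs) >= THRESHOLD

drive = 1.4                            # excitation keeping a branch/BNA alive
# NOT: a single inhibitory assembly D dismantles the loop
assert fires([drive])                  # loop alive
assert not fires([drive, -0.5])        # D alone silences it
# NAND: A and B must inhibit together (weak inhibition, -0.3 each)
assert fires([drive, -0.3])            # A alone: loop survives
assert not fires([drive, -0.3, -0.3])  # A and B together: loop dismantled
# NOR: A or B alone suffices (strong inhibition)
assert not fires([drive, -0.5])        # A alone dismantles
assert not fires([drive, -0.6])        # B alone dismantles
```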
These are the elements necessary to create computers!
The logical gates (AND, OR, NOT, NAND, and NOR), together with memory (which in digital circuits is provided by flip-flops), are the basic elements engineers use to construct computers. With these elements, engineers are also able to create Finite State Machines (FSMs), the first step in constructing sequential machines.
The great advantage of NAC is that it is possible to create a large number of parallel FSMs on the same substrate: the spiking neural network. Such parallel FSMs can interact, and this opens a new perspective on creating 'real parallel processing' machines in spiking neural networks.
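To make the FSM claim concrete, here is a minimal Moore-style machine that detects an A followed by a B. The states and alphabet are invented for the example; in NAC each state would be a BNA, and each transition an AND of the current state assembly with an input assembly.

```python
# A toy Moore FSM: recognize 'an A followed by a B' in a symbol stream.
TRANSITIONS = {                 # (state, input) -> next state
    ("idle", "A"): "got_A",
    ("idle", "B"): "idle",
    ("got_A", "A"): "got_A",
    ("got_A", "B"): "done",
    ("done", "A"): "done",      # 'done' is absorbing
    ("done", "B"): "done",
}

def run_fsm(inputs, state="idle"):
    """Feed a string of symbols through the transition table."""
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

print(run_fsm("BAAB"))   # 'done': the machine has seen an A followed by a B
```

Many such tables can be stepped side by side on the same input stream, which is a rough software analogue of the parallel FSMs sharing one spiking network described above.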
Moreover, note that the neural assemblies are both ‘the representation’ and ‘the control’ element of the computational flow. In other words, in NAC the groups of firing neurons ‘represent’ things and states and, at the same time, ‘control’ how information is processed. The paper in which these ideas are introduced is:
I've created a blog for publishing related work:
On this site there is a short animation (2'30'') showing the fundamental ideas of NAC, and the video can also be seen at:
Matlab code is available from this site, so other researchers can reproduce the experiments. I'll try to keep a short tutorial available for each piece of code published there. The code for the fundamentals and for FSMs is available, although the article explaining FSMs in NAC (“Neural Assembly Executing Finite State Machines”) has not been published yet.
The NAC framework is quite recent, but I envision useful machines being created with this approach. As of October 2012, I have made only a few attempts to insert STDP and other neural plasticity mechanisms into the framework. Therefore, the machines I have worked on (so far) are mainly deterministic.
The neural ‘tuning’ (of timing and synaptic weights) is obtained experimentally by generating candidate topologies. For instance, FSMs are ‘designed’ using both Mealy and Moore methods, so the candidate topologies come from well-established knowledge and methodology. The final tuning is then reached by making small changes to synaptic weights and propagation delays, and by selecting the successful topologies whose propagation delays and synaptic weights perform the desired computation.
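The generate-and-select tuning loop described above can be sketched as follows. The parameter ranges and the fitness test (an AND coincidence standing in for a full FSM) are invented for the example, not taken from the actual experiments.

```python
# Generate candidate (weight, delay) settings and keep only those that
# realize the desired computation -- here, an AND-style coincidence gate.
import random

THRESHOLD = 1.0

def realizes_and(w_a, w_b, jitter):
    """Candidate passes if A.B fires C but neither assembly alone does,
    and the two spike volleys still coincide (no delay mismatch)."""
    coincident = jitter == 0        # delays must match for volleys to sum
    return (w_a < THRESHOLD and w_b < THRESHOLD
            and w_a + w_b >= THRESHOLD and coincident)

random.seed(1)
survivors = []
for _ in range(200):                       # generate candidate topologies
    w_a = random.uniform(0.1, 1.5)         # candidate synaptic weights
    w_b = random.uniform(0.1, 1.5)
    jitter = random.choice([0, 1, 2])      # delay mismatch in time steps
    if realizes_and(w_a, w_b, jitter):
        survivors.append((w_a, w_b))
print(len(survivors), "of 200 candidates perform AND")
```

The surviving candidates are exactly those whose delays and weights match the desired computation, mirroring the selection step in the paragraph above.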
There are many issues still to be investigated within the NAC framework.