Neural Assembly Computing: a brief overview

By Joao | October 18, 2012

João Ranhel – Universidade Federal de Pernambuco (UFPE), Recife, Brazil.

The idea is pretty simple, though remarkable: “neurons represent information and compute as they form cell assemblies”. This notion is quite old, going back to the early and mid-twentieth century. The first evidence probably came from observations of muscle activity, since increasing or decreasing the number of active motor units changes the amount of force a muscle produces.

In 1949, Donald O. Hebb suggested that the co-activation of ‘cell assemblies’ could be responsible for representing concepts. Thus, the concept is old, and the neuroscience literature is full of examples of neural cell assemblies.

One neuron alone can be thought of as a dynamical system that behaves as an unstable and noisy computational unit. When neurons fire in groups, such ‘weaknesses’ disappear. In this sense, ‘sparse coding’ is a well-accepted concept in which neurons ‘encode’ external and internal states of the world by firing in coalitions.

But the question is: how do cell assemblies represent, memorize, and compute such information in order to control behavior?

This is what the Neural Assembly Computing (NAC) approach tries to explain. In a brief overview, the concept can be summarized as follows:

1. Spikes do not propagate instantaneously along axons, so there are delays that must be taken into account in spiking neural networks;

2. The conjunction of propagation delays, synaptic weights, and network interconnections (the topology) causes a single spike to spread and reach many other neurons at different instants;

3. As spikes reach other neurons at different instants and with different strengths, sets of neurons naturally fire together. They can fire as synfire chains (synchronously) or as polychronous groups (time-locked firing). Note that a cell assembly is an ephemeral phenomenon. A minimal simulation of this delay-dependent behavior is sketched after this list.
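The published code for the NAC experiments is in MATLAB (linked further below); purely as an illustration, here is a minimal Python sketch of principles 1–3, assuming simple threshold units and integer conduction delays. All weights, delays, and names are my own illustrative choices, not values from the paper. A single source spike fans out, and the last neuron fires only because two propagation paths converge at the same instant.

```python
import numpy as np

T = 12                      # simulation steps (1 step ~ 1 ms)
n = 4                       # neurons: 0 = source, 1..3 = downstream
threshold = 1.0
# connections as (pre, post, weight, delay-in-steps); values are illustrative
synapses = [(0, 1, 1.2, 2), (0, 2, 1.2, 4), (1, 3, 0.6, 3), (2, 3, 0.6, 1)]

buf = np.zeros((T + 6, n))  # future synaptic input, indexed by arrival time
buf[0, 0] = 1.5             # kick the source neuron at t = 0
for t in range(T):
    fired = buf[t] >= threshold
    for pre, post, w, d in synapses:
        if fired[pre]:
            buf[t + d, post] += w          # the spike arrives d steps later
    if fired.any():
        print(f"t={t:2d}: neurons {np.flatnonzero(fired).tolist()} fire")
# Neuron 3 fires only because the spikes travelling via 0->1->3 and 0->2->3
# arrive at the same instant (t = 5): a delay-dependent, time-locked event.
```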

Based on these principles, previously theorized by Izhikevich and Hoppensteadt in “Polychronous Wavefront Computations”, the NAC framework proposes that:

4. As such neural coalitions occur, they interact with other coalitions, and logical functions are performed:

4.1 A single assembly can trigger another assembly, or it can trigger more than one assembly at once, creating parallel processes (this is called branching);

4.2 Sometimes an assembly A or an assembly B can independently trigger a third assembly C. This means that A OR B (or both) is able to trigger C, which is equivalent to the logical function OR (we use the Boolean notation C = A + B, read as: C is caused by spikes from A or from B);

4.3. In other situations, the spikes from an assembly A are not strong enough to trigger C alone, and the same may hold for the spikes from an assembly B. However, when spikes from A AND B arrive coincidently, they trigger C. This interaction performs the logical function AND (in Boolean notation C = A.B, read as: C is caused by simultaneous spikes from A and B). A sketch of both gate behaviors follows this list.
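A hedged sketch of items 4.2 and 4.3, collapsing each assembly into a single spike source and C into one threshold unit (the function name and all numbers are illustrative assumptions):

```python
# Illustrative sketch: OR vs. AND interactions between assemblies A, B, and C.
def triggers_C(spike_A, spike_B, w_A, w_B, threshold=1.0):
    """Return True if C fires given (coincident) spikes from A and/or B."""
    return w_A * spike_A + w_B * spike_B >= threshold

for a in (0, 1):
    for b in (0, 1):
        c_or  = triggers_C(a, b, w_A=1.2, w_B=1.2)  # each alone suffices: C = A + B
        c_and = triggers_C(a, b, w_A=0.6, w_B=0.6)  # only coincidence does: C = A.B
        print(f"A={a} B={b} -> OR: {c_or}  AND: {c_and}")
```

With supra-threshold weights either input alone triggers C (the OR of 4.2); with sub-threshold weights only the coincidence does (the AND of 4.3).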

Considering that assemblies stand for something, i.e. they ‘represent’ external or internal objects or states, nervous systems would not let such ephemeral events simply disappear. Since assemblies represent some (important) information, how could nervous systems let these events die out?

Hence, cell assemblies must interact with other assemblies in order to retain important representations, events, states, etc.

5. Thus, assemblies reverberate with other assemblies, creating memory loops. This means that one bit of information can be retained by a chain of assemblies with feedback: A triggers B, which triggers C, which triggers assembly A back. We call these reverberating loops Bistable Neural Assemblies (BNAs). Note that such loops have nothing to do with plasticity mechanisms: it is not necessary to change synaptic weights to instantiate this kind of memory. In theory, such a loop would keep firing indefinitely (a minimal sketch follows).
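A minimal sketch of such a reverberating loop, again with each assembly collapsed into one threshold unit and one-step delays (all values are illustrative, not taken from the paper):

```python
import numpy as np

T, n = 10, 3                          # steps; units A=0, B=1, C=2
loop = [(0, 1), (1, 2), (2, 0)]       # A -> B -> C -> A, one-step delays
buf = np.zeros((T + 1, n))
buf[0, 0] = 1.0                       # one external event ignites A

for t in range(T):
    fired = buf[t] >= 1.0
    for pre, post in loop:
        if fired[pre]:
            buf[t + 1, post] += 1.0   # strong enough to re-trigger alone
    print(f"t={t}: " + "".join("ABC"[i] if fired[i] else "." for i in range(n)))
# With fixed weights (no plasticity) the loop keeps firing until inhibited.
```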

6. Therefore, it becomes necessary to dismantle branches and established BNAs; this is done by inhibitory assemblies. Note that the role of such inhibitory assemblies is similar to the NOT logical function: when an assembly D inhibits a branch or a BNA, we say that D is executing the NOT logical function, dismantling the established branch or BNA.

7. It is also possible for two assemblies (A AND B) to jointly inhibit a branch or a BNA. This means that neither A nor B alone can inhibit the target assembly C, but together they can; in this case they perform the NAND logical function (an AND associated with a NOT). On the other hand, if an assembly A OR an assembly B can perform the inhibition independently, they perform the NOR logical function. A toy sketch of both inhibitory gates follows.
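In the same toy style, a hedged sketch of items 6 and 7, modeling inhibition as negative weights onto a target C that receives a constant ongoing drive (the drive, threshold, and weights are illustrative assumptions):

```python
# Whether NOR or NAND emerges depends only on the inhibitory weight strength.
def C_stays_active(spike_A, spike_B, w_A, w_B, drive=1.0, threshold=0.5):
    """C keeps firing iff its ongoing drive minus inhibition clears threshold."""
    return drive + w_A * spike_A + w_B * spike_B >= threshold

for a in (0, 1):
    for b in (0, 1):
        nor  = C_stays_active(a, b, w_A=-0.8, w_B=-0.8)  # A or B alone silences C
        nand = C_stays_active(a, b, w_A=-0.3, w_B=-0.3)  # only A and B together do
        print(f"A={a} B={b} -> C survives: NOR case {nor}, NAND case {nand}")
```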

These are the elements necessary to create computers!

The logical gates (AND, OR, NOT, NAND, and NOR), together with memory (which in digital circuits is provided by flip-flops), are the basic elements engineers use to construct computers. With these elements engineers can also create Finite State Machines (FSMs), the first step in constructing serial machines.

The great advantage of NAC is that it is possible to create a large number of parallel FSMs on the same substrate: the spiking neural network. Such parallel FSMs can interact, and this opens a new perspective on creating 'real parallel processing' machines in spiking neural networks; a toy illustration follows.
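A toy illustration of the point, not the published MATLAB code: two independent two-state machines advance on every tick of the same simulated substrate. In NAC each state would be held by a reverberating assembly, and each transition by an AND of the current state with an input assembly.

```python
def make_toggle():
    """A two-state machine that flips state whenever its input assembly fires."""
    state = 0
    def step(inp):
        nonlocal state
        state ^= inp          # transition: current state AND input decide next state
        return state
    return step

# Two FSMs running "in parallel" on the same clock, as two assemblies would.
fsm_a, fsm_b = make_toggle(), make_toggle()
for t, (in_a, in_b) in enumerate([(1, 0), (0, 1), (1, 1), (0, 0)]):
    print(f"t={t}: FSM-A state={fsm_a(in_a)}  FSM-B state={fsm_b(in_b)}")
```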

Moreover, note that the neural assemblies are both ‘the representation’ and ‘the control’ element of the computational flow. In other words, in NAC the groups of firing neurons ‘represent’ things and states, and at the same time they ‘control’ how information is processed. The paper in which these ideas are introduced is:

http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6186825

I've created a blog for publishing related work:

http://www.neuralassembly.org/

On that site there is a short animation (2'30'') presenting the fundamental ideas of NAC.

MATLAB code is available from the site so that other researchers can reproduce the experiments. I’ll try to make a short tutorial available for each piece of code published there. The code for the fundamentals and for FSMs is already available, although the article explaining FSMs in NAC (“Neural Assembly Executing Finite State Machines”) has not been published yet.

The NAC framework is quite recent, but I can envision useful machines being created with this approach. As of October 2012, I have made only a few attempts to insert STDP and other neural plasticity mechanisms into the framework. Therefore, the machines I have worked on so far are mainly deterministic.

The neural ‘tuning’ (of timing and synaptic weights) is obtained experimentally by generating candidate topologies. For instance, FSMs are ‘designed’ using both Mealy and Moore methods, so the candidate topologies come from well-established knowledge and methodology. The final tuning is then reached by making small changes to synaptic weights and propagation delays, and by selecting the successful topologies whose delays and weights perform the desired computation. A sketch of such a selection loop follows.
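A hedged sketch of that selection loop: draw candidate (weight, delay) settings at random and keep those whose coincidence behavior matches a target truth table, here AND. The function and parameter ranges are illustrative assumptions, not the published methodology in detail.

```python
import random

def behaves_as_and(w_a, w_b, d_a, d_b, threshold=1.0):
    """C must fire only when A and B both spike and their spikes arrive together."""
    if d_a != d_b:                    # mismatched delays: no coincidence at C
        return False
    table = {(a, b): (w_a * a + w_b * b) >= threshold
             for a in (0, 1) for b in (0, 1)}
    return table == {(0, 0): False, (0, 1): False, (1, 0): False, (1, 1): True}

random.seed(1)
candidates = [(random.uniform(0.1, 1.5), random.uniform(0.1, 1.5),
               random.randint(1, 3), random.randint(1, 3)) for _ in range(200)]
good = [c for c in candidates if behaves_as_and(*c)]
print(f"{len(good)} of {len(candidates)} candidate settings perform AND")
```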

There are lots of issues to be investigated starting from the NAC framework.

12 Responses to Neural Assembly Computing: a brief overview

  1. soma says:

    You should definitely look into the work and publications of Günther Palm and his grad-students. Much of the work you are presenting will look very familiar to anyone who has read his work on neural assemblies (which dates back as early as 1981!).

    cheers

  2. Joao says:

    When you say “You should definitely look into…”, it sounds to me as if I didn’t do my homework. True, no scientist is capable of reading everything published on a subject, even with the Internet and search engines.
    I do respect Günther Palm’s work and I’ve read a couple of his papers. His list of publications is huge, but I have never read anything ‘similar’ or ‘proximal’ to what is described in NAC. You say it “looks very familiar”; could you please be specific?
    You would help us a lot by highlighting Palm’s (or his grad-students’) publications that treat the interactions among assemblies as executing ‘logical functions’ or ‘bistable loops’ (acting as flip-flops), or as performing ‘finite state machines’ (FSMs). These are the fundamentals of NAC.
    I have credited one of the ‘starting points’ of my investigations: Izhikevich & Hoppensteadt’s (2009) “Polychronous wavefront computations”. The authors envision any medium that propagates waves (including a liquid reservoir) as a possible substrate for wavefront computation. I have ‘engineered’ their theory with adaptations.
    If I am indebted to G. Palm and his collaborators, I apologize. That is why I am kindly asking you to point out in which publication(s) I can find the familiarity you have noticed.
    Thanks and cheers!

  3. soma says:

    I’m sorry if I sounded offensive; that was far from what I wanted to achieve. My point was that you might find his work, or for example the work of Friedemann Pulvermüller, which is quite similar, interesting and inspiring! I just had the feeling that what you are describing sounded familiar to me ;)

    As it was some time ago that I read his material, I hope you will give me some time to search through my stacks of printed material and give you some hints. One of his papers which I found online is http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.4220 . Looking into it, it describes a little of his model of neural assemblies “igniting” each other, a mechanism with which you can easily model Boolean functions.

    cheers!

  4. Joao says:

    Karen,
    Please, contact the Editors. I think they will give you permission for posting your page.
    Regards,
    J Ranhel

  5. John Harmon says:

    Memories are logically gated also. For example, an apple memory would be red and yellow and green, not orange or blue. Maybe memories, when active, are neural firing patterns?

  6. João Ranhel says:

    Certainly, John…
    Our first question in NAC was: how do assemblies retain internally (in the brain) the ephemeral events that happen externally (e.g. the image of an apple)?
    The answer may be that assemblies reverberate, which means A triggers B, which triggers C, which triggers A back. These chains of firing neurons may represent the object we call “apple”. But what about the color attribute? There we get into the “binding problem”: how does the brain associate things? In my opinion, other assemblies are firing when a color appears to the nervous system. Hierarchical groups of assemblies in ‘superior layers’ are responsible for capturing the details: e.g. “the reverberating assemblies representing the colors red or green are always present when the reverberating assemblies representing the apple shape are firing”.
    The important idea is: no representation within neural systems should depend on a single neuron, the idea of the ‘grandmother cell’.

    It must instead be represented by several distributed neurons (coalitions of neurons), the so-called ‘sparse code’.

    This makes the system more robust: if one neuron fails, the overall response of the system is probably not affected.

  7. John Harmon says:

    Joao, thank you for your response and insights. If I may, I’d like to expand the conversation to a larger view — a 4D mind/brain model based on the idea that memories = patterns of neural firing.

    Briefly, the mind can be defined as a memory set (except for stimulus-driven sensation and perception). With sensory processing, recognizing, identifying, and ascribing meaning to an apple object (or any percept) depends on activation of “apple sight,” “apple taste,” “apples are nutritious,” and other apple memories. Subsequent prediction (“what will happen if I reach for it, grasp it, bite into it…”) is also a memory set. Attention (to an apple object + one’s movement and intentions in relation to it) can be defined as a memory set. The self, and decisions made (“I like apples — I am going to eat that one now”) are also memories. A given intention to move (“reach my right arm forward”) is a definable memory set (visual and somatosensory). Even the motor cortex must run on memories, so that the efferent signals trigger the intended movement.

    Throughout the apple interaction, these memories take turns “lighting up” (becoming strongly activated). This memory subset is the sparse code. For example, within the person’s “apple” memory, the sub-memories “apples are nutritious” and “I like apples” would activate most strongly during the decision to eat it, “shape of apple” while reaching for it, and “apple taste” just prior to biting into it. The sparse code of the apple-interaction functional neural network, as it operates through time, is a function of the entire environment/internal-state system, but principally of the goals most inherently powerful and strongly activated within that situation. Goals are (you guessed it) memories also.

    To summarize: mind = memories (subjective view) = neural firing patterns (neural view). In other words, the contents of the mind = the sum total of (local and global) cell assembly activation. This set changes continually, arguably every moment of one’s life. Another way to say this is that a person’s structural connectome = their memory set, while the functional connectome (the set of functional neural networks created by cell assemblies) = the memory set active during a given period of time.

    If correct, this memory activation (MA) model could be used to reverse engineer both mind and brain: for general and more specific environment/internal state pairings. The key is to accurately define memories (their contents and activity, including interaction), from a subjective viewpoint. Working from this 4D map, the cell assembly activation correlates could then be defined.

    I see a great opportunity to use the MA model to enhance neurocomputation. The NAC approach, or any viable approach, could be fitted to this framework, illuminating it in a beautiful way, while allowing the neurocomputation to shine to its fullest extent.

    I look forward to hearing your thoughts: particularly any flaws you see in the MA model, its relationship to neural computation, or to the NAC approach specifically.

  8. Joao says:

    John, you have nice insights, and your model seems suitable for explaining many mind/brain functions. My first comment: there are ‘levels’ of explanation when we try to explain certain phenomena. Mind is at the top level of “cognition”, while neural assembly computing is at the level of ‘brain circuitry’. The Holy Grail, at this stage of knowledge, is to find explanations that connect “the circuitry” with “the cognitive functions”.
    One can start with models, simulate them on computers, then analyze the responses and adjust the models… There are lots of models, and every one gives us new contributions.
    Instead, I prefer another approach: embodied cognition. Every part of the ‘body’ knows how to solve some part of a complex behavior. For instance, a couple of central pattern generators (CPGs) in the spinal cord are capable of generating the gait for an animal to move a leg, the head, etc. (OK, this may be what you call a memory of “X”.)
    So, let us see; you wrote: “mind = memories (subjective view) = neural firing patterns (neural view). In other words, the contents of the mind = the sum total of (local and global) cell assembly activation.”
    Well, let us suppose that such memories are, at least in part, CPGs. I have shown that NAC can perform finite state automata (FSA) and complex algorithms. Therefore, a number of FSAs interacting with each other can generate quite complex behaviors. But I am working at the circuitry level. You gave an example of ‘attention’ and ‘arm movement’; please have a look at my paper, where I give a similar example for NAC (J. Ranhel, “Neural assembly computing,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 6, pp. 916–927, 2012).
    Well, I have another suggestion of a paper that may give you clues for constructing a bridge between circuitry and minds: Yuste and co-workers have proposed that the cortex may be seen as a special type of CPG, based on Hebbian assemblies specialized in learning and storing/retrieving memories, a ‘memory CPG’ (see R. Yuste, J. N. MacLean, J. Smith, and A. Lansner, “The cortex as a central pattern generator,” Nature Reviews Neuroscience, vol. 6, no. 6, pp. 477–483, 2005). They took further an idea proposed by Llinás. In their view, neocortex circuits arose from well-known circuits that repeat in the spinal cord, brain stem, etc.
    I am interested in the basal circuitry that can control leg and head movements (CPGs playing FSAs), as well as circuits that receive light and chemicals and translate them into firing assemblies. Then I want to join such circuitry to construct more complex machines (algorithms). This is a bottom-up approach, and it is how I think it is possible to construct a kind of ‘embodied cognition’. Later, a ‘central control’ (a mind?) could activate/deactivate such basal circuitry hierarchically. But that is for my future work.
    You are interested in minds and in how high-level cognitive functions take place in brains. I’ll be glad to see people interested in using NAC at such a high level. I will do my best in discussing these subjects, although sometimes I feel that a gap between our approaches is inevitable.

    Joao Ranhel

  9. John Harmon says:

    Joao, thank you again for your thoughtful comments and insights.

    You’ve sketched a roadmap for linking mind to neural circuits and the NAC approach. It is a challenge, I agree. But I’d like to extend this map further — I hope you find it useful.

    The motor control system as I see it, other than the thalamus, = a set of memories, i.e. CPGs. The major players are sensation/perception (thalamus), movement goals (premotor cortex), efferent signal memories (basal ganglia), and the sender of those signals (motor cortex). For example, if a person decides to walk, the prefrontal cortex sends this intention (“I will walk now”) to the premotor cortex, triggering a “walk” movement memory. The latter is a firing pattern corresponding to the visual and somatosensory information comprising this movement: what it looks and feels like to walk. Once active, this memory/CPG triggers a larger premotor–basal ganglia memory (functional neural network), including the motor memory/CPG in the basal ganglia. In other words, a “walk” signal (cortex) matches with, and activates, a corresponding efferent signal (BG). The latter, when relayed through the motor cortex, triggers a matching spinal CPG, creating the movement. A walk command thus activates a system-wide walk memory (BG, spine, thalamus…). Subjectively, one intends, performs, perceives, and attends to walking.

    My thinking is that — like all frequently activated movement memories — the “walk” premotor–basal ganglia CPG incorporates a large set of sub-CPGs. In other words, one’s walk memory incorporates walking in a variety of conditions: on flat or uneven surfaces, wood, carpet, grass, sand, and ice, on an incline, wearing different shoes, faster or slower, etc. Each sub-CPG is comprised of a unique visuo-somatosensory/efferent memory set. These are differentially activated according to current conditions, and one’s goals within. For example, while walking on grass one might perceive (thalamus) ice in front, triggering a “walking on ice” CPG (premotor, BG). Or, one could activate this program absent matching sensory “ice” input (in which case walking on grass is done in an unusual way!).

    Both very general and more specific movement CPGs would each have a large set of CPG parts. For example, walking would include “swinging leg forward while balancing on other leg,” “planting foot on the ground,” “shifting weight to the other leg,” etc. Each part is a generalized CPG in its own right, differentially expressed with each two-stride run. These sparse-code differences depend not only upon which walking CPG is being run, but upon moment-by-moment sensory changes and goals triggered. For example, while walking one senses one’s foot slip, triggering a “regain one’s balance” CPG. The overall walk CPG can be overridden at any point in time during its run, via its parts and their connection to other memories.

    This involves basal ganglia excitation, but what about inhibition? As I see it, there are two main causes of BG inhibition. The first is the set of similar BG CPGs activated during a movement. For example, during walking, a number of walking sub-CPGs will be partially activated also. Walking up a slight grassy incline in bare feet is similar to walking with shoes on flat grass, etc. Yet all activated CPGs which do not fully match the goals/environmental conditions need to be inhibited, so that the optimal sparse code can emerge.

    The second cause of BG inhibition as I see it is the sequential nature of CPGs. The set of CPG parts, activated first as a whole (point attractor), then need to be run in a sequence (dynamic attractor). Each part needs to “wait its turn.”

    From this basic model some general principles emerge:
    (1) motor control is goal driven, yet (via sensory–motor memories) responsive to environmental input at the same time.
    (2) if goals are memories, then the entire system (except the environment and thalamus) is memory-driven.
    (3) movement intentions (premotor) and efferent signal memories (BG) are in effect mirror images of one another.
    (4) memory/CPG activation accomplishes a number of things simultaneously: (a) selects the overall movement “program” and its parts (ex: walking, throwing a baseball, etc.), (b) biases/primes the system for future activation, (c) generates predictions of system activity, (d) results in episodic movement memories (sensation/goal/efferent signal combinations stored in the hippocampus) from which generalized movement memories are updated, and errors corrected.

    I am very interested to hear your evaluation of this model, if you could share your thoughts. Does it seem viable? Useful? Are there any flaws that you see?

    If this model makes sense to you, I’d be happy to attempt to link it in a more specific way to your work regarding FSAs, neural computation, etc. Probably the best bet is to get others involved (experts in neural dynamics, for example) who can help build bridges between this model and the NAC approach to neurocomputation. I do believe the mind/brain system can, in the final analysis, be reduced to an (admittedly enormous and incredibly complex) set of neural states and state changes. I believe the system can also be modeled mathematically, as a whole. But this requires a clear picture of the overall system and its (state/state-change) components.

    In the short term, I see the possibility of using this model, combined with aspects of the NAC approach, to write a motor control computer program. This could be used as artificial brain software for autonomous robots or other AI applications.

    In any event, I am enjoying our discussion, and look forward to hearing your thoughts.

  10. Joao says:

    John, your model sounds fine to me.
    Like all complex models, it seems hard to implement. I feel that it may be difficult to join all these ‘subsets/components’ into a meaningful system.
    If I were asked to give an opinion, it would be: just start from what you think are the fundamental blocks, and go ahead…
    J Ranhel

  11. John Harmon says:

    Joao — I’m delighted you saw no flaws in my model. This is good news!

    I disagree that it’s too difficult to create a meaningful system. The brain already does it, after all… It comes down to modeling the mind in the brain, in 4D, which is doable… But I agree that the complexity is difficult at small scales (individual neurons, synapses…).

    Good luck with your work, and with the NAC approach. And thank you for having this discussion with me.

    John
