This brief essay, which originated from the Neuromorphics Lab's work on the DARPA SyNAPSE project, describes our early effort in the study of alternative computing schemes that use massive memristive-based devices coupled with low-power CMOS processes to efficiently compute neural activation and learning in novel computing devices. The answer was to couple fuzzy inference with dense memristive memory. This combination can provide extensive savings in power and silicon real estate while maintaining a high degree of precision in the resulting computations.

Biologically inspired recurrent neural networks are computationally intensive models that make extensive use of memory and numerical integration methods to calculate neural dynamics and synaptic changes. The recent introduction of architectures integrating nanoscale memristor crossbars with conventional CMOS technology has made possible the design of networks that could leverage the future introduction of massively parallel, dense memristive memories to efficiently implement neural computation. Despite the clear advances afforded by memristors, the implementation of neural dynamics in digital hardware still presents several challenges. In particular, large scale multiplications and additions of neural activations and synaptic weights are largely inefficient in conventional hardware, leading, among other things, to power inefficiencies. In this paper, we describe a methodology based on fuzzy inference to reduce the computational complexity of such networks by replacing multiplication and addition with fuzzy operators. We use fuzzy inference systems (FIS) to evaluate the learning equations of two widely used variants of Hebbian learning laws, pre- and post-synaptic gated decay. We test this approach in a recurrent network that learns a simple dataset, and compare the fuzzy and canonical implementations. We find that the behavior of the network using FIS with min and max is similar to that of networks that employ regular multiplication and addition, while yielding better computational efficiency in terms of the number of operations used and compute cycles performed. Using min and max operations we can implement learning more efficiently in memristive hardware, translating into power savings. This work, partially supported by the SyNAPSE program of the Defense Advanced Research Projects Agency and by CELEST, a National Science Foundation Science of Learning Center, was authored by Massimiliano Versace, Anatoli Gorchetchnikov (Neuromorphics Lab) and Robert Kozma.

**I. INTRODUCTION**

Recurrent neural networks are biologically inspired artificial neural network models generally consisting of two main components, cell bodies and synaptic weights. The ratio of synaptic weights to cells is usually very high, hence the simulation of large networks is computationally intensive, since it involves a large number of computing elements and operations on these elements. In current electronics, a tradeoff is required among speed, power, area, and accuracy of hardware. The recent introduction of memristive memory makes it possible to densely store synaptic weight values for recurrent nets in a combined CMOS/memristor hardware architecture, reducing memory load. In this paper, we introduce a novel method, based on fuzzy inference, to reduce the computational burden of a class of recurrent networks named recurrent competitive fields (RCFs). A novel algorithmic scheme is presented to more efficiently perform the highly repetitive synaptic learning component in hardware. Memristive hardware holds promise to greatly reduce the power requirements of neuromorphic applications by increasing synaptic memory storage capacity and decreasing wiring length between memory storage and computational modules. However, implementing neural dynamics and learning laws in hardware still presents several challenges. Synaptic weight update rules and network dynamics rely heavily on multiplication and addition, the former operation being expensive in terms of power and area usage in hardware. In this paper, we explore an alternative mathematical representation of these computations aimed at improving power efficiency. We focus on recurrent neural networks and Hebbian learning rule variants as described by Snider [1]. Biologically inspired neural networks generally consist of orders of magnitude more synapses than cells.
Synaptic weights are usually accessed to perform scalar multiplication with pre-synaptic cell activation at runtime, along with learning and synaptic weight updates. We explore methods based on fuzzy inference systems (FIS) to increase the efficiency of implementing Hebbian learning in hardware with respect to using the conventional algebraic operations (+, ×). In this paper, an adaptive recurrent network is described in which synaptic weight updating is performed using Takagi-Sugeno type fuzzy inference systems [2]. In Section II, we provide some background on fuzzy systems and the recurrent network employed in this study. In Section III, we describe a methodology to redefine two learning equations using FIS. In Section IV, we simulate the fuzzy and conventional algebra networks and compare their respective behavior in terms of accuracy and computational efficiency. In Section V, we discuss the results.

**II. BACKGROUND**

In this section, we provide the necessary background concerning memristors, the computational complexity of the recurrent networks used in this study, and the fuzzy inference systems employed to augment the efficiency of computations. We focus on a class of widely used adaptive recurrent neural networks termed Recurrent Competitive Fields (RCFs) [3]. RCFs are massively parallel, biologically inspired plastic networks, a characteristic that makes them both powerful and, at the same time, memory- and compute-intensive, introducing issues in the efficient hardware implementation of this class of models.

**A. Memristors and the Brain**

The memristor, short for memory-resistor, is the fourth fundamental two-terminal circuit element, in addition to the resistor, capacitor, and inductor. Memristors were predicted on symmetry grounds by Chua in 1971 [4] and discovered at HP Labs in Palo Alto in 2008 [5], when certain materials yielded a non-volatile resistance similar to the one theorized by Chua and Kang [6]. Memristors are characterized by hysteresis loops in their current-voltage behavior, as well as the ability to stably maintain their nonlinear resistance with extremely low decay rates after power is switched off, measurable in hours, days, or even years. This property makes them useful as nonvolatile memories. The memristive property only emerges significantly at the nanoscale, explaining its elusive nature until recently [7]. Based on current paradigms in nanotechnology, memristors are packed into a crossbar architecture [5]. Such memristor crossbars have been successfully integrated with Complementary Metal-Oxide-Semiconductor (CMOS) technology, enabling the close placement of memristive crossbars alongside CMOS processors [8]. The further miniaturization allowed by memristors promises to address one aspect of the Moore's law slowdown by allowing close placement of memory and computation, minimizing power dissipation, while at the same time overcoming the von Neumann bottleneck caused by the physical separation of processor and memory. The density of memristors and their compatibility with existing CMOS technology also makes them well suited to implementing massively parallel neuromorphic architectures [9]. Synapses in the brain and memristive devices behave similarly, prompting the idea of using memristors in neuromorphic hardware. Certain aspects of brain dynamics can be viewed as massively parallel dynamical systems, where neurons constantly read and modify billions of synapses.
Storing and updating synaptic values based on synaptic plasticity rules is one of the most computationally cumbersome operations in biologically inspired neural networks. Memristor crossbars make it possible to efficiently approximate biological synapses by packing memristive nanoscale crossbar arrays close to a CMOS layer at densities of approximately 10^10 memristors per cm² [10], [11], [12] (see Fig. 1). Such an architecture can be used to efficiently simulate neural networks due to neural networks' ability to tolerate an underlying "crummy" hardware, such as memristive devices characterized by a large number of defective components [13]. A typical implementation of memristive synapses consists of dynamical nanowire junctions formed by the crossing of two nanowires separated by a metal oxide. Implementing adaptive recurrent neural networks in memristive hardware has been proposed in [13]. Memristor implementations of instar and outstar networks on neuromorphic computing architectures have recently been studied by Snider [1], [13]. A method for stochastically approximating the behavior of instar and outstar learning using memristors

*Fig. 1. (i) Memristor/CMOS hybrid chip architecture with details of a memristor crossbar implementing memristive synapses [11]; (ii) hysteresis loop in the current-voltage behavior of the memristor [12].*

and spiking neurons is described, making use of the densely packed memristor crossbars as the memory for synaptic models.

**B. Recurrent Competitive Field**

Neural networks are built from modules of the form dy/dt = f(y(t)) + h(Wx(t)), where the vectors x(t) and y(t) represent the cells of two subsequent layers of the network, W is the synaptic weight matrix of the edges connecting the cells, and f is a nonlinear function (e.g. sigmoidal) applied to each element of its vector argument. Using enough such modules and sufficiently large matrices, one can approximate any continuous function arbitrarily well. Computation along synapses can be interpreted as scalar multiplication, where the strength of connection between the cells is reflected in the value of the synaptic weight connecting them. One such network is the recurrent competitive field (RCF). RCFs are a class of biologically inspired neural networks introduced by Grossberg that learn input/output representations via a modified Hebbian learning law [3]. Variations of RCFs describe networks that include feedback pathways allowing a cell's output to project back to its input either directly or indirectly through off-center on-surround projections. Typically, an RCF cell receives, in addition to its bottom-up input, a self-excitatory connection as well as inhibitory connections from neighboring cells in the same layer. RCFs are able to compress and store activity in short term memory (STM), a property that depends on the choice of the feedback input function. Grossberg showed how to construct networks with stable nonlinear network dynamics with respect to external stimuli [14], [15]. RCFs are described by a system of coupled differential equations, and they are computationally intensive with respect to hardware implementation due to the complexity of the dynamics at the cells and the high ratio of synapses per cell. In this study, the network consists of a two-layer RCF, with an input layer F1 and an output, or coding, layer F2.
The layers are connected by bottom-up, or feedforward, plastic connections modified by the instar (Hebbian post-synaptic gated decay) learning law, and by top-down, feedback projections modified by the outstar (Hebbian pre-synaptic gated decay) learning law. Cells in F1 are denoted as xi and cells in F2 as yj. The RCFs used for simulation in this paper are governed by the following set of equations:

Here Ii denotes the bottom-up input to the cell; the constants A, B, and C determine the behavior of the network; and ε is the scaling factor for top-down feedback, set to 0.01 in all simulations. We achieve supervised learning by boosting the activation of the appropriate coding cell yi, with dim(y) = n, using a supervised learning term

where ki are constants set to 1 for coding cells and 0 for all other cells. The feedback function f(·) may be chosen to be linear, slower than linear, faster than linear, or sigmoidal. The notation [·]+ denotes max(0, ·). In this paper f(yi) is chosen to be the sigmoid function

Sigmoidal feedback functions combine the functionality of all three cases by contrast-enhancing small signals, storing intermediate signals with small distortion, and uniformizing very large signals. In addition, they add a new emergent property, the quenching threshold: the minimum size of initial activity required to avoid being suppressed to zero. The outstar learning law (Hebbian pre-synaptic gated decay) is used to compute the weights wji of the top-down synaptic connections between cells xi and yj:

*Fig. 2. Schematic network diagram of a two layer RCF.*

The constants D and E represent the learning rate and determine the stability-plasticity balance of the system. Similarly, the instar learning law (Hebbian post-synaptic gated decay) is used to compute the bottom-up synaptic weights wij between cells xi and yj:

The variables x, y, wij, and wji are constrained to the range [0, 1] by convention. Training the network consists of presenting an input pattern for t seconds and computing dxi/dt, dyj/dt, dwij/dt, and dwji/dt. In general we use Euler's method to calculate all variables, while in the fuzzy version of the network dwij/dt and dwji/dt are calculated by a fuzzy inference system.
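In the canonical network, one Euler step of such a learning law can be sketched as follows. This is a minimal sketch assuming the generic gated-decay form dw/dt = gate × (−D·w + E·pattern), with the pre-synaptic cell as the gate for outstar learning and the post-synaptic cell for instar; the specific constants and helper name are our own.

```python
def gated_decay_step(w, gate, pattern, D=1.0, E=1.0, dt=0.05):
    """One forward-Euler step of a Hebbian gated-decay learning law:
    dw/dt = gate * (-D * w + E * pattern).
    The gate is the pre-synaptic activity for outstar and the
    post-synaptic activity for instar; D and E values are assumptions."""
    return w + dt * gate * (-D * w + E * pattern)

# A single weight tracking a constant pattern converges to E * pattern / D;
# with the gate at zero, no learning takes place.
w = 0.0
for _ in range(400):
    w = gated_decay_step(w, gate=1.0, pattern=0.8)
```

Note that the update is a pure multiply-accumulate per weight per step, which is exactly the operation the FIS approach later replaces.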

**C. Computational Complexity of Operations**

The computation of the synaptic weight matrices wij and wji at each time step is particularly power-intensive: their size is the product of the number of cells in the two layers, dim(x) × dim(y), and multiplication in digital hardware requires a number of components roughly proportional to the square of the number of bits in the operands. In addition, weights can be modified by synaptic plasticity rules governed by differential equations, whose solution requires iterative numerical methods such as Runge-Kutta or Euler. The operations AND, OR, min, and max have computational cost O(n) in the number of bits n, while regular multiplication uses O(n²) computational resources. Network size is defined here as the energy required to perform a given computation to a given degree of accuracy. For example, given 16 bits of precision, the cost E for various operations is:

E(+), E(max), E(min) ≈ 16 (7)

E(OR), E(AND) ≈ 16 (8)

E(×) ≈ 16² (9)

Basing a network on computationally cheaper operations would provide substantial energy savings, assuming that:

(i) The new methodology produces networks that are not larger (in terms of power consumption); and

(ii) The resulting networks have a similar expressiveness in approximating continuous functions.

A number of candidates to implement such cheaper networks exist, including morphological neural networks [16]. We investigate fuzzy inference systems (FIS) as an alternative approach that provides similar functionality to the regular (+, ×) algebra, while potentially saving energy by using the less expensive operations +, max, min, together with fuzzy AND and OR.
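The cost model of Eqs. (7)-(9) can be made concrete with a small helper. This is a sketch only: the function name and the arbitrary energy units are our own, and the model simply encodes "linear in word width for cheap operators, quadratic for multiplication."

```python
def op_energy(op: str, bits: int = 16) -> int:
    """Approximate hardware energy (arbitrary units) per operation:
    linear in the word width for +, min, max, AND, OR (Eqs. 7-8),
    quadratic in the word width for multiplication (Eq. 9)."""
    if op in ("+", "min", "max", "AND", "OR"):
        return bits
    if op == "*":
        return bits * bits
    raise ValueError(f"unknown operation: {op}")

# At 16-bit precision a multiplication costs 16x as much as an addition,
# and the gap widens linearly with precision.
ratio_16 = op_energy("*", 16) / op_energy("+", 16)
ratio_32 = op_energy("*", 32) / op_energy("+", 32)
```

This makes explicit why replacing multiplications with min/max pays off more at higher numerical precision, as the simulations in Section IV confirm.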

**D. Fuzzy Inference Systems**

Fuzzy inference is a method to create a map for an I/O system using fuzzy logic, fuzzy membership functions, fuzzy if-then rules, and fuzzy operations. Typical fuzzy inference systems demonstrate advantages over classical methods in the fields of pattern classification, expert systems, and intelligent control [17], [18], [19]. In this paper, we use Takagi-Sugeno type FIS with min and max operations to reduce the computational load of numerically integrating the learning equations. Using fuzzy operators offers power advantages over regular (+, ×) algebra. Fuzzy sets are universal approximators of continuous functions and their derivatives to arbitrary accuracy [20]; thus FIS are applicable to solving the learning equations governing recurrent neural networks [21]. Advantages of FIS include the following:

(i) computationally cheap fuzzy operators, e.g. fuzzy AND and OR;

(ii) robustness to crummy data, crummy hardware, or even missing data;

(iii) error tolerance: the operations min and max do not amplify errors.

Points (ii)-(iii) are significant, as memristive hardware is a nanoscale technology with high manufacturing defect rates. Fuzzy inference systems can be challenging to design, as they require ad hoc assumptions on membership functions and rules. In addition, the defuzzification step can be computationally intensive.
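Point (iii) can be checked numerically: min and max are 1-Lipschitz, so their output error never exceeds the largest input perturbation, whereas a product's error can. The sketch below (our own illustration, not from the paper) compares the worst observed error-amplification ratio over random operands in [0, 1].

```python
import random

def worst_error_ratio(op, trials=10_000, seed=0):
    """Largest observed ratio of output error to the worst single-input
    perturbation for a binary operator over random inputs in [0, 1]."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        a, b = rng.random(), rng.random()
        e1, e2 = rng.uniform(-0.05, 0.05), rng.uniform(-0.05, 0.05)
        e = max(abs(e1), abs(e2))
        if e == 0.0:
            continue
        # Error in the operator's output caused by perturbing its inputs.
        worst = max(worst, abs(op(a + e1, b + e2) - op(a, b)) / e)
    return worst

# min and max never amplify the perturbation (ratio <= 1);
# multiplication can amplify it when both operands are large.
```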

**III. FIS METHOD**

We propose a design for a fuzzy inference system to compute the pre- and post-synaptic gated decay learning equations, resulting in a potentially more efficient hardware implementation. Iterative numerical methods used to evaluate the differential equations governing learning add to the computational complexity. In our approach, we replace regular multiplication and addition operators with fuzzy operators, and numerical integration with fuzzy controllers, to minimize the computational costs of learning. We design a FIS to approximate the theoretical behavior of the learning laws to high accuracy. We consider various fuzzy membership functions and fuzzy rules, in addition to different time steps built into the FIS. The instar and outstar learning laws governing RCFs are framed as ODEs (Eqs. 5, 6). Fuzzy inference systems are universal approximators of smooth functions and their derivatives, hence it is possible to design a FIS that, given inputs xi, yj and wij, yields an output approximating dwij/dt arbitrarily well. To fuzzify synaptic learning, the corresponding ODEs are solved symbolically, and a FIS is designed to approximate the solution surface of the learning equation at time dt, which serves as the built-in time step of the FIS approximator. This process is called fuzzification of a learning equation, and the resulting FIS can replace the numerical methods used to solve the learning equation. In this section, we describe a method to fuzzify the instar and outstar equations governing the RCF.

**A. Computation at Cells**

In typical RCFs there is an order of magnitude of difference between the number of cells and the number of synaptic weights, since #synaptic weights = 2 × dim(x) × dim(y). The equations governing the cells in layers F1 and F2 can have complex dynamics and a large number of terms. Nevertheless, due to the high ratio of synapses per cell, the computation of cell dynamics is less critical from a power budget perspective than weight updating. In this paper we therefore focus on the latter.

**B. Outstar Learning**

In an outstar network, the weights projecting from the active pre-synaptic cell learn the pattern at the post-synaptic sites:

This differential equation is fuzzified and solved by a three-input, one-output fuzzy controller. Each equation governing outstar learning is discretizable, meaning that weight wji depends only on the components xi and yj of F1 and F2, so the lengths of the input and output vectors x and y do not increase the internal complexity of the FIS. As a result, the number of fuzzy rules and membership functions required for accurate learning does not proliferate as the number of cells in layers F1 and F2 grows. If we set the time step dt in the FIS equal to the time step dt = 0.05 s used for canonical Euler numerical integration of the RCF, the number of times the fuzzy controller is invoked is the same as the number of iterations needed to solve the learning equation. If it is sufficient to compute the synaptic dynamics at larger time steps, it is possible to use a larger time step dt for fuzzy learning to converge to the desired weights in fewer iterations than needed by numerical methods. The three-input, one-output fuzzy inference system dwji/dt = fuzzy(xi, yj, wji) evaluates the outstar learning law solution at time dt. All inputs and outputs of the FIS are normalized into the range [0, 1]. The constants E and D determining the learning rate are hard-coded into the FIS to reduce its internal complexity by decreasing the number of fuzzy rules and membership function evaluations required.
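A zero-order Takagi-Sugeno controller of this shape can be sketched as follows. We assume two tent membership functions per input, min as the fuzzy AND, and weighted-average defuzzification (one common Sugeno variant; the paper also mentions plain average defuzzification). The rule consequents, supplied by the caller here, are what would be tuned to approximate the outstar solution surface at time dt; a full grid partition of three inputs with two membership functions each gives 2³ = 8 rules.

```python
import itertools

def mf_low(u):   # tent "low" membership on [0, 1]
    return max(0.0, 1.0 - u)

def mf_high(u):  # tent "high" membership on [0, 1]
    return max(0.0, min(1.0, u))

MFS = (mf_low, mf_high)

def ts_fis(x, y, w, consequents):
    """Zero-order Takagi-Sugeno FIS: three inputs, two membership
    functions each, min as the fuzzy AND, constant rule outputs, and
    weighted-average defuzzification. `consequents` maps each rule
    (a tuple of MF indices, one per input) to its constant output."""
    num = den = 0.0
    for rule in itertools.product(range(2), repeat=3):
        firing = min(MFS[rule[0]](x), MFS[rule[1]](y), MFS[rule[2]](w))
        num += firing * consequents[rule]
        den += firing
    return num / den if den else 0.0
```

At the corners of the unit cube only one rule fires, so the FIS reproduces the corner consequents exactly; tuning those eight constants to match the learning-law solution surface at the built-in time step dt yields the controller described in the text.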

**C. Instar Learning**

The instar learning equation:

is symmetric to the outstar learning equation with respect to the two variables xi and yj, so modifying the fuzzy inference system to handle instar learning simply requires switching the two inputs. The same three-input, one-output FIS is used to implement both instar and outstar learning. The two learning rate parameters D and E are hard-coded. Computation occurs by changing the order of the inputs x and y; in particular, dwij/dt = fuzzy(yj, xi, wij) computes the instar learning law.

**D. FIS Parameters**

The FIS internal parameters are chosen to maximize computational performance. Internal parameters of the FIS include the type and number of input fuzzy membership functions for each input variable, the fuzzy AND/OR rules combining the membership functions, and the defuzzification scheme. For performance, the number of rules and membership functions is minimized. The FIS approximation improves with the inclusion of additional fuzzy membership functions and rules, but this increases the computational cost of the system. Takagi-Sugeno type FIS are used instead of Mamdani type for increased computational efficiency in hardware. For Takagi-Sugeno type inference, the output membership functions are set to constants, and defuzzification is done through average or weighted-average defuzzification. The time at which the learning law ODE solution is approximated based on initial conditions is another parameter of the FIS. We denote this parameter dt, as it is analogous to the time step in numerical methods; it determines the learning speed of the system by controlling the number of computing cycles (t/dt) required by the fuzzy inference system to converge to the solution at time t. Common numerical methods for solving differential equations approximate the solution linearly in each small time step. The nonlinear FIS can achieve faster convergence by being able to compute with arbitrary dt and fewer compute cycles than numerical methods. To obtain correct network dynamics, the time step dt needs to be accounted for when coupled with the unfuzzified equations of cell activity solved by regular numerical methods.

**IV. SIMULATIONS**

In this section, we simulate and compare two recurrent networks in two tasks. The first network makes use of regular (+, ×) algebra and Euler's method to compute the change of synaptic weights with the instar and outstar learning laws. The second network performs supervised learning through a fuzzy inference system. In the first task, the RCF learns a dataset consisting of images of alphabetical characters. In the second task we apply the FIS learning method to a larger network and compare the accuracy and computational complexity of the fuzzy and canonical RCFs.

**A. Summary of FIS Systems**

Three-input, one-output Takagi-Sugeno type fuzzy inference systems are built with learning rates hard-coded (TABLE I). The accuracy of the network is assessed in this case by comparing the rate of convergence of the weights in the fuzzy and canonical RCFs to theoretical values, in particular by calculating the difference of synaptic weights at the end of the simulation. The average differences in accuracy in one cycle of the simulation are given in the last column of TABLE I. For optimal performance of the FIS, the following design choices were made: (1) the internal time step dt, input fuzzy membership function types, fuzzy rules, and output membership functions were varied; (2) piecewise linear or constant output membership functions were selected to boost computational efficiency; (3) average defuzzification was chosen over weighted-average defuzzification of the output membership functions. FIS with two input membership functions for each input and constant output membership functions for each fuzzy rule are able to give a sufficient approximation of synaptic dynamics for correct classification.

The fuzzy systems described in TABLE I use constant output membership functions. The number of rules is equal to the product of the number of inputs and the number of input membership functions. The FIS are three input systems with two membership functions per input.

*TABLE I. Parameters of some Takagi-Sugeno FIS for performing instar and outstar learning. Accuracy is defined as the mean square error of the FIS compared to the theoretical values under one iteration of the FIS. If the time step dt is small, piecewise linear tent membership functions approximate the solution well. Trapezoidal membership functions captured the dynamics with a lesser degree of accuracy. In contrast, nonlinear Gaussian membership functions have a stable error rate throughout all time scales.*

**B. Character Recognition Network**

In this section we test the behavior of two networks in a simple supervised learning paradigm. We construct two versions of the RCF, in the first the synaptic weights are calculated by the Euler method, while in the second a FIS described in the previous section is used to perform the synaptic weight updating.

1) Methodology: The network consists of 61 cells and 1820 synaptic weights. The input dataset consists of 26 binary 7-by-5 pixel input images of the alphabet characters. Accordingly, we use 35 cells x in F1 corresponding to the 35 pixels of the input image. The layer F2 consists of 26 cells y corresponding to the 26 letters of the alphabet. Each yi neuron learns to code for one letter of the alphabet. The initial values of the instar synaptic weights wij and outstar synaptic weights wji are initialized randomly in the range [0, 0.1]. We initialize xi randomly in the range [0, 0.02], and yj are set to zero. In order to achieve supervised learning, the activation of the cells yi is boosted by the supervised learning term, defined as the vector Supj = {(a1, a2, ..., a26) | aj = 1 and ai = 0 if i ≠ j}. The bottom-up input to the cell is a binary value 0 or 1 corresponding to a black or white pixel, respectively. In the canonical RCF network, Euler's method is used to solve the instar and outstar learning equations with time step dt = 0.05 s. The RCF network employing FIS has the same network structure, and the FIS described in TABLE I are selected to perform synaptic weight updating. Input letters were presented for 200 computing cycles with the supervised input to the coding cells, interleaved by 100 computing cycles where no input was provided, in order to reset the activity of the network between letter presentations. The following three tests are performed to assess network behavior:

(i) Presenting the i-th letter in the alphabet as input to F1, and verifying that cell yi of F2 trained to code it becomes active while other cells do not, i.e., the activation of the coding cell is near 1 while the activations of other cells are near 0.

(ii) Activating cell yi of layer F2 and observing the activation pattern of the pixels of the i-th letter in cells xj in layer F1 due to the feedback-mediated activation.

(iii) Comparing the convergence of the synaptic weights learned by the FIS and canonical RCFs by calculating the absolute error |Wij(canonical) − Wij(fuzzy)|.

2) Results: The results of tests (i)-(iii) are shown in Fig. 3 for the FIS using tent input membership functions and dt = 0.05. Fig. 3(i) shows the performance of the fuzzy RCF network in a character recognition problem: the left image contains the input image, the center image shows the activation of the F1 cells after the input was presented for 100 computing cycles, and the right image shows the reconstructed input in F1 after activating F2. Fig. 3(ii) shows the difference of the outstar weights of the FIS and regular RCF networks after training for t = 10,000 cycles. Fig. 3(iii) shows the difference of the instar weights.

The weight differences between the networks are calculated as |Wij(canonical) − Wij(fuzzy)| in both cases.

Every FIS listed in TABLE I produces networks that classify correctly. Classification accuracy is important, as we are less interested in the explicit synaptic weight values and more in the overall dynamical behavior of the networks. In summary, RCF networks using FIS are able to learn synaptic weights with small absolute error rates and classify characters correctly. The error in the synaptic weights is larger for the fuzzy instar weights than for the outstar weights; nevertheless, the classification of characters is correct.

*Fig. 3. (i) Left: the pixelated input letter; Center: the activation of yi given letter input R after learning; Right: the reconstruction of the letter R after the activation of y18; (ii) The instar weight matrix maximum error for each letter; (iii) The outstar weight matrix maximum error for each letter. *

**C. Larger Scale Network Simulation **

The purpose of this simulation is to test the speed-up of learning achieved by reducing compute cycles through different time steps dt in the FIS, while applying the FIS methodology to a larger network.

1) Methodology: A simplified two-layer RCF is used with one cell in F2 and 512 × 512 cells in F1. Synaptic weights are trained to learn a 512-by-512 grayscale photograph, in which 0 and 1 represent black and white pixels respectively, and intermediate values represent the appropriate grayscale levels. In this network, there are 2^19 synaptic weights and 2^18 + 1 cells. The Lena image is used for this simulation. The purpose of this second simulation task is to demonstrate the applicability of the FIS methodology to larger scale networks requiring significantly more computational resources. The network preserves the RCF dynamics in F1 while setting y1 = 1 in F2 at all times. We show that by using FIS with larger built-in dt time steps we are able to decrease the number of compute cycles required for learning. The synaptic weight computations are performed by the FIS, while Euler's method is used to compute the cell dynamics. If the built-in time step dt at which the FIS operates is set larger than the one used in calculating the cell dynamics, then the two need to be reconciled. A multitimescale integration scheme is introduced to synchronize the learning times at the cells and synapses. Two nested cycles are used: the outer cycle executes the weight learning by

*Fig. 4. Simulating the second task using the Lena photograph as input. The weights produced by the FIS and canonical learning at the end of the simulation at t = 2000 cycles are almost identical.*

the FIS with the greater dt, while the inner cycle performs the activation updates at the cells with the smaller time step used by the numerical method, at a rate that synchronizes the time duration of learning at the cells and weights. Similar considerations were made for other memristive architectures when using spike-based approximations of network dynamics in conjunction with Runge-Kutta [1].
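The two nested cycles can be sketched as follows; the callback signatures are hypothetical, and the ratio of the FIS step to the Euler step determines how many inner iterations each outer cycle covers.

```python
def run_multiscale(outer_cycles, ratio, cell_step, weight_step):
    """Multitimescale integration: each outer (FIS) weight update spans
    `ratio` inner Euler steps of the cell dynamics, so both clocks
    advance over the same simulated time. Returns inner steps taken."""
    inner_steps = 0
    for _ in range(outer_cycles):
        for _ in range(ratio):
            cell_step()        # fine-grained Euler update of cell activity
            inner_steps += 1
        weight_step()          # coarse FIS update of synaptic weights
    return inner_steps

# Example: an FIS step of dt = 0.25 s over an Euler step of dt = 0.05 s
# gives ratio = 5: five cell updates per weight update.
```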

2) Results: Using the FIS method it is possible to reduce the number of compute cycles required to perform learning in RCFs. This is due to the fact that the numerical method implements a linear approximation of the solution at each step, whereas FIS uses a nonlinear (or piecewise linear) global approximation of the solution surface.
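For the gated-decay law with constant gate and pattern (taking D = E = 1 for illustration), the exact flow is available in closed form, which shows why an approximator of the solution surface can cover t = 1 s in one cycle where Euler at dt = 0.05 needs 20. The FIS would approximate, not exactly compute, this surface; the helper names are our own.

```python
import math

def exact_step(w, gate, pattern, dt):
    """Exact solution of dw/dt = gate * (pattern - w) advanced by dt;
    this is the surface a well-tuned FIS approximates per compute cycle."""
    return pattern + (w - pattern) * math.exp(-gate * dt)

def euler_step(w, gate, pattern, dt=0.05):
    """One linear (forward-Euler) step of the same law."""
    return w + dt * gate * (pattern - w)

w_coarse = exact_step(0.0, 1.0, 1.0, dt=1.0)   # one coarse cycle to t = 1 s
w_euler = 0.0
for _ in range(20):                            # twenty fine cycles to t = 1 s
    w_euler = euler_step(w_euler, 1.0, 1.0)
# Both land near 1 - exp(-1); the coarse step used 1/20 of the cycles.
```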

**D. Results on Computational Complexity**

The computational burden of fuzzy inference and regular synaptic weight updating is compared by determining the number and type of operations needed, and the computational complexity required to implement them. All fuzzy inference systems described in TABLE I perform an identical number of fuzzy operations. In TABLE II, the number and computational complexity of the operators used in one iteration of the canonical and FIS synaptic weight update methods are given. The computational complexity is compared for numerical representations with 4-, 8-, 16- and 32-bit precision. For the comparison, the time step dt is assumed to be the same in the FIS and the forward Euler integrator (see Fig. 6), and an identical number of computing cycles is used. The numerical representation in hardware is planned at 16-bit

*Fig. 5. Comparison of regular learning and fuzzy learning weight convergence. Top: canonical solution of outstar learning with dt = 0.05 s. The red line represents the input pattern presented at the given cycle; blue indicates the rate of convergence to that pattern. Bottom: fuzzy learning with blue: dt = 0.05 s; magenta: dt = 0.25 s; purple: dt = 1 s. All use the same learning rate.*

*TABLE II: Number and type of operations required for synaptic weight updating.*

precision. With 16-bit precision there is approximately a 51.22% improvement in efficiency, while with 32-bit precision the improvement is close to 65.43% when using the FIS method with the same time step. With 8-bit precision the FIS methodology improves computational efficiency by only 23.81%, while at 4-bit precision the FIS increases the computational cost by 27.27%. The computational burden can be reduced further by increasing the time step dt, which lowers the number of compute cycles required.
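The trend behind these numbers can be sketched with an assumed cost model: in bit-serial terms, an n-bit multiplication costs on the order of n² gate operations, while additions and fuzzy min/max (comparison-select) operations cost on the order of n. The operation counts below are placeholders for illustration, not TABLE II's figures, so only the qualitative crossover is meaningful.

```python
def canonical_cost(n_bits, n_mult=2, n_add=1):
    # Assumed per-update cost of the multiply/add rule: each n-bit
    # multiplication ~ n^2, each addition ~ n (bit-serial model).
    return n_mult * n_bits**2 + n_add * n_bits

def fis_cost(n_bits, n_ops=12):
    # Each fuzzy min/max is a comparison-select, linear in word width;
    # n_ops is an illustrative count of fuzzy operations per update.
    return n_ops * n_bits

def improvement(n_bits):
    # Fractional efficiency gain of FIS over the canonical update.
    return 1.0 - fis_cost(n_bits) / canonical_cost(n_bits)
```

Under this model the FIS is costlier at 4-bit precision but wins increasingly from 8 bits upward, because the quadratic multiplication cost dominates while min/max stays linear, matching the sign pattern of the percentages reported above.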

*Fig. 6. Computational complexity of synaptic learning as a function of the number of synaptic weights. Both regular and fuzzy versions use time step dt = 0.05 for integration. Thick curves represent FIS performance, thin curves forward Euler. Dashing corresponds to numerical precision: solid, 32-bit; long-short dashed, 16-bit; short dashed, 8-bit; dotted, 4-bit.*

**E. Simulation Issues**

(i) When simulating fuzzy inference systems on digital hardware, the execution of fuzzy operators is inherently slower than that of the built-in multiplication and addition operators, so simulation runtimes are not an indicator of hardware performance.

(ii) The learning rates are hard-coded into the FIS, so changing the learning rate requires updating the fuzzy controller.

**V. CONCLUSION**

In this paper it is shown that fuzzy inference systems can provide an alternative hardware implementation for widely used biologically inspired, plastic neural networks. These networks can be efficiently implemented in the hybrid CMOS/memristor hardware architecture. Using the FIS methodology, it is possible to significantly reduce the computational complexity of the proposed memristive hardware, starting at 16-bit numerical precision, by replacing the regular addition and multiplication operations with fuzzy operators. Future work may focus on generalizing this method to other learning laws of the Hebbian family, and on comparing noise tolerance when simulating crummy nanoscale hardware on more complex tasks.

**REFERENCES**

[1] G.S. Snider, "Instar and outstar learning with memristive nanodevices," Nanotechnology, vol. 22, p. 015201, 2011.

[2] M. Sugeno, Industrial Applications of Fuzzy Control, Elsevier Science Pub. Co., 1985.

[3] S. Grossberg, "Contour enhancement, short term memory, and constancies in reverberating neural networks," in Studies of Mind and Brain, ch. 8, Kluwer/Reidel Press, 1982.

[4] L.O. Chua, "Memristor - the missing circuit element," IEEE Trans. Circuit Theory, vol. 18, no. 5, pp. 507-519, 1971.

[5] D.B. Strukov, G.S. Snider, D.R. Stewart, and R.S. Williams, "The missing memristor found," Nature, vol. 453, pp. 80-83, 2008.

[6] L.O. Chua and S.M. Kang, "Memristive devices and systems," Proc. IEEE, vol. 64, no. 2, pp. 209-223, 1976.

[7] L.O. Chua, "Nonlinear circuit foundations for nanodevices, Part I: The four-element torus," Proc. IEEE, vol. 91, no. 11, pp. 1830-1859, 2003.

[8] Q. Xia et al., "Memristor-CMOS hybrid integrated circuits for reconfigurable logic," Nano Letters, vol. 9, no. 10, pp. 3640-3645, 2009.

[9] M. Versace and B. Chandler, "MoNETA: A mind made from memristors," IEEE Spectrum, December 2011.

[10] G.S. Snider, R. Amerson, D. Carter, H. Abdalla, S. Qureshi, J. Leville, M. Versace, H. Ames, S. Patrick, B. Chandler, A. Gorchetchnikov, and E. Mingolla, "Adaptive computation with memristive memory," IEEE Computer, vol. 44, no. 2, 2011.

[11] S.H. Jo, T. Chang, I. Ebong, B.B. Bhadviya, P. Mazumder, and W. Lu, "Nanoscale memristor device as synapse in neuromorphic systems," Nano Letters, vol. 10, pp. 1297-1301, 2010.

[12] G. Pazienza and R. Kozma, "Memristor as an archetype of dynamic data-driven systems and applications to sensor networks," Dynamic Data Driven Application Systems (DDDAS 2011), Tsukuba, Japan, June 2-3, 2011.

[13] G.S. Snider, "Self-organized computation with unreliable, memristive nanodevices," Nanotechnology, vol. 18, no. 36, p. 365202, 2007.

[14] S. Grossberg, "Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors," Biological Cybernetics, vol. 23, pp. 121-134, 1976.

[15] S. Grossberg, "On the development of feature detectors in the visual cortex with applications to learning and reaction-diffusion systems," Biological Cybernetics, vol. 21, pp. 145-159, 1976.

[16] V.G. Kaburlasos and G.X. Ritter, Computational Intelligence Based on Lattice Theory, Springer-Verlag, 2010.

[17] J.C. Bezdek, J. Keller, R. Krisnapuram, and N. Pal, Fuzzy Models and Algorithms for Pattern Recognition and Image Processing, Springer, 2005.

[18] B. Kosko, The Fuzzy Future: From Society and Science to Heaven in a Chip, Harmony Books, 1999.

[19] O. Castillo, P. Melin, O.M. Ross, R.S. Cruz, W. Pedrycz, and J. Kacprzyk, Theoretical Advances and Applications of Fuzzy Logic and Soft Computing, Springer-Verlag, 2007.

[20] V. Kreinovich, H.-T. Nguyen, and Y. Yam, "Fuzzy systems are universal approximators for a smooth function and its derivatives," Int. J. Intelligent Systems, vol. 15, no. 6, pp. 565-574, 2000.

[21] B. Kosko, "Fuzzy systems are universal approximators," IEEE Trans. Computers, vol. 44, no. 11, pp. 1329-1333, 1994.

i think the announcement by ibm last week of the release of their first neuromorphic chip is stealing a bit of your thunder. you need their pr guys on your side.

Hi Zeev, IBM can thunder as much as they want…. the chip they announced will be the subject of a later post.

I predict that the “cognitive computer” could be bought in the same shop where you can find the Deep Blue chess champion (smarter than Kasparov) and Watson (smarter than Jennings).

Dr. Versace:

I am a graduate student interested in the field, but considering a Ph.D. in Educational Psychology (focus on learning technologies). Looking at contemporary research projects such as yours, it doesn’t appear there are many representatives from the field of Educational Psychology. Do you see points of collaboration between the two disciplines? Would future projects benefit from having this particular type of interdisciplinary expertise? Thank you.

Hi Nickolas,

the department where I work, and in particular the Neuromorphics Lab, has a very varied landscape in terms of the backgrounds of the lab members, ranging from psychology to engineering, math, physics, and biology. What is common is the interest in the multidisciplinary approach, and a shared end goal of being able to simulate aspects of biological intelligence.

So, the answer is yes if you are interested in the field, and willing to learn new things.

Max

thanks max. but i predict you are wrong. it is military stuff, and ibm will only be allowed to sell a dumbed down version of their chips to the general public on the presumption that our military will not allow chinese to reverse engineer the latest goodies.

also check this out. fundamental research on the vesicle synapse dynamics. always interesting for implications in computational theory of the neuron.

http://medicalxpress.com/news/2011-08-nerve-cells-shown-diverse-believed.html

Hi Zeev,

we will see… I am preparing a post with some more details. Neurdons need to know how to tell the truth from smoke…

Stay tuned! And thanks for the link

Max

Dear Max

Very nice work.

I think that would be nice if you include these two references:

http://arxiv.org/abs/1103.1156

http://arxiv.org/abs/1009.0896

Looking forward to reading more from you,

Omid

Thanks for the references Omid,

will surely read them!

Max

But the idea of using memristor crossbar structures to create fuzzy or neuro-fuzzy computing systems is not new; it has been previously addressed by Merrikh-Bayat et al., as mentioned by Omid. Their computing structure has solved some of the problems of memristor crossbars that you have mentioned, and interestingly they have shown that the learning procedure of their neuro-fuzzy system is completely consistent with Hebbian learning in neural networks.

Thanks Affifi, the work presented here was completed in 2010, so we did not even know of the 2011 papers Omid mentioned.

I have also been pointed to this paper:

http://arxiv.org/PS_cache/arxiv/pdf/1110/1110.2074v1.pdf