In my last post, I demonstrated how to simulate and plot a simple leaky integrate-and-fire (LIF) neuron using Python. The LIF neuron provides a simple representation of a spiking neuron, but lacks biological plausibility, especially when it comes to the actual spike generation. A neuron model that does have a solid foundation in physiology is the one originally proposed by Alan Hodgkin and Andrew Huxley in 1952.
The Hodgkin-Huxley model of neural dynamics is one of the most successful models in computational neuroscience. Based on voltage-clamp experiments on the squid giant axon, the model incorporates voltage-sensitive ion channels into the circuit model of the membrane to describe the generation and propagation of action potentials. The general model equation is given as

Cm * dVm/dt = I - gbar_Na * m^3 * h * (Vm - E_Na) - gbar_K * n^4 * (Vm - E_K) - gbar_l * (Vm - E_l)

where the gbar's are the maximal conductances and the E's are the reversal potentials for the sodium, potassium, and leak channels, respectively. The gating variables m, h, and n are all sensitive to the membrane potential. For instance, the steady-state values of each over the range [-50, 150] mV can be seen below.
The two sodium gating variables (m, h) and the one potassium gating variable (n) affect the relative conductances of their respective ion channels. As the channels open or close, the ionic currents they contribute can either depolarize or hyperpolarize the membrane potential of the cell. Given sufficient input current, this gives rise to the action potential phenomenon.
Here a 25 msec, 10 uA input pulse (green) is presented to the neuron resulting in the generation of action potentials with noticeable refractory periods (blue).
Now for the code. This one is a bit longer than the LIF code, and some lines require scrolling. It is also worth noting that the gating variable equations and parameter values used in this script come from the original paper (Hodgkin and Huxley, 1952). Hodgkin and Huxley defined the membrane potential at rest to be 0 mV, as opposed to the physiological resting value of around -70 mV commonly used in modern examples.
from __future__ import division
from numpy import *
from pylab import *

## Gating variable rate equations (Hodgkin and Huxley, 1952)

# K channel
alpha_n = vectorize(lambda v: 0.01*(-v + 10)/(exp((-v + 10)/10) - 1) if v != 10 else 0.1)
beta_n = lambda v: 0.125*exp(-v/80)
n_inf = lambda v: alpha_n(v)/(alpha_n(v) + beta_n(v))

# Na channel (activating)
alpha_m = vectorize(lambda v: 0.1*(-v + 25)/(exp((-v + 25)/10) - 1) if v != 25 else 1)
beta_m = lambda v: 4*exp(-v/18)
m_inf = lambda v: alpha_m(v)/(alpha_m(v) + beta_m(v))

# Na channel (inactivating)
alpha_h = lambda v: 0.07*exp(-v/20)
beta_h = lambda v: 1/(exp((-v + 30)/10) + 1)
h_inf = lambda v: alpha_h(v)/(alpha_h(v) + beta_h(v))

### Channel Activity ###
v = arange(-50, 151)  # mV
figure()
plot(v, m_inf(v), v, h_inf(v), v, n_inf(v))
legend(('m', 'h', 'n'))
xlabel('Membrane Potential (mV)')
title('Steady state values of ion channel gating variables')

## Setup parameters and state variables
T = 55       # total simulation time (msec)
dt = 0.025   # time step (msec)
time = arange(0, T + dt, dt)

## HH Parameters
V_rest = 0      # mV
Cm = 1          # uF/cm2
gbar_Na = 120   # mS/cm2
gbar_K = 36     # mS/cm2
gbar_l = 0.3    # mS/cm2
E_Na = 115      # mV
E_K = -12       # mV
E_l = 10.613    # mV

Vm = zeros(len(time))  # membrane potential trace (mV)
Vm[0] = V_rest         # set the initial value only, not the whole array
m = m_inf(V_rest)
h = h_inf(V_rest)
n = n_inf(V_rest)

## Stimulus: 10 uA/cm2 pulse from t = 5 to t = 30 msec
I = zeros(len(time))
for i, t in enumerate(time):
    if 5 <= t <= 30: I[i] = 10  # uA/cm2

## Simulate Model
for i in range(1, len(time)):
    g_Na = gbar_Na*(m**3)*h
    g_K = gbar_K*(n**4)
    g_l = gbar_l

    m += (alpha_m(Vm[i-1])*(1 - m) - beta_m(Vm[i-1])*m) * dt
    h += (alpha_h(Vm[i-1])*(1 - h) - beta_h(Vm[i-1])*h) * dt
    n += (alpha_n(Vm[i-1])*(1 - n) - beta_n(Vm[i-1])*n) * dt

    Vm[i] = Vm[i-1] + (I[i-1] - g_Na*(Vm[i-1] - E_Na) - g_K*(Vm[i-1] - E_K) - g_l*(Vm[i-1] - E_l)) / Cm * dt

## Plot membrane potential trace
figure()
plot(time, Vm, time, -30 + I)
xlabel('Time (msec)')
ylabel('Membrane Potential (mV)')
show()
As before, my goal is to provide fairly self-explanatory code with special emphasis placed on any tricks or features I may have employed.
Division from the future
There are fundamental differences between Python 2 and Python 3 that can make code that works in one version fail to execute in the other. Sometimes, however, the default behavior of Python 3 is desirable in Python 2. For these cases, the designers of Python provide the __future__ module. A common use is the more intuitive integer division of Python 3 (and MATLAB), as opposed to the C-style truncating division found in Python 2. Simply put, where 3/4 ==> 0.75 in Python 3, 3/4 ==> 0 in Python 2. While this only applies when both operands are integers (if either is a floating-point number, division behaves as expected), it can have catastrophic effects on any numerical evaluation that happens to divide two integers. Adding from __future__ import division to a Python 2 script ensures truncation errors from division won't leave you wondering why your simulation isn't working.
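A quick demonstration of the difference (run under Python 3 semantics; the import is a no-op there but changes / in Python 2):

```python
from __future__ import division  # true division in Python 2; harmless in Python 3

print(3 / 4)   # 0.75 -- true division, even with two integer operands
print(3 // 4)  # 0 -- floor division, for when truncation is actually wanted
```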
lambda and vectorize
A lambda is an anonymous function. In Python (as in MATLAB), lambdas provide a compact way to define one-line methods or throw-away functions (e.g. a comparison key for sorting a sequence). When a NumPy array is passed into a lambda built from NumPy-aware operations, the function is automatically applied to every element, since NumPy's arithmetic operators and mathematical functions operate elementwise.
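A minimal illustration using beta_n from the script above, which is built only from elementwise NumPy operations and therefore accepts a scalar or an entire array:

```python
import numpy as np

# One-line anonymous function; np.exp and the arithmetic operators all
# broadcast elementwise, so the same lambda works on scalars and arrays.
beta_n = lambda v: 0.125 * np.exp(-v / 80)

print(beta_n(0.0))                          # a single rate value: 0.125
print(beta_n(np.array([0.0, 40.0, 80.0])))  # three rate values at once
```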
A drawback of using lambda functions is that when presenting them with an array, all elements are evaluated as a whole, so flow control isn't easy to manage if you need to handle special cases. This happens with the alpha_n and alpha_m functions, where the raw expressions can evaluate to 0/0 (at v = 10 and v = 25, respectively). These removable discontinuities are resolved using L'Hopital's rule, but the pre-calculated limits then need to be substituted in place of the raw equations. NumPy provides a solution to this called vectorize(pyfunc), which applies the given function pyfunc to every element of the provided array individually, so an ordinary conditional can catch the special case. This is similar to the map function found in Perl or the arrayfun function in MATLAB.
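Here is the idea in isolation, using the alpha_n rate function from the script:

```python
import numpy as np

# np.vectorize calls the scalar lambda once per element, so the ordinary
# conditional can substitute the L'Hopital limit (0.1) at the v == 10
# singularity, where the raw expression would evaluate to 0/0.
alpha_n = np.vectorize(
    lambda v: 0.01*(-v + 10)/(np.exp((-v + 10)/10) - 1) if v != 10 else 0.1)

print(alpha_n(np.array([0.0, 10.0, 20.0])))  # middle entry is the patched 0.1
```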
A nice feature of Python is the ability to chain comparison operators together. This means that if I want to check whether a value lies between two numbers, I can simply write the boolean expression a <= value <= b, and Python will evaluate it to True if and only if value falls in [a, b]. While a minor point, this feels more natural than the form most languages require (e.g. a <= value && value <= b in C-style languages). I use it here when constructing the input stimulus.
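In miniature, this is the same test the stimulus loop performs for each time point:

```python
# Chained comparisons carry an implicit 'and' between neighbors.
t = 12.5
in_pulse = 5 <= t <= 30   # equivalent to (5 <= t) and (t <= 30)
print(in_pulse)  # True
```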
Here I presented an implementation of the Hodgkin-Huxley model neuron as described in the seminal 1952 paper. Unlike the LIF neuron, it is grounded in observed experimental data, but achieving this level of accuracy requires several more equations and parameters, resulting in a substantial increase in complexity and computational cost. Several models have since been proposed that attempt to maintain the expressive power of Hodgkin-Huxley while reducing the computational and complexity burden, such as FitzHugh-Nagumo and Izhikevich's quadratic integrate-and-fire, the latter of which will be the topic of my next post.