# Implementation of the Izhikevich neuron model

The Izhikevich neuron model is a 2D system of ODEs: one equation for the membrane potential v and one for a recovery variable u. In its definition, both v and u are updated when the neuron spikes. From what I have done in the past, and what I could find, NeuronGroup seems to take only a single reset argument. Does anyone know how I can specify reset behaviour for both v and u? Thanks.

Hi Iann.
The short answer is that both reset operations can be expressed within a single string:

reset_eq = 'v = c; u += d'
N = NeuronGroup(..., reset=reset_eq)


The long answer is that there are some unclear details when it comes to implementing the Izhikevich model; you can see some recent discussion in this thread:


Question about the refractory period: does the recovery variable u account for it, or would you need to include it in your NeuronGroup?
Thanks!

In order to simulate a “hard” refractory period, I believe you would do something like:

# I don't think the original paper uses a refractory period
refractory_period = 0.1*ms

# The model
neuron_eqs = Equations('''
dv/dt = (0.04*v**2 + 5*v + 140 - u + g_exc - g_inh + noise*randn())/ms : 1 (unless refractory)
du/dt = a*(b*v - u)/ms : 1 (unless refractory)
g_exc : 1
g_inh : 1
noise : 1
a : 1
b : 1
c : 1
d : 1
''')

reset_eq = 'v = c; u += d; g_exc = 0; g_inh = 0'

N = NeuronGroup(n, model=neuron_eqs,
                threshold='v >= 30',
                reset=reset_eq,
                refractory=refractory_period,
                method='euler')


But I believe the original Izhikevich model doesn’t include a hard refractory period, and instead captures a “soft” refractory period with the recovery variable u.
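To see that “soft” refractoriness in action without Brian2, here is a minimal plain-Python Euler sketch of the dimensionless model. The regular-spiking parameters a=0.02, b=0.2, c=-65, d=8 are the standard values from Izhikevich (2003); the constant input I = 10 and the 200 ms duration are my own choices for illustration:

```python
# Minimal Euler integration of the dimensionless Izhikevich model.
# a, b, c, d: standard regular-spiking values (Izhikevich, 2003).
# I = 10 is an arbitrary constant input chosen for illustration.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
I = 10.0
dt = 0.1            # time step in ms
v, u = c, b * c     # start at the resting/reset potential

spike_times = []
for step in range(int(200 / dt)):       # simulate 200 ms
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * (a * (b * v - u))
    if v >= 30:                         # cut-off, not a real threshold
        spike_times.append(step * dt)
        v = c                           # reset the membrane potential...
        u += d                          # ...and bump the recovery variable

print(len(spike_times), "spikes")
```

After each spike u is larger, which pulls dv/dt down and delays the next spike; no explicit refractory clamp is needed for that effect.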


Hey Iann,
I ended up working on a version of the Izhikevich equations with physical dimensions (units) that may be useful to you here:

(it doesn’t use a hard refractory period)


Thanks so much for your help; that last comment is very applicable to what I am working on. As a side question (which I thought you might be able to help with): in one paper, Izhikevich compares different models, e.g. his own, LIF, Hodgkin-Huxley, etc. One of the quantities measured in this comparison was an approximate number of FLOPs (floating-point operations). Would you have an idea of how to implement such a measurement within these sorts of models?

Thanks again

So FLOPs are one measure of the complexity / cost of running a certain bit of code.
One way to estimate the number of FLOPs is to add up the number of operations it takes to compute each subpart of the code (how many for each multiplication, how many for each addition, etc.). However, this quickly becomes difficult to track for complex codebases.

It looks like there are some packages for automatically tracking the number of operations required when running a piece of Python code. I would start with Counting FLOPS and other CPU counters in Python, but I haven’t tried any of these packages personally.
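As a rough illustration of the hand-counting approach, here is my own term-by-term tally for one Euler update of the two model equations (for comparison, Izhikevich’s 2004 paper quotes 13 FLOPs per 1 ms update, counted slightly differently):

```python
# Hand-count of floating-point operations for one Euler step of the
# Izhikevich model (my own tally for illustration).
# dv step: v += dt*(0.04*v*v + 5*v + 140 - u + I)
flops_dv = 4 + 5   # 4 multiplications, 5 additions/subtractions (incl. +=)
# du step: u += dt*(a*(b*v - u))
flops_du = 3 + 2   # 3 multiplications, 2 additions/subtractions (incl. +=)
flops_per_step = flops_dv + flops_du   # 14 by this count

# At a 1 ms time step, one simulated second of one neuron costs:
steps_per_second = 1000
total = flops_per_step * steps_per_second
print(flops_per_step, "FLOPs/step,", total, "FLOPs per simulated second")
```

This is the per-step bookkeeping the automatic counters do for you; the hard part in practice is tracking it through threshold tests, resets, and synaptic code.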


Just as a side remark about the FLOP calculations in Izhikevich’s paper: these are very approximate. For example, it simply postulates that you simulate a leaky integrate-and-fire model (and the “Izhikevich model” itself) with a time step of 1 ms, but the Hodgkin-Huxley model with a time step of 0.1 ms, so that all of the latter’s FLOP counts are multiplied by 10. There’s at least one paper (IMHO also quite flawed in its method, though) claiming that if you control for accuracy, the advantage of the Izhikevich model disappears.


Any idea why it would be the case that, when I run your code for a ‘chattering’ neuron, I get spikes which do not all reach the same membrane potential? Here is a picture of what I get when I run that piece of code.

I suspect this is just an issue of either defaultclock.dt or the dt used by the StateMonitor not quite being small enough.

• I suggest printing defaultclock.dt to see what it was (I think it was 0.1*ms for me)
• and then making sure your StateMonitor records at a similarly dense resolution; if you pass no dt argument to the StateMonitor, the default is to use the simulation time step
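You can check the dt effect directly with a throwaway Euler loop (plain Python, no Brian2; the parameters a, b, c, d are the standard regular-spiking values, and the constant input I = 10 is my own choice). The coarser the step, the further v overshoots past the 30 cut-off before the reset catches it, which is exactly the unequal spike heights in your plot:

```python
def first_overshoot(dt, a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0):
    """Integrate until v first reaches the 30 cut-off; return v - 30 there."""
    v, u = c, b * c
    for _ in range(int(1000 / dt)):   # at most 1000 ms of simulated time
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
        u += dt * (a * (b * v - u))
        if v >= 30:
            return v - 30.0
    return None

coarse = first_overshoot(dt=1.0)    # large step: big jump past the cut-off
fine = first_overshoot(dt=0.01)     # small step: lands much closer to 30
print(coarse, fine)
```

With the fine step, the overshoot is bounded by dt times dv/dt near the peak, so shrinking dt (and recording at the same resolution) makes the spike peaks line up.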

Why is the threshold for these models 30 mV?

Hi @lann591. In these kinds of models, you shouldn’t think of the threshold parameter as an actual threshold in the sense of a leaky integrate-and-fire neuron. Instead, it is rather a cut-off parameter that defines the maximum of the membrane potential – otherwise it would go to infinity. The quantity that you’d call a threshold in an LIF neuron model, or in an experiment, is rather the “point of no return”: the value of the membrane potential which inevitably triggers a spike due to the positive feedback loop. This value is somewhere around -50 mV in the model you used for the plot.
In the Izhikevich model, the membrane potential grows extremely fast during a spike. It therefore makes a large leap over the threshold within a single time step (as @adam suggested, this effect gets smaller with smaller simulation time steps).

In your plot, the peaks of the spikes all seem to be below the threshold. This is because a StateMonitor records the membrane potential at the beginning of a time step, before the membrane potential gets updated, compared to the threshold, and then reset. To see the spike go above the threshold, you’d have to change the when argument of the StateMonitor to 'after_thresholds' (see this blog post for more details: Getting the timing right (scheduling 1) | The Brian spiking neural network simulator).

To get a rough estimate of the change of the membrane potential at the peak of a spike, you can look at \frac{\mathrm{d}v}{\mathrm{d}t} around 30 mV, ignoring the variable u and the applied current for simplicity: \frac{\mathrm{d}v}{\mathrm{d}t} = (C_1 v^2 + C_2 v + C_3)/\mathrm{ms}. With the standard parameters, this gives \frac{\mathrm{d}v}{\mathrm{d}t} = 326\,\frac{\mathrm{mV}}{\mathrm{ms}}, i.e. even with a time step of 0.1 ms you can expect jumps on the order of tens of mV within a single time step (it’s not that bad in practice because u pulls v the other way, but it gives an order of magnitude).

If you don’t want this in your plots, you could e.g. add a run_regularly('v = clip(v, -inf*mV, 30*mV)', when='after_thresholds'). This is what Izhikevich does in his code (quote from Izhikevich (2004)):

> After the spike reaches its apex (+30 mV), the membrane voltage and the recovery variable are reset according to the (3). If v skips over 30, then it first is reset to 30, and then to c so that all spikes have equal magnitudes.
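Checking that back-of-the-envelope number in plain Python (standard dimensionless coefficients 0.04, 5, 140; u and the input current ignored, as above):

```python
# Slope of the membrane potential at the 30 (mV) cut-off, ignoring u and I:
# dv/dt = 0.04*v**2 + 5*v + 140   (per ms, dimensionless form)
v = 30.0
dvdt = 0.04 * v**2 + 5 * v + 140
print(dvdt)  # 326.0, i.e. ~326 mV/ms -> a 0.1 ms step gives a ~33 mV jump
```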

Hope that makes things clearer.


Yeah, thank you for that @mstimberg, very helpful!
Can I also ask about the spiking profile of regular spiking neurons? In the Izhikevich paper detailing these models, it seems like the ISI is smaller when the input current starts than after the first few spikes. Why is this, and how would I go about recreating it? Thanks!

The first ISI is smaller because the adaptation current u has not yet settled into its steady-state behaviour. In @adam’s plot this is not that clear, because the neuron is not completely at rest when the stimulation starts. If you change the start of the stimulation to 150 ms and simulate everything for 330 ms, you get something that closely resembles Izhikevich’s plot. Here I additionally plot the adaptation current u:


This is a good point: for several of the examples (including the intrinsically bursting neuron), the variables needed time to reach steady state in order to reproduce the figures.
This can be changed on the line:

I_app_start = 20*ms  # change to 150*ms to let the variables settle first


For the public example, I might compute steady-state values of u and v to initialize with.
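A sketch of that initialization (plain Python; b = 0.2 is an assumed example value): setting dv/dt = du/dt = 0 with no input current gives u = b*v and 0.04*v**2 + (5 - b)*v + 140 = 0, and the more negative root of that quadratic is the stable resting state:

```python
import math

# Fixed points of the dimensionless Izhikevich model (no input current):
#   dv/dt = 0  ->  0.04*v**2 + 5*v + 140 - u = 0
#   du/dt = 0  ->  u = b*v
# Substituting the second into the first:
#   0.04*v**2 + (5 - b)*v + 140 = 0
b = 0.2             # example value; use your neuron's b

A, B, C = 0.04, 5 - b, 140
disc = B * B - 4 * A * C
v_rest = (-B - math.sqrt(disc)) / (2 * A)    # stable fixed point
v_thresh = (-B + math.sqrt(disc)) / (2 * A)  # unstable "point of no return"
u_rest = b * v_rest

print(v_rest, v_thresh, u_rest)  # approx. -70, -50, -14 for b = 0.2
```

Initializing with v = v_rest and u = u_rest starts the neuron exactly at rest; note the unstable root (about -50 for b = 0.2) is the effective threshold @mstimberg described above.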


Ah okay I see, thank you.
When determining the parameters used for different neurons. I found a 2010 paper by Kumar et al. Which details two optimisation problems (one constrained, the other unconstrained). The problems are based on experimental ISI data. What I want to do is use these optimisation problems to derive optimal parameters for the neurons I am trying to model, where the ISI data I want to use is from another model (someone else in my departments spiking neural network for her PhD).
I will link the paper here of the optimisation problems: (PDF) Optimal parameter estimation of the Izhikevich single neuron model using experimental Inter-Spike Interval (ISI) data
Has anyone had any experience solving these problems to find optimal parameters for their models? How effective was this approach in finding parameters which gave meaningful results, and would anyone recommend another technique or have any other advice?
Thanks!