The Izhikevich neuron model is a 2D system of ODEs: one equation for the membrane potential v and one for the recovery variable u. In its definition, both v and u have reset values. From what I have done in the past, and what I could find, NeuronGroup seems to have only a single `reset` argument, which appears to allow only one reset value when initialising neurons. Does anyone know how I can add a reset value for both v and u? Thanks.

Hi Iann.

The short answer is that both reset operations can be expressed within a single reset string:

```
reset_eq = 'v = c; u += d;'
N = NeuronGroup(..., reset = reset_eq)
```
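For intuition, here is a minimal pure-Python sketch (outside of Brian) of what that reset string means each time the threshold is crossed; the dimensionless regular-spiking parameter values below are the standard ones from Izhikevich's 2003 paper, not something taken from your model.

```python
# Dimensionless Izhikevich model with forward Euler; on a spike,
# BOTH variables are updated: v is set to c, and d is ADDED to u.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
I, dt = 10.0, 0.1                    # input current, time step (ms)

v, u = c, b * c
spikes = 0
for step in range(int(200 / dt)):    # 200 ms of simulated time
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30:                      # same condition as the threshold string
        v = c                        # 'v = c'
        u += d                       # 'u += d'
        spikes += 1
```

The `reset` string in Brian plays exactly the role of the two statements inside the `if` block: any number of semicolon-separated statements run together when the threshold condition fires.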

The long answer is that there are some subtle details when it comes to implementing the Izhikevich model; you can see some recent discussion in this thread:

A question about the refractory period: does the recovery variable u account for the refractory period, or would you need to include it in your NeuronGroup definition?

Thanks!

In order to simulate a "hard" refractory period, I believe you would do something like:

```
# I don't think the original paper uses a refractory period
refractory_period = 0.1*ms

# The model
neuron_eqs = Equations('''
dv/dt = (0.04*v**2 + 5.*v + 140 - u + g_exc - g_inh + noise*randn())/ms : 1 (unless refractory)
du/dt = a*(b*v - u)/ms : 1 (unless refractory)
g_exc : 1
g_inh : 1
noise : 1
a : 1
b : 1
c : 1
d : 1
''')

reset_eq = 'v = c; u += d; g_exc = 0; g_inh = 0'

N = NeuronGroup(n, model=neuron_eqs,
                threshold='v >= 30',
                reset=reset_eq,
                refractory=refractory_period,
                method='euler')
```

But I believe the original Izhikevich model doesn't include a hard refractory period, and instead captures a "soft" refractory period with the recovery variable u.

Hey Iann,

I ended up working on a version of the Izhikevich equations with physical dimensions that may be useful to you here:

(it doesn't use a hard refractory period)
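In case it helps to see the shape of that dimensional formulation, here is a rough pure-Python sketch (not Brian code) of the form from Izhikevich's 2007 book, C·dv/dt = k(v−vr)(v−vt) − u + I, du/dt = a(b(v−vr) − u). The regular-spiking parameter values and the input current below are my own assumptions and worth double-checking against the book.

```python
# Dimensional Izhikevich model, forward Euler.
# Assumed units: mV, ms, pA, pF, nS.
C, k = 100.0, 0.7        # membrane capacitance (pF), gain (nS/mV)
vr, vt = -60.0, -40.0    # resting and instantaneous threshold potentials (mV)
a, b = 0.03, -2.0        # recovery rate (1/ms) and sensitivity (nS)
c, d = -50.0, 100.0      # reset potential (mV) and reset increment (pA)
v_peak = 35.0            # spike cutoff (mV)
I, dt = 200.0, 0.25      # input current (pA), time step (ms)

v, u = vr, 0.0
spike_times = []
for step in range(int(500 / dt)):      # 500 ms of simulated time
    v += dt * (k * (v - vr) * (v - vt) - u + I) / C
    u += dt * a * (b * (v - vr) - u)
    if v >= v_peak:
        v, u = c, u + d                # reset both variables
        spike_times.append(step * dt)
```

The main practical advantage of the dimensional form is that the constants carry physical units, so they can be matched to measurable quantities (capacitance, rheobase, etc.) rather than being abstract fit parameters.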

Hey Adam,

Thanks so much for your help; that last comment is very applicable to what I am working on. As a side question (which I thought you may potentially be able to help with): in one paper, Izhikevich compares different models, i.e. his own, LIF, Hodgkin-Huxley, etc. Within this comparison, one measured parameter was an approximate number of FLOPS (floating-point operations). Would you have an idea of how to implement such a measurement within these sorts of models?

Thanks again

So FLOPS are one measure of the complexity / cost of running a certain bit of code.

One way to estimate the number of FLOPS used is to add up the number of operations it takes to compute each subpart of the code (how many FLOPS for a multiplication, how many for an addition, etc.). However, this quickly becomes difficult to track for complex codebases.
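As a toy illustration of that hand-counting approach, here is my own tally for one dimensionless forward-Euler Izhikevich step, with the dt multiplications folded into the constants. The exact numbers depend entirely on how you choose to count, and won't necessarily match the figures quoted in the paper.

```python
# Hand-counted FLOPS for one Euler step of
#   v += 0.04*v*v + 5*v + 140 - u + I
#   u += a*(b*v - u)
# (dt = 1 ms assumed, so the dt multiplications are folded in).
FLOPS = {
    "v_update": {"mul": 3, "add": 5},  # 0.04*v*v, 5*v; four sums plus v += dv
    "u_update": {"mul": 2, "add": 2},  # b*v, a*(...); b*v - u plus u += du
}
total = sum(count for part in FLOPS.values() for count in part.values())
```

Multiplying `total` by the number of steps per simulated second then gives a rough FLOPS-per-second estimate in the spirit of the paper's comparison.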

It looks like there are some packages for automatically tracking the number of operations required when running a piece of Python code. I would start with Counting FLOPS and other CPU counters in Python, but I haven't tried any of these packages personally.

Just as a side remark about the FLOP calculations in Izhikevich's paper: these are *very* approximate. For example, it simply postulates that you simulate a leaky integrate-and-fire model (and the "Izhikevich model" itself) with a time step of 1 ms, but e.g. the Hodgkin-Huxley model with a time step of 0.1 ms, so that all its FLOP counts are multiplied by 10. There's at least one paper (IMHO also quite flawed in its method, though) claiming that if you control for accuracy, the advantage of the Izhikevich model disappears.
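To make that scaling explicit, a quick back-of-the-envelope calculation (the per-step FLOP counts below are hypothetical placeholders, not the paper's actual figures):

```python
# Cost over a fixed amount of simulated time scales with 1/dt:
# a 10x smaller time step means 10x more steps per simulated ms.
flops_per_step = {"izhikevich": 13, "hodgkin_huxley": 120}  # placeholders
dt_ms = {"izhikevich": 1.0, "hodgkin_huxley": 0.1}          # assumed steps

cost_per_ms = {name: flops_per_step[name] / dt_ms[name] for name in dt_ms}
# The HH model pays its per-step cost ten times per simulated millisecond.
```

So any comparison of this kind is only as meaningful as the choice of time step, which is exactly the assumption the accuracy-controlled follow-up work questioned.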

Hey Adam,

Any idea why it would be the case that when I run your code for a "chattering" neuron, I get spikes which do not all reach the same membrane potential? Here is a picture of what I get when I run that piece of code.

I suspect this is just an issue of either `defaultclock.dt`, or the `dt` used by the state monitor, not quite being small enough. I suggest printing `defaultclock.dt` to see what it was (I think it was `0.1*ms` for me), and then making sure your `StateMon.t` is at a similarly dense resolution. The default, if you pass no `dt` argument to the StateMonitor, should be to use the simulation timestep.
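To see why a coarse recording resolution clips the peaks, here is a small Brian-free sketch: the same dimensionless Izhikevich simulation recorded at every step versus every 10th step (all parameter values here are illustrative assumptions, and the coarse slicing stands in for a StateMonitor with a larger `dt`).

```python
# Dimensionless Izhikevich neuron (regular spiking), forward Euler.
# v is clamped to the 30 mV cutoff on the spiking step before recording.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
I, dt = 10.0, 0.1                  # input current, time step (ms)

v, u = c, b * c
v_full = []
for step in range(int(200 / dt)):
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30:
        v_full.append(30.0)        # record the clamped peak, then reset
        v = c
        u += d
    else:
        v_full.append(v)

v_coarse = v_full[::10]            # like recording with a 10x larger dt
# Every peak in v_full is exactly 30.0, but the coarse samples usually
# miss the single time step on which the peak occurs, so the recorded
# spike heights appear to vary.
```

The neuron itself reaches the cutoff on every spike; only the recording makes the amplitudes look unequal, which matches what your plot shows.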