Implementing Izhikevich 2006 in Brian2

Hi @rrm . As I mentioned in a comment in another post (Realization the model of network in Brain 2 - #16 by mstimberg), several of the Izhikevich papers from the early 2000s use somewhat sloppy numerics (such as the state update rule in which u is updated based on an already-updated v). Unfortunately, it seems some of the results can only be reproduced by using the same numerics, which calls their general validity into question. For example, reducing the simulation time step should give qualitatively the same results, but this does not seem to be the case here. This paper by Pauli et al. analyzes this model in impressive detail; it is well worth a read if you are trying to reproduce it.

Regarding the Brian 1 code: it uses quite a lot of Brian 1 internals that no longer apply to Brian 2, but from a cursory look I’d say that you translated most of it correctly. Due to the update of u with the already updated value of v, you cannot use the standard StateUpdater mechanism, though, unless you use a complicated workaround with two independent NeuronGroups and linked variables. Instead, I’d suggest a more “manual” approach that exactly reproduces the approach in the paper: instead of defining v and u as differential equations, you define them as simple parameters, and update them manually. I.e., in the equations, they are defined as v : volt and u : volt/second. The actual state update then happens with a run_regularly function:

G.run_regularly('''
v += 0.5 * dt * ((0.04 / ms / mV) * v ** 2 + (5 / ms) * v + 140 * mV / ms - u + I)
v += 0.5 * dt * ((0.04 / ms / mV) * v ** 2 + (5 / ms) * v + 140 * mV / ms - u + I)
u += dt * a * (b * v - u)''', when='groups')
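To see why the update order matters, here is a minimal plain-Python sketch of the original numerics in dimensionless units (as in the 2003 paper): v advances in two half-steps of dt, and u is then updated with the already-updated v. The function name is mine, and the constants are the standard regular-spiking parameters:

```python
def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One Izhikevich-style step (dimensionless units, dt in ms).

    v is advanced in two half-steps; u is then updated with the
    *already-updated* v, mirroring the MATLAB code from the papers.
    """
    fired = v >= 30.0
    if fired:  # reset after a spike
        v, u = c, u + d
    v += 0.5 * dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    v += 0.5 * dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    return v, u, fired

# Drive a regular-spiking neuron with constant input for 1000 ms:
v, u, spikes = -65.0, -13.0, 0
for _ in range(1000):
    v, u, fired = izhikevich_step(v, u, I=10.0)
    spikes += fired
```

Note that this sketch, like the paper's code, is not a proper Euler (or midpoint) step: the two half-steps for v reuse u from the previous step, which is exactly the order the `run_regularly` string above reproduces.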

I am not 100% sure I correctly understood the use of the derivative of the weight matrix here, but I don’t think your code is doing this correctly. What I think it should do is something like this:

synapsesE = Synapses(Ge, G, '''
             w_e : volt/second  # the weight
             w_e_change : volt/second  # the "derivative"
             dapre/dt = -apre/taupre : volt/second (event-driven)
             dapost/dt = -apost/taupost : volt/second (event-driven)
             ''',
             on_pre='''
             I += w_e
             apre += Apre
             w_e_change = clip(w_e_change+apost, 0*mV/ms, wmax)
             ''',
             on_post='''
             apost += Apost
             w_e_change = clip(w_e_change+apre, 0*mV/ms, wmax)
             ''')
# Every second, apply the accumulated changes to the actual weights and decay the "derivative"
synapsesE.run_regularly('''w_e = clip(w_e + 0.01*mV/ms + w_e_change, 0*mV/ms, sm)
                           w_e_change = w_e_change*0.9''',
                           dt=1*second)

The idea is that w_e is used to propagate spikes (added to I), but the STDP acts on w_e_change. Only once per second (not every millisecond as in your code), the changes are applied to the actual weights w_e.
You shouldn’t need a network_operation here.
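The per-second maintenance step can be sketched in plain, dimensionless Python (not Brian code; I'm assuming the constants from the 2006 paper, i.e. sm = 10 for excitatory synapses, a +0.01 drift, and a 0.9 decay of the derivative):

```python
def apply_weight_update(s, sd, sm=10.0):
    """Once-per-second weight maintenance in the style of Izhikevich (2006):
    the STDP-accumulated derivative sd is folded into the weight s
    (clipped to [0, sm]), and sd then decays by a factor of 0.9."""
    s = min(max(s + 0.01 + sd, 0.0), sm)
    sd *= 0.9
    return s, sd

# During the second, STDP events only touch the derivative sd...
s, sd = 6.0, 0.0
sd += 0.1   # e.g. a pre-before-post pairing potentiates
sd -= 0.15  # a post-before-pre pairing depresses
# ...and the actual weight s only moves at the 1-second boundary:
s, sd = apply_weight_update(s, sd)
```

This is the same division of labour as in the Synapses code above: spike propagation reads the slow variable, plasticity writes the fast one.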

It’s a minor point, but since version 2.5, there’s actually a simple way to create 100 random connections per neuron. Instead of manually creating matrices, you can do:

synapsesE.connect(j='index for index in sample(N_post, 100)')
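For reference, the manual matrix-based approach this replaces boils down to drawing 100 distinct post-synaptic indices per source neuron. A plain-Python sketch (function name and seed are mine, not Brian code):

```python
import random

def random_targets(n_pre, n_post, k=100, seed=0):
    """For each of n_pre source neurons, draw k distinct post-synaptic
    targets; returns flat (i, j) index lists."""
    rng = random.Random(seed)
    i_idx, j_idx = [], []
    for i in range(n_pre):
        i_idx.extend([i] * k)
        j_idx.extend(rng.sample(range(n_post), k))
    return i_idx, j_idx

i_idx, j_idx = random_targets(n_pre=800, n_post=1000, k=100)
```

Index lists like these can be passed directly as `synapsesE.connect(i=i_idx, j=j_idx)`, but the generator-expression syntax above avoids building them in Python at all.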

Best,
Marcel