Hello, everybody!
Can anybody help me understand how to implement this STDP-like rule in Brian?
w = Aplus*th[((-deltat - aplus)/tauplus) + 1] + bplus,   deltat >= 0
w = Aminus*th[((deltat - aminus)/tauminus) + 1] + bminus,   deltat < 0
Hi @serge25, could you give some more information about where these equations come from? They are not in Brian syntax, and I am not quite clear what e.g. th means.
Hi @mstimberg. This is not from anywhere in particular; it is just my own attempt at describing this dependence. th is the hyperbolic tangent.
Even more reason to give details! I guess that deltat is the time difference between the pre- and post-synaptic spikes, but I'm having trouble following the rest: there are two different equations for w, so do you mean to apply one for deltat < 0 and one for deltat >= 0? The equations are given as w = ..., but I guess you rather mean something like w += ... to update the weight?
All that said, the following should be helpful: if the pre- and post-synaptic neurons use refractoriness, then they automatically provide a lastspike variable that you can use. If not, you can simply add lastspike : second to their equations, and lastspike = t to their reset statement. Then you can use something like this in on_pre:
deltat = lastspike_post - t
and this in on_post:
deltat = t - lastspike_pre
You can then update your weight as a function of deltat, the difference between the pre- and post-synaptic spike times (i.e. negative when post spikes before pre). Note that if you want to add something under a certain condition, rather use an expression like int(deltat >= 0) instead of deltat >= 0, so as not to confuse Brian's/sympy's equation parsing. Finally, the hyperbolic tangent function is available as tanh.
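To make the recipe above concrete, here is a minimal plain-Python sketch (deliberately not Brian code, so it runs standalone) of the weight update as a function of deltat, using tanh and the int(...) trick described above. All parameter values below are arbitrary placeholders for illustration, not values from this thread:

```python
import math

# Hypothetical parameter values, purely for illustration; the names follow
# the equations in the question (Aplus, aplus, tauplus, bplus and the
# corresponding "minus" set).
Aplus, aplus, tauplus, bplus = 1.0, 2.0, 20.0, 0.1
Aminus, aminus, tauminus, bminus = -1.0, 2.0, 20.0, -0.1

def dw(deltat):
    """Weight change for a spike-time difference deltat (post minus pre).

    The int(...) factors act as the deltat >= 0 / deltat < 0 conditions,
    exactly as suggested for Brian's string expressions: precisely one
    branch is active for any deltat.
    """
    plus = Aplus * math.tanh((-deltat - aplus) / tauplus + 1) + bplus
    minus = Aminus * math.tanh((deltat - aminus) / tauminus + 1) + bminus
    return plus * int(deltat >= 0) + minus * int(deltat < 0)

print(dw(5.0))   # "plus" branch (pre before post)
print(dw(-5.0))  # "minus" branch (post before pre)
```

In actual Brian code the same expression would go into the on_pre/on_post statements as a string (e.g. "w += ..."), with deltat computed from lastspike as shown above.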
@mstimberg, thank you very much for your answer.