# Description of problem

I am trying to implement the simple STDP rule with a boundary function between two neurons. The rule is as follows:
\frac{dw_{i,j}}{dt}=pF(w_{max}-w_{i,j})\exp(-\frac{t-t_{i}^{last}}{\tau_{p}})S_{j}(t)-dF(w_{i,j})\exp(-\frac{t-t_{j}^{last}}{\tau_{p}})S_{i}(t)

Here w_{i,j} denotes the synapse from the ith to the jth neuron. The function F(x)=\tanh(x/\mu) is meant to limit the unbounded growth of the synaptic weight. The functions S_{j}(t)=\sum_{n}\delta(t-t_{n}^{(j)}) are sums of delta functions at the spike times of the jth neuron.
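To get a feeling for how this soft bound behaves, here is a minimal numpy sketch (not Brian code; the value mu = 0.02 is taken from the `muu` parameter in the code below). F rises steeply from 0 and saturates at 1, so depression vanishes as w approaches 0 and potentiation vanishes as w approaches w_max:

```python
import numpy as np

mu = 0.02  # same value as `muu` in the simulation code below

def F(x, mu=mu):
    """Soft-bound factor F(x) = tanh(x/mu)."""
    return np.tanh(x / mu)

print(F(0.0))   # exactly 0 -> no depression at w = 0
print(F(0.5))   # ~1 -> essentially weight-independent far from the bound
```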

I had a hard time understanding the meaning of the ‘on_pre=’ and ‘on_post=’ statements in the definition of the synapses. Below, I write down my thoughts and a simple program to implement synapses defined by the equation above. Maybe somebody can confirm or correct it.

For the synapse w_{i,j}, the presynaptic neuron is the ith and the postsynaptic neuron is the jth. So at the spike moment of the ith neuron, i.e. the moment of a presynaptic spike, w_{i,j} should decrease. In other words, all synapses that go out from the spiking neuron should be weakened. Therefore, in the ‘on_pre=’ statement, we should write ‘w = w - d..’.

At the spike of the postsynaptic jth neuron, the value of w_{i,j} should increase. Therefore, in the ‘on_post=’ statement, we should write ‘w = w + p..’.
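The link between the exponential factors in the equation and the `apre`/`apost` trace variables in the Brian code below is that a trace obeying da/dt = -a/\tau, reset to p at each spike, equals p\exp(-(t-t^{last})/\tau) between spikes. A minimal numpy sketch checking this equivalence (Euler integration, illustrative values only, not Brian code):

```python
import numpy as np

tau = 4e-3    # trace time constant in seconds (cf. taupre)
p = 0.001     # value the trace is reset to at a spike
dt = 1e-5     # Euler step
t_last = 0.0  # the spike occurs at t = 0

# Integrate da/dt = -a/tau with a(t_last) = p ...
ts = np.arange(0.0, 10e-3, dt)
a = np.empty_like(ts)
a[0] = p
for k in range(1, len(ts)):
    a[k] = a[k-1] - dt * a[k-1] / tau

# ... and compare with the closed form p*exp(-(t - t_last)/tau)
closed = p * np.exp(-(ts - t_last) / tau)
print(np.max(np.abs(a - closed)))  # small Euler discretization error
```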

# Minimal code to reproduce problem

from brian2 import *
import numpy as np
%matplotlib widget

start_scope()

tau = 10*ms
g = 0.05
taupre = 4*ms
taupost = 2*ms
wmax = 1
p = 0.001
d = p

muu = 0.02

eqs = '''
dv/dt = (a-v)/tau : 1
a :1
'''

G = NeuronGroup(2, eqs, threshold='v>1', reset='v = 0', method='exact')
G.a = [2, 1.8]

M = StateMonitor(G, 'v', record=[0,1])

Mspk = SpikeMonitor(G)

S = Synapses(G, G,
             '''
             w : 1
             dapre/dt = -apre/taupre : 1 (event-driven)
             dapost/dt = -apost/taupost : 1 (event-driven)
             ''',
             on_pre='''
             v_post += g*w
             apre = p
             w -= tanh(w/muu)*apost
             ''',
             on_post='''
             apost = d
             w += tanh((wmax-w)/muu)*apre
             ''')
S.connect('i!=j')
S.w[0,1]=0.09
S.w[1,0]=0.9

Mw = StateMonitor(S, 'w', record=True)

run(10000*ms)

figure(2).clear()
step(Mw.t/ms, Mw[S[0,1]].w.reshape(-1,1))
step(Mw.t/ms, Mw[S[1,0]].w.reshape(-1,1))
legend(['$w_{01}$','$w_{10}$'])
show()



Hi Irmantas,

I’m not much of an expert on STDP, but there’s an example implementation of STDP in brian2 here: Example: STDP (based on “Competitive Hebbian learning through spike-timing-dependent synaptic plasticity” and “Cortical development and remapping through spike timing-dependent plasticity”).

It does seem to have the same basic structure as your code (especially with regard to the use of on_pre and on_post)! I’ll leave it to someone more familiar with STDP to go over the details if needed.

To me, this looks like a flawless translation of the equation into Brian syntax (there’s a small difference: you use the same time constant \tau_p for the pre- and postsynaptic traces in the equation, but two separate constants taupre and taupost in the code – but this might just be a typo).

As a general technique, I’d strongly recommend testing these kinds of models in simplified situations, where you can compare the results to your expectations. E.g., you can let the first neuron spike regularly, let the second neuron emit only a single spike at 0ms, and have only a connection from the first to the second neuron:

G.a = [10, 0]
G.v = [0, 1.1]
# ...
S.connect(i=0, j=1)
start_w = 0.02
S.w[0, 1] = start_w


You’d expect that this synapse will get continuously weaker (the first term of the equation is 0, since there are no further postsynaptic spikes), and it should change less and less, since the time of the last postsynaptic spike t^{last}_j = 0\mathrm{ms} lies further and further in the past. If you run this for a short time (run(20*ms)) and plot the components of the equation individually, comparing their combination to the recorded change in the weights, you get a perfect match:

Plotting code
plt.rcParams.update({'axes.spines.top': False, 'axes.spines.right': False})
fig, axs = plt.subplots(4, 1, sharex=True)
spikes = Mspk.spike_trains()[0]  # presynaptic spikes of neuron 0
axs[0].plot(spikes/ms, np.zeros(len(spikes)), 'x')
axs[0].axis('off')
trace = d*np.exp(-spikes/taupost)
axs[1].plot(spikes/ms, trace, 'o', label=r'$d\exp(-\frac{t - t^{last}_j}{\tau_{post}})$')
axs[1].legend()
spike_indices = np.array(np.round(spikes/defaultclock.dt), dtype=int)
F = np.tanh(Mw.w[0][spike_indices]/muu)
axs[2].plot(spikes/ms, F, 'o', label=r'$\tanh(\frac{w_{ij}}{\mu})$')
axs[2].legend()
axs[3].step(np.hstack([0, spikes/ms]),
            np.cumsum(np.hstack([start_w, -F*trace])), 'o-', where='post', label=r'expected $w$')
axs[3].step(Mw.t/ms, Mw.w[0], label=r'actual $w$')
axs[3].legend()
plt.show()

The same works for the facilitation induced by:

G.a=[0, 10]  # second neuron spikes continuously
G.v=[1.1, 0]  # first neuron emits a single spike
# ...
start_w = 0.98

Plotting code
fig, axs = plt.subplots(4, 1, sharex=True)
plt.rcParams.update({'axes.spines.top': False, 'axes.spines.right': False})
spikes = Mspk.spike_trains()[1]  # postsynaptic spikes of neuron 1
axs[0].plot(spikes/ms, np.zeros(len(spikes)), 'x')
axs[0].axis('off')
trace = p*np.exp(-spikes/taupre)
axs[1].plot(spikes/ms, trace, 'o', label=r'$p\exp(-\frac{t - t^{last}_i}{\tau_{pre}})$')
axs[1].legend()
spike_indices = np.array(np.round(spikes/defaultclock.dt), dtype=int)
F = np.tanh((wmax - Mw.w[0][spike_indices])/muu)
axs[2].plot(spikes/ms, F, 'o', label=r'$\tanh(\frac{w_{max} - w_{ij}}{\mu})$')
axs[2].legend()
axs[3].step(np.hstack([0, spikes/ms]),
            np.cumsum(np.hstack([start_w, F*trace])), 'o-', where='post', label=r'expected $w$')
axs[3].step(Mw.t/ms, Mw.w[0], label=r'actual $w$')
axs[3].legend()
plt.show()

Thank you for the replies and the testing tip. Now the situation is much clearer.

Yes, there is a typo in the first formula. The time constant in the second exponential should be different, i.e., \tau_d.


Glad to hear that it helped. If you don’t need any further help with this, please mark the question as answered by selecting the three dots … under an answer and choosing “Solution”. Thanks!