To me, this looks like a flawless translation of the equation into Brian syntax
(one small difference: the equation uses the same time constant \tau_p for both the pre- and postsynaptic traces, while the code has two separate constants taupre and taupost – but this might just be a typo).
As a general technique, I’d strongly recommend testing these kinds of models in simplified situations where you can compare the results to your expectations. E.g., you can let the first neuron spike regularly, let the second neuron emit only a single spike at 0ms, and only have connections from the first to the second neuron:
G.a = [10, 0]   # first neuron spikes regularly
G.v = [0, 1.1]  # second neuron emits a single spike at 0ms
# ...
S.connect(i=0, j=1)
start_w = 0.02
S.w[0, 1] = start_w
You’d expect that this synapse will get continuously weaker (the first term of the equation is 0, since there are no further postsynaptic spikes), and that it changes less and less, since the time of the last postsynaptic spike t^{last}_j = 0\,\mathrm{ms} recedes further and further into the past. If you run this for a short time (run(20*ms)), plot the components of the equation individually, and compare their combination to the recorded change in the weights, you get a perfect match:
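To make that expectation concrete, the depression dynamics can also be computed directly with plain NumPy, independently of the simulation. This is only a sketch: the parameter values below (taupost, d, mu, the spike interval) are made up for illustration and should be replaced by your model’s actual values:

```python
import numpy as np

# Illustrative values only -- substitute your model's parameters
taupost = 20e-3   # postsynaptic trace time constant (s)
d = 0.01          # depression amplitude
mu = 0.02         # tanh scaling (called `muu` in the code)
start_w = 0.02

# Regular presynaptic spikes; the single postsynaptic spike is at t = 0
pre_spikes = np.arange(1e-3, 20e-3, 2e-3)

w = start_w
weights = [w]
for t in pre_spikes:
    # On each presynaptic spike, the weight drops by
    # tanh(w/mu) * d * exp(-(t - t_j_last)/taupost), with t_j_last = 0
    w -= np.tanh(w / mu) * d * np.exp(-t / taupost)
    weights.append(w)
weights = np.array(weights)
```

The weight decreases monotonically, in ever smaller steps, which is exactly the behavior the plot below should reproduce.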
Plotting code
plt.rcParams.update({'axes.spines.top': False, 'axes.spines.right': False})
fig, axs = plt.subplots(4, 1, sharex=True)
spikes = Mspk.spike_trains()[0]
axs[0].plot(spikes/ms, np.zeros(Mspk.count[0]), 'x')
axs[0].axis('off')
trace = d*np.exp(-spikes/taupost)
axs[1].plot(spikes/ms, trace, 'o', label=r'$d\exp(-\frac{t - t^{last}_j}{\tau_{post}})$')
axs[1].legend()
spike_indices = np.array(np.round(spikes/defaultclock.dt), dtype=int)
F = np.tanh(Mw.w[0][spike_indices]/muu)
axs[2].plot(spikes/ms, F, 'o', label=r'$\tanh(\frac{w_{ij}}{\mu})$')
axs[2].legend()
axs[3].step(np.hstack([0, spikes/ms]),
            np.cumsum(np.hstack([start_w, -F*trace])), 'o-', where='post', label=r'expected $w$')
axs[3].step(Mw.t/ms, Mw.w[0], label=r'actual $w$')
axs[3].legend()
plt.show()
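A small detail in the plotting code above that is easy to get wrong: mapping spike times to indices into the monitored arrays. Using np.round rather than truncating protects against floating-point error when the division does not come out exactly on an integer. A minimal sketch with made-up times:

```python
import numpy as np

dt = 0.1e-3  # assuming defaultclock.dt = 0.1*ms, expressed in seconds
spikes = np.array([1.0e-3, 2.5e-3, 7.3e-3])  # hypothetical spike times (s)

# Round to the nearest time step; plain truncation (int()) could be
# off by one when the quotient lands slightly below an integer
spike_indices = np.array(np.round(spikes / dt), dtype=int)
```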

The same check works for the facilitation term if you reverse the roles of the two neurons:
G.a = [0, 10]   # second neuron spikes regularly
G.v = [1.1, 0]  # first neuron emits a single spike at 0ms
# ...
start_w = 0.98
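The expected facilitation can likewise be sketched analytically with NumPy (again with made-up parameter values, to be replaced by your model’s): the weight should grow towards wmax in ever smaller steps.

```python
import numpy as np

# Illustrative values only -- substitute your model's parameters
taupre = 20e-3    # presynaptic trace time constant (s)
p = 0.01          # facilitation amplitude
mu = 0.02         # tanh scaling (called `muu` in the code)
wmax = 1.0        # maximum weight
start_w = 0.98

# Regular postsynaptic spikes; the single presynaptic spike is at t = 0
post_spikes = np.arange(1e-3, 20e-3, 2e-3)

w = start_w
weights = [w]
for t in post_spikes:
    # On each postsynaptic spike, the weight grows by
    # tanh((wmax - w)/mu) * p * exp(-(t - t_i_last)/taupre), with t_i_last = 0
    w += np.tanh((wmax - w) / mu) * p * np.exp(-t / taupre)
    weights.append(w)
weights = np.array(weights)
```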
Plotting code
plt.rcParams.update({'axes.spines.top': False, 'axes.spines.right': False})
fig, axs = plt.subplots(4, 1, sharex=True)
spikes = Mspk.spike_trains()[1]
axs[0].plot(spikes/ms, np.zeros(Mspk.count[1]), 'x')
axs[0].axis('off')
trace = p*np.exp(-spikes/taupre)
axs[1].plot(spikes/ms, trace, 'o', label=r'$p\exp(-\frac{t - t^{last}_i}{\tau_{pre}})$')
axs[1].legend()
spike_indices = np.array(np.round(spikes/defaultclock.dt), dtype=int)
F = np.tanh((wmax - Mw.w[0][spike_indices])/muu)
axs[2].plot(spikes/ms, F, 'o', label=r'$\tanh(\frac{w_{max} - w_{ij}}{\mu})$')
axs[2].legend()
axs[3].step(np.hstack([0, spikes/ms]),
            np.cumsum(np.hstack([start_w, F*trace])), 'o-', where='post', label=r'expected $w$')
axs[3].step(Mw.t/ms, Mw.w[0], label=r'actual $w$')
axs[3].legend()
plt.show()
