Description of problem
Hello! I’m trying to use a spiking neural network to learn the MNIST dataset. I defined an STDP synapse with a per-synapse variable ‘learned’ that should be set to 0 after learning: since ‘learned’ multiplies both STDP updates, setting it to 0 leaves w unchanged, so the weights that encode a specific input remain fixed.
Minimal code to reproduce problem
class Simulation():
    def __init__(self):
        self.excpop = NeuronGroup()       # excitatory layer (arguments omitted)
        self.input_layer = NeuronGroup()  # input layer (arguments omitted)
        # STDP synapse; 'learned' is a per-synapse gate (1 = plastic, 0 = frozen).
        # tau_p, tau_m, n_m, apre, apost and xtar are constants defined elsewhere.
        self.Sinput = Synapses(self.input_layer, self.excpop,
                               '''w : 1
                                  dx/dt = -x/tau_p : 1 (event-driven)
                                  dy/dt = -y/tau_m : 1 (event-driven)
                                  plastic : 1 (shared)
                                  W_max : 1 (shared)
                                  n_p : 1 (shared)
                                  learned : 1''',
                               on_pre='''scx += w
                                         x += apre
                                         w = clip(w - w*n_m*y*plastic*learned/W_max, 0, W_max)''',
                               on_post='''y += apost
                                          w = clip(w + (1 - w/W_max)*n_p*(x - xtar)*plastic*learned, 0, W_max)''',
                               name='Sinput')
        self.Sinput.learned = '1'

    def train(self):
        self.net.run()  # duration omitted
        # After learning: freeze the synapses whose weight exceeds 0.3.
        # This assignment seems to have no effect: 'learned' stays 1.
        self.Sinput['w>0.3'].learned = 0.
But it doesn’t work: the per-synapse values of self.Sinput.learned stay at 1, so the network keeps changing the weights. I don’t understand why I can’t set them to 0 with that expression.
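From the Brian2 documentation on state variables I got the impression that conditional assignment goes through the variable view (string index on the variable) rather than through the Synapses object itself. Here is a minimal standalone sketch of that form; the toy model (one LIF neuron driven by two Poisson inputs, tau, the rates, and the 0.3 threshold) is just a placeholder for illustration, not my real network. Is this the syntax I should be using on self.Sinput?

from brian2 import *

tau = 10*ms
G = NeuronGroup(1, 'dv/dt = -v/tau : 1', threshold='v > 1',
                reset='v = 0', method='exact')
P = PoissonGroup(2, rates=50*Hz)
# Per-synapse 'learned' gate, as in the class above
S = Synapses(P, G, '''w : 1
                      learned : 1''',
             on_pre='v += w*learned')
S.connect()
S.w = 'rand()'
S.learned = 1

run(100*ms)

# Conditional assignment through the variable view
# (string index on S.learned, not on S itself):
S.learned['w > 0.3'] = 0.
print(S.learned[:])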
More generally, is this idea a good way to fix synapses after learning?