I’m implementing an unpublished learning rule that needs to “push” some information from a Synapses model, through a NeuronGroup, to another Synapses model whenever neurons in that group spike. For example, let’s say I have a feedforward network that looks like this:
S1 → NG → S2
The S1 model includes a dynamic synaptic variable S1.x. Any time a postsynaptic neuron fires, I want to increment the postsynaptic neuron variable y by x. So I define the on_post handler for S1 to be:

S1 = Synapses(..., model='dx/dt = -x / tau : 1', on_post='y_post += x')
But when a neuron in NG spikes, I don’t just want to update NG.y; I also want to propagate that new value to the downstream synapses S2, storing it in the dynamic variable S2.z. So I define the on_pre handler for S2 to be:

S2 = Synapses(..., model='dz/dt = -z / tau : 1', on_pre='z += y_pre')
The problem is that when a neuron in NG spikes, the updated NG.y value doesn’t appear in S2.z until that neuron spikes again.
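To make the timing concrete, here is a toy sketch of what I believe is happening (plain Python, no Brian2; x, y and z stand in for S1.x, NG.y and S2.z, with the decay terms omitted and made-up values):

```python
# Toy scheduler mimicking Brian2's default pathway ordering:
# within one time step, all on_pre handlers run before any
# on_post handlers. x, y, z stand in for S1.x, NG.y, S2.z.

x = 1.0   # S1 synaptic variable (decay ignored for clarity)
y = 0.0   # NG neuron variable
z = 0.0   # S2 synaptic variable

def step(spiked):
    """One time step under the default ordering: pre before post."""
    global y, z
    if spiked:
        z += y      # S2 on_pre: z += y_pre  (sees the *old* y)
    if spiked:
        y += x      # S1 on_post: y_post += x (runs too late for S2)

step(spiked=True)   # first spike
# y == 1.0, but z == 0.0: the increment hasn't reached S2.z

step(spiked=True)   # second spike
# only now does z pick up the increment from the *first* spike
```

So the value does arrive, just one spike too late.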
If I understand the output of scheduling_summary() correctly, it is clear why this is the case: all on_pre handlers are run before any on_post handlers, which means the order argument cannot be used to process S1’s on_post before S2’s on_pre.
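For comparison, here is a toy sketch (plain Python; x, y and z again stand in for S1.x, NG.y and S2.z, decay omitted) of the scheduling I’m after, where S1’s on_post runs before S2’s on_pre:

```python
# Toy sketch of the *desired* scheduling: within one time step,
# S1's on_post runs before S2's on_pre, so the freshly updated y
# is visible downstream immediately. Values are made up.

x = 1.0   # stand-in for S1.x
y = 0.0   # stand-in for NG.y
z = 0.0   # stand-in for S2.z

def step(spiked):
    global y, z
    if spiked:
        y += x   # S1 on_post: y_post += x  (runs first)
    if spiked:
        z += y   # S2 on_pre: z += y_pre   (sees the updated y)

step(spiked=True)   # a single spike
# y == 1.0 and z == 1.0: the increment reaches S2.z in the same step
```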
Because the rule I’m implementing is latency-based (and relies on single spikes in many cases), this information needs to be propagated instantaneously from S1 → S2 when a spike in NG occurs.
I’m looking for a way to reschedule the evaluation of the SynapticPathway objects such that S1’s on_post runs before S2’s on_pre. Either that, or some clean way to bypass the scheduling issue altogether. Naturally, I could add a network_operation that checks for spikes on every time step, but I worry that it would be too slow (given all the index wrangling that would be involved).
Thanks in advance for any help you folks can provide.