Delay Plasticity Only Working in One Direction

Description of problem

I’m working on a synapse model to adjust synaptic delays in the same way that STDP would adjust synaptic weights. It appeared to be working, since synaptic delays were changing, but on further inspection they are only adjusted in one direction (one of the two coefficients does not affect the result at all). These are coded the same way, so I don’t know why one is working and the other isn’t.

Minimal code to reproduce problem

Here is the synapse model:

```python
class delaySTDP:
    eqns = '''
    w : 1
    dapre/dt = -apre/tau_pre : second (event-driven)
    dapost/dt = -apost/tau_post : second (event-driven)
    xdist : 1
    ydist : 1
    zdist : 1
    dist : 1
    '''
    on_post = '''
    apost += Apost
    delay = clip(delay+apre, delaymin, delaymax)
    '''
    on_pre = '''
    I_post += w
    apre += Apre
    delay = clip(delay+apost, delaymin, delaymax)
    '''
```

which I instantiate in the network with:

```python
tau_pre = 16.8*ms
tau_post = 16.8*ms
Apre = -0.0001*ms
Apost = 0.0001*ms

G = NeuronGroup(N, eqns, threshold=thresh, reset=res, method='euler')
SE = Synapses(G, G, delaySTDP.eqns, on_pre=delaySTDP.on_pre, on_post=delaySTDP.on_post)
SI = Synapses(G, G, delaySTDP.eqns, on_pre=delaySTDP.on_pre, on_post=delaySTDP.on_post)
```

I have stimulation into the network and connect both synapse groups based on distance. I also add a distance-dependent initial delay to the synapses.

What you have already tried

I’ve tried setting Apost to zero, which results in no changes to synaptic delays. Setting Apre to zero (with nonzero Apost) results in synaptic delays increasing as expected. Changing the sign of Apost yielded synapses that decreased in delay, so I know that it is possible to modulate them in both directions.

I’ve changed the names of variables and rewritten the synapse model multiple times, using a working STDP model as a reference, but I keep getting the same issue. The lines of code that change the synaptic delays using Apre/apre (as written above) do not work, while the ones using Apost/apost work exactly as expected.

My concern is that, since they are coded in the same way, there is no reason one should work while the other doesn’t. Am I missing something?

Expected output (if relevant)

Synaptic delays changing in both directions (decreasing and increasing) based on the time correlation between pre and post-synaptic firing.

Actual output (if relevant)

Synapses only change in one direction, based only on the sign of Apost (as written above).

Full traceback of error (if relevant)

No actual error message occurs. The synapse simply isn’t working as expected.

Hi @butlerK, this behaviour is unfortunately entirely non-obvious. Strictly speaking, Brian’s correct behaviour here would actually have been to raise an error, since you cannot change the delays during a run… This is a feature that would be nice to have, but no one has found the time to look into it in detail yet (see Delays as variables and the linked GitHub issue).

The reason why this is not possible at the moment is that delays are translated into integer delays (i.e. timesteps) at the beginning of a simulation, and these integer values are what the simulation actually uses. As I mention in the discussion linked above, we did this to avoid repeatedly doing the “time delay” → “timestep delay” conversion, but I am not entirely sure whether this actually saves a meaningful amount of time. If it doesn’t, we could remove this optimization and get delay plasticity for free!

In the other discussion I mention the only possible workaround at the moment: use a “batch” approach, where you simulate for a certain time and accumulate the delay changes in a variable, without actually changing the delays during the run. Then, update the actual delays based on that variable and continue the simulation. Given that the delay changes are probably small and need a while to add up before they make a difference, this might not introduce a very big error.
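To illustrate the batch idea in isolation, here is a minimal NumPy stand-in for the update that would happen between runs (the names `delay_change`, `run_batch`, and the delay bounds are my assumptions; in a real Brian2 script `delay_change` would be a custom synaptic variable, e.g. `delay_change : second`, that your `on_pre`/`on_post` statements update instead of `delay`):

```python
import numpy as np

delaymin, delaymax = 1.0, 10.0          # assumed delay bounds (ms)
delay = np.array([2.0, 5.0, 9.8])       # the actual delays used by the simulator
delay_change = np.zeros(3)              # accumulator updated by the plasticity rule

def run_batch():
    """Stand-in for run(batch_duration): pretend the plasticity rule
    accumulated some changes into delay_change during this batch."""
    delay_change[:] += np.array([-0.5, 0.1, 0.5])

for _ in range(3):                      # simulate in batches
    run_batch()
    # Between runs, fold the accumulated changes into the real delays
    delay[:] = np.clip(delay + delay_change, delaymin, delaymax)
    delay_change[:] = 0.0               # reset the accumulator for the next batch
```

In Brian2 itself, the loop body would be a `run(batch_duration)` followed by `SE.delay = clip(SE.delay + SE.delay_change, delaymin, delaymax)` and `SE.delay_change = 0*ms`, since delay changes only take effect at the start of the next run.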

All this does of course not explain your observation that the delay changes only work “in one direction” (but to emphasize again: the changes in the delay variable during a run do not change the actual synaptic delay that is used).

The reason is that a synapse with a pre and a post pathway has two delays, the pre delay and the post delay, i.e. the time after a pre-synaptic/post-synaptic spike before the on_pre/on_post statement is triggered. Since most synapses only have the pre pathway, you can directly write S.delay for a Synapses object S, but this is a shorthand for S.pre.delay. The delay of the post pathway does not have such a shorthand, so you always have to access it as S.post.delay. Within code such as on_pre or on_post, the name delay refers to the delay of the respective pathway.

When you said you saw increasing delays from the update in the on_pre but not from the on_post statements, I guess you verified SE.delay, i.e. effectively SE.pre.delay – if you have a look at SE.post.delay you should see that it changed as well. But, alas, to repeat myself: seeing these changes does not mean that they actually influence the simulation delays :frowning:

Hope that cleared things up!

PS: Python code becomes more readable in the forum if you enclose it in three backticks like this:

```
# This is Python code
print("something")
```

I edited your post accordingly.

Hi, thank you so much for your thorough response.

If you stop and continue a simulation repeatedly, would said delay changes then be implemented? I run and stop my simulations at regular intervals and have seen dynamical changes in my network. You mentioned that changes in the delay variable aren’t implemented continuously, but could be changed using a “batch” approach. Could this be why I’m seeing the plasticity rule actually affect my network?

Ah, yes, if you are already doing multiple runs, then it will take the changed delay variable value at the end of a run and turn it into the actual simulation delays for the following run.
Please note that this behaviour might change in future Brian versions, though: we might raise an error (because the delay is supposed to be constant), or actually take delay changes during the run into account.