Adapting synaptic delay on postsynaptic spike

Hi!
I need to learn afferent synaptic delays with a local learning rule that is triggered by the generation of a postsynaptic spike.
This means that a delay has to be treated as a variable, and has to be locally updated only when the postsynaptic neuron spikes.
However, Brian2 no longer supports changing synaptic delays during a run.
The issue has been mentioned here in less detail (and there are also related posts):

One proposed solution was:

instead of changing the delay variable, “store” the changes that you want to do in a variable delta_delay for example. Then, stop your simulation every 100ms, 1000ms, or whatever you see fit and apply the delta_delay to the actual delay, then continue the simulation.

However, I see two issues with this approach:

  1. Updating the whole set of synaptic delays of the network every T ms breaks the event-driven nature of the delay learning rule, and may cause malfunctions (e.g. if a neuron spikes more than once during the time interval T). Because I don’t know the duration of the patterns fed to the network, the period between two delay updates would have to be arbitrarily small.
  2. If I need to store/restore the network and run the simulation in very short chunks, there would be serious efficiency issues, as the simulation would become very slow. Moreover, I suppose that I won’t be able to use standalone C++ code generation, which would result in a dramatic drop in simulation speed.
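To make the concern in point 1 concrete, here is a small, Brian2-independent sketch of the bookkeeping that the proposed workaround implies (all names, the learning rule, and the numbers are illustrative, not Brian2 API): updates are accumulated in a `delta_delay` variable and only applied when the simulation is paused, so every postsynaptic spike within one batch window still sees the stale delay.

```python
# Hedged sketch (plain Python, no Brian2): the bookkeeping behind the
# "accumulate delta_delay, apply every T ms" workaround. All names and
# the learning rule are illustrative assumptions, not Brian2 API.

class BatchedDelaySynapse:
    """One synapse whose delay can only be changed in batches."""

    def __init__(self, delay, lr=0.1):
        self.delay = delay        # delay actually used by the simulator
        self.delta_delay = 0.0    # pending change, applied at batch time
        self.lr = lr              # learning rate of the (hypothetical) rule

    def on_post_spike(self, pre_post_lag):
        # Local rule: nudge the delay toward the observed pre->post lag.
        # The update is *stored*, not applied, so within one batch window
        # every postsynaptic spike still uses the stale delay.
        self.delta_delay += self.lr * (pre_post_lag - self.delay)

    def apply_batch(self):
        # Called every T ms while the simulation is paused.
        self.delay += self.delta_delay
        self.delta_delay = 0.0

syn = BatchedDelaySynapse(delay=2.0)
syn.on_post_spike(pre_post_lag=5.0)  # first post spike in the window
syn.on_post_spike(pre_post_lag=5.0)  # second spike: still sees delay=2.0
syn.apply_batch()
# Batched result: 2.0 + 2 * 0.1 * (5.0 - 2.0) = 2.6, whereas truly
# event-driven updates would give 2.0 -> 2.3 -> 2.57: the second spike
# in the window overshoots because it used the stale delay.
```

This is exactly the malfunction described above: with two spikes inside one window the batched update differs from the event-driven one, and the discrepancy grows with T.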

So is there any solution to this problem with Brian2 that guarantees reasonable simulation speed?
If not, should I use a specific version of Brian to get dynamic delays during runtime?

Thanks!

Hi again. Note that this topic is a bit of a duplicate of Delays as variables.

I am not sure what biological mechanism you are modelling that is capable of changing a delay between two spikes – I imagine mechanisms that change delays to be rather slow since they involve structural changes in the axon or similar things. So what I had in mind was something akin to “batch learning” – summing up many very small changes over some time, and then only applying them when they become meaningful.

Another approach would be to not have a proper delay learning rule in the first place, but instead use several synapses with different fixed delays and use a learning rule to select between them, as in this paper:

Apart from my suggestions above, I am afraid not. And while some of this was in principle possible with Brian 1, there are so many things that it does not support (Python 3, C++ code generation, …) that I cannot recommend using it.

The only solution that would give you exactly what you want would be to work on Allow time varying delays in Synapses as an option · Issue #355 · brian-team/brian2 · GitHub – it is definitely on our list (and has been there for a long time), but we are severely limited in the time we can spend on this.