Hi!
I need to learn afferent synaptic delays with some local learning rule, triggered by the generation of a postsynaptic spike.
This means that each delay has to be treated as a variable and has to be updated locally, only when the postsynaptic neuron spikes.
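To make the intent concrete, here is a minimal sketch of the kind of event-driven, local update I have in mind, written for an ordinary synaptic parameter (the toy equations, values and the name `d_learn` are placeholders of my own); what I would like is to apply exactly this kind of `on_post` update to `delay` itself:

```python
from brian2 import *

# Toy network, only to illustrate the event-driven update (placeholder values)
G = NeuronGroup(2, 'dv/dt = (1.2 - v)/(10*ms) : 1',
                threshold='v > 1', reset='v = 0', method='exact')
S = Synapses(G, G,
             model='''w : 1
                      d_learn : second''',   # stand-in for the delay I want to learn
             on_pre='v_post += w',
             on_post='d_learn += 0.1*ms')    # local update on postsynaptic spikes only
S.connect(i=0, j=1)
S.w = 1.2
run(100*ms)
# What I would actually need is on_post='delay += ...', which Brian2 rejects.
```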
However, Brian2 doesn’t support variable synaptic delays anymore.
The issue has been mentioned here in less detail (and there are also related posts):
https://github.com/brian-team/brian2/issues/1297
One proposition to solve the problem was:

instead of changing the delay variable, “store” the changes that you want to make in a variable, e.g. `delta_delay`. Then, stop your simulation every 100 ms, 1000 ms, or whatever you see fit, apply the `delta_delay` to the actual delay, and continue the simulation.
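If I understand it correctly, the proposition amounts to something like the sketch below (the 100 ms chunk length, the placeholder increment in `on_post` and names such as `delta_delay` are assumptions of mine, and I am also assuming that assigning to `S.delay` between two `run()` calls is actually taken into account):

```python
from brian2 import *

# Toy network; the actual learning rule is a placeholder
G = NeuronGroup(2, 'dv/dt = (1.2 - v)/(10*ms) : 1',
                threshold='v > 1', reset='v = 0', method='exact')
S = Synapses(G, G,
             model='''w : 1
                      delta_delay : second''',
             on_pre='v_post += w',
             on_post='delta_delay += 0.1*ms')   # accumulate the desired delay change
S.connect(i=0, j=1)
S.w = 1.2
S.delay = 2*ms

for _ in range(10):
    run(100*ms)                              # simulate one chunk
    S.delay = S.delay[:] + S.delta_delay[:]  # apply the accumulated changes
    S.delta_delay = 0*second                 # reset the accumulator
```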
However, I see two issues with this approach:
- Updating the whole set of synaptic delays of the network every T ms breaks the event-driven nature of the delay learning rule and may cause malfunctions (e.g. if a neuron spikes more than once during the time interval T). Because I don’t know the duration of the patterns that are fed into the network, the period between two delay updates would have to be arbitrarily small.
- If I need to store/restore the network and run the simulation in very short chunks, there would be serious efficiency issues, as the simulation would become very slow. Moreover, I suppose that I won’t be able to use the standalone C++ code generation, which would result in a dramatic drop in simulation speed.
So is there any solution to this problem with Brian2 that guarantees a reasonable simulation speed?
If not, should I use a specific version of Brian to get the dynamic-delay feature during runtime?
Thanks!