Synaptic delays shorter than integration time step


How does Brian handle synaptic delays that are shorter than the integration time step?



Hi Sebastian,

Delays are represented as integer multiples of the integration time step and are rounded to the closest multiple. That means a delay < 0.5 * dt becomes zero, and a delay >= 0.5 * dt (but < 1.5 * dt) becomes dt.
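To make the rounding concrete, here is a minimal sketch of the rule (not Brian's actual implementation; I use Python's built-in `round`, which rounds half to even like NumPy, so behavior exactly at the 0.5 * dt boundary may differ from the simplified description above):

```python
def effective_delay(delay, dt):
    """Snap a synaptic delay to the nearest integer multiple of dt
    (illustrative helper only -- not Brian's actual code)."""
    steps = round(delay / dt)  # rounds half to even, as NumPy does
    return steps * dt

# With dt = 0.1 ms (all times in ms):
print(effective_delay(0.04, 0.1))  # < 0.5 * dt  -> 0.0 (no delay)
print(effective_delay(0.07, 0.1))  # >= 0.5 * dt -> 0.1 (one time step)
```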



Hi Denis!

Thanks. I was expecting something along those lines. Do you think Brian should issue a warning or report the effective delays?



I guess emitting a warning for delays that are rounded down to zero is feasible, but it's not trivial, since the conversion only happens when run is called (and in C++ standalone mode it happens in the generated C++ code). There are also some subtleties: if your delays are, say, 0.5 ms, and you first run for 100 ms with a dt of 1 ms (to quickly settle things into a steady state) and then change dt to 0.1 ms, the first run will use no delay, whereas the second run will use delays of 5 time steps.
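The two-run subtlety can be sketched in a few lines (again a stand-in for the conversion Brian performs at run time, using Python's round-half-to-even like NumPy; the helper name is hypothetical):

```python
def delay_in_steps(delay_ms, dt_ms):
    # Delays are stored as integer multiples of dt; this mirrors the
    # rounding described above (illustrative only, not Brian's API).
    return round(delay_ms / dt_ms)

delay = 0.5  # ms
print(delay_in_steps(delay, dt_ms=1.0))  # first run,  dt = 1 ms   -> 0 steps (no delay)
print(delay_in_steps(delay, dt_ms=0.1))  # second run, dt = 0.1 ms -> 5 steps
```

So the same `delay` value yields different effective delays in the two runs, which is why a single warning at network-construction time would not capture the full picture.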
