I am encountering a very surprising problem. I am providing a current stimulus to the network's neurons for a total of 4 sec. If I apply the current by specifying net.run(4000*ms), I get a different simulation outcome than if I apply the identical current stimulus in consecutive segments: net.run(800*ms) followed by net.run(3200*ms). Furthermore, the simulations look different if I break up the 4 sec differently: net.run(1000*ms) followed by net.run(3000*ms). Without going into specifics, I am exploring a chaotic network response. Is there a limitation in the Brian code that could cause this to happen?
Hi @MitchG_HunterCollege, this is indeed surprising. We have tests in place that verify that splitting up the simulation time doesn’t make a difference – if this happens, it is definitely a bug. Just to be sure: there is nothing else happening in between the run statements, i.e. you are literally replacing
net.run(4000*ms)
by
net.run(800*ms)
net.run(3200*ms)
? Is there anything particular about your model, or is it a “normal” mix of NeuronGroup and Synapses? Finally, are you using the default Cython runtime target, or are you setting the code generation preferences or the backend device (with set_device) to something else? There is a known issue related to random number generation in Cython (cf. Move Cython random number generation to C++ mechanism · Issue #1664 · brian-team/brian2 · GitHub) that might explain what you are seeing.
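To illustrate why the random number generation could matter: if the noise stream were (hypothetically) reset at a run() boundary instead of continuing, a split simulation would see different random numbers than a single continuous run. A minimal pure-Python analogy – this is not Brian's actual implementation, just a sketch of the failure mode:

```python
import random

def noise_stream(rng, n):
    """Draw n 'noise' samples from an ongoing RNG stream."""
    return [rng.random() for _ in range(n)]

# One continuous "run": a single RNG consumed for 40 steps.
rng = random.Random(42)
single = noise_stream(rng, 40)

# Correct split: the SAME RNG object continues across segments,
# so 8 + 32 steps reproduce the continuous stream exactly.
rng = random.Random(42)
split_ok = noise_stream(rng, 8) + noise_stream(rng, 32)

# Hypothetical bug: the RNG state is reset at the segment boundary,
# so the second segment replays the stream from the start.
rng = random.Random(42)
buggy = noise_stream(rng, 8)
rng = random.Random(42)  # state reset – the hypothetical bug
buggy += noise_stream(rng, 32)

print(single == split_ok)  # True: carrying state over is bit-identical
print(single == buggy)     # False: a reset stream diverges immediately
```

In a chaotic regime, even a single differing random number at the boundary would be enough to make the trajectories diverge completely.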
My model is a “normal” mix of NeuronGroup and Synapses, and there is nothing unusual about how I am doing the time splits. These are actually 10 sec simulation runs. At the 6 sec mark, a strong current is applied to a subgroup of the PYR cells. This is specified either as a single 4 sec block, as consecutive 0.2 and 3.8 sec blocks, or as consecutive 1 and 3 sec blocks. No other properties are re-specified between time segments. So the run specs are either:
PYR1.Ipulse = 4*pA
PYR2.Ipulse = 4*pA
PYR3.Ipulse = 4*pA
net.run(6000*ms)
PYR1.Ipulse = 28.9*pA
PYR2.Ipulse = 4*pA
PYR3.Ipulse = 4*pA
net.run(4000*ms)
OR
PYR1.Ipulse = 4*pA
PYR2.Ipulse = 4*pA
PYR3.Ipulse = 4*pA
net.run(6000*ms)
PYR1.Ipulse = 28.9*pA
PYR2.Ipulse = 4*pA
PYR3.Ipulse = 4*pA
net.run(200*ms)
PYR1.Ipulse = 28.9*pA
PYR2.Ipulse = 4*pA
PYR3.Ipulse = 4*pA
net.run(3800*ms)
OR
PYR1.Ipulse = 4*pA
PYR2.Ipulse = 4*pA
PYR3.Ipulse = 4*pA
net.run(6000*ms)
PYR1.Ipulse = 28.9*pA
PYR2.Ipulse = 4*pA
PYR3.Ipulse = 4*pA
net.run(1000*ms)
PYR1.Ipulse = 28.9*pA
PYR2.Ipulse = 4*pA
PYR3.Ipulse = 4*pA
net.run(3000*ms)
This is very perplexing. As I said, I am studying the appearance of a chaotic event. I wonder if any calculated values are “rounded off” at the end of a time segment before execution of a subsequent segment. (???)
Hi @MitchG_HunterCollege, I tried a few things to reproduce this issue, but however I split up the simulations, I always got the exact same numbers (i.e. not just approximately the same, which could make a difference in chaotic regimes). Just to make sure: is each of these simulations running in a new Python process “from scratch”, i.e. you are not testing the different splits one after another in a single Jupyter notebook or script?
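To make the comparison airtight, each variant can be executed in a brand-new interpreter so that no state (RNG streams, caches, compiled code) leaks between runs. A minimal sketch using the standard library – the inline snippets below are stand-ins for your actual simulation scripts (hypothetical names like run_single_4s.py and run_split_1s_3s.py), each of which should print its final result:

```python
import subprocess
import sys

# Stand-ins for the real simulation scripts; here both variants do the
# same deterministic computation, so their output must agree.
variant_a = "import random; random.seed(0); print(sum(random.random() for _ in range(1000)))"
variant_b = "import random; random.seed(0); print(sum(random.random() for _ in range(1000)))"

def run_fresh(code):
    """Execute a snippet in a completely fresh Python process and capture stdout."""
    out = subprocess.run([sys.executable, "-c", code],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

a = run_fresh(variant_a)
b = run_fresh(variant_b)
print(a == b)  # True: two truly independent processes, identical output
```

If the split and unsplit simulations only differ when run back-to-back in the same process, that would point to leaked state rather than a bug in run() itself.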
Could you please use Brian’s scheduling_summary function to get a list of all the objects that get executed and their order, and post its output – maybe this gives a hint. To use it, add something like
print(scheduling_summary(net))
after the network has been constructed, e.g. before the first net.run.
This is very perplexing. As I said, I am studying the appearance of a chaotic event. I wonder if any calculated values are “rounded off” at the end of a time segment before execution of a subsequent segment. (???)
There is nothing special happening to the values at the end of a segment, so this does not explain it. I suspected it was related to random number generation, but nothing came up in my tests; the same goes for synapses with delays (at the end of a segment, some spikes are still in the queue for delivery) – they seem to be handled correctly. I have to admit I am a bit out of ideas for now…
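To see why a segment boundary introduces no rounding, consider a toy fixed-step integration in pure Python (an analogy, not Brian's actual integrator): the state variable is simply handed over at full double precision, so splitting the loop performs the exact same floating-point operations in the exact same order and the results agree bit for bit.

```python
def euler_segment(v, i_pulse, n_steps, dt=1e-4, tau=0.01):
    """Advance a toy leaky integrator dv/dt = (i_pulse - v)/tau
    by n_steps of forward Euler, returning the final state."""
    for _ in range(n_steps):
        v = v + dt * (i_pulse - v) / tau
    return v

# One continuous 4 s block at the pulse current ...
v_single = euler_segment(0.0, 28.9, 40000)

# ... versus the same 4 s split into 0.2 s + 3.8 s segments,
# carrying the final state of the first segment into the second.
v_split = euler_segment(0.0, 28.9, 2000)
v_split = euler_segment(v_split, 28.9, 38000)

# Identical operations in identical order: bit-for-bit equality,
# not just approximate agreement.
print(v_single == v_split)  # True
```

So if the split really produced different numbers, the cause would have to be something other than truncation at the boundary – e.g. a state update or random stream that does not carry over.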
This anomaly will have to sit for a while, as I have other commitments. I will do more investigation soon, including your suggestion to generate a scheduling summary. Thanks, Marcel.