Hi everyone!
In my Brian script, net.run() is called inside a loop. It takes many hours to complete, and I want to use standalone mode to get a higher speed.
My code is like this:
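Roughly, it follows this pattern (a simplified sketch with made-up model details and a dummy input, not my actual network):

```python
from brian2 import *
import numpy as np

# toy stand-in for the MNIST input (illustrative only; the real script loads 60000 images)
training_images = np.random.rand(3, 784)

input_group = PoissonGroup(784, rates=0*Hz)
neurons = NeuronGroup(100, 'dv/dt = -v/(10*ms) : 1', threshold='v > 1', reset='v = 0')
synapses = Synapses(input_group, neurons, 'w : 1', on_pre='v_post += w')
synapses.connect()
synapses.w = 'rand()*0.1'
net = Network(input_group, neurons, synapses)

for image in training_images:
    input_group.rates = image * 63.75 * Hz   # convert pixel intensities to firing rates
    net.run(350*ms)                          # present one image
    input_group.rates = 0 * Hz
    net.run(150*ms)                          # resting period before the next image
```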
Hi. To use standalone mode, you will have to add set_device('cpp_standalone', build_on_run=False) at the beginning of your script (e.g. right after the import statements). After the loop, you will then have to add
device.build()
which actually compiles and runs the simulation. But note that in standalone mode, calling net.run several times causes quite a bit of overhead in the way the code is generated. This is fine for a couple of runs, but if you really call net.run 1000 times, this will most likely not work. Why do you need to call net.run repeatedly? Maybe there are ways to reformulate your model so that it works with a single run (e.g. with something like run_regularly).
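Putting the pieces together, the structure would look roughly like this (a minimal sketch with a dummy model, not your actual network):

```python
from brian2 import *

set_device('cpp_standalone', build_on_run=False)   # collect all runs, build at the end

neurons = NeuronGroup(10, 'dv/dt = -v/(10*ms) : 1', threshold='v > 1', reset='v = 0')
net = Network(neurons)

for _ in range(3):        # fine for a handful of runs, but not for 60000
    net.run(100*ms)

device.build()            # generates, compiles and runs the C++ project in one go
```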
Actually, I am working on this code
and in the original version, net.run() is called once for each input image (MNIST: 60000 training images). I only just managed to get it running properly, and changing this code is very difficult for me.
I wonder if you could take a look at the run() section of the code and suggest the best way to change it, with a focus on increasing the simulation speed.
As always, I would really appreciate your help and guidance.
Hi @Rihana . Unfortunately in this case you won’t be able to use standalone mode, since the code needs to know about the results after every run before doing the next run. This is a typical “runtime mode” behaviour – in “standalone mode”, all simulations are turned into a single C++ project that is executed, i.e. you cannot run any Python code in between runs.
The code you linked to is for Brian 1, but I guess you adapted it to Brian 2 (or based it on this code: GitHub - zxzhijia/Brian2STDPMNIST: Brian 2 version of Paper "Unsupervised Learning of digit recognition using STDP")? There are probably ways to make this code faster while staying in runtime mode, but it is difficult to say without more detail. In your code, is most of the time actually spent in the simulation itself, or in the preparation time in between two simulations? Actually, I recently made a little change in Brian that should reduce the time needed between runs. The difference should be small, but over 60000 runs it might add up to something meaningful. You can try it out by installing Brian's development version as described here: Installation — Brian 2 2.5.1 documentation
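To see where the time goes, you could time the run calls against the whole loop, roughly like this (a sketch with a dummy network standing in for yours):

```python
from brian2 import *
import time

neurons = NeuronGroup(100, 'dv/dt = -v/(10*ms) : 1', threshold='v > 1', reset='v = 0')
net = Network(neurons)

time_in_run = 0.0
start = time.monotonic()
for _ in range(100):                    # stand-in for the loop over images
    # ... per-image preparation (setting rates, weight normalisation, ...) goes here
    t0 = time.monotonic()
    net.run(350*ms)
    time_in_run += time.monotonic() - t0
total = time.monotonic() - start
print(f'inside net.run: {time_in_run:.1f} s, in between: {total - time_in_run:.1f} s')

# Brian can also break down where the time goes within a run:
# net.run(350*ms, profile=True); print(profiling_summary(net))
```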
Hi again!
Yes, I have used the Brian 2 version of the code from the link you mentioned. Out of the box it didn't work and had some errors, so I corrected them and improved the algorithm.
At the beginning of the run, the speed is good, but after around 100 images (inputs) it gradually gets slower and slower.
Most of the time is spent in the simulation process itself.
The benchmark dataset is MNIST, with 60000 training images and 10000 test images.
I will install Brian's development version, check the run time, and comment here afterwards.
Hi @Rihana. It isn't normal that it gets slower and slower… Two things I could imagine that might be happening: 1) the weights change in a way that the total number of spikes in the network increases a lot, so the simulations take longer; 2) recordings (i.e. monitors) take up all your memory (but in this case it should probably crash before reaching the end).
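For 1), you could track how many spikes each presentation produces with a counting-only SpikeMonitor, something along these lines (a sketch with a dummy network):

```python
from brian2 import *

neurons = NeuronGroup(100, 'dv/dt = -v/(10*ms) : 1', threshold='v > 1', reset='v = 0')
spike_mon = SpikeMonitor(neurons, record=False)   # only counts spikes, stores no spike times
net = Network(neurons, spike_mon)

previous = 0
for i in range(10):                               # stand-in for the loop over images
    net.run(350*ms)
    print(f'run {i}: {spike_mon.num_spikes - previous} spikes')
    previous = spike_mon.num_spikes
```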
Hi @mstimberg. Regarding 1): spike counts stay quite sparse in the second layer (the one driven by the inputs).
Only monitors on the neurons are recorded, and this has not caused any crash due to running out of memory. If I added the synapses to my monitors, it would; so I only record a specific index of the synapses.
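Concretely, I restrict the monitors roughly like this (a simplified sketch, not my actual model):

```python
from brian2 import *

neurons = NeuronGroup(100, 'dv/dt = -v/(10*ms) : 1', threshold='v > 1', reset='v = 0')
synapses = Synapses(neurons, neurons, 'w : 1', on_pre='v_post += w')
synapses.connect()

# record only a few synaptic weights, and less often than every time step
weight_mon = StateMonitor(synapses, 'w', record=[0, 1, 2], dt=10*ms)
net = Network(neurons, synapses, weight_mon)
net.run(100*ms)
```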
I think the gradual slowdown is because of the monitors: when I remove some monitors, the speed increases a bit.
The change you mentioned above made a small difference to my running time.