Description of problem
Hi Brian2 Team! I am building a model of cortical columns. Each column has 225 neurons, 12 subgroups, and 68 different synapse definitions. I can run a 1.2-second simulation of a pair of columns in about 45 seconds using Brian2 with the numpy target on a regular CPU, but I am trying to speed this up so that I can run 128 columns as fast as possible.
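For context, each column is organised roughly like this (a heavily simplified sketch: the subgroup names, sizes, equations, and connection probabilities below are placeholders, the real definitions are in column_b2genn.py):

```python
from brian2 import NeuronGroup, Synapses, ms, mV

# Placeholder equations -- the real model equations are in column_b2genn.py.
eqs = 'dv/dt = -v / (10*ms) : volt'

# One column: 225 neurons split into 12 subgroups (only two shown here).
column = NeuronGroup(225, eqs, threshold='v > -50*mV',
                     reset='v = -65*mV', method='euler')
layer23_exc = column[0:50]
layer23_inh = column[50:75]
# ... 10 more subgroups ...

# Each of the 68 synapse definitions connects a pair of subgroups, e.g.:
syn = Synapses(layer23_exc, layer23_inh, on_pre='v += 0.5*mV')
syn.connect(p=0.1)
```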
I've got it working with Brian2GeNN on a TitanV GPU with 12 GB of onboard memory, running CUDA 9.1.85. This takes about 120 seconds for a pair of columns, but I can only run a maximum of 64 columns before I get a RuntimeError (Command ['./main', 'test', '1.2'] failed with error code -11) which says the process "died with <Signals.SIGSEGV: 11>".
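For the GPU runs I switch the backend to Brian2GeNN at the top of the script, essentially like this (simplified; everything else stays the same):

```python
from brian2 import *
import brian2genn

# Compile and run the whole simulation on the GPU via GeNN.
set_device('genn')

# ... column and synapse definitions as usual ...

run(1.2 * second)
```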
Minimal code to reproduce problem
The code is quite long, but it can be found at https://github.com/MunozatABI/CorticalModel, specifically in the column_b2genn.py file.
What you have already tried
I was able to run 64 columns on the CPU; it takes about 160 seconds. I can also run it with the Cython target, but that takes about twice as long as numpy (the backend settings are sketched below). I had a look at the debug files but couldn't make sense of them.
The brian_debug_8w90b8q0.log, brian_script_1y0d2dcq.py, brian_stderr_ftm3sh_d.log, and brian_stdout_zm27gccy.log files can also be found here.
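For the CPU runs the only thing I change is the runtime code-generation target, roughly like this (a minimal sketch; the model itself is identical):

```python
from brian2 import *

# Runtime code-generation target: 'numpy' or 'cython'.
# In my hands 'cython' ends up roughly twice as slow as 'numpy' for this model.
prefs.codegen.target = 'numpy'   # or 'cython'

# ... column and synapse definitions as usual ...

run(1.2 * second)
```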
Sorry, I'm not much of a coder! I'm a bit confused as to why neither Cython nor Brian2GeNN is making my code any faster. Any advice you have on this would be greatly appreciated.