Apparent Crash in SpikeMonitor

Description of problem

I’m using Python 3.7.4 and Brian2 2.3.

Brian2 crashed part-way through a simulation, and at this point I’m just looking for advice on how to narrow down the problem before you spend too much time on it. It appears to be a memory-allocation problem in SpikeMonitor. I’ve pasted the error trace into this post, but I don’t know how to upload files (e.g. brian_debug_83cawzkw.log and the stdout/stderr outputs) on this new forum (the old one had a fairly obvious mechanism for that).

Minimal code to reproduce problem

Don’t have that yet.

What you have already tried

Since the problem was in SpikeMonitor, I’m assuming that eliminating those objects will solve, or at least change, the problem in some illuminating way. But I can’t really do that, since the point of the simulation is to see which neurons generated spikes.

Expected output (if relevant)


Actual output (if relevant)

brian_stdout_mz6c14d_.log and stdout/stderr files ready to upload - how do I do so?

Full traceback of error (if relevant)

ERROR Brian 2 encountered an unexpected error. If you think this is a bug in Brian 2, please report this issue either to the mailing list, or to the issue tracker. Please include this file with debug information in your report: C:\Users\Ajit\AppData\Local\Temp\brian_debug_83cawzkw.log. You can also include a copy of the redirected std stream outputs, available at C:\Users\Ajit\AppData\Local\Temp\brian_stdout_mz6c14d_.log and C:\Users\Ajit\AppData\Local\Temp\brian_stderr_6qgambxr.log. Thanks! [brian2]
Traceback (most recent call last):
File "", line 1, in
File "D:\phd\research\model\", line 22, in run
lpre=go1(1,10,0,20) # 20 runs, no training
File "D:\phd\research\model\", line 50, in go1
File "D:\phd\research\model\", line 323, in run
report='stdout', report_period=15*second)
File "C:\Users\Ajit\Anaconda3\lib\site-packages\brian2\core\", line 280, in device_override_decorated_function
return func(*args, **kwds)
File "C:\Users\Ajit\Anaconda3\lib\site-packages\brian2\units\", line 2392, in new_f
result = f(*args, **kwds)
File "C:\Users\Ajit\Anaconda3\lib\site-packages\brian2\core\", line 1080, in run
File "C:\Users\Ajit\Anaconda3\lib\site-packages\brian2\core\", line 183, in run
File "C:\Users\Ajit\Anaconda3\lib\site-packages\brian2\codegen\", line 102, in call
File "C:\Users\Ajit\Anaconda3\lib\site-packages\brian2\codegen\runtime\cython_rt\", line 153, in run
return self.compiled_code.main(self.namespace)
File "_cython_magic_6c48d5b404d275541c7ae658bad5aae2.pyx", line 136, in _cython_magic_6c48d5b404d275541c7ae658bad5aae2.main
File "C:\Users\Ajit\Anaconda3\lib\site-packages\brian2\monitors\", line 173, in resize
File "C:\Users\Ajit\Anaconda3\lib\site-packages\brian2\core\", line 595, in resize
self.device.resize(self, new_size)
File "C:\Users\Ajit\Anaconda3\lib\site-packages\brian2\devices\", line 482, in resize
File "C:\Users\Ajit\Anaconda3\lib\site-packages\brian2\memory\", line 210, in resize
newdata = zeros(newdatashape, dtype=self.dtype)
ValueError: Maximum allowed dimension exceeded

What are the dimensions of newdata? The last line shows a ValueError, and my guess is that the array has reached the maximum size that can be allocated without exceeding memory.
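For reference, this exact ValueError can be reproduced directly in NumPy, independent of how much RAM is installed, by requesting a single dimension larger than the platform's maximum index value (a minimal sketch):

```python
import numpy as np
import sys

# NumPy refuses any single dimension larger than the platform's
# maximum index (sys.maxsize), regardless of available memory.
try:
    np.zeros(sys.maxsize + 1)
except ValueError as err:
    print(err)  # typically "Maximum allowed dimension exceeded"
```

On a 32-bit Python build this limit is only about 2 billion elements, which is far less than 128 GB of RAM can hold, so it is worth checking which build is in use.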

That makes sense, since I’m recording a fair bit of extra data for debug. Since I’ve got a lot of memory on this machine (128 GB, less whatever Windows needs), is there a way to increase the amount of memory allocated to Brian2?


I am not sure how to explicitly allocate more memory to Brian2, but you could try deactivating other monitors and clearing any huge unused arrays from the cache. Can the problem be reproduced by just initializing the object and monitoring it with a SpikeMonitor? I am not sure if this will help, but it may be worth a look.

Hi. @touches is right, this is certainly a memory problem. Note that the limit is lower than the total amount of memory on the machine, because NumPy needs to reserve a single contiguous chunk of memory for the array. I don’t think it is a question of assigning memory to Brian, unless you are running this on some kind of managed computing cluster that limits the resources assigned to processes.
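To get a feel for the numbers, here is a back-of-envelope estimate of how much a SpikeMonitor accumulates. The assumption of ~12 bytes per recorded spike (a 4-byte neuron index plus an 8-byte time) and the network size and rate below are illustrative, not taken from the actual model:

```python
# Rough SpikeMonitor memory estimate.
# Assumption: ~12 bytes per recorded spike (int32 index + float64 time);
# the network size, rate, and duration below are hypothetical.
bytes_per_spike = 4 + 8
n_neurons = 10_000
mean_rate_hz = 50        # hypothetical mean firing rate
duration_s = 3600        # one hour of simulated time

n_spikes = n_neurons * mean_rate_hz * duration_s
gib = n_spikes * bytes_per_spike / 2**30
print(f"~{n_spikes:,} spikes -> ~{gib:.1f} GiB of recorded data")
```

Since a dynamic array grows by allocating a larger contiguous buffer and copying into it, the transient demand during a resize is roughly the old plus the new buffer, so the effective limit is lower still.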

Just to make sure: do you actually need the spike times or would it be enough to have the spike counts? If you only need the spike counts, you can use the SpikeMonitor with record=False (see the documentation), which will obviously require much less memory.


In this case, I don’t think there will be much more useful information in these files, actually. But in general this could of course be useful. I wasn’t aware that the standard Discourse settings only allow uploading images. I’ve changed the settings so that you should now be able to upload log files (and Python files) via the Upload button.