Memory error: insufficient disk space?

I am running network simulations on a PC laptop with ~200 GB of free disk space, and I am hitting a memory limitation. A model of 2000 PYR neurons and 20 PVN neurons runs successfully for 5 seconds with defaultclock.dt = 0.015 ms, but fails for longer runs (e.g. 10 s) with the MemoryError shown below. If the problem is insufficient disk space for the generated arrays, can I direct the generated data files to an attached external drive (and if so, how)? And since the external drive is connected via USB cable, will this dramatically slow down the wall-clock time of the run?

Saved run parameters:
PYR_v = StateMonitor(PYR, 'v', record=True)
PYR_u = StateMonitor(PYR, 'u', record=True)
PYR_f = StateMonitor(PYR, 'f', record=True)
PYR_Isyn_e_pyr = StateMonitor(PYR, 'Isyn_e_pyr', record=list(range(20)))
PYR_Isyn_i_pyr = StateMonitor(PYR, 'Isyn_i_pyr', record=list(range(20)))
PVN_Isyn_e_pvn = StateMonitor(PVN, 'Isyn_e_pvn', record=list(range(20)))
PVN_Isyn_i_pvn = StateMonitor(PVN, 'Isyn_i_pvn', record=[0])
PYR_spktimes = SpikeMonitor(PYR, record=True)
PVN_v = StateMonitor(PVN, 'v', record=True)
PVN_u = StateMonitor(PVN, 'u', record=True)
PVN_f = StateMonitor(PVN, 'f', record=True)
PVN_spktimes = SpikeMonitor(PVN, record=True)
duration = 10 * second

MemoryError Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\codegen\runtime\numpy_rt\numpy_rt.py:279, in NumpyCodeObject.run_block(self, block)
278 try:
→ 279 exec(compiled_code, self.namespace)
280 except Exception as exc:

File <string>:6

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\monitors\statemonitor.py:342, in StateMonitor.resize(self, new_size)
341 for var in self.recorded_variables.values():
→ 342 var.resize((new_size, self.n_indices))

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\core\variables.py:648, in DynamicArrayVariable.resize(self, new_size)
647 if self.resize_along_first:
→ 648 self.device.resize_along_first(self, new_size)
649 else:

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\core\variables.py:648, in DynamicArrayVariable.resize(self, new_size)
647 if self.resize_along_first:
→ 648 self.device.resize_along_first(self, new_size)
649 else:

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\devices\device.py:527, in RuntimeDevice.resize_along_first(self, var, new_size)
526 def resize_along_first(self, var, new_size):
→ 527 self.arrays[var].resize_along_first(new_size)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\memory\dynamicarray.py:144, in DynamicArray.resize_along_first(self, newshape)
143 else:
→ 144 newdata = np.zeros(tuple(final_new_shape), dtype=self.dtype)
145 slices = getslices(self._data.shape)

MemoryError: Unable to allocate 15.6 GiB for an array with shape (1048576, 2000) and data type float64

The above exception was the direct cause of the following exception:

BrianObjectException Traceback (most recent call last)
Cell In[12], line 33
31 PYR2.Ipulse= 5 * pA
32 PYR3.Ipulse= 5 * pA
→ 33 net.run(9800*ms)
34 #PYR1.Ipulse= -2000 * pA
35 #PYR2.Ipulse= -2000 * pA
36 #PYR3.Ipulse= -2000 * pA
(…)
79 #PYRvoltage_average = np.mean(PYR_v.v, axis=0)
80 #print(type(PYRvoltage_average))
81 PYRipsc_average = np.mean(PYR_Isyn_i_pyr.Isyn_i_pyr, axis=0)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\core\base.py:335, in device_override.<locals>.device_override_decorator.<locals>.device_override_decorated_function(*args, **kwds)
333 return getattr(curdev, name)(*args, **kwds)
334 else:
→ 335 return func(*args, **kwds)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\units\fundamentalunits.py:2780, in check_units.<locals>.do_check_units.<locals>.new_f(*args, **kwds)
2770 error_message = (
2771 f"Function '{f.__name__}' "
2772 "expected a quantitity with unit "
2773 f"{unit} for argument '{k}' but got "
2774 f"'{value}'"
2775 )
2776 raise DimensionMismatchError(
2777 error_message, get_dimensions(newkeyset[k])
2778 )
→ 2780 result = f(*args, **kwds)
2781 if "result" in au:
2782 if isinstance(au["result"], Callable) and au["result"] != bool:

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\core\network.py:1207, in Network.run(self, duration, report, report_period, namespace, profile, level)
1205 else:
1206 for obj in active_objects:
→ 1207 obj.run()
1209 timestep[0] += 1
1210 t[0] = timestep[0] * dt[0]

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\core\base.py:225, in BrianObject.run(self)
223 def run(self):
224 for codeobj in self._code_objects:
→ 225 codeobj()

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\codegen\codeobject.py:118, in CodeObject.__call__(self, **kwds)
115 self.update_namespace()
116 self.namespace.update(**kwds)
→ 118 return self.run()

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\codegen\codeobject.py:146, in CodeObject.run(self)
136 def run(self):
137 """
138 Runs the main code in the namespace.
139
(…)
144 defined during the call of CodeGenerator.code_object.
145 """
→ 146 return self.run_block("run")

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\brian2\codegen\runtime\numpy_rt\numpy_rt.py:291, in NumpyCodeObject.run_block(self, block)
289 tb = tb.tb_next # Line in the code object's code
290 message += f"{lines[tb.tb_lineno - 1]}\n"
→ 291 raise BrianObjectException(message, self.owner) from exc
292 # output variables should land in the variable name _return_values
293 if "_return_values" in self.namespace:

BrianObjectException: Error encountered with object named 'statemonitor'.
Object was created here (most recent call only, full details in debug log):
File 'C:\Users\goldf\AppData\Local\Temp\ipykernel_24748\329731396.py', line 2, in <module>
PYR_v = StateMonitor(PYR, 'v', record=True)

An exception occured during the execution of the 'run' block of code object statemonitor_codeobject.
The error was raised in the following line:
_owner.resize(_new_len)

Dear @MitchG_HunterCollege, the issue here is not disk space but memory in the sense of RAM. This is a serious limitation at the moment: all recorded values are stored in RAM before being written out to disk at the end. The required memory can be quite big: for example, recording 2000 neurons with a timestep of 0.015 ms for 10 s means 2000 × 666666 values of 8 bytes (double precision) each, i.e. ≈ 10 GB for each recorded variable… Until we support recording directly to disk, there are some rather complex workarounds like the one mentioned here: Saving data to disk in the middle of a simulation in standalone mode
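The estimate above can be checked with a few lines of plain arithmetic (illustrative calculation only, not Brian2 API). Note that the failed allocation in the traceback (15.6 GiB for 1048576 rows) is even larger than the ~10 GB estimate, presumably because the monitor's dynamic array grows its row count in powers of two, and 2**20 = 1048576 is the first power of two above 666666 recorded timesteps:

```python
# Rough sanity check of the per-variable memory requirement.
n_neurons = 2000
duration_s = 10.0
record_dt_s = 0.015e-3      # defaultclock.dt = 0.015 ms
bytes_per_value = 8         # float64 (double precision)

n_samples = int(duration_s / record_dt_s)            # 666666 timesteps
per_variable_bytes = n_neurons * n_samples * bytes_per_value
print(f"{per_variable_bytes / 1e9:.1f} GB per recorded variable")  # prints "10.7 GB per recorded variable"
```

With seven state variables recorded (several of them for all 2000 neurons), the total quickly exceeds the RAM of a typical laptop.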

I think the best approach for now is to choose a bigger time step for the recordings with the dt parameter (e.g. StateMonitor(..., dt=1*ms)) and/or to record fewer neurons. Switching from double to single precision (StateMonitor(..., dtype=np.float32)) will also halve the required memory. Hope that gives you some leads!
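To see how much these levers buy you, here is a small helper comparing the current setup against the suggested one (a sketch with illustrative numbers; the function is not part of Brian2):

```python
def monitor_memory_gb(n_neurons, duration_s, record_dt_s, bytes_per_value=8):
    """Estimated RAM for one recorded StateMonitor variable (illustrative)."""
    n_samples = int(duration_s / record_dt_s)
    return n_neurons * n_samples * bytes_per_value / 1e9

# Current setup: all 2000 neurons, recording dt = 0.015 ms, float64
baseline_gb = monitor_memory_gb(2000, 10.0, 0.015e-3)
# Suggested: 20 neurons, recording dt = 1 ms, float32
reduced_gb = monitor_memory_gb(20, 10.0, 1e-3, bytes_per_value=4)
print(f"{baseline_gb:.1f} GB -> {reduced_gb * 1e3:.1f} MB")  # prints "10.7 GB -> 0.8 MB"
```

Since the memory scales linearly in each factor, even applying just one of the three suggestions (e.g. only the coarser recording dt, a ~67x reduction here) may already be enough for a 10 s run.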

Hello, Marcel. Thank you for explaining this as a RAM problem. I will try each of the options you have suggested and report back on which method(s) allow longer runs without data distortion.