Saving data to disk in the middle of a simulation in standalone mode

Description of problem

Hello everyone. I am running a very long simulation that is RAM-intensive. In my current code, I am not using the C++ standalone device directly, but am instead translating my model to run on a GPU using the brian2cuda module. Currently, I have code that looks something like the snippet below (for brevity, the equations and the total number of groups/monitors are omitted, as the specific equations do not matter in this case).

The problem I am facing occurs at the `np.savetxt()` call: Brian2 raises a NotImplementedError, stating that in standalone mode it is not possible to access the values of state variables before the simulation has been run.

I am a bit confused by the documentation, because I do not wish to run independent simulations, but rather to split a single simulation into multiple run() calls to optimize resource usage. Unfortunately, the model has tens of thousands of neurons (and millions of synapses), and it is not feasible to run a 10-second simulation in one go, especially if I am to monitor all the important information. Any advice or pointers in the right direction are more than welcome! Thanks in advance.

Minimal code to reproduce problem

from brian2 import *
import brian2cuda  # registers the 'cuda_standalone' device
import numpy as np
import time

# Standalone device; build_on_run=False since device.build() is called manually below
set_device('cuda_standalone', build_on_run=False)

# Neuron Group initialization
G = NeuronGroup(...)

# Make a network and add the group 
net = Network()
net.add(G)

# Preparation for simulations
t_run = 10 * second
t_step = 0.1 * second

start = time.time()
while t_run > 0*second:

  # Volatile monitor Initialization
  M = StateMonitor(G, 'v', record=True)
  net.add(M)

  # Run a simulation step
  net.run(t_step, report='text', report_period=5*second, profile=True)
  t_run -= t_step

  # Write data to disk
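  # Accessing M.v below is what raises the NotImplementedError in standalone mode:
  # state-variable values are only available after the simulation has actually run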
  with open('data.txt', 'a') as f:
    np.savetxt(f, M.v, fmt='%.8f')
    f.write('\n')

  # Remove the monitor from the net and delete it
  net.remove(M)
  del M

# Mandatory build call for multiple run() calls
device.build(directory='output', compile=True, run=True, debug=False)

What you have already tried

I followed the solutions provided here and in the Brian2 User’s Guide section “Freeing up memory in long recordings”.

Hi @nvar. Indeed, this approach does not work, since all the run calls will end up together in a single C++/CUDA file – you cannot run Python code in between. This is a topic that we’ve been discussing for quite a while; unfortunately, we still do not have an elegant solution for it (if anyone wants to work on it, I’d be happy to give a few pointers :blush: ). As a minor note: the StateMonitor is recording everything with the default time step – quite often this is excessive, e.g. using dt=1*ms would directly reduce the required memory by a factor of 10.
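
For illustration, the coarser recording would look like this (a minimal sketch; 1 ms is just an example value, to be chosen according to how finely you need to resolve v):

# Record v every 1 ms instead of at every simulation time step
M = StateMonitor(G, 'v', record=True, dt=1*ms)
net.add(M)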

But as a more complete solution/workaround for now, the best approach would be to write your own custom recording function in C++ instead of using the StateMonitor. Slightly adapted from my answer here, you could add the following function definition to your script:

@implementation('cpp','''
// Note that functions always need a return value at the moment
double store_state(int i, double time, double value) {
    static std::ofstream f("/some/path/values.txt");  // opens the file the first time
    f << i << "\t" << time << "\t" << value << "\n";
    return 0.;  // unused
}
''')
@check_units(i=1, t=second, value=volt, result=1)
def store_state(i, t, value):
    raise NotImplementedError('Use standalone mode')

Instead of adding a StateMonitor, your script would call the function like this:

G.run_regularly("dummy = store_state(i, t, v)", dt=defaultclock.dt)

This will call your function at every time step for every neuron, and write the values to a text file.
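
For reference, here is a rough sketch of how the pieces could fit together, with store_state defined as above. Like the function itself, this is untested, and the group size, the toy equation, and the dummy : 1 variable used as an assignment target are placeholders rather than part of your actual model:

from brian2 import *
import brian2cuda  # registers the 'cuda_standalone' device

set_device('cuda_standalone')

# ... store_state defined as above ...

# Toy group; 'dummy' only exists to receive the (unused) return value of store_state
G = NeuronGroup(100, '''dv/dt = (10*mV - v) / (10*ms) : volt
                        dummy : 1''')

# Write i, t and v to the text file at every time step instead of using a StateMonitor
G.run_regularly('dummy = store_state(i, t, v)', dt=defaultclock.dt)

run(10*second, report='text')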

(I didn’t run the above function, so there might be minor errors)

Does that sound like a possible approach for you?

Hi @mstimberg, thank you very much for the prompt reply. Both solutions sound good, and I will probably end up implementing both. I’ll start working on it, run some tests with the @implementation decorator, and post updates on my progress! Many thanks!

I just noticed that I copy-and-pasted part of the function definition in the wrong place; please see the updated version above.