How to reset network monitors?

The problem

I want to reinitialize a set of monitors that were explicitly added to a Network object after a simulation of length T/N. The reason is that I’m recording a number of variables and memory usage quickly becomes enormous. So I want to chop the whole simulation into N chunks, do light post-processing on each chunk and save the important parameters, reset the monitors (clear all the recorded variables but keep the network state), and then continue the simulation with the next chunk.

I’ve gathered from previous conversations in the old Google group that it often suffices to del those objects and redefine them again. However, I shamefully don’t know how to do that here.

My code skeleton

I define my network as follows:

# imports and some configuration


net = Network()

# let's define 10 groups
G = []
for i in range(10):
    Gi = NeuronGroup(...)
    G.append(Gi)

# let's connect them somehow
adj_mat = [[...]]  # adjacency matrix
S = []
for i in range(len(adj_mat)):
    for j in range(len(adj_mat)):
        ...  # some condition based on adj_mat
        Sij = Synapses(...)
        S.append(Sij)

# let's define monitors
mons = []
for k in range(K):
    mon_k = StateMonitor(..., name=str(k))  # name for tracking monitors
    mons.append(mon_k)

net.add(G, S, mons)
# drink a coffee, come back and face your full RAM

# some post-processing
for k in range(K):
    fancy_output_from_mon_k = ...

# let's clean the monitors

# and add the originally defined ones to the network again

# and finally redo the procedure N-1 other times

What I have tried

I tried going through the items in mons (which contains all the monitors) and del-ing them one by one. I also tried deleting the whole mons list. Neither freed up the space. That’s expected, since mons and its contents are simply Python references. What I really have to delete are the objects that were added to net. I can see all the monitors in net.objects. (Btw, I tried all this in a Jupyter environment, although I don’t think it matters.)

So the question specifically would be how to modify network objects.

Any help or ideas that help me free up the space clogged by the monitors are appreciated. :slight_smile:


import pickle
# Set up network
state_mon = StateMonitor(...)
run(...)  # a long run
data = state_mon.get_states(...)
with open('', 'wb') as f:  # binary mode is required for pickle; fill in a filename
    pickle.dump(data, f)
del state_mon
del data
state_mon = StateMonitor(...)
run(...)  # another long run

could you / have you tried this?

Hi @adam , thanks for your timely reply.

I updated my question. In short, your suggestion would work perfectly if I had defined the state_mon object like you did and relied on Brian’s magic to collect it upon run. But since I have a rather involved network, it makes more sense to define all groups, synapses, and monitors in a loop and add them to the network later. And because of this very addition, I no longer know how I can access and then remove/modify those monitors from the net object.

would something like net.remove(monitor_list) followed by net.add(monitor_list) work?



No. Strangely, it didn’t. Nor did it free up the RAM.

After a simulation for some time net.remove(monitor_list) executes without any error/warning. However, when I try net.add(monitor_list) immediately after, I get a RuntimeError:

RuntimeError: syn_mon_0 has already been simulated, cannot add it to the network. If you were trying to remove and add an object to temporarily stop it from being run, set its active flag to False instead.

I have to correct my previous response. Indeed, net.remove(monitor_list) removes all the listed monitors from the network. What it doesn’t do is release the memory.

It turned out the missing step is redefining the list. Upon redefinition, Python binds the name to a new list object, which can then be added to the network without any problem. More importantly, dropping the old references releases the memory.

So in summary:

  1. net.remove(monitor_list)
  2. redefine monitor_list from scratch
  3. net.add(monitor_list)

Thanks @adam for your help!


@arashgmn could you please post here a minimalistic code example? That’s handy information, and it will be great to have a working example for reference.

@mstimberg I wish Brian had a button to reset the buffers (a function, of course :smiley: ). Well, if that isn’t possible, maybe we can add a recipe on “how to clear buffers” to the official documentation.

@rth please try this.

Note: this scheme writes to the hard disk. If you’re dealing with gigantic data or have a very slow disk, these writes may become the bottleneck of your whole simulation!

from brian2 import *
import numpy as np

thr = -10
v_reset = -60

# let's make some populations
nPops = 3
pops = []
for n in range(nPops):
    pop = NeuronGroup(
        N=5,
        model="""
        dv/dt = (-v + I)/tau : 1
        I : 1
        tau : second
        """,
        threshold='v > thr',
        reset='v = v_reset',
        method='euler',
    )
    pop.tau[:] = abs(np.random.normal(10, 3))*ms
    pop.v[:] = np.random.normal(-75, 3)
    pop.I[:] = 1*(pop.i + 1)
    pops.append(pop)

# and connect them all together with synapses
syns = []
for l in range(nPops):
    for m in range(nPops):
        syn = Synapses(source=pops[l], target=pops[m],
                       model='w : 1',
                       on_pre='v_post += w',
                       method='euler')
        syn.connect(condition='i != j', p=0.1)
        syn.w = np.random.normal(0.5, 0.02, size=len(syn))
        syns.append(syn)


net = Network(pops, syns)  # collect() would miss objects stored inside lists

# this function redefines the monitor objects conveniently
def monitor_maker(syns):
    monitors = []
    for syn in syns:
        monitors.append(StateMonitor(syn, 'w', record=True))
    return monitors

# and this one saves the recorded variables to the hard disk
# (one file per monitor per chunk, so that chunks don't overwrite each other)
def intra_sim_process(monitors, chunk_id):
    for k, monitor in enumerate(monitors):'w_{}_{}.npy'.format(k, chunk_id), monitor.w)

# let's run a simulation of total duration 1 second. We break it down into
# 10 sub-simulations of duration 100 ms each to avoid clogging the memory
for chunk in range(10):
    print("*"*10 + " CHUNK ID : {} ".format(chunk) + "*"*10 + "\n")
    # add fresh monitors
    monitors = monitor_maker(syns)
    net.add(monitors)*ms, report='text', report_period=20*ms)
    intra_sim_process(monitors, chunk)
    # detach the monitors; reassigning `monitors` in the next iteration
    # drops the last references and frees the memory
    net.remove(monitors)

If you spot anything wrong or see potential improvements, please don’t hesitate to leave a reply below. I’d be thankful.



@arashgmn thank you so much! Really useful.

This problem is well known, at least in parallel computing. The solution is pretty simple: fill one half of the memory with data, hand that data to a thread that saves it in the background, and continue computing while filling up the other half of the memory. This “butterfly” (double-buffering) switch of memory requires that producing new data is slower than saving it to “the slow hard disk”.
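To make the double-buffering idea concrete, here is a minimal, Brian-independent sketch. All names (`produce`, `save_chunk`, `saved`) are made up for illustration; a background thread drains a bounded queue and “saves” chunks while the main thread keeps producing, and the queue blocks the producer if the writer falls too far behind:

```python
import threading
import queue

saved = []

def save_chunk(chunk):
    # stand-in for writing to disk, e.g.
    saved.append(sum(chunk))

def writer(q):
    # background thread: drain the queue until the sentinel arrives
    while True:
        chunk = q.get()
        if chunk is None:  # sentinel: no more data
            break
        save_chunk(chunk)

q = queue.Queue(maxsize=2)  # at most two chunks in flight: the "two halves"
t = threading.Thread(target=writer, args=(q,))
t.start()

for chunk_id in range(5):
    chunk = [chunk_id] * 4  # stand-in for one chunk of recorded data
    q.put(chunk)            # blocks if the writer is too slow

q.put(None)  # signal the writer to stop
t.join()
```

With a single producer and a single consumer on one queue, the chunks are saved in order, so `saved` ends up as `[0, 4, 8, 12, 16]`.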

Just to give some more context: as @arashgmn correctly noted, removing monitors from the network does not free their memory, it only means that they are no longer simulated (same as when you set their active attribute to False). To free their memory, Python has to garbage-collect the objects themselves. This happens when nothing refers to an object anymore – in your case, overwriting the list drops all remaining references to the monitors. In the case of a single monitor, you’d typically do del monitor_name.
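This reference-counting point can be demonstrated without Brian at all. In this sketch the `Monitor` class and the list names are made up stand-ins; weak references let us observe that an object is only freed once *every* strong reference to it is gone, not when it is removed from one container:

```python
import gc
import weakref

class Monitor:
    """Hypothetical stand-in for a Brian monitor holding a big buffer."""
    def __init__(self):
        self.recorded = [0.0] * 1000

net_objects = []                      # plays the role of net.objects
mons = [Monitor() for _ in range(3)]  # plays the role of the monitor list
net_objects.extend(mons)

# weak references observe the monitors without keeping them alive
probes = [weakref.ref(m) for m in mons]

net_objects.clear()  # "net.remove(...)": the mons list still keeps them alive
assert all(p() is not None for p in probes)

mons = []     # redefine the list: the last strong references are dropped
gc.collect()  # CPython frees them via refcounting anyway; this makes it deterministic
assert all(p() is None for p in probes)
```

This is exactly why redefining the monitor list released the memory in the recipe above, while `net.remove` alone did not.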

I don’t know how much of this is just for illustration purposes, but a more efficient/compact version of your example code would use only a single NeuronGroup, a single Synapses object, and a single StateMonitor. Also note that you can change the time step of monitors – maybe storing the values of the synaptic weights only every 1 ms would be enough (StateMonitor(..., dt=1*ms))? This would immediately reduce the memory usage by a factor of 10 :blush:
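The factor of 10 simply comes from the ratio of sampling steps; a quick back-of-the-envelope check, assuming Brian’s default 0.1 ms simulation time step:

```python
# a StateMonitor stores duration / dt samples per recorded variable
default_dt = 0.1   # ms, Brian's default simulation time step
coarse_dt = 1.0    # ms, the suggested monitor dt
duration = 1000.0  # ms, e.g. a 1-second run

samples_default = round(duration / default_dt)  # samples at the default dt
samples_coarse = round(duration / coarse_dt)    # samples at the coarser dt
assert samples_default == 10 * samples_coarse
```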

I agree that some kind of clear/reset/reinit method could be useful – I think one reason why we don’t have this so far is that it wouldn’t quite work in standalone mode. We do already mention the workaround in the docs, though, as Adam linked earlier: Recording during a simulation — Brian 2 documentation. In general, I’d love for Monitoring: allow for flexible storage options · Issue #298 · brian-team/brian2 · GitHub to make these workarounds obsolete, but we still haven’t figured out all the details…