How can Brian2 be used to realize a time-series prediction?

Description of problem

The goal of my project is to use the Oger Extended Toolkit (GitHub - tilemmpon/Extended-OGER: Enriching ORGANIC Reservoir Computing Engine (Oger) with new models of the ESN state of the art) to realize a time-series prediction. I want to use BrianIFReservoirNode as a network layer. The implementation was originally written for Brian1, and I have adjusted it to use Brian2. After running into and fixing several small issues, I am now stuck on the following error:

RuntimeError: neurongroup has already been simulated, cannot add it to the network. If you were trying to remove and add an object to temporarily stop it from being run, set its active flag to False instead.

This is raised when I try to run the network on my test set; before that, the network trains successfully. I tried using store/restore between the training and testing phases, but the error persists and I'm not sure how to deal with it.

Minimal code to reproduce problem

My source code is the following:

First, the BrianIFReservoirNode class (ignore the hardcoded values; for now I just need to make it run):

class BrianIFReservoirNode(mdp.Node):

    def __init__(self, input_dim, output_dim, dtype, input_scaling=100,
                 input_conn_frac=.5, dt=1, we_scaling=2, wi_scaling=.5,
                 we_sparseness=.1, wi_sparseness=.1):
        super(BrianIFReservoirNode, self).__init__(input_dim=input_dim,
                                                   output_dim=output_dim,
                                                   dtype=dtype)
        self.taum = 20 * brian.ms
        self.taue = 5 * brian.ms
        self.taui = 10 * brian.ms
        self.Vt = 15 * brian.mV
        self.Vr = 0 * brian.mV
        self.frac_e = .75
        self.input_scaling = input_scaling
        self.input_conn_frac = input_conn_frac
        self.dt = dt * brian.ms
        self.we_scaling = we_scaling
        self.wi_scaling = wi_scaling
        self.we_sparseness = we_sparseness
        self.wi_sparseness = wi_sparseness
        self.eqs = brian.Equations('''dV/dt  = (I-V+ge-gi)/taum : volt
                                      dge/dt = -ge/taue : volt
                                      dgi/dt = -gi/taui : volt
                                      I = I_recorded : volt''',
                                   taum=self.taum, taue=self.taue, taui=self.taui)
        th = 'V > ' + str(self.Vt)
        rst = 'V = ' + str(self.Vr)
        th = 'V > 15*mV'
        rst = 'V = 0*mV'
        self.G = brian.NeuronGroup(N=output_dim, model=self.eqs, threshold=th,
                                   reset=rst, method='euler')
        #self.Ge = self.G.subgroup(int(scipy.floor(output_dim * self.frac_e))) # Excitatory neurons
        #self.Gi = self.G.subgroup(int(scipy.floor(output_dim * (1 - self.frac_e))))
        self.Ge = self.G[:int(scipy.floor(output_dim * self.frac_e))]
        self.Gi = self.G[int(scipy.floor(output_dim * (1 - self.frac_e))):]
        #self.internal_conn = brian.Connection(self.G, self.G)
        self.we = self.we_scaling * scipy.random.rand(len(self.Ge), len(self.G)) * brian.nS
        self.wi = self.wi_scaling * scipy.random.rand(len(self.Gi), len(self.G)) * brian.nS
        self.internal_conn = brian.Synapses(self.G, self.G, 'w : siemens', on_pre='ge += w')
        self.internal_conn.connect()
        #self.internal_conn.w = 1 * brian.nS
        #self.Ce = brian.Connection(self.Ge, self.G, 'ge', sparseness=self.we_sparseness, weight=self.we)
        #self.Ci = brian.Connection(self.Gi, self.G, 'gi', sparseness=self.wi_sparseness, weight=self.wi)
        self.Ce = brian.Synapses(self.Ge, self.G, 'w : siemens', on_pre='ge += w')
        srcCe, trgCe = self.we.nonzero()
        self.Ce.connect(p=self.we_sparseness, i=srcCe, j=trgCe)
        #self.Ce.w = 1 * brian.nS
        #TODO: add weights
        self.Ci = brian.Synapses(self.Gi, self.G, 'w : siemens', on_pre='gi += w')
        srcCi, trgCi = self.we.nonzero()
        self.Ci.connect(p=self.wi_sparseness, i=srcCi, j=trgCi)
        #self.Ci.w = 1 * brian.nS
        #TODO: add weights
        #self.internal_conn.connect(self.G, self.G, self.w_res)
        #dt_obj = 10 * brian.ms
        self.Mv = brian.StateMonitor(self.G, 'V', record=True)
        self.Ms = brian.SpikeMonitor(self.G, record=True)
        self.w_in = self.input_scaling * (scipy.random.rand(self.output_dim, self.input_dim)) * (scipy.random.rand(self.output_dim, self.input_dim) < self.input_conn_frac)
        #I_recorded = brian.TimedArray(100 * scipy.dot(x, self.w_in.T) * brian.mV, dt=1 * brian.ms)
        self.network = brian.Network(self.G, self.Ce, self.Ci, self.Ge, self.Gi, self.Mv, self.Ms)

    def is_trainable(self):
        return False

    def is_invertible(self):
        return False

    def _get_supported_dtypes(self):
        return ['float32', 'float64']

    def _execute(self, x):
        #self.G.I = brian.TimedArray(10000 * x * brian.mV, dt=1 * brian.ms)
        #I_recorded = brian.TimedArray(10000 * x * brian.mV, dt=1 * brian.ms)
        I_recorded = 100 * brian.mV
        #I_recorded = brian.TimedArray(100 * scipy.dot(x, self.w_in.T) * brian.mV, dt=1 * brian.ms)
        #self.G.I = brian.TimedArray(100 * scipy.dot(x, self.w_in.T) * brian.mV, dt=1 * brian.ms)
        self.network = brian.Network(self.G, self.Mv, self.Ms)
        self.network.store(filename="reservoir.pk")
        #self.network.restore()
        self.network.run((x.shape[0] + 1) * brian.ms)
        return self.Mv.V[:, 0:x.shape[0]].T

The main function:

import mdp
import Oger.nodes.spiking_nodes as lsm

readout = RidgeRegressionNode(0.001)
reservoir = lsm.BrianIFReservoirNode(input_dim, output_dim=res_size, input_conn_frac=.5,
                                     input_scaling=100, dt=1, dtype=float, we_scaling=1,
                                     wi_scaling=1, we_sparseness=.05, wi_sparseness=.1)
mnnode = Oger.nodes.MeanAcrossTimeNode()
flow = mdp.Flow([reservoir, readout, mnnode], verbose=1)
#flow = fl([reservoir, readout, mnnode], freerun_steps='non-teacher-forced')
#data = [None, zip(train_x, train_y), None]
data = [None, zip(train_x, train_y), None]

blockPrint()
t0 = time.time()
flow.train(data)
train_time = time.time() - t0

flow[0].network.restore(filename="reservoir.pk")
t0 = time.time()
y_hat = flow(test_x)
predi_time = time.time() - t0

The error occurs at the line: y_hat = flow(test_x)

What you have already tried

So far, I have tried the following:

  1. Calling store after the network is created and then restore after training => same error
  2. Setting the NeuronGroup's active flag to False after training => same error (see the sketch below)
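
For reference, a minimal sketch of what attempt 2 means in Brian2 (the equations here are a toy example, not the reservoir from the question; an inactive object stays in the Network but is simply skipped during the run):

from brian2 import *

G = NeuronGroup(1, 'dv/dt = -v/(10*ms) : 1')
net = Network(G)
net.run(5*ms)
G.active = False   # G stays in the Network but is skipped while inactive
net.run(5*ms)      # runs without advancing G
G.active = True    # G is simulated again from here on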

Expected output (if relevant)

The expected behavior is for the testing phase to complete successfully.

If someone is kind enough to help me, I can provide further information as well as the source files if that helps.

Actual output (if relevant)

Full traceback of error (if relevant)

Traceback (most recent call last):
  File "", line 1, in
  File "D:\kits\Anaconda\Anaconda3\envs\Oger_LSM\lib\site-packages\mdp\linear_flows.py", line 437, in __call__
    return self.execute(iterable, nodenr=nodenr)
  File "D:\kits\Anaconda\Anaconda3\envs\Oger_LSM\lib\site-packages\mdp\linear_flows.py", line 369, in execute
    res.append(self._execute_seq(x, nodenr))
  File "D:\kits\Anaconda\Anaconda3\envs\Oger_LSM\lib\site-packages\mdp\linear_flows.py", line 350, in _execute_seq
    self._propagate_exception(e, i)
  File "D:\kits\Anaconda\Anaconda3\envs\Oger_LSM\lib\site-packages\mdp\linear_flows.py", line 125, in _propagate_exception
    raise FlowExceptionCR(errstr, self, except_)
FlowExceptionCR:

! Exception in node #0 (BrianIFReservoirNode):
Node Traceback:
Traceback (most recent call last):
  File "D:\kits\Anaconda\Anaconda3\envs\Oger_LSM\lib\site-packages\mdp\linear_flows.py", line 348, in _execute_seq
    x = flow[i].execute(x)
  File "", line 1, in
  File "D:\kits\Anaconda\Anaconda3\envs\Oger_LSM\lib\site-packages\mdp\signal_node.py", line 647, in execute
    return self._execute(self._refcast(x), *args, **kwargs)
  File "D:\kits\Anaconda\Anaconda3\envs\Oger_LSM\lib\site-packages\Oger\nodes\spiking_nodes.py", line 99, in _execute
    self.network = brian.Network(self.G, self.Mv, self.Ms)
  File "D:\kits\Anaconda\Anaconda3\envs\Oger_LSM\lib\site-packages\brian2\core\network.py", line 384, in __init__
    self.add(obj)
  File "D:\kits\Anaconda\Anaconda3\envs\Oger_LSM\lib\site-packages\brian2\core\network.py", line 483, in add
    % obj.name)
RuntimeError: neurongroup has already been simulated, cannot add it to the network. If you were trying to remove and add an object to temporarily stop it from being run, set its active flag to False instead.

Thank you for your time!

Hi @AndreiMargin. Each time you create a new Network, Brian assumes that you want to start a new simulation from scratch (i.e. at time 0s). In your case, each call to the _execute method creates a new Network, but you try to add a NeuronGroup to it that has already been simulated before (i.e. its internal clock is no longer at 0s). This leads to the error you are seeing.
I am not quite sure why you are creating a new Network in the _execute method. Do you want to continue the simulation where it ended previously? In that case, simply use self.network.run with the existing Network. Or do you want to “reset” the network to its initial values? In that case, you'd call self.network.store after creating the Network in __init__ and then call restore every time in _execute.
Hope that makes things clearer, best
Marcel
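
For concreteness, a minimal sketch of both the failure and the store/restore pattern described above (the model equations and run durations are illustrative, not taken from the thread):

from brian2 import *

G = NeuronGroup(1, 'dv/dt = -v/(10*ms) : 1')
net = Network(G)
net.store()      # snapshot the initial state at t = 0s, e.g. in __init__

net.run(1*ms)    # first input chunk; G's internal clock is now past 0s
# Network(G)     # building a second Network here raises the RuntimeError above

net.restore()    # rewind clock and state to the snapshot instead
net.run(1*ms)    # simulates the next input from the same initial state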

PS: They are still commented out, but note that TimedArray has to be used quite a bit differently in Brian2. We have some detailed conversion notes covering this topic.
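
As an illustration of the Brian2-style TimedArray usage the PS refers to (the array shape, values, and dt below are made-up example values): in Brian2 the array is called like a function of time and neuron index inside the equations, rather than being assigned to a variable as in Brian1.

from brian2 import *
import numpy as np

values = np.random.rand(1000, 10)              # (timesteps, neurons), example data
I_recorded = TimedArray(values * mV, dt=1*ms)  # one sample per millisecond
G = NeuronGroup(10, 'dV/dt = (I_recorded(t, i) - V)/(20*ms) : volt',
                threshold='V > 15*mV', reset='V = 0*mV', method='euler')
run(1000*ms)   # I_recorded(t, i) provides the per-neuron drive during the run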


Thank you very much for your explanations. I have managed to run the simulation (training and testing), but now I have another problem, and I don't know for sure whether it is related to my Brian network or to something else in my code.

What I'm doing now is training my network on a training set and then applying it to a test set. But I'm getting the same output for every input value when trying to predict a signal. My Liquid State Machine network is composed of one Brian network and one simple Ridge Regression output layer. I'm not sure whether I need to restore the network after every call; I have tried both ways and get the same output (the same value), so my predicted signal is a flat line, which of course is not the expected one.

Could this be influenced by the Equations used in the NeuronGroup? I think I'm a little confused here and don't know for sure what the simplest, most basic equation or set of equations is that would give a functional network (because I don't think mine is functional in its current state).

Hi. If you want to start the network from the same initial state before each input, then yes, you’d need to restore it before each call. Could you share the code that sets the input? I guess you are somehow providing the same input for every run, and therefore your network produces the same output.

I have an array of values representing bitrate over a period of time. Basically, I'm doing something like this to load my data:

sData, ma, mi, me = scale(data)
supData = series_to_supervised(sData, n_in, n_out)
train, test = train_test_split(supData, n_test)
train_x, train_y = [np.array(train[:, :n_in])], [np.array(train[:, n_in:])]
test_x, test_y = [np.array(test[:, :n_in])], [np.array(test[:, n_in:])]

The code above splits my data into a train set and a test set.

readout = RidgeRegressionNode()
reservoir = lsm.BrianIFReservoirNode(input_dim, output_dim=res_size, input_conn_frac=.5,
                                     input_scaling=100, dt=1, dtype=float, we_scaling=1,
                                     wi_scaling=1, we_sparseness=.05, wi_sparseness=.1)
mnnode = Oger.nodes.MeanAcrossTimeNode()
flow = mdp.Flow([reservoir, readout, mnnode], verbose=1)

My network is composed of a Brian reservoir and a simple readout.

data = [None, zip(train_x, train_y), None]
flow.train(data)  # training phase

Below is the testing phase, in which I get the same output for every distinct input. Ignore the reshape; I have to arrange the input sample as the internal network expects.

for testSampleX in test_x[0]:
    sample = np.reshape(testSampleX, (10L, 1L)).T
    y_hat.append(flow(sample))

When this is done, y_hat is an array of identical values (y_hat should hold the predicted values). I loop over all the values in my test set here because if I feed the network the entire test set at once, it gives me just one value in return, while I expect a full array of predicted values. That is why I tried looping over them: I thought this might be how the Brian node works, that maybe it needs to run separately on every single value. Sorry if I'm saying something stupid here; I'm trying to learn how to use it, but I could not find an example anywhere of how to use it in a prediction task.

And this is my current Brian reservoir node:

class BrianIFReservoirNode(mdp.Node):

    def __init__(self, input_dim, output_dim, dtype, input_scaling=100,
                 input_conn_frac=.5, dt=1, we_scaling=2, wi_scaling=.5,
                 we_sparseness=.1, wi_sparseness=.1):
        super(BrianIFReservoirNode, self).__init__(input_dim=input_dim,
                                                   output_dim=output_dim,
                                                   dtype=dtype)
        self.taum = 20 * brian.ms
        self.taue = 5 * brian.ms
        self.taui = 10 * brian.ms
        self.Vt = 15 * brian.mV
        self.Vr = 0 * brian.mV
        self.frac_e = .75
        self.input_scaling = input_scaling
        self.input_conn_frac = input_conn_frac
        self.dt = dt * brian.ms
        self.we_scaling = we_scaling
        self.wi_scaling = wi_scaling
        self.we_sparseness = we_sparseness
        self.wi_sparseness = wi_sparseness
        #self.eqs = brian.Equations('''dV/dt = (I(t,i)-V)/taum : volt''', taum=self.taum)
        self.eqs = brian.Equations('''dV/dt  = (I(t,i)-V+ge-gi)/taum : volt
                                      dge/dt = -ge/taue : volt
                                      dgi/dt = -gi/taui : volt
                                      ''', taum=self.taum, taue=self.taue, taui=self.taui)
        th = 'V > ' + str(self.Vt)
        rst = 'V = ' + str(self.Vr)
        th = 'V > 15*mV'
        rst = 'V = 0*mV'
        self.G = brian.NeuronGroup(N=output_dim, model=self.eqs, threshold=th,
                                   reset=rst, method='euler')
        self.Ge = self.G[:int(scipy.floor(output_dim * self.frac_e))]
        self.Gi = self.G[int(scipy.floor(output_dim * (1 - self.frac_e))):]
        self.we = self.we_scaling * scipy.random.rand(len(self.Ge), len(self.G)) * brian.mV
        self.wi = self.wi_scaling * scipy.random.rand(len(self.Gi), len(self.G)) * brian.mV
        self.internal_conn = brian.Synapses(self.G, self.G, 'w : volt', on_pre='ge += w')
        self.internal_conn.connect()
        self.Ce = brian.Synapses(self.Ge, self.G, 'w : volt', on_pre='ge += w')
        srcCe, trgCe = self.we.nonzero()
        self.Ce.connect(p=self.we_sparseness, i=srcCe, j=trgCe)
        self.Ci = brian.Synapses(self.Gi, self.G, 'w : volt', on_pre='gi += w')
        srcCi, trgCi = self.we.nonzero()
        self.Ci.connect(p=self.wi_sparseness, i=srcCi, j=trgCi)
        self.Mv = brian.StateMonitor(self.G, 'V', record=True)
        self.Ms = brian.SpikeMonitor(self.G, record=True)
        self.w_in = self.input_scaling * (scipy.random.rand(self.output_dim, self.input_dim)) * (scipy.random.rand(self.output_dim, self.input_dim) < self.input_conn_frac)
        self.network = brian.Network(self.G, self.Ce, self.Ci, self.Ge, self.Gi, self.Mv, self.Ms)
        self.network.store(filename="D:\\DoctoratAn1Sem1\\Articole_3rd_Article\\reservoir.pk")

    def is_trainable(self):
        return False

    def is_invertible(self):
        return False

    def _get_supported_dtypes(self):
        return ['float32', 'float64']

    def _execute(self, x):
        self.network.restore(filename="D:\\DoctoratAn1Sem1\\Articole_3rd_Article\\reservoir.pk")
        I = brian.TimedArray(100 * scipy.dot(x, self.w_in.T) * brian.mV, dt=1 * brian.ms)
        self.network.run((x.shape[0] + 1) * brian.ms)
        return self.Mv.V_[:, 0:x.shape[0]].T

I am not very sure about the Equations; I have to review them, because I was thinking of using Leaky Integrate-and-Fire ones.
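
For reference, a common textbook leaky integrate-and-fire setup in Brian2 looks roughly like the following (generic parameter values for illustration, not a tuned reservoir):

from brian2 import *

taum = 20*ms; taue = 5*ms; taui = 10*ms   # membrane and synaptic time constants
Vt = 15*mV                                # spike threshold
Vr = 0*mV                                 # reset potential

# Leaky integrate-and-fire with exponentially decaying excitatory/inhibitory
# inputs (expressed in volts, as in the code above)
eqs = '''
dV/dt  = (ge - gi - V) / taum : volt
dge/dt = -ge / taue : volt
dgi/dt = -gi / taui : volt
'''
G = NeuronGroup(100, eqs, threshold='V > Vt', reset='V = Vr', method='euler')
G.V = 'rand() * Vt'   # random initial potentials so the neurons are not all identical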

I should mention that I have used other reservoir nodes (Echo State Network nodes) on the same input dataset and they worked as expected, which is why I think it should also work with this one. Or am I getting the idea behind it wrong? Is it not possible to use Brian to create a network for time-series forecasting?

If I'm not being clear enough and you are kind enough to help me understand, I can also share the Python files with my scripts :slight_smile: Thank you very much for your time.

Actually, I figured it out. The problem was in my code: the last node in the flow, mnnode, was doing something weird. Removing it and leaving just the reservoir and the readout looks OK. Now I just have to start tuning the network until I get a good prediction. I'm thinking of working on the Equations first.

Thank you again very much for your help!


Hi. My name is Prasad, and I am from India. I am pursuing my undergraduate degree, and I am currently working on liquid state machines for classification tasks. Even after spending days researching LSMs, I am unable to find a good starting point on how they can be implemented for classification. Could you please assist me in my research? You can view my GitHub profile at “prasadgit07143 (Soma Shekar Durga Prasad Meesala) · GitHub” and you can mail me at “prasad07143.vrsec@gmail.com”. Please respond with your opinion.

How can a Liquid State Machine be implemented for a classification task using the Brian2 simulator in Python?