Network and Simulation Issues

Description of problem

I am trying to simulate the model from Klaus Wimmer’s 2015 paper (Dynamics of sensory integration in a hierarchical network explains choice probabilities in cortical area MT) in Brian2. The model simulation code is publicly available but I am having issues translating it into Brian2.

First, the simulation code appears to be nested in an if statement, but the first line of the if statement (defaultclock.reinit()) throws an error stating 'No attribute with name reinit'. Additionally, the penultimate line throws an error stating that 'Network' object has no attribute 'prepare'.

Minimal code to reproduce problem

if __name__ == '__main__':

    #  initialize  
    defaultclock.reinit()
    

    net = Network(Dgroups.values(), Sgroups.values(), Dconnections.values(), Sconnections.values(), 
                Dnetfunctions, update_input, C_SE1_DE1, C_SE2_DE2, C_DE1_SE1, C_DE2_SE2,
                S_DE1, S_DE2, S_SE1, S_SE2, R_DE1, R_DE2, R_SE1, R_SE2)
    net.prepare()

What you have already tried

I removed the if statement and the net.prepare() line, but the code then throws another error relating to a parameter that was defined earlier.

Expected output (if relevant)

Actual output (if relevant)

Full traceback of error (if relevant)

defaultclock.reinit() throws this error:

KeyError                                  Traceback (most recent call last)
File ~/Library/Python/3.11/lib/python/site-packages/brian2/groups/group.py:351, in VariableOwner.state(self, name, use_units, level)
    350 try:
--> 351     var = self.variables[name]
    352 except KeyError as exc:

File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/variables.py:1455, in Variables.__getitem__(self, item)
   1454 def __getitem__(self, item):
-> 1455     return self._variables[item]

KeyError: 'reinit'

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
File ~/Library/Python/3.11/lib/python/site-packages/brian2/groups/group.py:387, in VariableOwner.__getattr__(self, name)
    386         use_units = True
--> 387     return self.state(name, use_units)
    389 except KeyError:

File ~/Library/Python/3.11/lib/python/site-packages/brian2/groups/group.py:353, in VariableOwner.state(self, name, use_units, level)
    352 except KeyError as exc:
--> 353     raise KeyError(f"State variable {name} not found.") from exc
    355 if use_units:

KeyError: 'State variable reinit not found.'

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
Cell In[22], line 1
----> 1 defaultclock.reinit()

File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/clocks.py:201, in DefaultClockProxy.__getattr__(self, name)
    199     return True
    200 from brian2.devices.device import active_device
--> 201 return getattr(active_device.defaultclock, name)

File ~/Library/Python/3.11/lib/python/site-packages/brian2/groups/group.py:390, in VariableOwner.__getattr__(self, name)
    387     return self.state(name, use_units)
    389 except KeyError:
--> 390     raise AttributeError(f"No attribute with name {name}")

AttributeError: No attribute with name reinit

net.prepare() throws this error:

AttributeError                            Traceback (most recent call last)
Cell In[23], line 166
    162 # construct network       
    163 net = Network(Dgroups.values(), Sgroups.values(), Dconnections.values(), Sconnections.values(), 
    164               Dnetfunctions, update_input, C_SE1_DE1, C_SE2_DE2, C_DE1_SE1, C_DE2_SE2,
    165               S_DE1, S_DE2, S_SE1, S_SE2, R_DE1, R_DE2, R_SE1, R_SE2)
--> 166 net.prepare()
    167 net.run(runtime) 

AttributeError: 'Network' object has no attribute 'prepare'

The code without the if statement and net.prepare throws this error:

KeyError                                  Traceback (most recent call last)
File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/network.py:892, in Network.before_run(self, run_namespace)
    891 try:
--> 892     obj.before_run(run_namespace)
    893 except Exception as ex:

File ~/Library/Python/3.11/lib/python/site-packages/brian2/groups/neurongroup.py:878, in NeuronGroup.before_run(self, run_namespace)
    876 def before_run(self, run_namespace=None):
    877     # Check units
--> 878     self.equations.check_units(self, run_namespace=run_namespace)
    879     # Check that subexpressions that refer to stateful functions are labeled
    880     # as "constant over dt"

File ~/Library/Python/3.11/lib/python/site-packages/brian2/equations/equations.py:1018, in Equations.check_units(self, group, run_namespace)
   1016 external -= set(all_variables.keys())
-> 1018 resolved_namespace = group.resolve_all(external, run_namespace,
   1019                                        user_identifiers=external)  # all variables are user defined
   1021 all_variables.update(resolved_namespace)

File ~/Library/Python/3.11/lib/python/site-packages/brian2/groups/group.py:731, in Group.resolve_all(self, identifiers, run_namespace, user_identifiers, additional_variables)
    730 for identifier in identifiers:
--> 731     resolved[identifier] = self._resolve(identifier,
    732                                          user_identifier=identifier in user_identifiers,
    733                                          additional_variables=additional_variables,
    734                                          run_namespace=run_namespace)
    735 return resolved

File ~/Library/Python/3.11/lib/python/site-packages/brian2/groups/group.py:691, in Group._resolve(self, identifier, run_namespace, user_identifier, additional_variables)
    689 # We did not find the name internally, try to resolve it in the external
    690 # namespace
--> 691 return self._resolve_external(identifier, run_namespace=run_namespace)

File ~/Library/Python/3.11/lib/python/site-packages/brian2/groups/group.py:814, in Group._resolve_external(self, identifier, run_namespace, user_identifier, internal_variable)
    813             error_msg = f'The identifier "{identifier}" could not be resolved.'
--> 814         raise KeyError(error_msg)
    816 elif len(matches) > 1:
    817     # Possibly, all matches refer to the same object

KeyError: 'The identifier "tau_GABA" could not be resolved.'

The above exception was the direct cause of the following exception:

BrianObjectException                      Traceback (most recent call last)
Cell In[24], line 167
    163 net = Network(Dgroups.values(), Sgroups.values(), Dconnections.values(), Sconnections.values(), 
    164               Dnetfunctions, update_input, C_SE1_DE1, C_SE2_DE2, C_DE1_SE1, C_DE2_SE2,
    165               S_DE1, S_DE2, S_SE1, S_SE2, R_DE1, R_DE2, R_SE1, R_SE2)
    166 #net.prepare()
--> 167 net.run(runtime) 

File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/base.py:293, in device_override.<locals>.device_override_decorator.<locals>.device_override_decorated_function(*args, **kwds)
    291     return getattr(curdev, name)(*args, **kwds)
    292 else:
--> 293     return func(*args, **kwds)

File ~/Library/Python/3.11/lib/python/site-packages/brian2/units/fundamentalunits.py:2462, in check_units.<locals>.do_check_units.<locals>.new_f(*args, **kwds)
   2455             error_message = (f"Function '{f.__name__}' "
   2456                              f"expected a quantitity with unit "
   2457                              f"{unit} for argument '{k}' but got "
   2458                              f"'{value}'")
   2459             raise DimensionMismatchError(error_message,
   2460                                          get_dimensions(newkeyset[k]))
-> 2462 result = f(*args, **kwds)
   2463 if 'result' in au:
   2464     if isinstance(au['result'], Callable) and au['result'] != bool:

File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/network.py:1007, in Network.run(self, duration, report, report_period, namespace, profile, level)
   1004 if namespace is None:
   1005     namespace = get_local_namespace(level=level+3)
-> 1007 self.before_run(namespace)
   1009 if len(all_objects) == 0:
   1010     return  # TODO: raise an error? warning?

File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/base.py:293, in device_override.<locals>.device_override_decorator.<locals>.device_override_decorated_function(*args, **kwds)
    291     return getattr(curdev, name)(*args, **kwds)
    292 else:
--> 293     return func(*args, **kwds)

File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/network.py:894, in Network.before_run(self, run_namespace)
    892             obj.before_run(run_namespace)
    893         except Exception as ex:
--> 894             raise BrianObjectException("An error occurred when preparing an object.", obj) from ex
    896 # Check that no object has been run as part of another network before
    897 for obj in all_objects:

BrianObjectException: Error encountered with object named 'neurongroup_6'.
Object was created here (most recent call only, full details in debug log):
  File '/var/folders/5_/t7fh3bc53l33rdmffzzqhdf4nb8xt0/T/ipykernel_84865/9675799.py', line 86, in make_integration_circuit
    decisionE = NeuronGroup(NE, model=eqsE, threshold=Vt, reset=Vr, refractory=tau_refE)

An error occurred when preparing an object. (See above for original error message and traceback.)

After some more experimentation with the code, I'm fairly certain that the final error ('The identifier "tau_GABA" could not be resolved', which has also occurred with every other parameter in the differential equations) is a unit error, but I am not sure how to resolve it, as all the units seem fine.
Sensory neuron equations:

eqs = '''
        dV/dt = (-gea*(V-VrevE) - gi*(V-VrevI) - (V-Vl)) * (1.0/tau) + I/Cm: volt (unless refractory)
        dgea/dt = (ye-gea)*(1.0/tau_decay)        : 1
        dye/dt = -ye*(1.0/tau_rise)               : 1     
        dgi/dt = (yi-gi)*(1.0/tau_decay)          : 1
        dyi/dt = -yi*(1.0/tau_rise)               : 1     
        I : amp
        tau : second
        Cm : farad 
        ''' 

Excitatory decision neurons:

'''
    dV/dt = (-gea*(V-VrevE) - gen*(V-VrevE)/(1.0+exp(-V/mV*0.062)/3.57) - gi*(V-VrevI) - (V-Vl)) / (tau): volt (unless refractory)
    dgea/dt = -gea/(tau_AMPA) : 1
    dgi/dt = -gi/(tau_GABA) : 1
    dspre/dt = -spre/(tau_NMDA_decay)+alpha_NMDA*xpre*(1-spre) : 1
    dxpre/dt= -xpre/(tau_NMDA_rise) : 1
    gen : 1
    tau : second''' 

Inhibitory decision neurons:

'''
        dV/dt = (-gea*(V-VrevE) - gen*(V-VrevE)/(1.0+exp(-V/mV*0.062)/3.57) - gi*(V-VrevI) - (V-Vl)) / (tau): volt (unless refractory)
        dgea/dt = -gea/(tau_AMPA) : 1
        dgi/dt = -gi/(tau_GABA) : 1
        gen : 1
        tau : second''' 

All of the parameter values can be found here: ModelDB: Hierarchical network model of perceptual decision making (Wimmer et al 2015)

Hi @luol3. Please have a look at the documentation for the Brian 1→2 conversion: Changes for Brian 1 users — Brian 2 2.5.4 documentation. In particular, see the part about networks and clocks: Networks and clocks (Brian 1 –> 2 conversion) — Brian 2 2.5.4 documentation.
As it states, defaultclock.reinit does not exist anymore and isn't necessary. The same is true for net.prepare.

I am not quite sure why you think this is a unit error – the error message states that it cannot resolve the name, i.e. it does not find a value for tau_GABA. This is because the way Brian 2 resolves constants in the equations is different from Brian 1: in Brian 1, they were taken from the namespace at the point where the objects were created (at least most of the time – in practice there were several issues with it), whereas in Brian 2 they are taken from the namespace at the point where run(...) (or net.run(...)) is called. This is mentioned here, and explained in more detail for Brian 2 here: Namespaces — Brian 2 2.5.4 documentation
In your example, a quick-and-dirty way to make the Brian 1 code work would be to add namespace=locals() to the construction of the NeuronGroups in the function that creates them.

Minor thing I forgot to mention:
Please enclose code and error messages in triple backticks like this (or alternatively use Ctrl+E or press the </> symbol in the editor toolbar):

```
# Python code
group = NeuronGroup(...)
```

This will prevent the formatting from going crazy with the special characters in code, and will make the code/error messages more readable in general.


Thank you @mstimberg. I tried the fix you suggested and it worked for the NeuronGroups, but even after adding namespace=locals() to the synapses, the code throws the following error. (Also, I thought namespaces only applied to parameters, yet xpre is a variable. Why is it throwing something that looks like a namespace error?)

File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/network.py:892, in Network.before_run(self, run_namespace)
    891 try:
--> 892     obj.before_run(run_namespace)
    893 except Exception as ex:

File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/base.py:293, in device_override.<locals>.device_override_decorator.<locals>.device_override_decorated_function(*args, **kwds)
    292 else:
--> 293     return func(*args, **kwds)

File ~/Library/Python/3.11/lib/python/site-packages/brian2/synapses/synapses.py:320, in SynapticPathway.before_run(self, run_namespace)
    318 @device_override('synaptic_pathway_before_run')
    319 def before_run(self, run_namespace):
--> 320     super(SynapticPathway, self).before_run(run_namespace)

File ~/Library/Python/3.11/lib/python/site-packages/brian2/groups/group.py:1137, in CodeRunner.before_run(self, run_namespace)
   1136 def before_run(self, run_namespace):
-> 1137     self.create_code_objects(run_namespace)
   1138     super(CodeRunner, self).before_run(run_namespace)

File ~/Library/Python/3.11/lib/python/site-packages/brian2/synapses/synapses.py:343, in SynapticPathway.create_code_objects(self, run_namespace)
    333     self._pushspikes_codeobj = create_runner_codeobj(self,
    334                                                      '', # no code
    335                                                      'synapses_push_spikes',
   (...)
    340                                                      template_kwds=template_kwds,
    341                                                      run_namespace=run_namespace)
    342 self.code_objects[:] = [weakref.proxy(self._pushspikes_codeobj),
--> 343                         weakref.proxy(self.create_default_code_object(run_namespace))]

File ~/Library/Python/3.11/lib/python/site-packages/brian2/groups/group.py:1112, in CodeRunner.create_default_code_object(self, run_namespace)
   1111 else:
-> 1112     self.codeobj = create_runner_codeobj(group=self.group,
   1113                                          code=self.abstract_code,
   1114                                          user_code=self.user_code,
   1115                                          template_name=self.template,
   1116                                          name=f"{self.name}_codeobject*",
   1117                                          check_units=self.check_units,
   1118                                          additional_variables=additional_variables,
   1119                                          needed_variables=self.needed_variables,
   1120                                          run_namespace=run_namespace,
   1121                                          template_kwds=self.template_kwds,
   1122                                          override_conditional_write=self.override_conditional_write,
   1123                                          codeobj_class=self.codeobj_class
   1124                                          )
   1125 return self.codeobj

File ~/Library/Python/3.11/lib/python/site-packages/brian2/codegen/codeobject.py:341, in create_runner_codeobj(group, code, template_name, run_namespace, user_code, variable_indices, name, check_units, needed_variables, additional_variables, template_kwds, override_conditional_write, codeobj_class)
    340 for v, u_v in zip(code.values(), user_code.values()):
--> 341     _, uk, u = analyse_identifiers(v, all_variables, recursive=True)
    342     identifiers |= uk | u

File ~/Library/Python/3.11/lib/python/site-packages/brian2/codegen/translation.py:95, in analyse_identifiers(code, variables, recursive)
     94 known |= STANDARD_IDENTIFIERS
---> 95 scalar_stmts, vector_stmts = make_statements(code, variables, np.float64, optimise=False)
     96 stmts = scalar_stmts + vector_stmts

File ~/Library/Python/3.11/lib/python/site-packages/brian2/utils/caching.py:101, in cached.<locals>.cached_func(*args, **kwds)
    100     func._cache_statistics.misses += 1
--> 101     func._cache[cache_key] = func(*args, **kwds)
    102 return func._cache[cache_key]

File ~/Library/Python/3.11/lib/python/site-packages/brian2/codegen/translation.py:276, in make_statements(code, variables, dtype, optimise, blockname)
    274 if statement is None:
    275     statement = Statement(var, op, expr, comment,
--> 276                           dtype=variables[var].dtype,
    277                           scalar=variables[var].scalar)
    279 line.statement = statement

KeyError: 'xpre'

The above exception was the direct cause of the following exception:

BrianObjectException                      Traceback (most recent call last)
Cell In[6], line 169
    158 net = Network(Dgroups.values(), Sgroups.values(), Dconnections.values(), Sconnections.values(), 
    159               Dnetfunctions, update_input, C_SE1_DE1, C_SE2_DE2, C_DE1_SE1, C_DE2_SE2,
    160               S_DE1, S_DE2, S_SE1, S_SE2, R_DE1, R_DE2, R_SE1, R_SE2)
    161 #net = Network( Sgroups.values(), Sconnections.values(), 
    162 #              update_input, S_SE1, S_SE2, R_SE1, R_SE2)
    163 #namespace={VrevE: 0 * mV, VrevI: -80 * mV, tau_decay = 5.0 * ms, tau_rise = 1.0 * ms, Vl = -70.0 * mV, }
   (...)
    167 #Minor namespace problem: VrevI is different between the two populations. 
    168 #So I suppose it gets defined at a population level? 
--> 169 net.run(runtime) 
    170 #Okay, it's an error with all the time constants and has nothing to do with which part of the circuit's being run.

File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/base.py:293, in device_override.<locals>.device_override_decorator.<locals>.device_override_decorated_function(*args, **kwds)
    291     return getattr(curdev, name)(*args, **kwds)
    292 else:
--> 293     return func(*args, **kwds)

File ~/Library/Python/3.11/lib/python/site-packages/brian2/units/fundamentalunits.py:2462, in check_units.<locals>.do_check_units.<locals>.new_f(*args, **kwds)
   2455             error_message = (f"Function '{f.__name__}' "
   2456                              f"expected a quantitity with unit "
   2457                              f"{unit} for argument '{k}' but got "
   2458                              f"'{value}'")
   2459             raise DimensionMismatchError(error_message,
   2460                                          get_dimensions(newkeyset[k]))
-> 2462 result = f(*args, **kwds)
   2463 if 'result' in au:
   2464     if isinstance(au['result'], Callable) and au['result'] != bool:

File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/network.py:1007, in Network.run(self, duration, report, report_period, namespace, profile, level)
   1004 if namespace is None:
   1005     namespace = get_local_namespace(level=level+3)
-> 1007 self.before_run(namespace)
   1009 if len(all_objects) == 0:
   1010     return  # TODO: raise an error? warning?

File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/base.py:293, in device_override.<locals>.device_override_decorator.<locals>.device_override_decorated_function(*args, **kwds)
    291     return getattr(curdev, name)(*args, **kwds)
    292 else:
--> 293     return func(*args, **kwds)

File ~/Library/Python/3.11/lib/python/site-packages/brian2/core/network.py:894, in Network.before_run(self, run_namespace)
    892             obj.before_run(run_namespace)
    893         except Exception as ex:
--> 894             raise BrianObjectException("An error occurred when preparing an object.", obj) from ex
    896 # Check that no object has been run as part of another network before
    897 for obj in all_objects:

BrianObjectException: Error encountered with object named 'synapses_16_pre'.
Object was created here (most recent call only, full details in debug log):
  File '/var/folders/5_/t7fh3bc53l33rdmffzzqhdf4nb8xt0/T/ipykernel_25000/2283651930.py', line 192, in make_integration_circuit
    C_XI_I=Synapses(extinputI, decisionI, 'w : 1', on_pre='xpre+=w', namespace=locals())

An error occurred when preparing an object. (See above for original error message and traceback.)

Admittedly, the error message could be better, but when it complains here about xpre, it means that the post-synaptic group does not have a variable of that name. The error is raised in this line:

C_XI_I=Synapses(extinputI, decisionI, 'w : 1', on_pre='xpre+=w', namespace=locals())

Is this the translation of

extconnI = IdentityConnection(extinputI, decisionI, 'gea', weight = gextI / gLeakI)

in the Brian1 code? If yes, then it should use gea not xpre (which does not exist in the decisionI group).

A minor note: if the weight is the same for all synapses, you don't need to define a w : 1 state variable (which will store a weight for each synapse); you can directly write something like on_pre = 'gea += gextI / gLeakI'.


It worked! Thank you so much for all your help!

Two final questions, if that’s okay.

  1. Is R_DE1 = PopulationRateMonitor(decisionE1, bin=20*ms) in Brian1 equal to R_DE1=PopulationRateMonitor.smooth_rate(decisionE1, window=flat, width=20*ms) in Brian2?
  2. After running the simulation, both of the PopulationRateMonitors for the decision populations report a rate of 10 kilohertz at t=0. Is that something I should worry about?

Model update: I was able to replicate the sensory and stimulus traces perfectly. However, my integration circuit neurons are firing at too high a rate, and I was unable to replicate the integration circuit figures from ModelDB: Hierarchical network model of perceptual decision making (Wimmer et al 2015)

I have no idea why this is happening. Integration circuit code is here: ModelDB: Hierarchical network model of perceptual decision making (Wimmer et al 2015) and my full replication is here

''' 
Creates the spiking network described in Wang 2002.

returns:
    groups, connections, update_nmda, subgroups
    
groups, connections, and update_nmda have to be added to the "Network" in order to run the simulation.
subgroups is used for establishing connections between the sensory and integration circuit; do not add subgroups to the "Network"

''' 
# Populations
f_E = 0.15                                   # Fraction of stimulus-selective excitatory neurons
N = 2000                                     # Total number of neurons
f_inh = 0.2                                  # Fraction of inhibitory neurons
NE = int(N * (1.0 - f_inh))                  # Number of excitatory neurons (1600)
NI = int(N * f_inh)                          # Number of inhibitory neurons (400)
N_D1 = int(f_E * NE)                         # Size of excitatory population D1
N_D2 = N_D1                                  # Size of excitatory population D2
N_DN = int((1.0 - 2.0 * f_E) * NE)           # Size of excitatory population DN

# Connectivity - local recurrent connections
w_p = 1.6                                    # Relative recurrent synaptic strength within populations D1 and D2
w_m = 1.0 - f_E * (w_p - 1.0) / (1.0 - f_E) # Relative recurrent synaptic strength of connections across populations D1 and D2 and from DN to D1 and D2
gEE_AMPA = 0.05 * nS                    # Weight of AMPA synapses between excitatory neurons
gEE_NMDA = 0.165 * nS                        # Weight of NMDA synapses between excitatory neurons
gEI_AMPA = 0.04 * nS                         # Weight of excitatory to inhibitory synapses (AMPA)
gEI_NMDA = 0.13 * nS                         # Weight of excitatory to inhibitory synapses (NMDA)
gIE_GABA = 1.3 * nS                          # Weight of inhibitory to excitatory synapses
gII_GABA = 1.0 * nS                          # Weight of inhibitory to inhibitory synapses
d = 0.5 * ms                                 # Transmission delay of recurrent excitatory and inhibitory connections
                                            
# Connectivity - external connections
gextE = 2.1 * nS                             # Weight of external input to excitatory neurons
gextI = 1.62 * nS                            # Weight of external input to inhibitory neurons

# Neuron model
CmE = 0.5 * nF                               # Membrane capacitance of excitatory neurons
CmI = 0.2 * nF                               # Membrane capacitance of inhibitory neurons
gLeakE = 25.0 * nS                           # Leak conductance of excitatory neurons
gLeakI = 20.0 * nS                           # Leak conductance of inhibitory neurons
Vl = -70.0 * mV                              # Resting potential
Vt = -50.0 * mV                              # Spiking threshold
Vr = -55.0 * mV                              # Reset potential
tau_refE = 2.0 * ms                          # Absolute refractory period of excitatory neurons
tau_refI = 1.0 * ms                          # Absolute refractory period of inhibitory neurons

# Synapse model
VrevE = 0 * mV                               # Reversal potential of excitatory synapses
VrevI = -70 * mV                             # Reversal potential of inhibitory synapses
tau_AMPA = 2.0 * ms                          # Decay constant of AMPA-type conductances
tau_GABA = 5.0 * ms                          # Decay constant of GABA-type conductances
tau_NMDA_decay = 100.0 * ms                  # Decay constant of NMDA-type conductances
tau_NMDA_rise = 2.0 * ms                     # Rise constant of NMDA-type conductances
alpha_NMDA = 0.5 * kHz                       # Saturation constant of NMDA-type conductances

# Inputs
nu_ext_1 = 2392 * Hz             # Firing rate of external Poisson input to neurons in population D1
nu_ext_2 = 2392 * Hz               # Firing rate of external Poisson input to neurons in population D2
nu_ext = 2400 * Hz                 # Firing rate of external Poisson input to neurons in population Dn and I
# External inputs
extinputE1 = PoissonGroup(N_D1, rates = nu_ext_1) 
extinputE2 = PoissonGroup(N_D2, rates = nu_ext_2)
extinputE3 = PoissonGroup(N_DN, rates = nu_ext)
extinputI = PoissonGroup(NI, rates = nu_ext)
eqsE = '''
dV/dt = (-gea*(V-VrevE) - gen*(V-VrevE)/(1.0+exp(-V/mV*0.062)/3.57) - gi*(V-VrevI) - (V-Vl)) / (tau): volt (unless refractory)
dgea/dt = -gea/(tau_AMPA) : 1
dgi/dt = -gi/(tau_GABA) : 1
dspre/dt = -spre/(tau_NMDA_decay)+alpha_NMDA*xpre*(1-spre) : 1
dxpre/dt= -xpre/(tau_NMDA_rise) : 1
gen : 1
tau : second
    '''
eqsI = '''
    dV/dt = (-gea*(V-VrevE) - gen*(V-VrevE)/(1.0+exp(-V/mV*0.062)/3.57) - gi*(V-VrevI) - (V-Vl)) / (tau): volt (unless refractory)
    dgea/dt = -gea/(tau_AMPA) : 1
    dgi/dt = -gi/(tau_GABA) : 1
    gen : 1
    tau : second
    '''

# Set up the integration circuit
decisionE = NeuronGroup(NE, model=eqsE, namespace=locals(), threshold='V>Vt', reset='V=Vr', refractory=tau_refE)
decisionI = NeuronGroup(NI, model=eqsI, namespace=locals(), threshold='V>Vt', reset='V=Vr', refractory=tau_refI)
decisionE.tau = CmE / gLeakE
decisionI.tau = CmI / gLeakI      
decisionE1 = decisionE[:N_D1]
decisionE2 = decisionE[N_D1:N_D1+N_D2]
decisionE3 = decisionE[N_D1+N_D2:]
#DE1 to DE1
C_DE1_DE1_AMPA = Synapses(decisionE1, decisionE1, 'w: 1', on_pre='gea+=w', namespace=locals())  #Pay attention to units. 
C_DE1_DE1_AMPA.connect() #Connect_full and connect_one_to_one are not the same thing! 
C_DE1_DE1_AMPA.w=w_p * gEE_AMPA / gLeakE
C_DE1_DE1_AMPA.delay=d
#DE2 to DE2
C_DE2_DE2_AMPA = Synapses(decisionE2, decisionE2, 'w: 1', on_pre='gea+=w', namespace=locals())   
C_DE2_DE2_AMPA.connect()
C_DE2_DE2_AMPA.w=w_p * gEE_AMPA / gLeakE
C_DE2_DE2_AMPA.delay=d
#DE1 to DE2
C_DE1_DE2_AMPA = Synapses(decisionE1, decisionE2, 'w: 1', on_pre='gea+=w', namespace=locals())   
C_DE1_DE2_AMPA.connect()
C_DE1_DE2_AMPA.w=w_m * gEE_AMPA / gLeakE
C_DE1_DE2_AMPA.delay=d
#DE2 to DE1 
C_DE2_DE1_AMPA = Synapses(decisionE2, decisionE1, 'w: 1', on_pre='gea+=w', namespace=locals())   
C_DE2_DE1_AMPA.connect()
C_DE2_DE1_AMPA.w=w_m * gEE_AMPA / gLeakE
C_DE2_DE1_AMPA.delay=d
#DE3 to DE1  
C_DE3_DE1_AMPA = Synapses(decisionE3, decisionE1, 'w: 1', on_pre='gea+=w', namespace=locals())   
C_DE3_DE1_AMPA.connect()
C_DE3_DE1_AMPA.w=w_m * gEE_AMPA / gLeakE
C_DE3_DE1_AMPA.delay=d
#DE3 to DE2
C_DE3_DE2_AMPA = Synapses(decisionE3, decisionE2, 'w: 1', on_pre='gea+=w', namespace=locals())   
C_DE3_DE2_AMPA.connect()
C_DE3_DE2_AMPA.w=w_m * gEE_AMPA / gLeakE
C_DE3_DE2_AMPA.delay=d
#DE1 to DE3
C_DE1_DE3_AMPA = Synapses(decisionE1, decisionE3, 'w: 1', on_pre='gea+=w', namespace=locals())   
C_DE1_DE3_AMPA.connect()
C_DE1_DE3_AMPA.w=gEE_AMPA / gLeakE
C_DE1_DE3_AMPA.delay=d
#DE2 to DE3
C_DE2_DE3_AMPA = Synapses(decisionE2, decisionE3, 'w: 1', on_pre='gea+=w', namespace=locals())   
C_DE2_DE3_AMPA.connect()
C_DE2_DE3_AMPA.w=gEE_AMPA / gLeakE
C_DE2_DE3_AMPA.delay=d
#DE3 to DE3
C_DE3_DE3_AMPA = Synapses(decisionE3, decisionE3, 'w: 1', on_pre='gea+=w', namespace=locals())   
C_DE3_DE3_AMPA.connect()
C_DE3_DE3_AMPA.w= gEE_AMPA / gLeakE
C_DE3_DE3_AMPA.delay=d
#DE to DI
C_DE_DI_AMPA = Synapses(decisionE, decisionI, 'w: 1', on_pre='gea+=w', namespace=locals())   
C_DE_DI_AMPA.connect()
#C_DE_DI_AMPA.connect(j='i')
C_DE_DI_AMPA.w= gEI_AMPA / gLeakI
C_DE_DI_AMPA.delay=d
#NMDA
selfnmda = Synapses(decisionE, decisionE, 'w : 1', on_pre='xpre+=w', namespace=locals())
selfnmda.connect(j='i')
selfnmda.w=1.0
selfnmda.delay=d
E1_nmda = asarray(decisionE1.spre)
E2_nmda = asarray(decisionE2.spre)
E3_nmda = asarray(decisionE3.spre)         
E1_gen = asarray(decisionE1.gen) 
E2_gen = asarray(decisionE2.gen)
E3_gen = asarray(decisionE3.gen)
I_gen = asarray(decisionI.gen)
@network_operation(when='start') #NMDA calculations 
def update_nmda():
    sE1 = sum(E1_nmda)
    sE2 = sum(E2_nmda)
    sE3 = sum(E3_nmda)
    E1_gen[:] = gEE_NMDA / gLeakE * (w_p*sE1 + w_m*sE2 + w_m*sE3)
    E2_gen[:] = gEE_NMDA / gLeakE * (w_m*sE1 + w_p*sE2 + w_m*sE3)
    E3_gen[:] = gEE_NMDA / gLeakE * (sE1 + sE2 + sE3)
    I_gen[:] = gEI_NMDA / gLeakI * (sE1 + sE2 + sE3)  

#DI to DE
C_DI_DE = Synapses(decisionE, decisionI, 'w: 1', on_pre='gi+=w', namespace=locals()) 
C_DI_DE.connect()
C_DI_DE.w=gIE_GABA / gLeakE
C_DI_DE.delay=d
#DI to DI
C_DI_DI = Synapses(decisionI, decisionI, 'w: 1', on_pre='gi+=w', namespace=locals()) 
C_DI_DI.connect()
C_DI_DI.w=gII_GABA / gLeakI
C_DI_DI.delay=d
#X1 to E1
C_X1_E1=Synapses(extinputE1, decisionE1, 'w : 1', on_pre='gea+=w', namespace=locals())
C_X1_E1.connect(j='i')
C_X1_E1.w=gextE / gLeakE #BE CAREFUL! 
C_X1_E1.delay=d
#X2 to E2 
C_X2_E2=Synapses(extinputE2, decisionE2, 'w : 1', on_pre='gea+=w', namespace=locals())
C_X2_E2.connect(j='i')
C_X2_E2.w=gextE / gLeakE
C_X2_E2.delay=d
#X3 to E3 
C_X3_E3=Synapses(extinputE3, decisionE3,  'w : 1', on_pre='gea+=w', namespace=locals())
C_X3_E3.connect(j='i')
C_X3_E3.w=gextE / gLeakE
C_X3_E3.delay=d
#XI to I
C_XI_I=Synapses(extinputI, decisionI, 'w : 1', on_pre='gea+=w', namespace=locals())
C_XI_I.connect(j='i')
C_XI_I.w=gextI / gLeakI
C_XI_I.delay=d
groups = {'DE': decisionE, 'DI': decisionI, 'DX1': extinputE1, 'DX2': extinputE2, 'DX3': extinputE3, 'DXI': extinputI}
subgroups = {'DE1': decisionE1, 'DE2': decisionE2, 'DE3': decisionE3}  
connections = {'selfnmda': selfnmda,
               'extconnE1': C_X1_E1, 'extconnE2': C_X2_E2, 'extconnE3': C_X3_E3, 'extconnI': C_XI_I, 
               'C_DE1_DE1_AMPA': C_DE1_DE1_AMPA, 'C_DE2_DE2_AMPA': C_DE2_DE2_AMPA,
               'C_DE1_DE2_AMPA': C_DE1_DE2_AMPA, 'C_DE2_DE1_AMPA': C_DE2_DE1_AMPA,
               'C_DE3_DE1_AMPA': C_DE3_DE1_AMPA, 'C_DE3_DE2_AMPA': C_DE3_DE2_AMPA,
               'C_DE1_DE3_AMPA': C_DE1_DE3_AMPA, 'C_DE3_DE3_AMPA': C_DE3_DE3_AMPA,
               'C_DE_DI_AMPA': C_DE_DI_AMPA, 'C_DI_DE': C_DI_DE, 'C_DI_DI': C_DI_DI}
return groups, connections, update_nmda, subgroups

Found the error. C_DI_DE = Synapses(decisionE, decisionI, 'w: 1', on_pre='gi+=w', namespace=locals())
was accidentally written with the source and target neurons flipped. The code is fixed now.

Even after the incorrect connection was fixed, both PopulationRateMonitors for the decision populations still report the rate at t=0 as 10 kilohertz (every single neuron in populations D1 and D2 spikes at t=0), and I don’t know why that’s still happening. Sorry for all the spam.

Glad you worked things out! BTW: It would be great to have this example as an addition to the examples in the Brian2 documentation, please feel free to open a pull request for it (our development process is described in the developer documentation, but let me know if you don’t have any experience with git/github but would still like to contribute the example).

This is most likely because of a missing initialization of the membrane potential of the neurons. By default, all uninitialized variables start at 0, and 0mV for the membrane potential is far above the firing threshold, so the neurons will spike immediately. Some simulations simply ignore everything that happens in the first 100ms or so to avoid dealing with these kinds of transient effects, but a cleaner solution would be to add something like

decisionE.v = Vl
decisionI.v = Vl

after the definition of the NeuronGroups, to have all neurons start at the leak reversal potential, or – even better, something like

decisionE.v = "Vl + rand()*(Vt - Vl)"
decisionI.v = "Vl + rand()*(Vt - Vl)"

which sets each neuron’s membrane potential to a random value between the leak reversal potential and the threshold. This makes sure that the activity is heterogeneous right from the start.

The initial condition for the decision neurons is here:

decisionE.V = decisionE.V + rand(decisionE.__len__()) * 2 * mV
decisionI.V = decisionI.V + rand(decisionI.__len__()) * 2 * mV

But that still seems very high.

Hmm, this is from the Brian1 code, right? This is wrong, since it refers to the current value of V on the right-hand side (i.e. 0mV for an uninitialized membrane potential). So this would give initial values between 0mV and 2mV, which does not make a lot of sense (all values are above threshold). So maybe the issue that all neurons spike in the first time step was actually happening in the Brian1 code as well? In that case, if you want to replicate it perfectly, you should probably leave it as it is – but if you want to do what I think the original author wanted to do, then it should use Vl (or just a hardcoded -70*mV if you don’t have access to Vl at that point) instead of decisionE.V on the right-hand side. And again a minor thing:
While (both in Brian1 and in Brian2) you can use

decisionE.V = Vl + rand(decisionE.__len__()) * 2 * mV

the more “idiomatic” way to write this in Brian 2 would be

decisionE.V = 'Vl + rand()*2*mV'

(when running this repeatedly, there’s also a minor difference between the two solutions regarding seeds and so on, but for a single run they are basically equivalent).
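To see concretely why the original initialization line misfires, here is a small NumPy-only sketch (no Brian needed; the -50 mV threshold value is the one mentioned later in this thread, and the array simply stands in for the group’s membrane potentials):

```python
import numpy as np

mV = 1e-3                # express voltages in volts
Vt = -50 * mV            # firing threshold (value mentioned in this thread)

v = np.zeros(100)        # uninitialized membrane potentials start at 0 mV

# The Brian 1 line effectively computes v = v + rand(N)*2*mV,
# so with v starting at 0 this yields values in [0 mV, 2 mV):
v = v + np.random.rand(100) * 2 * mV

print(np.all(v >= Vt))   # → True: every neuron starts above threshold
```

Since every value is far above -50 mV, all neurons fire in the very first time step, which is exactly the 10 kHz transient reported above.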

I emailed the original author to see what he intended with that line of code. Also, I’m still not sure if the correct translation of R_DE1 = PopulationRateMonitor(decisionE1, bin=20*ms) into Brian2 is

R_DE1 = PopulationRateMonitor(decisionE1), R_DE1.smooth_rate(window=flat, width=20*ms)

I actually happen to know Klaus quite well (we did our PhD together) :blush: I am fairly sure that this specific initialization line is a mistake, but let us know what he says :wink:

I assume you mean to define the monitor as

R_DE1 = PopulationRateMonitor(decisionE1)

and then after the simulation access the results as

R_DE1.smooth_rate(window='flat', width=20*ms)

It is not exactly the same as the Brian 1 code (Brian 1 also had a smooth_rate function with the window='flat' option, which isn’t used here). The Brian 1 code bins the rates into 20ms bins, so you have one value for the population rate every 20ms. This isn’t available in Brian 2 anymore, but it is also not the best approach, since the results are fragile with respect to the bin boundaries (e.g. having high firing rates at 19.7ms and 19.9ms results in very different binned rates than having high firing at 19.9ms and 20.1ms). The approach with smooth_rate instead uses a sliding window.
So if you imagine a firing rate over 6 time steps of [10, 20, 30, 0, 20, 40], binning with a bin size of 3 would give you the rates [20, 20] as a result (average firing rates in [10, 20, 30] and [0, 20, 40]), whereas the flat window would give you [15, 20, 16.66, 16.66, 20, 30] (averages over each time step and its neighbors; although I don’t know off the top of my head whether it deals with the borders like that or whether it pads the rates with zeros). Hope that makes the difference clear!
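These two averaging schemes can be sanity-checked with plain NumPy (assuming the flat window shrinks at the array borders, averaging only over the available neighbors — one plausible convention, as noted above):

```python
import numpy as np

rates = np.array([10, 20, 30, 0, 20, 40], dtype=float)

# Binning with bin size 3: average over non-overlapping chunks
binned = rates.reshape(-1, 3).mean(axis=1)
# binned -> [20., 20.]

# Sliding flat window of width 3: average each time step with its
# neighbors, shrinking the window at the borders of the array
flat = [rates[max(0, i - 1):i + 2].mean() for i in range(len(rates))]
# flat -> [15., 20., 16.67, 16.67, 20., 30.] (rounded)
```

The reshape trick is the same one used for the Brian 1 equivalent of bin=20*ms further down in this thread.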

@mstimberg It does. So the closest thing to the Brian1 code would be just taking the firing rate stored by R_DE1 every 20ms and plotting it?

Also, Wimmer said that the initialization line was a mistake and the neurons are supposed to have initial voltages between Vreset (-55 mV) and Vthreshold (-50 mV), so I implemented that.

EDIT: after fixing the current injection code, when dt=1ms, the sensory neurons seem overactive and none of the decision neurons fire. When dt=0.1ms, the decision neurons do fire. (I don’t think the current injection code is a problem. I also have the same random seeds as the modelDB code.)

My rate traces:

[figures: Sensory and Integration rate traces]

Wimmer’s rate traces:

[figures: Sensory and Integration rate traces]

His traces seem smoothed, but how do I find the smoothing mechanism? (They were plotted in Brian with the plot function and no other inputs.)

Not exactly (I should probably add this to the Brian 1→2 documentation): the exact equivalent of bin=20*ms in Brian 1 would be to do this (assuming the total runtime is a multiple of 20ms):

monitor = PopulationRateMonitor(neurons)  # records the rate every dt
# … do simulation
bins = int(20*ms/defaultclock.dt)
binned_t = monitor.t[::bins]
binned_rate = np.reshape(monitor.rate, (-1, bins)).mean(axis=1)  # Average over values in each bin

To make this clearer (and to convince myself that I am not making any mistake myself), here’s an example with Poisson inputs with a sinusoidal rate:

from brian2 import *

input_spikes = PoissonGroup(100, rates='50*Hz + sin(2*pi*5*Hz*t)*50*Hz')
spike_mon = SpikeMonitor(input_spikes)
rate_mon = PopulationRateMonitor(input_spikes)
run(400*ms)

# Bin rates
bins = int(20*ms/defaultclock.dt)
binned_t = rate_mon.t[::bins]
binned_rate = np.reshape(rate_mon.rate/Hz, (-1, bins)).mean(axis=1)

fig, axs = plt.subplots(4, 1, sharex=True)
axs[0].plot(spike_mon.t/ms, spike_mon.i, '.k')
axs[1].plot(rate_mon.t/ms, rate_mon.rate/Hz, label='raw rate')
axs[1].plot(rate_mon.t[::bins]/ms, rate_mon.rate[::bins]/Hz, 'o-', label='raw rate (sampled)')
axs[1].plot(rate_mon.t/ms, (50*Hz + np.sin(2*np.pi*5*Hz*rate_mon.t)*50*Hz)/Hz, color='gray', lw=2, alpha=0.5, label='_hidden')
axs[1].legend()
axs[2].plot(binned_t/ms, binned_rate, 'o-', ms=3, label='binned rate', color='C2')
axs[2].plot((binned_t + 10*ms)/ms, binned_rate, 'o-', ms=3, label='binned rate (shifted)', color='C3')
axs[2].plot(rate_mon.t/ms, (50*Hz + np.sin(2*np.pi*5*Hz*rate_mon.t)*50*Hz)/Hz, color='gray', lw=2, alpha=0.5, label='_hidden')
axs[2].legend()
axs[3].plot(rate_mon.t/ms, rate_mon.smooth_rate(window='flat', width=20*ms)/Hz,
               label='flat', color='C4')
axs[3].plot(rate_mon.t/ms, rate_mon.smooth_rate(window='gaussian', width=20*ms)/Hz,
               label='Gaussian', color='C5')
axs[3].plot(rate_mon.t/ms, (50*Hz + np.sin(2*np.pi*5*Hz*rate_mon.t)*50*Hz)/Hz, color='gray', lw=2, alpha=0.5, label='_hidden')
axs[3].legend()
axs[3].set_xlabel('Time (ms)')

plt.show()

The lower three plots each show the underlying rate of the Poisson processes in gray for comparison. As you can see, the raw rate is very noisy, and sampling from it every 20ms is very unlikely to work (as in my example).

The “binned rate” in the following panel is the equivalent of the Brian 1 monitor with bin=20*ms. As you can see, the line is shifted with respect to the underlying rate if plotted like this, since the value at t=0ms stores the values for [0ms, 20ms[, the value at t=20ms stores the values for [20ms, 40ms[, etc. A better way to plot it is to shift the rate by 10ms (as in the “binned rate (shifted)” plot), or to use bars/horizontal lines showing the same value for each 20ms interval.

Finally, the last panel shows the smooth_rate function with a flat and a Gaussian window (the Gaussian window is maybe not the best comparison here, since I used width=20*ms, which for a Gaussian corresponds to the standard deviation of the Gaussian). Hope that makes all this clearer.

You cannot expect the random seed to have the same effect on Brian 1 and Brian 2, since this would only work if all random numbers of the simulation were produced in the exact same order. Even within Brian 2, reproducibility depends on details such as the code generation method (see Compatibility and reproducibility — Brian 2 2.5.4 documentation).

@mstimberg Thank you!

The binned sensory trace is fine. The binned integration trace has the neurons fire less often than those in both the figure on modelDB and the figure in Wimmer’s paper, which was generated under the same parameters. (My integration/decision neurons fire at rates around 1 Hz and don’t seem to be showing any integration behavior; his neurons fire at rates around 20-40 Hz.)

I think it’s because the NMDA-based synapses aren’t working and have a constant value of zero. Code below.

selfnmda = Synapses(decisionE, decisionE, 'w : 1', on_pre='xpre+=w', namespace=locals())
selfnmda.connect(j='i')
selfnmda.w=1.0
selfnmda.delay=d
E1_nmda = asarray(decisionE1.spre)
E2_nmda = asarray(decisionE2.spre)
E3_nmda = asarray(decisionE3.spre)
E1_gen = asarray(decisionE1.gen)
E2_gen = asarray(decisionE2.gen)
I_gen = asarray(decisionI.gen)
E3_gen = asarray(decisionE3.gen)

@network_operation(when='start') #NMDA calculations. Is this what isn't working?
def update_nmda():
    sE1 = sum(E1_nmda)
    sE2 = sum(E2_nmda)
    sE3 = sum(E3_nmda)
    E1_gen[:] = gEE_NMDA / gLeakE * (w_p*sE1 + w_m*sE2 + w_m*sE3)
    E2_gen[:] = gEE_NMDA / gLeakE * (w_m*sE1 + w_p*sE2 + w_m*sE3)
    E3_gen[:] = gEE_NMDA / gLeakE * (sE1 + sE2 + sE3)
    I_gen[:] = gEI_NMDA / gLeakI * (sE1 + sE2 + sE3)

The differential equations defining xpre and spre are below and are part of the NeuronGroup equations in the code. Would moving them to the Synapses group help?

dspre/dt = -spre/tau_NMDA_decay + alpha_NMDA*xpre*(1-spre) : 1
dxpre/dt = -xpre/tau_NMDA_rise : 1

Oh, indeed, this is an issue with the update_nmda network operation. In Brian 2, you actually wouldn’t write it like this anymore (instead you’d use a “summed variable”) – but let’s leave that aside for the moment. The problem is in the asarray conversion, which makes use of the fact that Brian 1 returns a view into the array storing the actual values, so updating the values via this view updates the actual values stored in the groups. In contrast, this function returns a copy in Brian 2, since the respective groups are subgroups. I am not 100% sure, but possibly the Brian 1 behavior was actually better/more consistent here. Regardless, I wouldn’t recommend writing the code like this in either Brian 1 or Brian 2, since it relies on technical details of how the data is stored. Instead, please try this version that directly accesses the respective neuronal variables:

# Replaces everything starting from E1_nmda = asarray(...)
@network_operation(when='start') #NMDA calculations
def update_nmda():
    sE1 = sum(decisionE1.spre[:])
    sE2 = sum(decisionE2.spre[:])
    sE3 = sum(decisionE3.spre[:])
    decisionE1.gen[:] = gEE_NMDA / gLeakE * (w_p*sE1 + w_m*sE2 + w_m*sE3)
    decisionE2.gen[:] = gEE_NMDA / gLeakE * (w_m*sE1 + w_p*sE2 + w_m*sE3)
    decisionE3.gen[:] = gEE_NMDA / gLeakE * (sE1 + sE2 + sE3)
    decisionI.gen[:] = gEI_NMDA / gLeakI * (sE1 + sE2 + sE3)
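The view-vs-copy distinction behind this bug can be illustrated with plain NumPy (this only mirrors the underlying mechanism; the variable names are made up for illustration and are not Brian internals):

```python
import numpy as np

values = np.arange(10.0)   # stands in for the state array of a full group

# Basic slicing returns a *view*: writing through it modifies the
# original array (this is what the Brian 1 asarray trick relied on)
view = values[2:5]
view[:] = -1.0
print(values[2])           # → -1.0: the original was updated

# Fancy indexing returns a *copy*: writing to it leaves the original
# untouched (analogous to subgroup variable access returning a copy)
copy = values[[6, 7, 8]]
copy[:] = -2.0
print(values[6])           # → 6.0: the original is unchanged
```

This is why the network operation silently did nothing: the gen arrays it wrote to were detached copies, not the state variables the neurons actually read.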

PS: If you get this to work and reproduce the Brian 1 results reasonably well, would you maybe be interested in collaborating with me on turning the whole story of going from the Brian 1 to the Brian 2 code into a blog post? There are still things to do to make the example use more idiomatic Brian 2 style (e.g. replacing the network operations), and I think it could be a very instructive example for others struggling to convert Brian 1 code.
