Using a variable before definition in user-defined function

I’m hoping to define a user-defined cpp function first and apply it to multiple brian2 objects later. In the following code, everything related to the user function is defined up front, i.e.

  # read the cpp code
  def read_spike_rate_code(spike_mon_obj):
      with open(os.path.join(current_dir, 'spike_rate.cpp')) as f:
          spike_rate_code = f.read()

      return spike_rate_code.replace('%%SPIKECOUNT%%',
          device.get_array_name(spike_mon_obj.variables['count']))

  # define the user function
  @implementation('cpp', spike_rate_code,
                  dependencies={'ta2d': ta2d})
  @check_units(scale=1, t=second, i=1, result=1)
  def spike_rate(scale, t, i):
      raise NotImplementedError('use standalone mode')

Then I tried to use it in this way:

   spike_rate_code = read_spike_rate_code(spikemon) #!!!!!
   ........
   input_neurons.run_regularly('rate=spike_rate(scale, t, i)*Hz', dt=0.2*ms)

I got the following error, because the cpp code needs to be available when the decorator is executed:

  @implementation('cpp', spike_rate_code,
  NameError: name 'spike_rate_code' is not defined

Is it possible to get around this issue? The complete code is attached. Thanks for the help.
mm.py (1.1 KB)
spike_rate.cpp.txt (238 Bytes)

Hi @DavidKing2020. Just to make sure that I understand correctly: do you want to use the same code (i.e. referring to the same spike monitor) in several places, or do you want to have multiple instantiations of the spike_rate function, each using a different spike monitor?

If you only need a single implementation of spike_rate, then one option is simply to not define it before the other objects, but after them. The spike_rate function does not need to be available when you define the run_regularly operation; it only needs to be defined before you call run.
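In other words, something along these lines should work (a sketch based on your code, not tested):

# set up the objects first
spikemon = SpikeMonitor(input_neurons)
input_neurons.run_regularly('rate=spike_rate(scale, t, i)*Hz', dt=0.2*ms)

# only now define the function; the monitor already exists,
# so its array name can be filled into the C++ code
@implementation('cpp', read_spike_rate_code(spikemon),
                dependencies={'ta2d': ta2d})
@check_units(scale=1, t=second, i=1, result=1)
def spike_rate(scale, t, i):
    raise NotImplementedError('use standalone mode')

run(duration)  # spike_rate only has to exist at this point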

Hi Marcel, I’m referring to the 2nd scenario, for example checking the spike rate of different neuron groups, each having its own SpikeMonitor.

I see. In this case, I think the simplest solution would be to 1) define one function for each group and 2) define all of them at the end, before the run call. If it is too much of a mess not to define the functions at the beginning, you could also choose the variable names manually instead of using device.get_array_name: the name of a spike monitor’s count variable is brian::_array_spikemonitor_count, with spikemonitor being the name of the SpikeMonitor object. If you set these names manually with SpikeMonitor(..., name='myname') for each monitor, you can directly put the resulting array name into the code.
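For example (a sketch with made-up names):

spikemon1 = SpikeMonitor(group1, name='smon1')
spikemon2 = SpikeMonitor(group2, name='smon2')
# the C++ code for the first function can then refer directly to
# brian::_array_smon1_count, the one for the second function to
# brian::_array_smon2_count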

Note that if all you need from the spike monitor is the spike count (we probably discussed this before, apologies :slight_smile: ), then it would probably be easier to represent this information directly in the neuron equations, with something like spike_count : integer in the equations and spike_count += 1 as part of the reset.
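A minimal sketch of that approach (made-up equations, just to show the structure):

tau = 10*ms
eqs = '''
dv/dt = -v/tau : 1
spike_count : integer
'''
G = NeuronGroup(10, eqs, threshold='v > 1',
                reset='''v = 0
                         spike_count += 1''')
# the counts can be reset between stimuli if needed, e.g.
# G.run_regularly('spike_count = 0', dt=100*ms)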

The manual option, e.g. brian::_array_spikemonitor_count, is the best one for my application. Are variables like “brian::_array_spikemonitor_count” collected in some list? For example, in the C++ code one could then search an existing list of variables and pick the one that’s needed, like the following:

     #define getName(var) #var

     std::string target_var_name = "brian::_array_smon5_count";
     int *count = nullptr;
     for (int i = 0; i < length_of_var_list; i++) {
         if (getName(var_list[i]) == target_var_name)
             count = var_list[i];
     }

where “length_of_var_list” is the length of the array and “var_list” is the list containing all global variables.

No, currently there is no list storing all the variables. But I am not quite sure I get your use case here – why put brian::_array_smon5_count into a string first, when you can directly use it as a variable name?

The actual function I’m trying to implement renormalizes the weights of multiple synapse groups every time the stimulus is switched. The same function is needed for all synapse groups. I was hoping to pass the synapse group name as an argument and do the renormalization there. For example, assuming the function is called ‘normalized_weight(syn_name)’ and defined as:

#define getName(var) #var
double *normalized_weight(std::string syn_name) {
    std::string var_name = "brian::_array_" + syn_name + "_count";
    for (int i = 0; i < length_of_var_list; i++) {
        if (getName(var_list[i]) == var_name)
            .........
    }
    return new_weight;
}

when using this function,

    for syn in syn_groups:
        syn.run_regularly('w = normalized_weight(name)', dt = sim_time)

The “brian::_array_spikemon_count” name is unique for each synapse group. To use it directly in “normalized_weight”, one would need to create a separate “normalized_weight” function for each synapse group. This can get a bit messy with a large number of synapse groups.

One could probably use std::map to connect var_name and the variable and solve the problem in C++, but a CUDA implementation could be a bit inconvenient. It would be nice to have an easier solution than this.

I see what you mean, but including w = normalized_weight(name) in the run_regularly operation will not work in any case, since Brian’s code syntax does not support strings…

But maybe let’s step back a bit, related to what I mentioned above: maybe there’s a way to do things by using more of Brian’s built-in capabilities with less hand-written code? When you normalize the synapses, what spike count information do you need exactly? The number of spikes for a single neuron, for several neurons that the synapse is connected to, or the total number of spikes in a group? And how complicated is the normalization, i.e. do you really need to write C++ code to calculate it, or could it also be written as a mathematical expression?

It would be really nice to use the built-in capabilities of Brian2. The following is the actual Python function I use for normalization in runtime mode:

 import numpy as np

 def get_matrix_from_synapse_and_normalize(syn, source_dim, target_dim, norm_scale):
     matrix = np.full((source_dim, target_dim), np.nan)
     matrix[syn.i[:], syn.j[:]] = syn.w[:]

     n_pres = np.sum(~np.isnan(matrix), axis=0)  # number of pre-neurons connected to each post-neuron
     tot_weight = np.nansum(matrix, axis=0)  # total weight onto each post-neuron
     scale = norm_scale*np.divide(n_pres, tot_weight)
     matrix *= scale
     flattened = matrix.flatten()

     return flattened[~np.isnan(flattened)]

Among the arguments, syn is a Synapses object, source_dim and target_dim are the dimensions of the connection matrix, and norm_scale is the scaling factor. The purpose of this is to prevent the weights from reaching their maximum before going through the whole sample, while at the same time not penalizing sparse, large-weight synapses.
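The usage between stimuli then looks roughly like this (a sketch; syn_groups and the scaling value stand in for my actual setup, and it assumes the synapses are stored in (i, j)-sorted order so that the flattened matrix matches Brian’s synapse ordering):

for syn in syn_groups:
    syn.w[:] = get_matrix_from_synapse_and_normalize(
        syn, len(syn.source), len(syn.target), norm_scale=0.1)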

Thanks for the details. This should indeed be possible to implement without C++ code. The total number of pre-synaptic neurons for each post-synaptic neuron is already available as N_incoming (see Synapses — Brian 2 2.5.1 documentation), so that leaves the total weight for each post-synaptic neuron. We can store this as a variable in the post-synaptic group:

eqs= '''...
total_weights : siemens  # or whatever units your weights have
'''

Let’s say we then want to normalize the synapses every 100ms. We can do this with three “chained” run_regularly operations, which:

  1. Reset the stored total weight
  2. Update the total weight
  3. Normalize the weights based on the previously calculated value

This should look something like this:

group.run_regularly('total_weights = 0*nS',
                    dt=100*ms, when='end', order=1)
synapses.run_regularly('total_weights_post += w',
                       dt=100*ms, when='end', order=2)
synapses.run_regularly('w *= norm_scale*(N_incoming / total_weights_post)',
                       dt=100*ms, when='end', order=3)

I did not try running the code, but I don’t see why it shouldn’t work.

Thanks for the further input. The suggested algorithm indeed works. I was expecting the following two sets of commands to be identical, but they produce different results. The one that produces the correct result is:

   S.run_regularly('total_weight = 0', dt = 10*ms, when='end', order=1)
   S.run_regularly('total_weight_post += w', dt = 10*ms, when='end', order=2)
   S.run_regularly('w *= 0.1*N_incoming/total_weight_post', dt = 10*ms, when='end', order=3)

[plot: w over time, showing the expected result]

and the one that produces the incorrect results is:

  S.run_regularly('''
              total_weight = 0
              total_weight_post += w
              w *= 0.1*N_incoming/total_weight_post
              ''', dt = 10*ms, when='end')

[plot: w over time, showing the incorrect result]

I need to either add “_post” to all occurrences of “total_weight” or remove “_post” from all of them to produce the correct results. Here is a script to reproduce the above results:

  from brian2 import *
  %matplotlib inline

  start_scope()

  eqs = '''
  dv/dt = (I-v)/tau : 1
  I : 1
  tau : second
  total_weight: 1
  '''
  G = NeuronGroup(3, eqs, threshold='v>1', reset='v = 0', method='exact')
  G.I = [2, 0, 0]
  G.tau = [10, 100, 100]*ms

  # Comment these two lines out to see what happens without Synapses
  S = Synapses(G, G, 'w : 1', on_pre='v_post += w')
  S.connect()
  S.w = '1'

  ### correct results
  #S.run_regularly('total_weight = 0', dt = 10*ms, when='end', order=1)
  #S.run_regularly('total_weight += w', dt = 10*ms, when='end', order=2)
  #S.run_regularly('w *= 0.1*N_incoming/total_weight', dt = 10*ms, when='end', order=3)

  #S.run_regularly('''
  #                  total_weight = 0
  #                  total_weight += w
  #                  w *= 0.1*N_incoming/total_weight
  #                  ''', dt = 10*ms, when='end')

  ### wrong results
  S.run_regularly('''
                    total_weight = 0
                    total_weight_post += w
                    w *= 0.1*N_incoming/total_weight_post
                    ''', dt = 10*ms, when='end')


  print(S.N_incoming)

  M = StateMonitor(G, 'v', record=True)
  MS = StateMonitor(S, 'w', record = [0])

  run(50*ms)

  plot(MS.t/ms, MS.w[0])
  xlabel('Time (ms)')
  ylabel('w')

There was a reason why I wrote this as three run_regularly operations instead of one :upside_down_face:
Each run_regularly operation is like a loop over neurons or synapses, and there is a difference between (my code):

  • loop over all post-synaptic neurons:
    • set total_weights to 0
  • loop over all synapses:
    • add weight to post-synaptic total_weights
  • loop over all synapses:
    • normalize weight based on total_weights

and (your code):

  • loop over all synapses:
    • set post-synaptic total_weights to 0
    • add weight to post-synaptic total_weights
    • normalize weight based on total_weights

In the second variant, a new “total” weight is created for each synapse and used to normalize only that synapse. Several synapses targeting the same post-synaptic neuron will not combine their weights.
The three run_regularly operations therefore need to be distinct, since you need to first reset the total_weights variable for all neurons, then calculate total_weights for all neurons, and only then you can normalize the weights.
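To make the difference concrete, here is the same logic written out in plain Python (a sketch with a made-up network of 6 synapses onto 3 neurons; post holds the post-synaptic target of each synapse):

import numpy as np

post = np.array([0, 1, 1, 2, 2, 2])    # post-synaptic target of each synapse
N_incoming = np.array([1, 2, 3])       # incoming synapses per post-synaptic neuron
w1 = np.arange(6, dtype=float) + 1     # weights, first variant
w2 = w1.copy()                         # weights, second variant
norm_scale = 0.1

# first variant: three separate passes (the three run_regularly operations)
total_w = np.zeros(3)
for s in range(6):
    total_w[post[s]] += w1[s]          # sums over ALL incoming synapses
for s in range(6):
    w1[s] *= norm_scale * N_incoming[post[s]] / total_w[post[s]]

# second variant: a single pass (the single run_regularly operation)
total_w = np.zeros(3)
for s in range(6):
    total_w[post[s]] = 0.0
    total_w[post[s]] += w2[s]          # now only contains this synapse's weight
    w2[s] *= norm_scale * N_incoming[post[s]] / total_w[post[s]]
    # => w2[s] becomes norm_scale*N_incoming[post[s]], whatever its old value

print(w1)  # [0.1 0.08 0.12 0.08 0.1 0.12], relative weights preserved
print(w2)  # [0.1 0.2 0.2 0.3 0.3 0.3], all weights to a target identical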

A minor issue:
Instead of

S.run_regularly('total_weight = 0', dt = 10*ms, when='end', order=1)

it would be more efficient to use

G.run_regularly('total_weight = 0', dt = 10*ms, when='end', order=1)

Otherwise, you loop over all synapses instead of over all neurons, setting the same total_weight variable to 0 several times.

I see. Here is a follow-up question on this topic. According to the manual, variable names that do not refer to a synaptic variable are automatically understood as post-synaptic variables. If I understand correctly, this means the following two sets of commands should be identical.

commands#1

S.run_regularly('''
                total_weight = 0
                total_weight += w
                w *= 0.1*N_incoming/total_weight
                ''', dt = 10*ms, when='end')

commands#2

S.run_regularly('''
                total_weight = 0
                total_weight_post += w
                w *= 0.1*N_incoming/total_weight_post
                ''', dt = 10*ms, when='end')

Commands#1 somehow produces the expected results while commands#2 does not. Further input would be appreciated.

Yes, total_weight and total_weight_post should be interchangeable in your run_regularly operation. This is indeed the case, but there appears to be a bug in the C++/Cython code generation when you mix the two names (as in your “commands#2”), which leads to the total_weight = 0 line effectively being ignored. If you also use total_weight_post in the first line, the results will be identical to “commands#1”.
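That is, the consistent version should then behave like “commands#1”:

S.run_regularly('''
                total_weight_post = 0
                total_weight_post += w
                w *= 0.1*N_incoming/total_weight_post
                ''', dt = 10*ms, when='end')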
But again, with either name, this is not the correct way of doing the normalization. You are only getting the “expected results” here because your weights are all the same. Here’s an adapted version of your code with different weights, which simulates the two formulations side by side:

from brian2 import *
set_device('cpp_standalone', directory='output_post')
start_scope()

eqs = '''
total_weight: 1
'''
G1 = NeuronGroup(3, eqs)
S1 = Synapses(G1, G1, 'w : 1')
S1.connect(i=[0, 0, 0, 0, 0, 0], j=[0, 1, 1, 2, 2, 2])
S1.w = np.arange(6) + 1

G2 = NeuronGroup(3, eqs)
S2 = Synapses(G2, G2, 'w : 1')
S2.connect(i=[0, 0, 0, 0, 0, 0], j=[0, 1, 1, 2, 2, 2])
S2.w = np.arange(6) + 1

### correct results
G1.run_regularly('total_weight = 0', dt = 10*ms, when='end', order=1)
S1.run_regularly('total_weight += w', dt = 10*ms, when='end', order=2)
S1.run_regularly('w *= 0.1*N_incoming/total_weight', dt = 10*ms, when='end', order=3)

### incorrect
S2.run_regularly('''
                total_weight = 0
                total_weight += w
                w *= 0.1*N_incoming/total_weight
                ''', dt = 10*ms, when='end')

MS1 = StateMonitor(S1, 'w', record=np.arange(6))
MS2 = StateMonitor(S2, 'w', record=np.arange(6))

run(10.1*ms)
for j in range(3):
    print(f'→ neuron {j}: S1.w={S1.w[:, j]}; S2.w={S2.w[:, j]}')
    
fig, axs = plt.subplots(1, 2, sharey=True)
axs[0].plot(MS1.t/ms, MS1.w.T)
axs[0].set_xlabel('Time (ms)')
axs[0].set_ylabel('w')
axs[1].plot(MS2.t/ms, MS2.w.T)
axs[1].set_xlabel('Time (ms)')
axs[1].set_ylabel('w')
show()

This gives

[Figure: w over time for S1 (left) and S2 (right)]

and prints:

→ neuron 0: S1.w=[0.1]; S2.w=[0.1]
→ neuron 1: S1.w=[0.08 0.12]; S2.w=[0.2 0.2]
→ neuron 2: S1.w=[0.08 0.1  0.12]; S2.w=[0.3 0.3 0.3]

As you can see, the S1 weights, using the three run_regularly operations, implement the normalization: the average incoming weight for each post-synaptic cell is the same, while the relative weights stay intact. The S2 weights, following your “commands#1”, do something that would not be useful as a normalization (weights targeting a single cell all become the same, and grow bigger the more connections there are).
Hope that clears things up.


Thanks so much for the very helpful input in this thread. It greatly simplified my code and saved days of work :wink:
