# Access synaptic and neuron variables in network_operation function implemented in numpy/cython/standalone

I’m implementing the Wu et al. 2020 model in Brian. The model is well defined in the Methods section and can be coded up fairly easily, except for the heterosynaptic and metaplasticity rules. Both of these synaptic weight adjustments require an update every 1 s or 30 s of model time.

I have a few questions:

1. how to access synaptic and neuronal dynamic variables, like A^{-}, in a network_operation function?
2. how to implement network_operation function for 3 different running modes: numpy, cython, and standalone C-code?
3. may I have more than one network_operation function?
4. can I use network_operation function with OpenMP multithreading?

Hi @rth. I think there is a bit of confusion between user-defined functions and network_operations. The latter are Python functions that can only be used in runtime mode (i.e. with numpy or Cython); they are not compatible with C++ standalone mode (and therefore also not compatible with OpenMP).
You do not need to adapt them to Cython: since they run independently of the Cython code, they simply interact with the same data structures in memory. In such a function, you access all variables just as you would in general code before/after a run. You could even access the values stored in a StateMonitor so far, for example. You can have as many network_operations as you like.

User-defined functions, on the other hand, will end up in the generated code, and therefore need to be in the target language (e.g. C++ for standalone mode). You cannot easily access arbitrary variables in such functions; they are rather meant for functions that either pull in data from outside (like a TimedArray), or do a straightforward calculation from their provided arguments (like exprel, for example).

Hope that makes things clearer.

Not completely, sorry.

So what should I do to rescale synaptic weights every 1 s of model time? Something like a user-defined function that is called from run_regularly?

Could you please point me to, or give, an example of how to access variables in synapses and neurons from user-defined functions?

@mstimberg, I wonder if custom events can be used for these purposes?
Something like

G = NeuronGroup(N, """
.....
T_heterosynaptic : second  # clock to trigger custom event for heterosynaptic plasticity
T_metaplasticity : second  # clock to trigger custom event for metaplasticity
""",
events={
    'heterosyn': 't - T_heterosynaptic >= 1*second',
    'metaplast': 't - T_metaplasticity >= 60*second',
})
G.run_on_event('heterosyn', 'T_heterosynaptic = t')
G.run_on_event('metaplast', 'T_metaplasticity = t')


and then

eesyn = Synapses(
    .....
    on_event={
        'heterosyn': 'wee = wee - blah blah blah',
        'metaplast': 'Am  = Am  * blah blah blah',
    },
)


I indeed think that you can probably avoid having user-defined functions or network operations, which would of course be preferable for full compatibility with all code generation targets (including support for the upcoming Brian2CUDA package). Your above code would be equivalent to simply using

eesyn.run_regularly('wee = wee - ...', dt=1*second)


though.
The problem here is that you need a way to sum over all weights – in a network operation, this is as simple as np.sum(eesyn.wee), but summing “in Brian equations” needs synapses.

From a quick look at the equation:

I gather that the \beta\sum_j\dots term is a constant (initial weights) that can be calculated before the simulation starts – for runtime mode that would be trivial, but it complicates the code a bit for standalone mode. The text also notes that the rule should only apply if the first term is bigger than the second term, but we can achieve this by thresholding with clip(term, 0, inf). This leaves two values to calculate: the first is N_i^E, the number of non-zero weights a neuron receives – from a cursory look I wasn’t sure whether this is simply meant to count the number of incoming connections (which is already available as N_incoming), or whether the plasticity rule might make weights go to zero, in which case those should not be included. I’ll assume the former for simplicity for now. The second is \sum_j J_{ij}^{EE}, i.e. the sum of all weights onto a neuron. We should be able to do this with a (summed) variable connecting the synapses to the neurons.

Here’s a simplified example showing how this can work:

from brian2 import *
import matplotlib.pyplot as plt

neurons = NeuronGroup(10, '''total_weights : 1
                             initial_weights : 1 (constant)''')

# weights grow over time
growth = 0.1/second
beta = 1.08
synapses = Synapses(neurons, neurons, '''dw/dt = growth : 1 (clock-driven)
                                         total_weights_post = w : 1 (summed)''')
# Somewhat hacky, but prevents total_weights from being updated every time step
synapses.summed_updaters['total_weights_post']._clock = Clock(dt=1*second)

# Heterosynaptic plasticity
synapses.run_regularly('w -= clip(1/N_incoming*(total_weights_post - initial_weights_post), 0, inf)',
                       when='end', dt=1*second)

# Random connections with 0.2 probability
sources, targets = np.meshgrid(np.arange(len(neurons)), np.arange(len(neurons)))
sources, targets = sources.flatten(), targets.flatten()
pairs = np.random.rand(len(neurons)**2) < 0.2
synapses.connect(i=sources[pairs], j=targets[pairs])

# Random initial weights
weights = np.random.rand(sum(pairs))
synapses.w = weights

# Store initial total weight for each neuron
weights = [np.sum(weights[targets[pairs] == idx]) for idx in range(len(neurons))]
neurons.initial_weights = weights

mon = StateMonitor(synapses, 'w', record=np.arange(sum(pairs)))
run(1.5*second)

plt.plot(mon.t/ms, mon.w.T)
plt.show()


If you did not need standalone support, then this could be much simpler, e.g. connecting synapses could just use synapses.connect(p=0.2), setting random weights would use synapses.w = 'rand()', etc. For standalone support, all this randomness needs to be outside the generated code, since we need to access the initial weights before the actual simulation starts. I did not verify the results from this code thoroughly, but hopefully it gets you going! I think the metaplasticity thing should be straightforward with a run_regularly operation on the Synapses object (referring to a rate estimate in the neuron that gets updated for every spike as part of the reset) – but I might have missed something.

Thank you, so much, @mstimberg!
I completely forgot that run_regularly is available for synapses.

I had arrived at almost exactly your code for summing the current total synaptic weight via (summed) variables, except for the hacky clock code, which is a wonderful idea.

If I make it work, I’ll create a pull request with the new implementation from the paper.
