Outgoing and Incoming weight normalization

Hi, I searched for something related but without success. I hope you can help me.
Description of problem

When the weight dynamics are regulated by some function, it is important to ensure that the weights do not reach saturation, which would cause a degeneracy of actions since all synapses would be equally, maximally active. So I wanted to implement normalization in two ways: the first would be to normalize all the incoming synapses to a neuron, and the other would be to normalize the outgoing synapses from a neuron. (I have omitted the structure of the STDP below for simplicity, but the weights should change over time.)

Minimal code to reproduce problem

The normalization of incoming synapses can be achieved with a `(summed)` variable:

```
S = Synapses(G_pre, G_post,
             '''w : 1
                sumw_post = w : 1 (summed)''',
             on_pre='''v_post += w
                       w /= sumw''')
```

This works since it sums all weights arriving at each post-synaptic neuron.
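For reference, here is a complete minimal version of my setup (the group names, sizes, and parameters are just placeholders, and the STDP equations are still omitted; note that `G_post` has to define the `sumw` variable that the summed expression writes to):

```
from brian2 import *

G_pre = PoissonGroup(10, rates=50*Hz)               # placeholder input population
G_post = NeuronGroup(5, '''dv/dt = -v/(10*ms) : 1
                           sumw : 1''')             # filled by the summed variable
S = Synapses(G_pre, G_post,
             '''w : 1
                sumw_post = w : 1 (summed)''',
             on_pre='''v_post += w
                       w /= sumw''')
S.connect()
S.w = 'rand()'
run(100*ms)
```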

However, I didn’t manage to find an efficient way to sum all the outgoing weights from each neuron.

What you have already tried

I tried to modify the code from the incoming normalization:

```
S = Synapses(G_pre, G_post,
             '''w : 1
                sumw_pre = w : 1 (summed)''',
             on_pre='''v_post += w
                       w /= sumw''')
```

However, in this case `sumw` results in an array full of zeros.

Am I missing something, or is there a better way to achieve this? In principle, I could build the full (pre, post) weight matrix; the normalization constants would then be the sums over rows for outgoing normalization and the sums over columns for incoming normalization. However, implementing this in a `network_operation` would be pretty slow (see the sketch below).
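For concreteness, the `network_operation` I have in mind for the outgoing case would look roughly like this (an untested sketch, assuming `S` connects `G_pre` to `G_post` as above):

```
import numpy as np

@network_operation(when='end')
def normalize_outgoing():
    # Build the dense (pre, post) weight matrix from the sparse synapses
    W = np.zeros((len(G_pre), len(G_post)))
    W[S.i[:], S.j[:]] = S.w[:]
    row_sums = W.sum(axis=1, keepdims=True)  # outgoing sum for each pre-synaptic neuron
    row_sums[row_sums == 0] = 1              # avoid division by zero for unconnected neurons
    W /= row_sums
    S.w[:] = W[S.i[:], S.j[:]]
```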

Thank you in advance for your time!

Hi @KIllua07. Your approach is correct, actually, there is just a minor caveat: in a synaptic on_pre statement, variables that are not synaptic variables (sumw in your example) are considered post-synaptic by default. This is mostly for “historical” reasons, because many on_pre statements are something like v += w or g_e += w and naturally refer to the post-synaptic v or g_e.
Due to this default, your second example refers to sumw_post in the normalization, dividing by values in the post-synaptic population that have never been set. If you replace the line by w /= sumw_pre, it should work as expected!
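For completeness, the corrected outgoing normalization then reads (only the last line differs from your code; as in the incoming case, the target group of the summed variable, here G_pre, needs to define `sumw : 1`):

```
S = Synapses(G_pre, G_post,
             '''w : 1
                sumw_pre = w : 1 (summed)''',
             on_pre='''v_post += w
                       w /= sumw_pre''')  # explicitly refer to the pre-synaptic sum
```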

Please let us know if you run into other issues.

PS: For better readability (and copy&paste-ability), please include code as “pre-formatted text” (Ctrl+E), i.e. with triple backticks like this:

```
# A comment
print("Python code")
```

(I’ve edited your original post accordingly).

Thanks a lot for your fast reply!
It works perfectly, and I now have a better understanding of what I can do.
Since I am here, I hope you don’t mind a follow-up question: is there a way to use the maximum value of the incoming or outgoing synaptic weights?

Thanks a lot!

Hi, interesting question! It would be rather easy to have the “all-time maximum” for each neuron available: after setting an initial value, each on_pre could do something like max_w_post = max(max_w_post, w) to update the value to the new maximum. But this would never lower the value, so it is not applicable to the general situation where weights both increase and decrease.

Also, maybe surprisingly, Brian’s syntax does not support the max function! This is mostly because there are different variants of this function, but I think it would actually be a good thing to have. We do have a clip function, though, which we can use here instead: max(x, y) is equivalent to clip(x, y, inf).

So, instead of the approach described above, we can use run_regularly operations, which do a similar thing to what summed variables do under the hood. After defining a max_w variable in the post-synaptic target population, we can add:

```
# Reset max_w for each neuron to a value lower than the lowest weight
G_post.run_regularly("max_w = -1000", when="start")
# Set max_w from the current w values
S.run_regularly("max_w_post = clip(max_w_post, w, inf)", when="after_start")
```

This is not terribly efficient, since it recomputes max_w every time step even if none of the weights changed, but the same is true for the normalization based on summed variables.
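If you do not need the maximum to be exact at every step, you could also give both run_regularly operations a larger dt so that the update only runs occasionally (a variant sketch; the 10 ms interval is arbitrary):

```
# Only recompute the maximum every 10 ms instead of every time step
G_post.run_regularly("max_w = -1000", dt=10*ms, when="start")
S.run_regularly("max_w_post = clip(max_w_post, w, inf)", dt=10*ms, when="after_start")
```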

Let us know whether this works for you!