Taking Synaptic Weight From Other Synapses

Description of problem

I am a new Brian user, and I also have little experience with Python and programming in general.

I am currently trying to re-simulate figures from an article titled “Modeling fast stimulus–response association learning along the occipito-parieto-frontal pathway following rule instructions” by Guido Bugmann. The article contains one formula, given below:

According to that formula, when a spike arrives at a neuron via synapse i, every other synapse whose weight is bigger than the weight of synapse i gives a portion of its weight to synapse i.
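In symbols (reconstructed from the code used later in this thread, so a hedged reading rather than a verbatim copy of the article's equation), when a spike arrives at synapse i, every other synapse a with w_a > w_i transfers

```math
\Delta w_a = \frac{w_a\,(w_a - w_i)}{W_0 + w_a}, \qquad
w_a \leftarrow w_a - \Delta w_a, \qquad
w_i \leftarrow w_i + \Delta w_a
```

where W_0 is a constant (the value 3 is used in the code in this thread).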

I am wondering whether there is a way for me to implement this equation using Brian’s built-in capabilities.

Minimal code to reproduce problem

from brian2 import *

P = PoissonGroup(3, 100 * Hz)

eqs = '''
dv/dt = -v/(50*ms) : 1
w_pool : 1
'''

syn_eq = '''
w : 1
'''

G = NeuronGroup(1, model=eqs, threshold='v>15/1000', method='exact', reset='v=0.5*(15/1000)')

S = Synapses(P, G, model=syn_eq, on_pre='''
w += 0.8*w_pool      # Take a certain amount of weight from the weight pool.
w_pool = 0.2*w_pool  # The pool capacity is reduced by the same amount.
''')
S.connect()

S.w = 0
G.w_pool = 3

run(100 * ms)


What you have already tried

I have tried defining a function using @network_operation. But since I was not able to get the pre- and post-synaptic indexes of the synapse that received the spike, it did not work correctly. Also, a network operation runs at every time step of the simulation, but I want to run that function only when a spike arrives, so I also needed to define a condition to check whether a spike has been received or not.

Expected Output

If I manage to implement the rule, at the end of the simulation all weights should be equal to 1, according to the minimal code I added.
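This expectation can be checked outside Brian with a plain-Python sketch that combines the 0.8/0.2 pool draw from the snippet above with the redistribution rule from the article (a sketch, not Brian code: the Poisson input is replaced by uniformly random spiking, and W_0 = 3 is assumed):

```python
import random

random.seed(1)
W_0 = 3.0
w = [0.0, 0.0, 0.0]   # three synapses, as in the minimal example
w_pool = 3.0

for _ in range(2000):                      # many random input spikes
    i = random.randrange(3)                # synapse receiving the spike
    w_i = w[i]                             # snapshot before redistribution
    for a in range(3):                     # stronger synapses give weight to i
        if a != i and w[a] > w_i:
            delta = w[a] * (w[a] - w_i) / (W_0 + w[a])
            w[a] -= delta
            w[i] += delta
    w[i] += 0.8 * w_pool                   # draw from the shared pool
    w_pool *= 0.2

print([round(x, 3) for x in w])            # all three weights end up close to 1
```

The pool empties geometrically, so the total weight approaches 3, and the redistribution rule equalizes it across the three synapses.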

Hi @enyaliosss, a very interesting problem! Unfortunately, this kind of hetero-synaptic plasticity (plasticity where a spike arriving at one synapse depends on and/or affects other synapses) is not very straightforward to implement in Brian. It should be possible using tools like network_operation or with user-defined functions, but I’ll have to think a bit about it. Just to make sure that I am focusing on the right thing: are you ultimately interested in the rule from the equations (in that case, could you please give a reference for the paper?), or would a simpler version with a global weight pool as in your example be sufficient?

One more quick remark on why Brian does not allow this with its usual equation-based approach: in general, Brian tries to use descriptions that are “mathematical”, in the sense that they do not depend on implementation details, e.g. the order in which synapses are updated. In this case, this is not possible: if several synapses receive spikes in the same time step, it matters in which order they are updated.
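To make the order-dependence concrete, here is a plain-Python sketch (not Brian code; the redistribution rule is the one from the question, with W_0 = 3 assumed): two synapses spiking in the same time step end up with different final weights depending on which one is processed first.

```python
def apply_rule(weights, i, W_0=3.0):
    """Synapse i received a spike: every stronger synapse a gives
    w_a * (w_a - w_i) / (W_0 + w_a) to synapse i."""
    w = list(weights)
    w_i = w[i]                      # snapshot of the spiking synapse's weight
    for a in range(len(w)):
        if a != i and w[a] > w_i:
            delta = w[a] * (w[a] - w_i) / (W_0 + w[a])
            w[a] -= delta
            w[i] += delta
    return w

start = [1.0, 2.0, 0.5]
# Synapses 0 and 2 spike in the same time step; process them in both orders:
first_0_then_2 = apply_rule(apply_rule(start, 0), 2)
first_2_then_0 = apply_rule(apply_rule(start, 2), 0)
print(first_0_then_2)
print(first_2_then_0)  # differs from the other order, though both conserve the sum
```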

Hi @mstimberg, thanks for your reply. I want to implement the equations exactly as in the article. By the way, I added the name of the article in the “Description of problem” part.
As I stated in the “What you have already tried” part, I tried using network_operation. The network_operation part of the code I have written is below (it may be a little messy, sorry about that):

@network_operation
def weight_updater():
    if S.x != S.x_prev:  # I tried adding a condition like this, otherwise weight_updater runs every time step. But it is not working, since x and x_prev are not scalar values.
        S.x_prev = S.x
        i_pre = MP.i[-1]  # Assign the index of the pre-synaptic neuron to i_pre by using a SpikeMonitor.
        i_post = 0  # Assign the index of the post-synaptic neuron to i_post. (IF I HAVE MORE THAN ONE NEURON IN MY NEURON GROUP, I DO NOT KNOW HOW TO GET THE POSTSYNAPTIC INDEX.)
        updated_w = S.w[i_pre, i_post]  # Assign the w value of the synapse that received the spike to a dummy variable.
        synapse_count = sum(S.j == i_post)  # synapse_count shows how many synapses are connected to the neuron.
        # In this for loop rule (10) is executed: for each synapse that connects to the neuron,
        # we check whether its weight is bigger than that of the synapse which received the spike.
        # If another synapse has a bigger weight than the synapse that received the spike, rule (10) is applied accordingly.
        for a in range(synapse_count):
            if S.w[a, i_post] > updated_w and a != i_pre:
                updater_w = S.w[a, i_post]  # Assign to a dummy variable.
                S.w[a, i_post] -= (updater_w * (updater_w - updated_w)) / (3 + updater_w)
                S.w[i_pre, i_post] += (updater_w * (updater_w - updated_w)) / (3 + updater_w)
        S.w[i_pre, i_post] += 0.8 * G.w_pool[i_post]  # Add the value taken from the pool to w.
        G.w_pool[i_post] = 0.2 * G.w_pool[i_post]  # Update the pool reserve.

Also I am adding full code:

code.txt (3.3 KB)

Hi again, thanks for the additional details – I am about to leave for the weekend, so I’ll have a closer look at this next week.

PS: Note that you can display code more nicely here if you wrap it in triple backticks like this:

# Your Python code
print('looks nice')

I was trying to figure out how to do that ^^. Thanks.

Have a nice weekend :slight_smile:

@enyaliosss it seems there is a way to do what you need by holding synaptic weights in an external array, and by implementing a C/Cython/numpy function to retrieve the current synaptic weight and rescale all weights in the array with each presynaptic spike. It would look similar to my module for delayed continuous variables, dcv4brian. I don’t have time to write the code, but if you decide to go this route, I will be happy to help with more details.


Hi @rth, thanks for your reply. I checked the code you wrote, but it seemed a little too complex for me, at first look at least :sweat_smile: . Still, I will try to understand it and see if I can do something similar.

Here’s the main idea in a nutshell.
Say we have N neurons, and they receive synapses from a population of size M.
So we will create an array of size M*N, and when the j^{th} presynaptic spike arrives at the i^{th} postsynaptic neuron, on_pre will look like this:

on_pre='w += getw(i,j) '

I don’t remember how to get the indices of the pre- and postsynaptic neurons in synaptic equations (I hope @mstimberg can help here), but say we have i and j somehow.

So now we need to implement the getw function (similar to dcpsetget in my module). It should be implemented for 3 targets: C++, Cython, and numpy. For C++ and Cython, we will create C code which will be called from inside the C++/Cython implementation. When we create this C code, we will also set up a static array for the synaptic weights. Finally, we will have an init function (dcvinit in my code) which either creates an array in Python for numpy, if we don’t use C code, or generates the C code with the static array, constant int(s) for the sizes N and M, and a prototype function to call from C++/Cython.

Finally, what should our getw do? Because it knows i and j, it reads w from the array, then goes in a for loop over all weights of the postsynaptic neuron and updates all of them; after that, it updates w in the array and returns it.
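A sketch of the numpy target for this idea (the names getw and weights, the array layout, and W_0 = 3 are my assumptions for illustration; the wiring that exposes getw to Brian’s generated code is omitted here):

```python
import numpy as np

N, M = 2, 5                       # postsynaptic (N) and presynaptic (M) sizes
W_0 = 3.0
weights = np.zeros((N, M))        # external weight array, one row per target

def getw(i, j):
    """Synapse (pre j -> post i) received a spike: every stronger synapse
    onto neuron i gives w_a*(w_a - w_ij)/(W_0 + w_a) to it, then the
    updated weight is returned."""
    row = weights[i]              # view into the external array
    w_ij = row[j]
    stronger = row > w_ij
    stronger[j] = False           # the spiking synapse does not give to itself
    delta = np.where(stronger, row * (row - w_ij) / (W_0 + row), 0.0)
    row -= delta                  # updates `weights` in place (row is a view)
    row[j] += delta.sum()
    return row[j]

weights[0] = [0.1, 0.2, 0.3, 0.4, 0.5]
new_w = getw(0, 0)                # synapse 0 pulls weight from synapses 1-4
print(new_w, weights[0].sum())    # total weight onto neuron 0 is conserved
```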


Apologies, I did not yet find time to look into this further. Just to quickly answer the question:

This is exactly the way to do it: the pre- and post-synaptic neuron indices are available as i and j.


Hi @mstimberg, is there a way for me to change scalar values inside an on_pre statement?

If I could define scalar variables inside syn_eq, or somewhere else in my code, to hold the pre- and post-synaptic indices of the last spike that arrived, and be able to change them inside on_pre, it would be perfect for me. Or do I need to do it in a way like @rth suggested?

Hi again, one of the advisors on my project suggested using the custom event feature of Brian to do this job. I wrote some code according to his advice, with the help of this link: Custom events — Brian 2 2.5.1 documentation.

It seems to be working. I am adding the code here.
EDIT: I previously had some issues with the code, so I deleted the post, but I have solved them now. It seems to be working correctly.

code_2.txt (2.0 KB)

An example result:

I will also try to explain the code a little:

1- When a spike arrives, I first assign the w value to a variable called w_to_update to hold its value.
2- Then I set the boolean is_weight_release to true, which triggers the custom event defined as weight_release inside the NeuronGroup.
3- The weight_release event triggers weight_release_pathway, defined inside on_post of the Synapses object. Here, if the weight of a synapse is bigger than the w_to_update value, its value is reduced accordingly, and the reduced amount is added to a variable called delta_w.
4- After the weight_release event is triggered, run_on_event is triggered as well and sets is_weight_release back to false. Inside run_on_event I also trigger another event (also defined inside the NeuronGroup) named add_weight, by setting is_add_weight to true.
5- When the add_weight event becomes active, it triggers add_weight_pathway, which is defined inside on_post of the Synapses object. Here, when the iterator (?) comes to the synapse that received the spike, the hold_i == i condition becomes true, so the weights taken from other synapses (stored inside delta_w) are added to the corresponding w value, and a certain amount of weight is also taken from w_pool.

I hope my code and explanation are clear enough (since my English and Brian knowledge are far from perfect ^^).

Any feedback about my code would be very helpful (maybe some improvements, or maybe there is a mistake that could cause problems, etc.).

Hi @enyaliosss. Unfortunately, I did not yet have time to look into this in more detail, but I will tomorrow. From a cursory look at your code, I think an approach like this could indeed work. One minor thing that could be important: try using print(scheduling_summary()) after your simulation to see the order of operations. For two pathways, or two custom operations, the order is somewhat ill-defined, since it is based on the alphabetical ordering of their names… A better approach is to explicitly set their order argument, see Running a simulation — Brian 2 2.5.1 documentation
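The alphabetical tie-breaking can be seen with plain Python (using the generated pathway names from the custom-event approach as an example):

```python
# With equal `when` and `order` values, Brian falls back to alphabetical
# ordering of object names, so the "add" pathway would run before the
# "release" pathway -- which is probably not what was intended.
pathways = ['synapses_weight_release_pathway', 'synapses_add_weight_pathway']
print(sorted(pathways)[0])  # 'synapses_add_weight_pathway' sorts first
```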


Hi @mstimberg, thanks for your reply. I checked my scheduling summary; I am adding a screenshot of it below:

In the synapses, synapses_add_weight_pathway seems to run before synapses_weight_release_pathway because of the alphabetical order you mentioned, but I was not able to find out how to change the order of the pathways.

I also noticed another problem with my code during my test runs: if two spikes arrive within ~0.2 ms of each other (for example, one at 2.6 ms and the other at 2.8 ms), the previous spike is omitted. I am adding a screenshot of the problem:

If you look at spikemonitor.t, there are two spikes, at 0.6 ms and 0.7 ms, coming from different synapses. But if you look at the synapses.w array, only the weight belonging to the spike at 0.7 ms was updated. What could cause this problem?

Thanks in advance.

Hi @enyaliosss. I finally managed to look into this more closely. Regarding the question of how to change the order of a pathway: you can write something like S.weight_release_pathway.order = 2 to change the order of the weight_release_pathway from S. Your code looks pretty good, but I think there is one major flaw that you cannot avoid if you do it this way: since the information about the synapse that should be updated is stored in a single variable on the post-synaptic side, you cannot deal with two spikes for the same target neuron that arrive in the same time step. If you want to handle this situation, you have to deal with all synapses independently; only truly shared variables like w_pool can be stored in the post-synaptic cell.

This complicates things quite a bit, so if you prefer writing source code as suggested by @rth, this is a completely valid option :blush: It is possible with built-in Brian mechanisms, though, by using a group of “relay neurons”. The idea is the following:
Your usual model would look like this:

P = NeuronGroup(...)   # Or PoissonGroup, etc.
G = NeuronGroup(...)
S = Synapses(P, G, 'w : 1', ...)

With “relay neurons”, it would rather look like this:

I.e. there is one “neuron” representing each of the synapses:

# P and G as before
weight_group = NeuronGroup(len(P) * len(G), 
                           '''pre_idx : integer (constant)
                              post_idx : integer (constant)
                              w : 1
                              is_spiking : boolean''',
                           threshold='is_spiking', reset='is_spiking = False')
weight_group.pre_idx =  [0, 0, 1, 1, 2, 2]  # could use np.meshgrid
weight_group.post_idx =  [0, 1, 0, 1, 0, 1]  # could use np.meshgrid
step_1 = Synapses(P, weight_group, on_pre='is_spiking = True')
step_1.connect('i == pre_idx_post')
step_2 = Synapses(weight_group, G, on_pre='v_post += w')
step_2.connect('j == post_idx_post')

Instead of directly transmitting a spike, this transmits the spike to the “synapse-representing neuron”, which will spike and transmit the spike to the target group. This needs a bit of tweaking with the when and order parameters to do all this within a single time step, but it is rather straightforward.
What do you gain from this approach? Now you can add a new group of synapses that connect the “synapse-representing neurons” to each other, in case they target the same neuron:

update_synapses = Synapses(weight_group, weight_group, ...)
# Connect if `post_idx` is the same, but don't connect neurons to themselves
update_synapses.connect('post_idx_pre == post_idx_post and i != j')
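To see concretely which relay neurons this condition connects, here is the same logic in plain Python for the six relay neurons above (pre_idx = [0, 0, 1, 1, 2, 2], post_idx = [0, 1, 0, 1, 0, 1]):

```python
post_idx = [0, 1, 0, 1, 0, 1]    # one relay neuron per (pre, post) pair
# Same selection as 'post_idx_pre == post_idx_post and i != j':
pairs = [(i, j) for i in range(6) for j in range(6)
         if post_idx[i] == post_idx[j] and i != j]
print(len(pairs))  # 12: relay neurons 0/2/4 interconnect, and so do 1/3/5
```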

I am aware that things like post_idx_pre are not very easy to read (feel free to rename things for more clarity), but hopefully the general idea is clear. The nice thing is that the two weight updates are now straightforward to write. See the full code for details:

Full code
from brian2 import *

v_th = 15 / 1000
tau = 50 * ms
P = SpikeGeneratorGroup(5, [0, 1, 2, 3, 4, 1], [10*ms, 30*ms, 30*ms, 50*ms, 50*ms, 50*ms],
                        when='before_thresholds', order=0, name='P')
W_0 = 5

G = NeuronGroup(1, '''dv/dt = -v / tau : 1
                      w_pool : 1''', method='exact', name='G')
G.w_pool = W_0
MP = SpikeMonitor(P, record=True)

weight_group = NeuronGroup(len(P)*len(G),
                           '''w : 1
                              w_before : 1
                              pre_idx : integer (constant)
                              post_idx : integer (constant)
                              received_spike : boolean''',
                           threshold='received_spike', reset='received_spike = False',
                           name='weight_group')
# All to all connectivity
pre, post = np.meshgrid(np.arange(len(P)), np.arange(len(G)))
weight_group.pre_idx = pre.flatten()
weight_group.post_idx = post.flatten()

# Forward spike to "synapses" and store previous weight
forward = Synapses(P, weight_group, on_pre='w_before = w; received_spike_post = True', name='forward_synapses')
forward.pre.when = 'before_thresholds'
forward.pre.order = 1
forward.connect('i == pre_idx_post')

# Take weight from spike_pool
eta = 0.8
connect_to_neuron = Synapses(weight_group, G,
                             on_pre={'update_from_pool': '''
                                      w_update = eta*w_pool_post
                                      w_pre += w_update
                                      w_pool_post -= w_update''',
                                     'forward_spike': 'v_post += w_pre'},
                             name='connect_to_neuron')
connect_to_neuron.update_from_pool.order = 0
connect_to_neuron.forward_spike.order = 2
connect_to_neuron.connect('j == post_idx_pre')

# Take weight from stronger synapses connecting to same post-synaptic target
update_from_synapses = Synapses(weight_group, weight_group,
                                on_pre='''delta_w = int(w_post > w_before_pre) * w_post*(w_post - w_before_pre)/(W_0 + w_post)
                                          w_pre += delta_w
                                          w_post -= delta_w''',
                                name='update_from_synapses')
update_from_synapses.pre.order = 1
# Connect all synapses that target the same neuron, but don't connect neurons to themselves
update_from_synapses.connect('post_idx_pre == post_idx_post and i != j')

MW = StateMonitor(weight_group, 'w', record=True)  # StateMonitor for monitoring weights.
MW_POOL = StateMonitor(G, 'w_pool', record=True)  # StateMonitor for monitoring w_pool.

G.w_pool = 5

run(100*ms)  # run duration assumed here; the last input spike arrives at 50 ms


import matplotlib.pyplot as plt

# Drawing Figure 2.
fig, axs = plt.subplots(7, 1, sharex=True)
colors_styles = [('g', 'solid'), ('r', 'solid'), ('b', 'solid'), ('c', 'dashed'), ('m', 'dashed')]

axs[1].plot(MW_POOL.t / ms, MW_POOL.w_pool[0], color='k', label='POOL')
for idx, (c, style) in enumerate(colors_styles):
    # Input spikes
    axs[0].plot(MP.t[MP.i == idx]/ms, len(P)-MP.i[MP.i == idx], '^', color=c)
    # Weights
    axs[2 + idx].plot(MW.t / ms, MW.w[idx], color=c, linestyle=style)
axs[6].set_xlabel('Time (ms)')
plt.show()


I tested it with multiple spikes arriving in the same time step here, and the results look reasonable, I’d say:


It is a very interesting approach and indeed helpful.

@mstimberg, @rth, thank you both for your help.

I was able to write another version using @network_operation. I wanted to share it here as another approach.

I tried to explain, using comments inside the code, how I get the pre- and post-synaptic indices.

It seems to be working without any problem, even when spikes arrive within 1 ms of each other.

Any feedback about the code is appreciated.

Thanks in advance ! :slight_smile:

Full code:
Figure2_network_operation_commented.txt (3.8 KB)

Hi @enyaliosss . Thanks for sharing the code. From a cursory look I don’t see anything wrong with it. If it works for you and is fast enough, then this can certainly be an alternative. If you find it too slow, there are a number of places to improve the performance a bit – but sometimes this might come at the cost of readability, so I won’t go into details if that is not important for you. There is one straightforward change you could do, though:
You can replace

synapse_count = sum(synapse.j == ind_post)

by this (Brian already provides the number of synapses per target neuron):

synapse_count = synapse.N_incoming_post[ind_post]
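For intuition, both expressions count the synapses targeting ind_post; the difference is that N_incoming is precomputed, while the sum scans the index array each time. In plain numpy (with a hypothetical index array):

```python
import numpy as np

j = np.array([0, 0, 1, 0, 2])                  # hypothetical postsynaptic indices
ind_post = 0
count_by_sum = np.sum(j == ind_post)           # the original approach: scan every time
count_by_bincount = np.bincount(j)[ind_post]   # precomputed counts, like N_incoming
print(count_by_sum, count_by_bincount)         # both count the same synapses
```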

Hi @mstimberg, thanks for your reply. I will probably use that code in a bigger network to reproduce the simulations in the article, so any performance improvement will be helpful for me.