Different ways of creating multiple synapses

Description of problem

Hello Brian2 Team,

I have two different ways of creating multiple synapses between groups, but am wondering why they are not equivalent.

One way (method 1 in code below) creates multiple synapse objects for connections between each subgroup. The other way (method 2 in code below) creates a single synapse object but (I think) creates the same connections as the first method. However, the resulting output behaviour is different and I have no idea why! So I’m just hoping you might be able to shed light on this.

Thank you!

Minimal code to reproduce problem

import brian2 as b2
import numpy as np

b2.seed(100)

### Parameters
l_name = ['2/3E', '2/3I', '4E', '4I'] #neuron subgroup names
n_layer = [10, 10, 10, 10] #number of neurons in each group

N = sum(n_layer) #Total number of neurons

src = ['2/3E', '2/3E', '4E', '4E'] #source groups for connections
tgt = ['2/3E', '2/3I', '4E', '4I'] #target groups for connections

nsyn_list = [3, 5, 7, 9] #number of synapses for each connection

nn_cum = [0]
nn_cum.extend(np.cumsum(n_layer)) #cumulative numbers for subgroup indexes

#Neuron Model
eqs = '''
dv/dt = (I-v)/tau : 1
I : 1
tau : second
'''

#Create & initialise Neurons
neurons = b2.NeuronGroup(N, eqs, threshold='v>1', reset='v = 0', method='exact')
neurons.I = 2
neurons.tau = 10*b2.ms

n_groups = [] # Stores NeuronGroups, one for each population

#Create subgroups and a dictionary of group sizes
for r in range(0, len(n_layer)):
    n_groups.append(neurons[nn_cum[r]:nn_cum[r+1]])
n_dict = dict(zip(l_name, n_layer))

syn_group = [] # Stores connections

##### Synapse Creation method 1
# for i in range(len(src)):
#     syn_group.append(b2.Synapses(n_groups[list(n_dict.keys()).index(src[i])], n_groups[list(n_dict.keys()).index(tgt[i])],  on_pre='v_post += 0.2'))
        
#     nsyn = nsyn_list[i]
    
#     pre_index = np.random.randint(n_dict[src[i]], size=nsyn)
#     post_index = np.random.randint(n_dict[tgt[i]], size=nsyn)
#     syn_group[-1].connect(i = pre_index, j = post_index)
#     syn_group[-1].delay = 1.4*b2.ms

##### Synapse Creation method 2
syn = b2.Synapses(neurons, neurons, on_pre='v_post += 0.2')

for i in range(len(src)):
    nsyn = nsyn_list[i]

    cum_names = list(n_dict.keys())
    cum_vals = [0]
    cum_vals.extend(np.cumsum(list(n_dict.values())))
    if cum_names.index(src[i]) > 0: 
        idx1_src = cum_names.index(src[i]) 
        idx2_src = cum_names.index(src[i]) + 1
    else: 
        idx1_src = 0
        idx2_src = cum_names.index(src[i]) + 1
    if cum_names.index(tgt[i]) > 0: 
        idx1_tgt = cum_names.index(tgt[i])
        idx2_tgt = cum_names.index(tgt[i]) + 1
    else: 
        idx1_tgt = 0
        idx2_tgt = cum_names.index(tgt[i]) + 1
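    # draw nsyn random pre/post indices within the source/target subgroup ranges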
    pre_index = np.random.randint(min(cum_vals[idx1_src], cum_vals[idx2_src]), max(cum_vals[idx1_src], cum_vals[idx2_src]), size=nsyn)
    post_index = np.random.randint(min(cum_vals[idx1_tgt], cum_vals[idx2_tgt]), max(cum_vals[idx1_tgt], cum_vals[idx2_tgt]), size=nsyn)

    syn.connect(i = pre_index, j = post_index)
    syn.delay = 1.4*b2.ms
syn_group.append(syn)

#Monitor
M = b2.StateMonitor(neurons, 'v', record=True)

#Run
b2.run(100*b2.ms)

#Plotting neuron activity
b2.plot(M.t/b2.ms, M.v[0], label='Neuron 0')
b2.plot(M.t/b2.ms, M.v[1], label='Neuron 1')
b2.xlabel('Time (ms)')
b2.ylabel('v')
b2.legend()

#Plotting connectivity
def visualise_connectivity(S):
    for n in range(len(S)):
        Ns = len(S[n].source)
        Nt = len(S[n].target)
        b2.figure(figsize=(10, 4))
        b2.subplot(121)
        b2.plot(np.zeros(Ns), np.arange(Ns), 'ok', ms=10)
        b2.plot(np.ones(Nt), np.arange(Nt), 'ok', ms=10)
        for i, j in zip(S[n].i, S[n].j):
            b2.plot([0, 1], [i, j], '-k')
        b2.xticks([0, 1], ['Source', 'Target'])
        b2.ylabel('Neuron index')
        b2.xlim(-0.1, 1.1)
        b2.ylim(-1, max(Ns, Nt))
        b2.subplot(122)
        b2.plot(S[n].i, S[n].j, 'ok')
        b2.xlim(-1, Ns)
        b2.ylim(-1, Nt)
        b2.xlabel('Source neuron index')
        b2.ylabel('Target neuron index')

visualise_connectivity(syn_group)

Expected Output

I would expect the two synapse creation methods to give the same output. The synaptic connections look equivalent in the visualise_connectivity plots, but the actual behaviour is different.

Actual output (if relevant)

Connectivity looks the same in both methods but the neuron behaviour does not.


Method 1

[Screenshot: Screen Shot 2022-03-15 at 9.35.21 PM]

Method 2

[Screenshot: Screen Shot 2022-03-15 at 9.37.33 PM]

Hi @lmun373. I have to say I was quite confused by this since, as you say, the connections are exactly the same. It turns out that the issue is something more general: when you store the synapses only in the syn_group list, Brian’s run() call cannot “see” them, and they are simply ignored during the simulation: Running a simulation — Brian 2 2.5.0.3 documentation
If you have objects such as NeuronGroup, Synapses, etc. that are only stored in containers like dicts or lists, and not directly visible as variables (like your syn variable), then you have to manually create a Network object and add the objects to it. Note that you do not need to add the subgroups of the main NeuronGroup, since they are only for convenience and do not do anything during a simulation.
In your example, you’d replace b2.run(100*b2.ms) by

net = b2.Network(b2.collect())
net.add(syn_group)
net.run(100*b2.ms)

You should then get the same behavior for both synapse creation methods.
Two general remarks:

  • Your second variant of creating synapses, i.e. with a single Synapses object, is more efficient, since it will only generate/compile code for the synapses once, instead of once for each subgroup-to-subgroup connection.
  • You can simplify your handling of the indices in the second method quite a bit by accessing the start and stop attributes of the subgroups (which give you the start and end indices of the subgroup within the original NeuronGroup); see the sketch below.
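
For example, the index handling in method 2 could shrink to something like this (just a sketch reusing the variables defined in the code above):

for i in range(len(src)):
    source_group = n_groups[l_name.index(src[i])]
    target_group = n_groups[l_name.index(tgt[i])]
    # start/stop are the subgroup's index range within the full NeuronGroup
    pre_index = np.random.randint(source_group.start, source_group.stop, size=nsyn_list[i])
    post_index = np.random.randint(target_group.start, target_group.stop, size=nsyn_list[i])
    syn.connect(i=pre_index, j=post_index)
syn.delay = 1.4*b2.ms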

Thanks @mstimberg! That worked on that example. However, I tried to implement it in my larger-scale model and I still get different activity. Would the size of the network be significant in this? I am trying to model around 78,000 neurons and 300 million synapses.

Here is the code (which reads in a table of connections, accessible here):

import brian2 as b2
from brian2 import ms, mV, pF, pA
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

b2.seed(100)

### Parameters
#Table of connections
con_tab = pd.read_csv('Connection_Tables/shimoura1.csv', delimiter=' ', index_col=False)

#Cortex Layer names
l_name = ['2/3E', '2/3I', '4E', '4I', '5E', '5I', '6E', '6I']
n_layer = [20683, 5834, 21915, 5479, 4850, 1065, 14395, 2948]
bg_layer = [2000, 1850, 2000, 1850, 2000, 1850, 2000, 1850]

simulation_time = 250*b2.ms

d_ex = 1.5*ms      	# Excitatory delay
std_d_ex = 0.75*ms 	# Std. Excitatory delay
d_in = 0.80*ms      # Inhibitory delay
std_d_in = 0.4*ms  	# Std. Inhibitory delay
tau_syn = 0.5*ms    # Post-synaptic current time constant
tau_m   = 10.0*ms		# membrane time constant
tau_ref = 2.0*ms		# absolute refractory period
Cm      = 250.0*pF		# membrane capacity
v_r     = -65.0*mV		# reset potential
theta    = -50.0*mV		# fixed firing threshold
w_ex = 87.8*pA		   	# excitatory synaptic weight
std_w_ex = 0.1*w_ex     # standard deviation of the weight
g = 4
bg_freq = 8
b2.defaultclock.dt = 0.1*ms    # timestep of numerical integration method

# Leaky integrate-and-fire model equations
eqs = '''
	dv/dt = (-v + v_r)/tau_m + (I+Iext)/Cm : volt (unless refractory)
	dI/dt = -I/tau_syn : amp
	Iext : amp
	'''
# Reset condition
eqs_reset = 'v = v_r'
    
eqs_syn = 'w:amp'

# equations executed only when presynaptic spike occurs:
# for excitatory connections
eqs_pre = 'I_post += w'

#Function to find number of synapses
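# (inverts p_conn = 1 - (1 - 1/(n_pre*n_post))**nsyn, i.e. the probability
# that a given pair is connected at least once when drawing nsyn random
# pre/post pairs with replacement)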
def find_nsyn(p_conn, n_pre, n_post):
    return int(np.log(1.0-p_conn)/np.log(1.0 - (1.0/float(n_pre*n_post))))

N = sum(n_layer) #Total number of neurons

nn_cum = [0]
nn_cum.extend(np.cumsum(n_layer)) #cumulative numbers for subgroup indexes

#Create & initialise Neurons
neurons = b2.NeuronGroup(N, eqs, threshold='v>theta', reset=eqs_reset,
                        method='exact', refractory=tau_ref)
        
neurons.v = '-58.0*mV + 10.0*mV*randn()'
neurons.I = 0.0*b2.pA      # initial value for synaptic currents
neurons.Iext = 0.0*b2.pA   # constant external current

n_groups = [] # Stores NeuronGroups, one for each population

#Create subgroups and a dictionary of group sizes
for r in range(0, len(n_layer)):
    n_groups.append(neurons[nn_cum[r]:nn_cum[r+1]])
n_dict = dict(zip(l_name, n_layer))

#Input
syn_in  = []
for r in range(0, len(n_groups)):
    syn_in.append(b2.PoissonInput(n_groups[r], 'I', bg_layer[r], bg_freq*b2.Hz, weight=w_ex))

syn_group = [] # Stores connections

##### Synapse Creation method 1
for i, r in con_tab.iterrows():
    src = r.loc['Source'] + r.loc['SourceType']
    tgt = r.loc['Target'] + r.loc['TargetType']
    nsyn = find_nsyn(r.loc['Pmax'], n_dict[src], n_dict[tgt])
    
    syn_group.append(b2.Synapses(n_groups[list(n_dict.keys()).index(src)], n_groups[list(n_dict.keys()).index(tgt)], eqs_syn, on_pre='I_post += w'))
    
    pre_index = np.random.randint(n_dict[src], size=nsyn)
    post_index = np.random.randint(n_dict[tgt], size=nsyn)
    syn_group[-1].connect(i = pre_index, j = post_index)
    syn_group[-1].delay = 'clip({}*ms + {}*randn()*ms, 0.1*ms, inf*ms)'.format(r.loc['Delay'], r.loc['Dstd'])
    syn_group[-1].w = '(({} + {}*randn()))'.format(r.loc['Weight'], r.loc['Wstd'])

#### Synapse Creation method 2
# syn = b2.Synapses(neurons, neurons, eqs_syn, on_pre='I_post += w')

# for i, r in con_tab.iterrows():
#     src = r.loc['Source'] + r.loc['SourceType']
#     tgt = r.loc['Target'] + r.loc['TargetType']
#     nsyn = find_nsyn(r.loc['Pmax'], n_dict[src], n_dict[tgt])
    
#     pre_index = np.random.randint(n_groups[l_name.index(src)].start, n_groups[l_name.index(src)].stop, size=nsyn)
#     post_index = np.random.randint(n_groups[l_name.index(tgt)].start, n_groups[l_name.index(tgt)].stop, size=nsyn)

#     syn.connect(i = pre_index, j = post_index)
#     syn.delay = 'clip({}*ms + {}*randn()*ms, 0.1*ms, inf*ms)'.format(r.loc['Delay'], r.loc['Dstd'])
#     syn.w = '(({} + {}*randn()))'.format(r.loc['Weight'], r.loc['Wstd'])
# syn_group.append(syn)

#Monitor
SM = b2.SpikeMonitor(neurons)

#Run
net = b2.Network(b2.collect())
net.add(syn_in, syn_group)
net.run(simulation_time)

#Plot spiking activity
plt.plot(SM.t/ms, SM.i, 'k.')

Output (raster plot) from method 1:
[Screenshot: Screen Shot 2022-03-16 at 1.10.39 PM]

Output (raster plot) from method 2:
[Screenshot: Screen Shot 2022-03-16 at 1.11.22 PM]

Thank you for your advice also; I made the second method shorter using the start/stop indices for each neuron group.

This is based on a previous model by Shimoura et al. (2018), and the activity matches the output from the first method. But I did want to make it more efficient, with fewer Synapses objects.

Hmm, no, I don’t see how the size should play a role here. Things are different when you go over the limit of 32-bit integers (i.e. > ~2·10⁹), since a few things like indices hard-code this data type, but you still seem to be at a safe distance from that for the total number of synapses here. The only “wrong” thing in your code is that in method 2 you add the group syn twice to the network: once with the automatic collect() and once via the manual addition of syn_group. But Brian stores the objects in a set internally, so the duplicate addition shouldn’t matter.

Unfortunately, I cannot really debug this much further without the data. To check whether it has something to do with the size, maybe reduce the size of all layers by a factor of 10 or so and check whether you still see a difference (ignoring the actual result, just looking at whether it is the same). You could also check whether the generated synapses are really all exactly the same. Note that in addition to syn.i and syn.j, Synapses objects that are constructed with subgroups also have the attributes _source_i and _target_j, which give you the indices in the underlying NeuronGroup. I.e., the values of _source_i and _target_j from method 1 should correspond to the i and j values from method 2.

Thanks so much for your advice it has definitely helped me with debugging.

There was no attribute _source_i or _target_j, but looking at the documentation I believe _synaptic_pre and _synaptic_post give the underlying indices?

So I have been looking at which part of my model results in different synaptic indices, and it seems that using 'randn' in the delay or weight synapse variables makes the synapse indices not match between the two methods. I have been defining the delay and weight like this:

syn.delay = 'clip({}*ms + {}*randn()*ms, 0.1*ms, inf*ms)'.format(r.loc['Delay'], r.loc['Dstd'])
syn.w = '(({} + {}*randn()))'.format(r.loc['Weight'], r.loc['Wstd'])

I have been setting the seed using:

b2.seed(100)
np.random.seed(seed=100)

Also, interestingly, if I use 'randn' in the initialisation of the neuron voltages, such as:

neurons.v = '-58.0*mV + 10.0*mV*randn()'

the synapse indices are identical for both methods but the output spiking behaviour is different.

Any idea why using 'randn' would result in such a difference?

I’ve attached my code file that I’ve been testing with. Couldn’t upload the data file but I did make it accessible here.
test3.py (5.5 KB)

The _source_i and _target_j attributes only exist if you construct synapses from subgroups that do not start at 0 – you are right that _synaptic_pre and _synaptic_post are more robust, since they always exist.
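
To check this, something along these lines should work (a sketch, assuming both variants are built in the same script, with syn_group holding the per-projection Synapses objects from method 1 and syn the single object from method 2, projections created in the same order):

# compare the raw synaptic indices of the two methods
pre1 = np.concatenate([np.asarray(s._synaptic_pre) for s in syn_group])
post1 = np.concatenate([np.asarray(s._synaptic_post) for s in syn_group])
print(np.array_equal(pre1, np.asarray(syn._synaptic_pre)),
      np.array_equal(post1, np.asarray(syn._synaptic_post)))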

I now realize why you do not get the exact same random numbers in both cases (BTW: b2.seed is enough, it sets numpy’s random seed as well). Random numbers will only be the same if you generate the same number of random values in the same order. When you run this code in “Method 2”

syn.connect(i = pre_index, j = post_index)
syn.delay = 'clip({}*ms + {}*randn()*ms, 0.1*ms, inf*ms)'.format(r.loc['Delay'], r.loc['Dstd'])
syn.w = '(({} + {}*randn()))'.format(r.loc['Weight'], r.loc['Wstd'])

you create nsyn new connections, but you set delay and w for all existing synapses. After the first round, you will therefore generate more random numbers than with the other method (e.g. after two rounds, delays have been drawn for nsyn_1 + (nsyn_1 + nsyn_2) synapses instead of nsyn_1 + nsyn_2), which will affect both the connection indices and the weights. I did not run things, but I believe that if you change it to only set the values for the newly created synapses, it should give you the exact same numbers in both cases:

syn.connect(i = pre_index, j = post_index)
syn.delay[-nsyn:] = 'clip({}*ms + {}*randn()*ms, 0.1*ms, inf*ms)'.format(r.loc['Delay'], r.loc['Dstd'])
syn.w[-nsyn:] = '(({} + {}*randn()))'.format(r.loc['Weight'], r.loc['Wstd'])
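
(This relies on connect() appending the newly created synapses at the end of the synaptic arrays, so the slice [-nsyn:] addresses exactly the connections created in the current iteration.)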

Ah thank you so much! That solved my problem. Really appreciate your help.