How to model lateral inhibitory and self-recurrent synapses?

I am working on implementing a cortex model from this paper: Computational model of prefrontal cortex

The architecture from the paper is somewhat similar to @mstimberg's Brian2 online tutorial project; in the paper it is constructed using neural networks.

The Input Layer receives the stimulus, which flows into the Memory Layer (which provides inhibition and has recurrent connections for sustained activity). The Output Layer detects the target.

The layer connectivity from the paper is defined as:

Units in the input module make excitatory connections on the response module,
both directly and indirectly through the memory module. Lateral inhibition within
each layer produces competition for representations. Activity from the cue stimulus
flows to the memory module, which is responsible for maintaining a trace of the
relevant context in each trial. Units in the memory module have self-excitatory
connections, which allow for the activity generated by the cue to be sustained in
the absence of input. The recurrent connectivity utilized by each unit in this module
is assumed to be a simpler, but formally equivalent analogue of a fully connected
recurrent cell assembly.

My understanding from this and the architecture diagram is that we need to define the following synapses:

  • Input → Input
  • Input → Memory
  • Input → Output
  • Memory → Memory
  • Memory → Output
  • Output → Output

I am using two 5 × 5 matrices as input and feed them in via a PoissonGroup. I am using STDP as a learning rule, which is not given in the paper.

I have the following LIF equations:

# Excitatory
exc_lif_eqs = '''
dv/dt = -((v - El) - (g_exc * (v - E_exc) / volt) - (g_inh * (v - E_inh) / volt))/taum : volt
dg_exc/dt = -g_exc/taue: volt
dg_inh/dt = -g_inh/taui: volt
'''
# Inhibitory
inh_lif_eqs = '''
dv/dt = -((v - El) - (g_exc * (v - E_exc) / volt) - (g_inh * (v - E_inh) / volt))/taum : volt
dg_inh/dt = -g_inh/taui: volt
dg_exc/dt = -g_exc/taue: volt
'''
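For completeness, these equations (and the synapse code further below) assume some externally defined constants. The values here are just placeholders I am experimenting with, not taken from the paper:

# Placeholder constants (my own values, not from the paper)
El = -70 * mV             # resting / reset potential
E_exc = 0 * mV            # excitatory reversal potential
E_inh = -80 * mV          # inhibitory reversal potential
taum, taue, taui = 20 * ms, 5 * ms, 10 * ms
tau_pre, tau_post = 20 * ms, 20 * ms    # STDP trace time constants
A_pre, A_post = 0.01, -0.012            # STDP amplitudes (scaled by mV in on_pre/on_post)
w_max = 1 * mV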


# Synapse definitions:
    # Input to Memory (with STDP)
    Syn_inp_mem = b2.Synapses(inp, mem,
                              model='''
                              w : volt
                              da_src/dt = -a_src/tau_pre : volt (clock-driven)
                              da_tgt/dt = -a_tgt/tau_post : volt (clock-driven)
                              ''',
                              on_pre='''
                              g_exc += w
                              a_src += A_pre * mV
                              w = clip(w + a_tgt, 0 * mV, w_max)
                              ''',
                              on_post='''
                              a_tgt += A_post * mV
                              w = clip(w + a_src, 0 * mV, w_max)
                              ''',
                              method='euler',
                              name='Syn_inp_mem')

    # Input to Output
    Syn_inp_out = b2.Synapses(inp , out , 'w : volt' , on_pre = 'g_exc += w')

    # Input to Input
    Syn_inp_inp = b2.Synapses(inp , inp)

    # Memory to Memory
    Syn_mem_mem = b2.Synapses(mem , mem , 'w: volt' , on_pre = 'g_exc += w')

    # Memory to Output
    Syn_mem_out = b2.Synapses(mem , out , 'w: volt' , on_pre = 'g_exc += w')

    # Output to Output
    Syn_out_out = b2.Synapses(out , out , 'w: volt' , on_pre = 'g_inh -= w')

    
    # Set up connections
    Syn_inp_inp.connect()
    Syn_inp_mem.connect()
    Syn_inp_out.connect()
    Syn_mem_mem.connect()
    Syn_mem_out.connect()
    Syn_out_out.connect()

    Syn_inp_mem.w = 'rand() * w_max'

From the paper, units in the memory module have self-excitatory connections, and hence I am updating g_exc += w.
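Instead of the plain connect() calls above, this is a minimal sketch of the two connection patterns I am considering (my own assumption, the paper does not spell this out):

    # My assumption, not from the paper: self-excitation as one-to-one recurrence,
    # lateral inhibition as all-to-all within a layer, excluding self-connections.
    Syn_mem_mem.connect(j='i')                # each memory unit excites only itself
    Syn_out_out.connect(condition='i != j')   # every output unit inhibits the others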

The rest of the self-connected modules have inhibition. In the synapse plot I get, I can only see the excitatory g_exc having values, while the inhibitory g_inh values are all 0.

Additionally, I get 5 spikes from the Memory layer (5 neurons in the Memory layer) and 2 spikes from the Output layer (2 neurons defined for the Output layer, not shown here) for a single stimulus. I am not sure how to pick a target, since both Output neurons spike at the same time (i.e. at 0 ms), and I am not sure whether my synaptic connections or the equations are correct.

I hope this falls under the modeling category and would greatly appreciate any assistance or advice on modeling the synapses. As the code is lengthy, I am posting a gist link: code

I used the following references for inhibition: Kremer_et_al_2011_barrel_cortex


A couple of things I’ve noticed skimming the code:

  • Since you are specifying a strongly negative reversal potential E_inh, the associated conductance g_inh should be a positive value (which would also make sense physically). However, Syn_out_out, the only inhibitory connection that I can see, subtracts its (presumably positive) weight from g_inh.
  • Syn_inp_inp does nothing (no dynamics specified), but also, since the inp population is a PoissonGroup, it cannot do anything; PoissonGroups are not intended as synaptic targets. Likely, you’ll want to feed input signals from a PoissonGroup to a normal NeuronGroup instead.
  • With the exception of Syn_inp_mem, your weight values are uninitialised, and therefore zero. The reason you are getting any spikes at all in the output population is that you also aren’t initialising the membrane voltages – they, too, start at 0, which is above threshold.

More generally, I would encourage you to build up a model incrementally, in smaller steps: Start with just the input and one population, check that it does what you expect it to, then add one component at a time, checking at every step that your changes did exactly what they were meant to, and nothing else. Since you’re using github for gists, I assume I needn’t explain the benefits of version controlling the process. :wink:
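To illustrate the first two bullet points, here is a rough sketch (names like relay and n_inp are placeholders of mine, not drop-in code):

# Rough sketch only; sizes and names are placeholders.
# 1) Use the PoissonGroup only as a presynaptic source and feed it into a proper NeuronGroup:
relay = b2.NeuronGroup(n_inp, exc_lif_eqs, threshold='v > Vt', reset='v = El', method='euler')
relay.v = 'El + rand() * (Vt - El)'   # initialise below threshold instead of at 0
Syn_poi_relay = b2.Synapses(inp, relay, 'w : volt', on_pre='g_exc += w')
Syn_poi_relay.connect(j='i')
Syn_poi_relay.w = 'rand() * w_max'

# 2) Keep inhibitory weights positive and add them to g_inh; the negative E_inh does the inhibiting:
Syn_out_out = b2.Synapses(out, out, 'w : volt', on_pre='g_inh += w')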


Thank you @kernfel for adding some clarity.

That makes sense; I should not subtract, as I missed the strongly negative value given for E_inh. Syn_out_out is the only inhibitory connection because of what the paper specifies, and that is also one of my questions: the paper states there is lateral inhibition within each layer, but the Memory layer is self-excitatory. The way I understood it, the Memory layer is excitatory and only the Output layer would have inhibitory recurrent connections. I am not sure how to model this requirement; any tips would be delightfully accepted :smiley:

I knew this would not have any effect since it is a PoissonGroup; I'll be removing it.

I did initialize the membrane voltages and got zero spikes. This, I believe, is due to flaws in the model design, which I want to understand better. I agree with the last point; the reason I posted the full code is that the forum asks for a complete working example along with what the user has tried to achieve. I wanted to test out the competing inhibitory and excitatory behaviour in selecting a target. Once again, thank you very much for your time; any input will be much valued :smiley:


I’ve had a quick look through the paper. It describes what could be called a classical RNN: The ‘neuron’ units are leaky integrators (eqn 1) with a sigmoidal output nonlinearity (eqn 2). In other words, the units do not spike, but rather produce continuous output, equivalent to a spike rate. In addition, since I couldn’t find any indication to the contrary, I think it’s safe to assume that the weights do not follow Dale’s principle – meaning that a single neuron can be both excitatory and inhibitory.

You can implement this directly in Brian2, but if you’re interested in spiking dynamics and moderately realistic circuitry (i.e., inhibitory and excitatory neurons), then it’s very likely that you’ll have to adjust things like population size, split each section’s populations into exc and inh, and tune the weights until you get output spikes that reflect the input.
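For reference, a direct (non-spiking) implementation of such rate units in Brian2 could look roughly like this; all names and values below are placeholders of mine, not taken from the paper:

import brian2 as b2

tau = 20 * b2.ms   # placeholder time constant
rate_eqs = '''
dx/dt = (-x + I_syn) / tau : 1        # leaky integrator (cf. eqn 1)
y = 1 / (1 + exp(-(x - theta))) : 1   # sigmoidal output, i.e. a rate (cf. eqn 2)
I_syn : 1                             # summed weighted input (set by the synapses)
theta : 1                             # placeholder bias of the sigmoid
'''
units = b2.NeuronGroup(6, rate_eqs, method='euler')

# weights may be positive or negative, since Dale's principle is not enforced
W = b2.Synapses(units, units,
                '''w : 1
                   I_syn_post = w * y_pre : 1 (summed)''')
W.connect()
W.w = '0.5 - rand()'   # placeholder: random weights of either sign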


Hi @touches. I did not have a detailed look at your code, but just a general comment (ignore in case it was already obvious): the model you are referring to is not a spiking network but a population rate model. Reproducing it with a spiking network will therefore not be straightforward. You’ll have to replace each of the “units” (A, B, …) by a population of neurons. This is also the idea behind the lateral inhibition in their model, I think. Each population is supposed to be a mix of excitatory and inhibitory neurons. From a cursory look it also seems they are not using a biological learning rule but are rather calculating the necessary changes outside of the model itself. How to replace this with an STDP rule is certainly non-trivial and an area of active research…


Oops, I was typing in parallel with @kernfel, sorry for the redundancy :smile:

Scooped! Sorry :smiley:


Yes @mstimberg, they have used a recurrent neural network, and I am trying to model it via spiking neural nets. Lateral inhibition is achieved by forcing the weights to be negative. I am trying to get a very simple model with excitatory and inhibitory synapses that produces a winner spike for the target, using the same connectivity architecture given in the paper. Thank you for clarifying the paper further.

If my understanding is correct, does each unit (A, B, …) correspond to a population of neurons defined via a NeuronGroup? The paper has 4 stimulus conditions, and I am trying with two simple image patterns (X and O). In this case, I can have, say, two units per section. This seems complicated, but let me try and get some learning by failing :grinning:

Thank you once again @kernfel

I am trying to model it using spiking nets. By splitting each section into exc and inh, do you mean having NeuronGroups for each unit and marking their behaviour as exc or inh? I'll tune the weights and, as you advised, build the model incrementally and see how it goes.

Yeah, the assumption with rate models like this one is often that each unit or small population represents something like a pool of tightly interconnected excitatory and inhibitory neurons. So in this case, I’d make six populations, one for each of the [population x neuron type] combinations, with the ‘input’ populations sitting downstream of a PoissonGroup or such.

Good luck, let us know how you’re getting on!


Just one more remark: I wouldn’t use a separate NeuronGroup for each of the populations, in general it is more efficient to have a small number of NeuronGroups and to use subgroups to model the subdivisions. If all your neurons are of the same type (same equations), you could even use a single NeuronGroup for everything. Something along the lines of

neurons = NeuronGroup(...)
input_A = neurons[:50]
input_B = neurons[50:100]
input_X = neurons[100:150]
input_Y = neurons[150:200]
memory_A = neurons[250:300]
...
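The subgroups can then be used like full groups everywhere else, e.g. as synaptic sources/targets or in monitors (reusing the names from above; the synapse details are purely illustrative):

S_inp_mem = Synapses(input_A, memory_A, 'w : volt', on_pre='g_exc += w')
S_inp_mem.connect()
mon_mem_A = SpikeMonitor(memory_A)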

Thank you @mstimberg, this is a neat approach which I didn’t think of before.

I started with two input arrays and created neuron populations receiving input from a PoissonGroup. I am building the model incrementally, as advised by @kernfel. While testing spike propagation from the stimulus to the Input layer, no spikes were generated; the voltage fails to reach threshold. Even if I set the reset potential to 0 * mV, there is no spiking activity.

The other point I was not clear on: if I use a TimedArray instead of passing the input directly to the rates parameter of the PoissonGroup (as in the code below), no Poisson spikes are generated for the first stimulus, while the second stimulus gives around 568 spikes, yet there is still no spiking observed in the NeuronGroups.
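For reference, if I understand the TimedArray docs correctly, a 1-D TimedArray is indexed by time only, so to give each of the 25 neurons its own rate I would presumably need a 2-D array indexed as stim(t, i), along these lines (a sketch, reusing circle_img and tot_size from the code below):

    # Sketch: a per-neuron rate pattern via a 2-D TimedArray (values: time x neuron)
    stim_circle = b2.TimedArray(np.tile(circle_img * 8, (3000, 1)) * Hz, dt = 1 * ms)
    poi_circle = b2.PoissonGroup(tot_size, rates = 'stim_circle(t, i)')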

I referenced a couple of equations and models for initial values, and they all happen to be within the range of what I have specified, so I am stumped as to what modelling detail or parameter I may have missed.

    r = -50 * mV
    El = -70 * mV
    Vt = -65 * mV
    taum = 20 * ms

    simple_lif = '''

    dv/dt = (El - v)/taum : volt

    '''

    circle_img = np.array([0, 1, 1, 1, 0,
                           1, 0, 0, 0, 1,
                           1, 0, 0, 0, 1,
                           1, 0, 0, 0, 1,
                           0, 1, 1, 1, 0])

    cross_img = np.array([1, 0, 0, 0, 1,
                          0, 1, 0, 1, 0,
                          0, 0, 1, 0, 0,
                          0, 1, 0, 1, 0,
                          1, 0, 0, 0, 1])

    data = [circle_img , cross_img]

    input_rate_circle = b2.TimedArray(data[0] * 8 * Hz , dt = 1 * ms)

    input_rate_cross = b2.TimedArray(data[1] * 8 * Hz , dt = 1 * ms)

    poi_circle = b2.PoissonGroup(tot_size, rates = data[0] * 8 * Hz , name = 'poi_circle')
    poi_cross = b2.PoissonGroup(tot_size , rates = data[1] * 8 * Hz , name = 'poi_cross')

    neurons = b2.NeuronGroup(total_neurons ,
                               simple_lif ,
                               threshold = 'v > Vt' ,
                               reset = 'v = El' ,
                               refractory = 5 * ms ,
                               method = 'euler' ,
                               name = 'neurons')

    # Input circle
    neu_circle = neurons[0 : 25]
    neu_circle.v = 'El + rand() * (Vt - r)'

    # Input cross
    neu_cross = neurons[25 : ]
    neu_cross.v = 'El + rand() * (Vt - r)'

    # Create Synapses
    Syn_poi_circle = b2.Synapses(poi_circle , neu_circle , 'w : volt' , on_pre = 'v += w')
    Syn_poi_cross = b2.Synapses(poi_cross , neu_cross , 'w: volt' , on_pre = 'v += w')


    # Set up connections
    Syn_poi_circle.connect()
    Syn_poi_cross.connect()


    # Initialize weights
    Syn_poi_circle.w = 'rand() * w_max'
    Syn_poi_cross.w = 'rand() * w_max'


    # Set up spike monitors
    spike_mon_poi_circle = b2.SpikeMonitor(poi_circle)
    spike_mon_poi_cross = b2.SpikeMonitor(poi_cross)
    
    spike_mon_neu_circle = b2.SpikeMonitor(neu_circle)
    spike_mon_neu_cross = b2.SpikeMonitor(neu_cross)


    # Create a network
    net = b2.Network([poi_circle,
                      poi_cross ,
                      neurons,
                      neu_circle ,
                      neu_cross ,
                      spike_mon_poi_circle ,
                      spike_mon_poi_cross ,
                      spike_mon_neu_circle ,
                      spike_mon_neu_cross,
                      Syn_poi_circle ,
                      Syn_poi_cross
                      ])


    net.store()

    total_time = 3000 * ms

    net.run(total_time , report = 'text')

    print('Poisson Circle Spikes: {}'.format(spike_mon_poi_circle.num_spikes))
    print('Poisson Cross Spikes: {}'.format(spike_mon_poi_cross.num_spikes))

    print('Neuron Circle Spikes: {}'.format(spike_mon_neu_circle.num_spikes))
    print('Neuron Cross Spikes: {}'.format(spike_mon_neu_cross.num_spikes))

Output

Poisson Circle Spikes: 258
Poisson Cross Spikes: 196
Neuron Circle Spikes: 0
Neuron Cross Spikes: 0

PS: The previous post is incomplete and was accidentally submitted, and I am unable to edit it.

If spikes aren’t appearing, it may be worth recording the membrane potentials, to see just how much of an effect your synapses are having. Maybe it’s a simple case of scaling up the weights some, or lowering the threshold. Another parameter to tweak might be the input rate - perhaps you need more than 8 Hz from some ~10 neurons to drive robust activation.

I’m not quite sure what r is trying to be. Wouldn’t it make sense to initialise voltages to random values between El and Vt?
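Concretely, something along these lines (the values are just placeholders to tweak):

neurons.v = 'El + rand() * (Vt - El)'    # random start between rest and threshold
Syn_poi_circle.w = 'rand() * w_max * 5'  # or simply increase w_max and/or the Poisson rate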


Thank you @kernfel for your time. Yes, you are correct: r was just a constant for initializing, and I have removed it. I also increased the rate from 8 Hz to 255 Hz as you suggested, and I do get a lot of spikes now :smiley:

Poisson Circle Spikes: 9031
Poisson Cross Spikes: 6902

Neuron Circle Spikes: 8985
Neuron Cross Spikes: 6347

A general neuroscience question: does this large number of spikes mean the network is exploding, or is it normal for a moderately realistic biological network?

I recorded the membrane potentials before and after running the simulation, and these were the values I observed:

Before

#### Circle ###
<neurons_subgroup.v: array([ -87.45777788, -131.1728713 ,  -75.03174258, -125.59302497,
       -119.83879021, -101.90102738, -118.32517866, -115.13042689,
       -114.71595333,  -89.37221816, -108.33406159, -124.52575257,
       -114.39738138,  -81.90026871,  -76.9430188 , -131.90470806,
       -125.55883388, -110.20652349, -119.23450362, -105.65146453,
        -74.29377306,  -98.68415324, -105.42536855, -119.15198877,
       -131.61301866]) * mvolt>

#### Cross ###
<neurons_subgroup_1.v: array([-121.53774577, -119.24251707, -131.5451968 ,  -70.69858989,
        -97.1654104 , -126.28904867, -117.22975322,  -95.33222155,
       -118.2385209 , -134.2074311 ,  -77.29799948, -123.30626695,
       -125.75562716, -119.17627121, -120.022363  ,  -94.71243317,
        -98.39737607, -105.28268941,  -76.44090875, -124.9472608 ,
       -121.58074211,  -96.30563727,  -73.74548107, -123.95490935,
        -74.63076346]) * mvolt>

After running the simulation

#### Circle ###
<neurons_subgroup_1.v: array([-67.77419659, -65.12439751, -68.76862853, -68.07781664,
       -70.        , -66.94946481, -66.40735491, -65.31432276,
       -65.31275349, -66.25217044, -67.52950803, -65.33906322,
       -66.97287309, -66.96160785, -69.36637644, -69.0090021 ,
       -66.45936198, -66.8985579 , -67.59085533, -66.1044681 ,
       -69.46733768, -69.18801794, -66.94773262, -68.60508163,
       -68.10110016]) * mvolt>

#### Cross ###
<neurons_subgroup.v: array([-67.34575255, -69.60954407, -67.93917088, -68.38213232,
       -69.69334647, -66.90404775, -69.49608766, -65.29627317,
       -68.34473049, -69.75727731, -69.02725163, -65.42390163,
       -67.7126216 , -65.64673079, -69.53262734, -65.70255271,
       -67.39757024, -66.23129594, -65.97173104, -66.90237362,
       -68.67297307, -65.79630057, -66.5472881 , -69.0570643 ,
       -65.25653556]) * mvolt>

Shouldn't the voltage values be equal to the reset potential El? I did not record them via StateMonitors; could that be a reason?


Nice!

They’d only be equal to El if the neurons all spiked on the final time step, i.e. just after a reset. Note that the refractory period only prevents a spike from being emitted, but does not affect synaptic integration - so even during the refractory period, the input spikes will be added onto the membrane voltage.
To get a better idea of what’s happening dynamically, I’d use a StateMonitor and plot some of the traces.
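E.g., just a sketch:

import matplotlib.pyplot as plt

v_mon = b2.StateMonitor(neurons, 'v', record=[0, 1, 2, 3, 4])
net.add(v_mon)
net.run(total_time)

plt.plot(v_mon.t / ms, v_mon.v.T / mV)   # a few membrane voltage traces
plt.xlabel('time (ms)')
plt.ylabel('v (mV)')
plt.show()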

On biological realism: In cortex, you’ll typically see individual neurons spike quite sparsely, no more than ~20 Hz or so on average, although peak rates can certainly be higher. Of course, this depends a lot on neuron type, brain area, and the wider context.
Of course, your network is so far only excitatory, so it probably isn’t critical to aim for realistic rates just yet. Adding in lateral/recurrent inhibition and excitation will make things more exciting anyway. :slight_smile:


Just to make sure there isn’t some setting that is wrong on our side: I think you should be able to edit your own posts. Below your post there should be an edit button (pencil icon), or does that not work for you?

I did click on the pencil icon to edit it and rewrote the post, but while trying to submit, I got a pop-up saying 'I don’t have access to view the resource'. I tried multiple times but could not submit the post, and the same message kept appearing.

Thank you @kernfel for clarifying. I’ll be adding in more layers and hoping to observe interesting properties.

It is certainly annoying if you can’t edit your own posts… I used my admin rights to make a trivial edit “as you” for testing and it worked, so it does not seem to be a permission issue. Could you maybe try again? It might have been a temporary fluke due to some update on the server side or something like that. If it still doesn’t work, I’ll look into this again.