Pattern recognition in Spiking Neural Nets using Brian2

Hello Brian2 users,

In order to learn Brian2 and build more efficient spiking neuron models, I would like to work on implementing simple neuron models used for pattern recognition from research papers. This would benefit both the engineering side (coding and modelling) and the research side (reading the literature).

To begin with, I would like to propose implementing this paper, which (hopefully) looks simple:
Pattern recognition with Spiking Neural Networks

Brief details from the paper:

Task: Classify an image as either X or 0
Neuron model: The paper uses Izhikevich neurons
Learning rule: STDP is used as the learning rule for training
# of Trials: The authors simulated 1000 trials and calculated accuracy metrics
Network Architecture: 25 input neurons, 5 hidden-layer neurons, and 2 output neurons

I would be interested in forming a team to work on this project. My timezone is CST, and I look forward to hearing from you all and learning from this wonderful community.


Hi @touches,
Learning by doing is great. I am interested in the topic, but I have no experience in pattern recognition. I don’t know how helpful I could be to the team, but I’d like to follow along.

No worries, the task is simple: classify the input as either X or 0. By observing the firing pattern the spiking net produces for each input, one can make the classification. I made this post just to see if there are folks interested in forming a team; if not, I’ll work on my own and get help from this forum.

Awesome, I’ll play around with the paper.


Sure, I’ll post updates and link to the colab notebook

I want to work on this project, but I haven’t worked with SNNs or pattern recognition before. I hope it will give me good experience in understanding how Brian works and what SNNs are. I will start with STDP.



I have been working on this.
The network was trained only with circle images (87 circle images). The neuron trained to recognize a circle fires on its own after the learning phase.
The testing phase is composed of 6 stimuli with a circle pattern and 7 stimuli with cross images (13 images in total). How should I calculate accuracy for this setup?
I tried to calculate it from num_spikes, but there are neurons that fire even when I present cross images. Could you give me a clue, please?

Thank you!

If you only train with one class, there’s no reason for the network to develop any kind of specificity - the easiest way to “solve” training is for the network to respond positively to all inputs. You’ll want to use all relevant classes during training, so that the network is forced to make a decision between the possible classes.
As for accuracy, you’ll want to decide what constitutes a correct answer - presumably a measure linked to the output neurons’ spike counts. Since you’re not using this as a training signal, this can probably be as simple as “the output neuron for the target category has the highest number of spikes”. If the network has successfully learned, it should be straightforward to decide (post hoc) which neuron is responsible for which category.
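For instance, once you have per-trial spike counts for the two output neurons, the "highest spike count wins" rule can be scored like this (plain Python; the counts and labels below are made up for illustration):

```python
# Per-trial spike counts for the two output neurons: (circle_neuron, cross_neuron)
trial_counts = [(12, 3), (9, 1), (2, 10), (0, 7), (8, 8), (1, 6)]
labels = [0, 0, 1, 1, 1, 1]  # 0 = circle, 1 = cross

# Predict the class whose output neuron spiked most (ties default to circle here)
predictions = [0 if circle >= cross else 1 for circle, cross in trial_counts]
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(accuracy)  # 5 of 6 trials correct
```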


Thank you @kernfel,
I gave 25 circle images for training and 13 images for testing (a mix of circle and cross images).
Circle_spike_count = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4, 8, 8, 11, 15, 18, 21, 21, 21, 24, 24]
len(circle_spike_count) = 38
And circle_spike_count was collected via:

circle_spike_count = []
#for i in range(1000):
for train in range(25):
  Syn_inp_hid.learning = True
  Syn_hid_out.learning = True
  izh_output.simulate = True
  izh_cross.v = c

  digit_ta_input = b2.TimedArray([inp_data.T]*(volt/second), dt = 10*ms)
  circle_spike_count.append(spike_mon_out.num_spikes) #ORG

#TESTing the simulation
#for k in range(1000):
for test in range(13):
  Syn_inp_hid.learning = True
  Syn_hid_out.learning = True
  idx = [0,0,1,1,0,1,1,1,1,0,0,1,0]    # 46.15%
  #idx = [1,1,1,1,1,1,1,0,0,0,0,0,0]    # 53.84%
  #idx = [0,0,0,0,0,0,1,1,1,1,1,1,1]    # 38.46%
  #idx = [1,1,1,1,1,1,1,1,1,1,1,1,1]   # 0%, because the network doesn't know what a cross is;
                                       # we ask it to recognize circle-or-not but feed it only crosses
  #idx = [0,0,0,0,0,0,0,0,0,0,0,0,0]   # 100% because network fed with circle only
  #idx = [1,1,1,1,1,1,1,1,1,1,1,1,0]   # 15%
  #idx = [0,0,0,0,0,0,0,0,0,0,0,0,1]   # 84%
  #idx = [1,0,0,0,0,0,0,0,0,0,0,0,0]
  #idx = random.choice([0,1])

  if idx == 0:
    izh_cross.v = c
    izh_circle.v = 0*mV
    izh_output.simulate = True

  elif idx == 1:
    izh_cross.v = 0*mV
    izh_circle.v = c
    izh_output.simulate = False
  inp_data = data[idx[test]]
  digit_ta_input = b2.TimedArray([inp_data.T]*(volt/second), dt = 10*ms)
  circle_spike_count.append(spike_mon_out.num_spikes) #ORG

Ref. taken from @touches.
My question is: why do the neurons fire differently in the testing phase? In the training phase, the circle neurons fired only once, so why do they fire differently when we give an external stimulus in the testing phase?

Thank you

I see a couple of possible causes for unexpected behaviour here:

idx = [0,0,1,1,0,1,1,1,1,0,0,1,0]    # 46,15%
# ...
if idx == 0:

You’ll probably want to be comparing idx[test] instead of the whole list.


I don’t think you’re resetting in between, so num_spikes adds to the previous iteration’s values. I’d subtract the previous spike count to get just the iteration’s own spikes here.


Good afternoon!

Thank you very much, I corrected my mistake.

I used this and it solved my problem. Thank you!
But I have another problem: in the paper, the (training) simulation time is 15 s, but when I use that, the process gets killed on my local PC. I also tried running it on Colab, and there was an error: my session crashed after using all available RAM.
Could you give me one full example of using @network_operation? I looked at the Brian2 documentation but couldn’t find a suitable approach for my case. In my situation the external input doesn’t change every dt = 10*ms (or any other fixed time step); it must change as the paper describes ("10 ms after stimulating the input layer, the output neurons are given an external stimulation") between t = 15.5 s and 25 s. Could you give me some guidance, please?

Good evening, Dear Brian2 Users,

I wanted to know: has this problem been solved?

Here (in my implementation results), as you can see, the STDP weights don’t show the expected behaviour from the paper’s results.

That might be the reason for my current problems. Firstly, even though I sent 150 circle images in the training phase, not all of them cause firing. Secondly, I really struggled to find a Brian2 function to change the input data at the given times {15.5, 16, 16.5, 17, 17.5, 18, 18.5, 19.5, 20, 21, 23, 24, 24.5} seconds, so I used my own implementation. In addition, I run the simulation in ms rather than seconds, because of the issue in my previous post. However, when I change the input data over those time ranges, my testing data should be only 13 items, but I’m getting 23. What could be the reason?

  1. Does Brian2 have a function to send (change) input data at a specific (given) time during the simulation?

Thank you!

I don’t have the bandwidth to reply in detail at the moment, but:

You can always run the simulation in chunks, i.e., set a particular input, run for half a second, set the next input, run a little more, etc. – you don’t have to do the entire simulation in a single call to run.


The easiest way to do it is something like this:

@network_operation
def change_something(t):
    # Run the body exactly once, the first time t passes 100 ms
    if not hasattr(change_something, 'already_run') and t >= 100*ms:
        # your code here that changes something
        change_something.already_run = True

I just added a new feature request / issue to our tracker for a simpler way of doing this (e.g. network_operation_once or something like that).


Did you just… set an attribute on the function that’s doing the attribute check? That’s pretty cool, I’m totally going to steal that trick. :upside_down_face:


Hello all,
I am interested in pattern recognition using SNNs. Were you able to make a tutorial or implement this paper?