My attempt at implementing STDP on the iris dataset

Hi, I have some questions about modelling. I hope people with the relevant knowledge can advise me on this:

What I am doing: I am training a network with STDP, using parameters from Song and Abbott (2000, 2001), to recognise a particular class of the iris dataset, in this case the first class only.

Architecture: I am using 4 Poisson input neurons connected to a single output neuron, with firing rates proportional to the intensity of the 4-feature vector of the iris data (class 0) multiplied by 50.

Stimulus: 900 ms of exposure time, followed by a 900 ms no-firing period to allow the neuron to decay back to its resting value.
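For concreteness, a minimal sketch of this kind of setup could look like the code below (I am using Brian2 here purely for illustration; the sample values and the output-neuron parameters are placeholders, not my exact settings):

```python
from brian2 import *
import numpy as np

# One class-0 (setosa) sample: sepal length, sepal width, petal length, petal width
features = np.array([5.1, 3.5, 1.4, 0.2])

# 4 Poisson input neurons, firing rates = feature values * 50 (in Hz)
inputs = PoissonGroup(4, rates=features * 50 * Hz)

# Single leaky integrate-and-fire output neuron (placeholder parameters)
output = NeuronGroup(1, 'dv/dt = -v / (10*ms) : volt',
                     threshold='v > 2*mV', reset='v = 0*mV', method='exact')
```

With this scaling, the first feature of the sample above gives a rate of 5.1 × 50 = 255 Hz and the last one 0.2 × 50 = 10 Hz.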

What happened: The weights were randomly initialised and multiplied by gmax, and after running the simulation for 900 ms × the number of samples, all the weights blow up to gmax.
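For reference, the STDP connection and the random initialisation scaled by gmax can be sketched like this (continuing the illustrative Brian2 sketch above, with the textbook Song and Abbott parameter values rather than the exact values I used):

```python
# Additive STDP in the style of Song and Abbott (2000), with the usual example parameters
gmax = 0.01                          # maximum weight
taupre = taupost = 20 * ms
dApre = 0.01 * gmax
dApost = -dApre * taupre / taupost * 1.05

stdp = Synapses(inputs, output,
                '''w : 1
                   dApre/dt = -Apre / taupre : 1 (event-driven)
                   dApost/dt = -Apost / taupost : 1 (event-driven)''',
                on_pre='''v_post += (w / gmax) * mV
                          Apre += dApre
                          w = clip(w + Apost, 0, gmax)''',
                on_post='''Apost += dApost
                           w = clip(w + Apre, 0, gmax)''')
stdp.connect()                       # all 4 inputs to the single output neuron
stdp.w = 'rand() * gmax'             # random initial weights scaled by gmax

# One presentation: 900 ms of stimulation followed by 900 ms of silence
run(900 * ms)
inputs.rates = 0 * Hz
run(900 * ms)
```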

What I expected: If you look at the input data for class 0, the first two features are the largest, followed by the third and then the fourth. Hence I expected the final weights to end up in a similar proportion after training, but instead they are all at the maximum. What is wrong here?

I am not sure what went wrong here: is it gmax, the weight initialisation, the STDP parameters, or something else? I would really appreciate it if someone could advise me on the right values or tell me whether the model itself is wrong. I have been stuck on this for quite a long time (I have to do this…). Thank you!!!

Additional question: If I am training a network to recognise all 3 classes, can I use 4 input × 6 output neurons for this task? If so, do I encode all the classes into one long list as a Poisson spike train and feed that same Poisson spike train into the 4 × 6 architecture? Pardon my English.

Here is the link:

Again, I’m not a machine learner, but I feel that there is a misunderstanding about the role of STDP. It is not a learning rule in the sense that backpropagation is, i.e. just presenting some stimuli and using STDP for the connections to a target neuron will not make these connections learn the stimulus class. If you present several different stimuli and simply add more output neurons and connections, you will end up with several output neurons that do the same thing, since they get exactly the same input and there’s no reason they should do anything else. To do some kind of learning in the classification sense, you have to add additional mechanisms such as competition between the output neurons (which would still be unsupervised learning, i.e. you have to select the neurons corresponding to your classes afterwards), or a teaching signal, i.e. making the output neurons spike when you want them to.
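To illustrate what I mean by competition, a minimal (purely illustrative) way to do this in something like Brian2 is mutual inhibition between the output neurons; the group size, equations and numbers below are arbitrary placeholders:

```python
from brian2 import NeuronGroup, Synapses, mV, ms

# Hypothetical group of output neurons (e.g. 6, as in the 4 x 6 idea above)
outputs = NeuronGroup(6, 'dv/dt = -v / (10*ms) : volt',
                      threshold='v > 2*mV', reset='v = 0*mV', method='exact')

# Lateral (mutual) inhibition: whenever one output spikes, it pushes the
# membrane potential of all the others down, so the outputs compete
inhibition = Synapses(outputs, outputs, on_pre='v_post -= 5*mV')
inhibition.connect(condition='i != j')   # all-to-all, no self-connections
```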

To emphasize my point: there are many simple machine learning data sets, such as MNIST or iris, that can easily be learned by an MLP + backpropagation (or similar approaches). Learning these kinds of tasks with a spiking neural network is far from trivial. Simply converting the inputs into Poisson spike trains and using them with spiking neurons and STDP will not work. Doing machine learning with SNNs is an area of active research, and STDP can certainly be useful in certain circumstances (in particular for tasks that actually use the timing of spikes), but it is not a general method for learning. The usual approaches to doing machine learning tasks with an SNN instead of a classical ANN use a variant of the backpropagation algorithm that works with spikes.