Good afternoon,
Description of problem
I am working on an SNN, which I am trying to train on the Spiking Heidelberg Digits dataset. The data come as arrays containing the spike times and the indices of the neurons that spiked. I am running my simulations in C++ standalone mode because of the time required for training.
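For reference, each sample in the dataset is shaped roughly like this (toy values, not real SHD spikes):

```python
import numpy as np

# One sample: spike times (in seconds) and, for each spike, the index of
# the input channel (0..699) that produced it. Values here are made up.
times = np.array([0.001, 0.0032, 0.0100, 0.0471])  # sorted spike times
units = np.array([12, 655, 12, 300])               # neuron/channel indices

assert times.shape == units.shape  # one index per spike
```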
The problem is that I can’t find a working way to load these arrays into a SpikeGeneratorGroup that evolves through time, presenting different samples in a single run.
Minimal code to reproduce problem
TimedArray + SpikeGeneratorGroup implementation
```python
import brian2 as b
from brian2 import ms

# list3 / padded_lista2 hold the spike times and neuron indices for each
# 125 ms presentation window (built elsewhere from the dataset)
stimulus = b.TimedArray(list3 * ms, dt=125 * ms)
stimulus1 = b.TimedArray(padded_lista2, dt=125 * ms)

indices = []
times = []
input_shd = b.SpikeGeneratorGroup(700, indices, times * ms)

@b.network_operation(dt=125 * ms)
def update_spikes():
    global indices, times
    print('1')
    indices = stimulus1(b.defaultclock.t, range(2265))
    print(indices)
    times = stimulus(b.defaultclock.t, range(2265))
    print(times)
    input_shd.set_spikes(indices, times)
    print(input_shd.events)

mon = b.SpikeMonitor(input_shd)
b.run(1000 * ms)
```
TimedArray + NeuronGroup implementation
```python
stimulus = b.TimedArray(list3 * ms, dt=125 * ms)
stimulus1 = b.TimedArray(padded_lista2, dt=125 * ms)

eqs = '''
wlr = int(i == stimulus1(t, i)) * int(t == stimulus(t, i)) : 1
'''
neurons = b.NeuronGroup(700, eqs, threshold='wlr > 0', reset='wlr = 0', method='exact')
mon = b.SpikeMonitor(neurons)
b.run(1000 * ms)
b.plot(mon.t / ms, mon.i, '.k')
```
What you have already tried
I have tried two TimedArrays, one for the indices and one for the times, which are then loaded into the SpikeGeneratorGroup. However, I found that even using the set_spikes method inside a @network_operation, the spikes can only be changed between runs, not during one, so this won’t work for me.
Then I tried creating a NeuronGroup to take the place of the SpikeGeneratorGroup, with each neuron firing when its index coincides with the index in the indices array and the simulation time with the time in the times array. This has not really worked, and even if I eventually get it working, I suspect it will be terribly inefficient.
The last thing I am considering is loading the entire dataset into a single times array and a single indices array, shifting the spike times of each sample so that they match the simulation time; in principle this should work. However, with an array of roughly 20M data points, I am worried about the overhead that building and operating on it will add to my simulation.
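What I have in mind for this third approach is roughly the following NumPy sketch (toy values; the 125 ms window matches the dt of my TimedArrays, and the helper name is just for illustration):

```python
import numpy as np

def concatenate_samples(samples, gap=0.125):
    """Merge per-sample (times, units) arrays into one long spike train.

    samples : list of (times, units) pairs, times in seconds within a sample
    gap     : presentation window per sample, in seconds (125 ms here)
    Returns merged (times, units), with sample k shifted by k * gap.
    """
    all_times, all_units = [], []
    for k, (t, u) in enumerate(samples):
        all_times.append(np.asarray(t) + k * gap)  # shift into window k
        all_units.append(np.asarray(u))
    return np.concatenate(all_times), np.concatenate(all_units)

# Two toy samples (made-up spikes)
s0 = (np.array([0.001, 0.010]), np.array([3, 5]))
s1 = (np.array([0.002, 0.020]), np.array([7, 1]))
times, units = concatenate_samples([s0, s1])
# times → [0.001, 0.010, 0.127, 0.145]; the merged arrays would then be
# passed once to SpikeGeneratorGroup at construction time
```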
Does anyone have a suggestion on how I could tackle this problem, or has anyone tried something similar?