Dynamically loading spike times into a SpikeGeneratorGroup in C++ standalone mode

Good afternoon,

Description of problem

I am working on an SNN that I am trying to train on the Spiking Heidelberg Digits dataset. The data come as arrays containing the spike times and the indices of the neurons that spiked. I am running my simulations in C++ standalone mode because of the time required for training.

The problem is that I can't find a working way to load these arrays into a SpikeGeneratorGroup that evolves through time, presenting different samples in a single run.
Minimal code to reproduce problem

TimedArray + SpikeGeneratorGroup implementation

```python
stimulus = b.TimedArray(list3*ms, dt=125*ms)
stimulus1 = b.TimedArray(padded_lista2, dt=125*ms)
indices = []
times = []
input_shd = b.SpikeGeneratorGroup(700, indices, times*ms)

@b.network_operation(dt=125*ms)
def update_spikes():
    global indices, times
    indices = stimulus1(b.defaultclock.t, range(2265))
    print(indices)
    times = stimulus(b.defaultclock.t, range(2265))
    print(times)
    input_shd.set_spikes(indices, times)
    print(input_shd.events)

mon = b.SpikeMonitor(input_shd)
b.run(1000*ms)
```

TimedArray + NeuronGroup implementation

```python
stimulus = b.TimedArray(list3*ms, dt=125*ms)
stimulus1 = b.TimedArray(padded_lista2, dt=125*ms)

eqs = '''
wlr = int(i == stimulus1(t, i))*int(t == stimulus(t, i)) : 1
'''
neurons = b.NeuronGroup(700, eqs, threshold='wlr > 0', reset='wlr = 0', method='exact')
mon = b.SpikeMonitor(neurons)
b.run(1000*ms)
b.plot(mon.t/ms, mon.i, '.k')
```

What you have already tried

I have tried two TimedArrays, one for the indices and one for the times, which are then loaded into the SpikeGeneratorGroup. However, I found out that even with the set_spikes method inside a network_operation, the spikes can only be changed between runs, so this won't work for me.

Then I tried creating a NeuronGroup to take the place of the SpikeGeneratorGroup, with each neuron firing when its index coincides with the index in the indices array and the simulation time with the time in the times array. This has not really worked, and even if it eventually does, I think it will be terribly inefficient.

The last thing I am considering is loading the entire dataset into a single array of times and a single array of indices, shifting the times so that they match the simulation time, which should work in principle. However, with an array of roughly 20 million data points, I am worried about the overhead that operations on it will add to my simulation.

Does anyone have a suggestion on how I could tackle this problem, or has tried something similar?


Great question. You could do this with a network_operation, but it is not compatible with C++ standalone mode, so that is not what you need. There is a way to make this work with a TimedArray, but since more than one neuron can spike in a given time step (and every neuron can spike more than once), I don't think there is a way to make the list format work easily. Instead, you'll have to transform the input into a matrix representation of the spike pattern, i.e. a zero or one for each neuron and time step. This is obviously very wasteful, but if you present one stimulus at a time it should be ok.
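To make the matrix idea concrete, here is a minimal NumPy sketch (the time step, neuron count, and spike lists below are made-up values for illustration, not taken from the dataset) that turns a (times, indices) spike list into a dense 0/1 pattern with one row per time step and one column per neuron:

```python
import numpy as np

# Made-up example values; the real setup has 700 neurons and dt = 0.01 ms
dt = 0.01e-3          # time step in seconds
n_neurons = 4
n_steps = 5
spike_times = np.array([0.0, 0.01e-3, 0.03e-3])  # spike times in seconds
spike_indices = np.array([0, 2, 2])              # which neuron spiked

# One row per time step, one column per neuron; a 1 marks a spike
pattern = np.zeros((n_steps, n_neurons), dtype=np.int8)
pattern[np.round(spike_times / dt).astype(int), spike_indices] = 1
print(pattern)
```

Note that a neuron spiking at several different time steps is handled fine; only two spikes of the same neuron falling into the same time step would collapse into one.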

So, I see two approaches. Either you move everything into two long arrays as you suggested and do one very long simulation. This is actually the most efficient approach, and if a long, uninterrupted simulation works for you, then I'd say go for it. 20 million data points is still reasonable and should take up a few hundred megabytes of memory, which should be fine on most systems. I assume your dataset is one of those here: Spiking Heidelberg Digits and Spiking Speech Commands – Zenke Lab? Here's example code moving everything into one long data structure (using 1.5 s for each stimulus, so that there is a short break between them):

```python
import h5py
from brian2 import *
import matplotlib.pyplot as plt

defaultclock.dt = 0.01*ms
set_device('cpp_standalone')

data = h5py.File('./shd_test.h5', 'r')
spikes = data['spikes']
times, indices = spikes['times'], spikes['units']

# Concatenate all stimuli, shifting each one by 1.5s times its position
new_indices = np.concatenate([ind for ind in indices])
new_times = np.concatenate([np.asarray(t, dtype=np.float32) + 1.5*stim_idx
                            for stim_idx, t in enumerate(times)])*second

input_spikes = SpikeGeneratorGroup(700, new_indices, new_times)
spike_mon = SpikeMonitor(input_spikes)
run(4.5*second)

plt.plot(spike_mon.t/ms, spike_mon.i, ',k')
plt.xlabel('Time (ms)')
plt.ylabel('Neuron index')
plt.show()
```

[Note that the files store the spike times as 16-bit floating point values, which can lead to problems if you add large numbers to them; this is why I converted them to 32-bit.]
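As a quick illustration of this precision issue (with made-up numbers): near 300 s, consecutive float16 values are 0.25 s apart, so adding a large offset to a 16-bit spike time wipes out the sub-second timing entirely, while float32 preserves it:

```python
import numpy as np

t = 0.0123       # a spike time in seconds
offset = 300.0   # e.g. the 200th stimulus at 1.5 s spacing

shifted16 = np.float16(t) + np.float16(offset)  # 16-bit arithmetic
shifted32 = np.float32(t) + np.float32(offset)  # 32-bit arithmetic

print(shifted16)  # 300.0 -- the spike time is lost entirely
print(shifted32)  # ~300.0123 -- the spike time survives
```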
For this example I only run it for 4.5 s (i.e. 3 stimuli), but in practice you'd obviously run it for much longer. Here's the resulting plot:

As mentioned earlier, an alternative is to transform the spikes for each stimulus presentation into a 0/1 pattern and use it with a TimedArray. This is less efficient, since it needs to check a value for each neuron at every time step, but if you have a non-trivial neuron and synapse model in your simulation, this inefficiency might not matter much. Here's the code, which runs each stimulus presentation as a separate simulation using the run_args feature (you'll find quite a few discussions of this rather recent feature here on the forum). Note that if your simulation has plastic synapses, you'd probably also store the weights from the previous simulation and set them as the initial values for the next simulation via run_args.

```python
import h5py
from brian2 import *
from brian2.core.functions import timestep
import matplotlib.pyplot as plt

set_device('cpp_standalone', build_on_run=False)

data = h5py.File('./shd_test.h5', 'r')
spikes = data['spikes']
times, indices = spikes['times'], spikes['units']

# Dummy values, the actual values will be set during each run.
# Note that TimedArray cannot currently store booleans, so we use int8 instead.
should_spike = TimedArray(np.zeros((int(round(1.2*second/defaultclock.dt)), 700), dtype=np.int8),
                          dt=defaultclock.dt)
input_spikes = NeuronGroup(700, '', threshold='should_spike(t, i) == 1', reset='')
spike_mon = SpikeMonitor(input_spikes)
run(1.2*second)
device.build(run=False)  # Nothing has been simulated yet

for stim_idx, (t, ind) in enumerate(zip(times, indices)):
    print("Running stimulus", stim_idx)
    # Build the 0/1 spike pattern for this stimulus
    pattern = np.zeros((int(round(1.2*second/defaultclock.dt)), 700), dtype=np.int8)
    pattern[timestep(np.asarray(t, dtype=np.float32)*second, defaultclock.dt), ind] = 1
    device.run(run_args={should_spike: pattern})
    plt.figure()
    plt.plot(spike_mon.t/ms, spike_mon.i, ',k')
    plt.title(f"Stimulus {stim_idx}")
    plt.xlabel('Time (ms)')
    plt.ylabel('Neuron index')
    if stim_idx >= 3:  # For demonstration purposes only
        break
plt.show()
```

This creates a plot for each stimulus presentation; here's an example:


In the above code, I break the loop after 4 stimuli, but of course you wouldn't do that. You probably do not want to create a plot window for each stimulus either :wink:

Hope that gets you going!

PS: I edited your post to show the code more clearly. You can enclose it in three backticks (or select "Preformatted text" or press Ctrl+E in the editor) like this

```
print("My code")
```

so that it gets nicely formatted and highlighted, and is easier to copy & paste.


Good evening,
Thanks for the quick reply. I think the first solution suits me best, given how I've set up my simulations so far. I'll try to follow up with another post here on how it worked, with some relevant metrics. Thanks again!
