# Description of the problem

Currently, I’m in the process of modifying the well-known model from Diehl&Cook2015 to make it use a true event-based dataset (N-MNIST) instead of the standard MNIST followed by a rate conversion. For those unfamiliar with that paper and/or dataset, they’re described below:

Description of Diehl&Cook approach and N-MNIST dataset

In the original code, the MNIST images are presented to the network by converting the pixel intensities into spike rates (generated by a PoissonGroup). During training or testing this happens in a loop: for each sample image, the Poisson group is given the converted rates, the network is run for a certain presentation time and then for a certain resting time, and a new iteration starts with the next image, and so forth. The PoissonGroup plays the role of the input layer and is connected all-to-all to the following layer.

N-MNIST is stored as an array of event-based images. That is, each element of the dataset (each event-based image) is in turn an array of events of variable length (as, depending on the image, more or fewer events were generated during the recording process). Each of those events consists of four values:

• The x coordinate of the event (the images are 34x34).
• The y coordinate.
• The polarity of the event (it can be ON or OFF). I’ll ignore this for now.
• The timestamp in microseconds.
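
For reference, converting such an event array into the flat index and time arrays that a SpikeGeneratorGroup expects could look roughly like this (just a sketch; the field order follows the list above, and the function name is my own):

```python
import numpy as np

def events_to_spikes(events, width=34):
    """Convert an (N, 4) array of N-MNIST events
    (x, y, polarity, timestamp_us) into flat neuron
    indices and spike times in microseconds."""
    events = np.asarray(events)
    x, y, t = events[:, 0], events[:, 1], events[:, 3]
    indices = y * width + x  # flatten the 34x34 grid to indices 0..1155
    return indices.astype(int), t.astype(int)

# Example: two events at (x=1, y=0) and (x=0, y=2)
idx, ts = events_to_spikes([[1, 0, 1, 10], [0, 2, 0, 25]])
# idx -> [1, 68], ts -> [10, 25]
```

In Brian, the resulting times would still need units attached (e.g. `ts * b.us`) before being passed to the group.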

That being said, I intend to replace the PoissonGroup with a SpikeGeneratorGroup, which can take the events’ timings and locations as arguments to produce spikes, and read the event data directly.
However, I run into the following error:

ValueError: Using a dt of <spikegeneratorgroup.dt: 0.5 * msecond>, some neurons of SpikeGeneratorGroup "spikegeneratorgroup" spike more than once during a time step.

# Minimal code to reproduce the problem

Since the code needed to reproduce this error involves downloading the dataset and some other non-standard modules, I’ll only show the key lines of code causing the error. At the beginning of the script, I set the default clock to 0.5 ms (defaultclock.dt = 0.5 * ms). Then I load the dataset and define many functions and variables, as well as the layers of the network. The layer that causes problems is the input layer, which I define like this:

input_groups[name + 'e'] = b.SpikeGeneratorGroup(1156, event_coordinates, event_timestamps)


The group size is 1156 because of the dataset’s characteristics (the event-based images are 34x34). event_coordinates and event_timestamps are 1D numpy arrays of integers (the latter multiplied by b.us for dimensional coherence). Their actual contents don’t matter here: later in the code, all the event samples are given as input to the SpikeGeneratorGroup via its set_spikes() method, and with this line I only want to define the input layer. I just need to provide some arrays as arguments or the constructor raises an error; I could have written ...(1156, [0], [0]*b.us) and it wouldn’t have made a difference.
Despite the odd naming, input_groups[name + 'e'] is simply a single spike generator group stored in a dictionary, so don’t pay attention to the [name + 'e'] part.

Later, a Network object is created and run:

for obj_list in [neuron_groups, input_groups, connections, rate_monitors, spike_monitors, spike_counters]:
    for key in obj_list:

        ... # some unrelated variable definitions

net.run(0 * ms)


It is in that net.run(0 * ms) that the error occurs, and I am not even sure what the error means. On the one hand, there is the
ValueError: Using a dt of <spikegeneratorgroup.dt: 0.5 * msecond>, some neurons of SpikeGeneratorGroup "spikegeneratorgroup" spike more than once during a time step.
For that, I’ve changed the time step of the SpikeGeneratorGroup, with no success. But then, later in the traceback it says:

Object was created here (most recent call only, full details in debug log):
File "/home/jul/PycharmProjects/Multimodal-SNN/Diehl&Cook_spiking_MNIST.py", line 488, in <module>
input_groups[name + 'e'] = b.SpikeGeneratorGroup(n_input, event_coordinates, event_timestamps)


I don’t know the meaning of this error or what exactly the problem is. Could anyone lend me a hand? Thanks a lot in advance!

PS: if more info is needed I’ll gladly provide it; I just didn’t want to overflow the post with too much detail.

# What I have already tried

• I tried setting a very small dt for the SpikeGeneratorGroup (1 µs, specifically), but I still got the same error.
• I’ve also tried running the network for longer (i.e. 100 ms instead of 0 ms).

# Full traceback of the error

Traceback (most recent call last):
File "/home/jul/PycharmProjects/Multimodal-SNN/venv/lib/python3.8/site-packages/brian2/core/network.py", line 897, in before_run
obj.before_run(run_namespace)
File "/home/jul/PycharmProjects/Multimodal-SNN/venv/lib/python3.8/site-packages/brian2/input/spikegeneratorgroup.py", line 200, in before_run
raise ValueError('Using a dt of %s, some neurons of '
ValueError: Using a dt of <spikegeneratorgroup.dt: 0.5 * msecond>, some neurons of SpikeGeneratorGroup "spikegeneratorgroup" spike more than once during a time step.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/jul/PycharmProjects/Multimodal-SNN/venv/lib/python3.8/site-packages/brian2/core/network.py", line 899, in before_run
raise BrianObjectException("An error occurred when preparing an object.", obj) from ex
brian2.core.base.BrianObjectException: Error encountered with object named "spikegeneratorgroup".
Object was created here (most recent call only, full details in debug log):
File "/home/jul/PycharmProjects/Multimodal-SNN/Diehl&Cook_spiking_MNIST.py", line 488, in <module>
input_groups[name + 'e'] = b.SpikeGeneratorGroup(n_input, event_coordinates, event_timestamps)

An error occurred when preparing an object. (See above for original error message and traceback.)


I don’t know the details of the values in N-MNIST, but just to explain the error: a simulation in Brian advances in time steps of length dt. At each time step, the SpikeGeneratorGroup checks whether any of its “neurons” should spike during that time step. The error you are getting means that in the list of spike times/indices you provide, a single neuron is supposed to spike twice during a single time step. Brian objects like SpikeGeneratorGroup can only spike once during a time step, so this is impossible. To deal with this, you can either change the dt of the SpikeGeneratorGroup, slow down the spike times, or remove spikes that fall into the same time step. The last solution would be the best if you have only very few of these cases, I guess. Note that if you reduce the dt of the SpikeGeneratorGroup, you will also have to reduce the dt of your overall simulation, at least for the Synapses connecting the SpikeGeneratorGroup to actually deal with these spikes. And I’m not sure if it really makes much sense to have input spikes separated by a few µs…
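
The last option (removing spikes that fall into the same time step) could be sketched like this in plain NumPy; the function name, the default dt of 0.5 ms, and the array layout are placeholders of mine, not actual Brian2 API:

```python
import numpy as np

def drop_duplicate_spikes(indices, times_us, dt_us=500):
    """Keep at most one spike per neuron per time step of
    length dt_us (here 0.5 ms = 500 us)."""
    indices = np.asarray(indices)
    times = np.asarray(times_us)
    steps = times // dt_us  # which time step each spike falls into
    # each (neuron, time step) slot may hold at most one spike:
    # keep only the first occurrence of every such pair
    _, first = np.unique(np.stack([indices, steps]),
                         axis=1, return_index=True)
    keep = np.sort(first)
    return indices[keep], times[keep]

# neuron 5 spikes at 100 us and 120 us, both in time step 0,
# so one of the two duplicates is dropped
idx, ts = drop_duplicate_spikes([5, 5, 5, 7], [100, 120, 900, 100])
# idx -> [5, 5, 7], ts -> [100, 900, 100]
```

The cleaned arrays could then be passed to `set_spikes()` as before.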

In any case, make sure you are reading in the files correctly and do not make any mistakes in converting the coordinates to indices. If you map the spikes from different coordinates to the same index for example, you will definitely run into issues.


@Jul94 Hi, I used the N-MNIST dataset around last Christmas, and I ran into the same error.

In the very beginning, I thought it was a bug in Brian2, but after checking all the details of Brian2, I found no bug. I even tuned the dt down to 1 ns, and it didn’t work.

So I turned to this platform to ask for help, just to make sure I hadn’t missed anything important.

But while I was waiting for a reply, I went through the dataset itself and found that it does contain duplicate values: the same neuron index fires twice at the same timestamp!

The examples I found come from the first file in the dataset, which I had converted from its original format, but I went back and found the same duplicates in the original data.

I didn’t contact the authors of N-MNIST, but I think this may be caused by jitter during the high-speed data collection with the camera, and they didn’t verify all the data.

After I fixed this jitter, I just deleted the topic I had raised. May this help you!


After @mstimberg’s reply, I started looking into the data in more detail, as it totally looked like either I was making some mistake reading the data or the data was not completely clean (a possibility I hadn’t considered at first).

Indeed, I noticed that sometimes two events for the same neuron happen at the exact same timestamp, down to the microsecond (which is why it kept raising the same error even with a microsecond time step). That of course shouldn’t happen, but I wasn’t 100% sure whether those duplicated events were already in the original N-MNIST data or were a consequence of how I loaded the dataset into my code, so that was what I was going to check next.
But luckily, @SEU_diff already has the answer! Knowing that it is a problem with the dataset, I implemented a filter in the reading functions that ensures a reasonable time difference between two consecutive events for the same neuron (0.5 milliseconds), and that seems to have fixed it.
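
For anyone hitting the same problem, a minimal sketch of such a minimum-separation filter (pure Python/NumPy; the function name and event layout are my own, not the actual reading code, and it assumes the events are sorted by timestamp):

```python
import numpy as np

def enforce_min_gap(indices, times_us, min_gap_us=500):
    """Drop events that follow a previously kept event of the
    *same* neuron by less than min_gap_us (0.5 ms here).
    Assumes events are sorted by timestamp."""
    indices = np.asarray(indices)
    times_us = np.asarray(times_us)
    last_time = {}  # neuron index -> timestamp of last kept event
    keep = []
    for i, (n, t) in enumerate(zip(indices, times_us)):
        if n not in last_time or t - last_time[n] >= min_gap_us:
            keep.append(i)
            last_time[n] = t
    return indices[keep], times_us[keep]

# the second event of neuron 3 (at 200 us, only 200 us after the
# first one) is dropped; neuron 9 is unaffected
idx, ts = enforce_min_gap([3, 9, 3, 3], [0, 100, 200, 700])
# idx -> [3, 9, 3], ts -> [0, 100, 700]
```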

So problem solved! I would like to thank both of you for your answers; they were very helpful.


Great, happy you solved it and thanks @SEU_diff for sharing your findings!