Description of the problem
Currently, I’m in the process of modifying the well-known model from Diehl&Cook2015 to make it use a true event-based dataset (N-MNIST) instead of the standard MNIST followed by a rate conversion. For those unfamiliar with that paper and/or dataset, they’re described below:
Description of Diehl&Cook approach and N-MNIST dataset
In the original code, the MNIST images are presented to the network by converting the pixel intensities into spike rates, which are generated by a PoissonGroup. This happens in a loop during training or testing: for each sample image, the Poisson group is given the converted rates, the network is run for a certain presentation time and then for a certain resting time, and then a new iteration starts with the next image from the dataset, and so forth. The PoissonGroup fulfills the role of the input layer, and is connected to the following layer in an all-to-all fashion.
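For those unfamiliar with that scheme, a pure-Python sketch of the rate conversion looks roughly like this (the divisor is illustrative; the real code also applies an adaptive input-intensity factor, so take the numbers as an assumption rather than the exact scaling):

```python
def mnist_image_to_rates(image_flat, intensity_divisor=8.0):
    """Convert flattened MNIST pixel intensities (0-255) into firing rates (Hz).

    Illustrative sketch: the actual Diehl & Cook code also multiplies by an
    input-intensity factor that is raised when the network stays silent.
    The resulting rates are assigned to the PoissonGroup before each
    presentation window.
    """
    return [pixel / intensity_divisor for pixel in image_flat]
```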
N-MNIST is stored as an array of event-based images. That is, each element of the dataset (each event-based image) is in turn an array of events of variable length (as, depending on the image, more or fewer events were generated during the recording process). Each of those events consists of four values:
- The x coordinate of the event (the images are 34x34).
- The y coordinate.
- The polarity of the event (it can be ON or OFF). I’ll ignore this for now.
- The timestamp in microseconds.
That being said, I intend to use a SpikeGeneratorGroup, which can take the events' timings and locations as arguments to produce spikes, to replace the PoissonGroup and read the event data directly.
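For context, the per-image conversion I apply boils down to something like the following (a simplified, pure-Python sketch; the tuple layout is as described above, but the names are assumptions since my actual loader is dataset-specific):

```python
def events_to_spike_args(events, width=34):
    """Flatten N-MNIST events into the (indices, times) pair that a
    SpikeGeneratorGroup expects.

    Each event is assumed to be an (x, y, polarity, t_us) tuple; polarity is
    ignored for now, and timestamps stay in microseconds (to be multiplied
    by b.us before being handed to Brian2).
    """
    indices = [y * width + x for (x, y, _pol, _t) in events]  # 34x34 -> 0..1155
    times_us = [t for (_x, _y, _pol, t) in events]
    return indices, times_us
```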
Yet, I encounter the following error:
```
ValueError: Using a dt of <spikegeneratorgroup.dt: 0.5 * msecond>, some neurons of SpikeGeneratorGroup "spikegeneratorgroup" spike more than once during a time step.
```
Minimal code to reproduce the problem
Since the code needed to reproduce this error involves downloading the dataset and some other non-standard modules, I'll only show the key lines of code causing the error. At the beginning of the script, I set the default clock to 0.5 ms (defaultclock.dt = 0.5 * ms). Then I load the dataset and define many functions and variables, as well as the layers of the network. The layer that causes problems is the input layer, which I define like this:
```python
input_groups[name + 'e'] = b.SpikeGeneratorGroup(1156, event_coordinates, event_timestamps)
```
The size of the group is 1156 because of the dataset characteristics (the event-based images are 34x34). event_coordinates and event_timestamps are 1D numpy arrays of integers (the latter multiplied by b.us for dimensional consistency). The actual contents of those arrays don't matter at this point: later in the code, all the event samples are fed to the SpikeGeneratorGroup via the set_spikes() method, and with this line I only want to define the input layer. However, I need to provide some arrays as arguments or the constructor will raise an error; that's the only reason they are there (I guess I could have passed ...(1156, [0], [0]*b.us) and it wouldn't have made a difference).
Despite the weird naming, input_groups[name + 'e'] is just a single spike generator group (stored in a dictionary), so don't pay attention to the [name + 'e'] part.
Later, a Network object is created and run:
```python
for obj_list in [neuron_groups, input_groups, connections, rate_monitors, spike_monitors, spike_counters]:
    for key in obj_list:
        net.add(obj_list[key])
...  # some unrelated variable definitions
net.run(0 * ms)
```
It is in that net.run(0 * ms) call where the error occurs, and I am not even sure what the error means. On one hand, there is the ValueError:

```
ValueError: Using a dt of <spikegeneratorgroup.dt: 0.5 * msecond>, some neurons of SpikeGeneratorGroup "spikegeneratorgroup" spike more than once during a time step.
```

To address that, I've changed the timestep of the SpikeGeneratorGroup, with no success. But then, later in the traceback it says:

```
Object was created here (most recent call only, full details in debug log):
  File "/home/jul/PycharmProjects/Multimodal-SNN/Diehl&Cook_spiking_MNIST.py", line 488, in <module>
    input_groups[name + 'e'] = b.SpikeGeneratorGroup(n_input, event_coordinates, event_timestamps)
```
I don’t know what exactly this error means or what the underlying problem is. Could anyone lend me a hand? Thanks a lot in advance!
PS: if more info is needed I’ll gladly provide it; I just didn’t want to overflow the post with too much detail.
What I have already tried
- I tried setting a very small dt for the SpikeGeneratorGroup (1 µs, specifically), but I still got the same error.
- I’ve also tried running the network for longer (i.e. 100 ms instead of 0 ms).
Full traceback of the error
```
Traceback (most recent call last):
  File "/home/jul/PycharmProjects/Multimodal-SNN/venv/lib/python3.8/site-packages/brian2/core/network.py", line 897, in before_run
    obj.before_run(run_namespace)
  File "/home/jul/PycharmProjects/Multimodal-SNN/venv/lib/python3.8/site-packages/brian2/input/spikegeneratorgroup.py", line 200, in before_run
    raise ValueError('Using a dt of %s, some neurons of '
ValueError: Using a dt of <spikegeneratorgroup.dt: 0.5 * msecond>, some neurons of SpikeGeneratorGroup "spikegeneratorgroup" spike more than once during a time step.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/jul/PycharmProjects/Multimodal-SNN/venv/lib/python3.8/site-packages/brian2/core/network.py", line 899, in before_run
    raise BrianObjectException("An error occurred when preparing an object.", obj) from ex
brian2.core.base.BrianObjectException: Error encountered with object named "spikegeneratorgroup".
Object was created here (most recent call only, full details in debug log):
  File "/home/jul/PycharmProjects/Multimodal-SNN/Diehl&Cook_spiking_MNIST.py", line 488, in <module>
    input_groups[name + 'e'] = b.SpikeGeneratorGroup(n_input, event_coordinates, event_timestamps)
An error occurred when preparing an object. (See above for original error message and traceback.)
```