Support for multiple spikes per time bin of the same neuron?


Why doesn’t Brian2 support multiple spikes per time bin of the same neuron? E.g. as in the test case here:

I have spike times as input onto which I add some Gaussian jitter, and I often hit the case of multiple spikes per bin.
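For concreteness, here is a minimal numpy sketch of what I mean (the dt value and spike times are made up, and nearest-bin rounding is my assumption about how times are discretized):

```python
import numpy as np

dt = 0.1e-3  # assume Brian2's default time step of 0.1 ms

# made-up spike times (in seconds); after jitter, the first two
# are only 0.02 ms apart -- closer than one dt
spike_times = np.array([1.00e-3, 1.02e-3, 5.00e-3])

# discretize to time bins (assuming nearest-bin rounding)
bins = np.round(spike_times / dt).astype(int)
duplicates = len(bins) - len(np.unique(bins))
print(duplicates)  # 1 -> two spikes share a bin
```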

Is there any workaround?



Hi @schmitts.

Why doesn’t Brian2 support multiple spikes per time bin of the same neuron?

The reason is simply that the code uses algorithms that are only correct under this assumption. Similarly, the data structure storing the indices of all neurons in a group that spiked during a time step has a fixed size: the number of neurons in the group, which – under the assumption that no neuron spikes more than once per bin – is the maximum number of spikes per time step. It would in principle be possible to remove this assumption, but I don’t feel that the gain is worth the cost.
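To illustrate the fixed-size idea with a toy sketch (this is not Brian2’s actual implementation, just the invariant it relies on): if each neuron spikes at most once per step, a buffer with one slot per neuron can never overflow:

```python
def collect_spikes(spiked_flags):
    """Toy spike collection for one time step (not Brian2's real code).

    spiked_flags: one boolean per neuron, True if it spiked this step.
    """
    n_neurons = len(spiked_flags)
    # One slot per neuron is always enough: under the one-spike-per-bin
    # assumption, each neuron contributes at most one index per step.
    buffer = [-1] * n_neurons
    count = 0
    for i, fired in enumerate(spiked_flags):
        if fired:
            buffer[count] = i
            count += 1
    return buffer[:count]

print(collect_spikes([True, False, True, True]))  # [0, 2, 3]
```

If a neuron could spike several times per bin, the worst case would no longer be bounded by the number of neurons, and the buffer size could not be fixed in advance.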

In general, the SpikeGeneratorGroup is supposed to represent a group of neurons, and a neuron would usually not spike twice within a very short time frame (e.g. the default 0.1ms). If the input spikes are supposed to come from different neurons, then the SpikeGeneratorGroup should have multiple neurons, too. To completely avoid the issue for an array of (jittered) spike times, you could use

import numpy as np
from brian2 import SpikeGeneratorGroup

spike_times = ...
n_spikes = len(spike_times)
# one dedicated neuron per spike, so no neuron ever spikes twice
group = SpikeGeneratorGroup(n_spikes, np.arange(n_spikes), spike_times)

This will of course lead to a large number of synapses when you connect the group to something, but performance-wise that shouldn’t matter much, I think (though if you connect it to a very big group of neurons, it could lead to memory issues). For a sorted list of spike times where you only have a few cases of overlapping spikes, you could also simply use two neurons:

spike_times = ...  # sorted
n_spikes = len(spike_times)
# alternate 0, 1, 0, 1, ... so that adjacent (possibly overlapping)
# spikes end up on different neurons
group = SpikeGeneratorGroup(2, np.tile([0, 1], n_spikes//2), spike_times)

(assuming an even number of spike times…)
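If more than two spikes can fall into the same bin, the same idea could be generalized with a small helper that numbers the spikes within each bin (`assign_indices` is a hypothetical name, and nearest-bin rounding is an assumption about the discretization):

```python
import numpy as np

def assign_indices(spike_times, dt):
    """Hypothetical helper: assign neuron indices to *sorted* spike
    times so that no index repeats within a single time bin."""
    bins = np.round(np.asarray(spike_times) / dt).astype(int)
    indices = np.zeros(len(bins), dtype=int)
    # within each run of equal bins, number the spikes 0, 1, 2, ...
    for k in range(1, len(bins)):
        if bins[k] == bins[k - 1]:
            indices[k] = indices[k - 1] + 1
    return indices

dt = 0.1e-3  # assuming the default 0.1 ms time step
times = [1.00e-3, 1.02e-3, 1.04e-3, 5.00e-3]  # three spikes in one bin
indices = assign_indices(times, dt)
print(indices)  # [0 1 2 0]
```

The group would then need `indices.max() + 1` neurons, i.e. `SpikeGeneratorGroup(indices.max() + 1, indices, times)`.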

Hope that helps, best


Hi Marcel!

Thanks for the explanations and the suggested workaround!