Error message from SpikeGeneratorGroup

Description of problem

I’m trying to feed a simple cortical model with spike times of LGN neurons. The spike times are stored in a 2D array with two columns, time and neuron index: dLGNspk.npz
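For reference, a quick way to inspect the array (a short sketch mirroring the load in the code below; the file stores a single unnamed array and the times are in ms):

import numpy as np

# dLGNspk.npz stores one unnamed array, so it is available under the default key "arr_0"
with np.load("dLGNspk.npz") as npd:
    LGNspk = npd["arr_0"]

print(LGNspk.shape)   # (number of spikes, 2)
print(LGNspk[:5, 0])  # column 0: spike times in ms
print(LGNspk[:5, 1])  # column 1: LGN neuron indices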

Minimal code to reproduce problem

from brian2 import *
from numpy import random as rnd

with load("dLGNspk.npz") as npd:
    LGNspk = npd["arr_0"]
nLGN = int(ceil(amax(LGNspk[:,1])))+1
print(" > Checking that all interspike intervals that are longer than 2 simulation steps")
lastspike = zeros(nLGN)
for xid,(spt,spi) in enumerate(LGNspk):
    if abs(lastspike[int(spi)]-spt) < 0.5 :
        print(f"   > LGN neuron #{spi} still have ISI shorter than two simulation steps: {lastspike[int(spi)]}, {spt}")
        exit(1)
    lastspike[int(spi)]=spt
#savez("dLGNspk.npz",LGNspk)
print("   > All ISI are linger than 2 simulation steps")


lgnminsyn,lgnmaxsyn = 107,267
print(f"Range for number of synapses from one lgn neuron [{lgnminsyn},{lgnmaxsyn}]")
connectivity=array([
    [s,rnd.randint(0,600)]
    for s in range(nLGN) for p in range(lgnminsyn,lgnmaxsyn)
]).astype(int)
ninputs = zeros(600)
for l,c in connectivity:
    ninputs[c] += 1
print(f"Minimal   number of connections per CX neuron {amin(ninputs)}")
print(f"Mean+-std number of connections per CX neuron {mean(ninputs)}+-{std(ninputs)}")
print(f"Maximal   number of connections per CX neuron {amax(ninputs)}")

Esyn   = 0.  * mV
tau_sr = 0.5 * ms
tau_sd = 2   * ms

#prepostSTDP params: 
Altp = 14e-5* mS
Altd =  8e-5* mS
thetad = -70.6 * mV
thetap = -45.3 * mV 
uref2  = 0.95e-3 #Murata and Colonnese • Thalamic Control of Cortical State Development 0.95 spike/sec
tau_u2 = 1000*ms

tau_xp =  15*ms
tau_p  =  7*ms
tau_m  = 10*ms
wmin,wmax = 0*mS, 200*nS
#defaultclock.dt = 1*ms

# Adex Parameters
C      = 281 * pF
gL     = 30 * nS
taum = C / gL
EL     = -70.6 * mV
DeltaT = 2 * mV
vti    = -50.4 * mV
tauvt  = 50 * ms
taua   = 144.*ms
a      = 4 * nS
b      = 0.805*pA
vtrest = -5 * mV
VTmax  = 18 * mV


tauw, c, b, Vr = 144 * ms, 4 * nS, 0.0805 * nA, -70.6 * mV # Regular spiking (as in the paper)

eqs_neuron = """
dvm/dt=(gL*(EL-vm)+gL*DeltaT*exp((vm-vt)/DeltaT)+gs*(Esyn-vm)*(sd-sr)*ms/(tau_sd-tau_sr)-va )/C : volt
dvt/dt=-(vt-vti)/tauvt                                  : volt # Threshold voltage
dva/dt =(a*(vm-EL)-va)/taua                             : amp  # voltage adaptation current
dsr/dt=-sr/tau_sr                                       : 1    # synaptic conductance rise
dsd/dt=-sd/tau_sd                                       : 1    # synaptic conductance decay
dud/dt=(vm-ud)/tau_m                                    : volt # depression voltage low-pass filter u-
# du2/dt=(ud-u2)/tau_u2                                   : volt # depression second voltage low-pass filter u--
dup/dt=(vm-up)/tau_p                                    : volt # potentiation voltage low-pass filter u+
dxp/dt=-xp/tau_xp                                       : 1    # potentiation spike low-pass filter x+
du2/dt=(xp-u2)/tau_u2                                   : 1    # depression second spike low-pass filter (change in the model)

gs                                                      : siemens
"""

lgninput = SpikeGeneratorGroup(nLGN,LGNspk[:,1].flatten().astype(int),LGNspk[:,0]*ms)
neurons = NeuronGroup(600, model=eqs_neuron, threshold='vm>vt', reset="vm=Vr; va+=b; vt=VTmax; xp+=1")
neurons.vt = vtrest
neurons.vm = EL
model='''dw/dt=Altp*xp_pre*clip((vm_pre-thetap)/mV,0,inf)*clip((up_pre-thetad)/mV,0,inf)-Altd*u2_post**2/uref2*clip((ud_pre-thetad)/mV,0,inf) : siemens (event-driven)'''
syn = Synapses(lgninput, neurons, model, on_pre="w=clip(w,wmin,wmax): siemens; gs_post = w : siemens (summed)")
syn.connect(i=connectivity[:,0].flatten(), j=connectivity[:,1].flatten())

SM = SpikeMonitor(neurons)
VM = StateMonitor(neurons, 'vm', record=[0,49,99])

    
run(2e4*ms,report="txt")



Actual output (if relevant)

python brianDB.py 
 > Checking that all interspike intervals are longer than 2 simulation steps
   > All ISIs are longer than 2 simulation steps
Range for number of synapses from one lgn neuron [107,267]
Minimal   number of connections per CX neuron 16.0
Mean+-std number of connections per CX neuron 29.866666666666667+-5.038738819276991
Maximal   number of connections per CX neuron 45.0
INFO       No numerical integration method specified for group 'neurongroup', using method 'euler' (took 0.02s, trying other methods took 0.18s). [brian2.stateupdaters.base.method_choice]
ERROR      Brian 2 encountered an unexpected error. If you think this is a bug in Brian 2, please report this issue either to the discourse forum at <http://brian.discourse.group/>, or to the issue tracker at <https://github.com/brian-team/brian2/issues>. Please include this file with debug information in your report: /tmp/brian_debug_fa9271y8.log  Additionally, you can also include a copy of the script that was run, available at: /tmp/brian_script_033f24m6.py You can also include a copy of the redirected std stream outputs, available at '/tmp/brian_stdout_bkulmqx1.log' and '/tmp/brian_stderr_xwxarr18.log'. Thanks! [brian2]
Traceback (most recent call last):
  File "/home/rth/.local/lib/python3.9/site-packages/brian2/core/network.py", line 892, in before_run
    obj.before_run(run_namespace)
  File "/home/rth/.local/lib/python3.9/site-packages/brian2/input/spikegeneratorgroup.py", line 197, in before_run
    raise ValueError(f"Using a dt of {self.dt!s}, some neurons of "
ValueError: Using a dt of <spikegeneratorgroup.dt: 100. * usecond>, some neurons of SpikeGeneratorGroup 'spikegeneratorgroup' spike more than once during a time step.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/media/rth/rth-master/rth/simulations/Th20Hz/CorticalInmputs/brianDB.py", line 93, in <module>
    run(2e4*ms,report="txt")
  File "/home/rth/.local/lib/python3.9/site-packages/brian2/units/fundamentalunits.py", line 2428, in new_f
    result = f(*args, **kwds)
  File "/home/rth/.local/lib/python3.9/site-packages/brian2/core/magic.py", line 373, in run
    return magic_network.run(duration, report=report, report_period=report_period,
  File "/home/rth/.local/lib/python3.9/site-packages/brian2/core/magic.py", line 230, in run
    Network.run(self, duration, report=report, report_period=report_period,
  File "/home/rth/.local/lib/python3.9/site-packages/brian2/core/base.py", line 293, in device_override_decorated_function
    return func(*args, **kwds)
  File "/home/rth/.local/lib/python3.9/site-packages/brian2/units/fundamentalunits.py", line 2428, in new_f
    result = f(*args, **kwds)
  File "/home/rth/.local/lib/python3.9/site-packages/brian2/core/network.py", line 1006, in run
    self.before_run(namespace)
  File "/home/rth/.local/lib/python3.9/site-packages/brian2/core/base.py", line 293, in device_override_decorated_function
    return func(*args, **kwds)
  File "/home/rth/.local/lib/python3.9/site-packages/brian2/core/network.py", line 894, in before_run
    raise BrianObjectException("An error occurred when preparing an object.", obj) from ex
brian2.core.base.BrianObjectException: Error encountered with object named 'spikegeneratorgroup'.
Object was created here (most recent call only, full details in debug log):
  File '/media/rth/rth-master/rth/simulations/Th20Hz/CorticalInmputs/brianDB.py', line 81, in <module>
    lgninput = SpikeGeneratorGroup(nLGN,LGNspk[:,1].flatten().astype(int),LGNspk[:,0]*ms)

An error occurred when preparing an object. (See above for original error message and traceback.)

As you can see above, the input sequence passes the test and there are no neurons with an ISI of less than 0.5 ms:

 > Checking that all interspike intervals are longer than 2 simulation steps
   > All ISIs are longer than 2 simulation steps

However, Brian returns an error:

ValueError: Using a dt of <spikegeneratorgroup.dt: 100. * usecond>, some neurons of SpikeGeneratorGroup 'spikegeneratorgroup' spike more than once during a time step.
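For reference, the condition the error describes (a neuron spiking twice within a single 0.1 ms step) can be checked directly on the loaded LGNspk array; a sketch of such a check, with dt taken from the error message and the binning only approximating Brian's internal rounding:

import numpy as np

dt = 0.1  # ms, the SpikeGeneratorGroup dt reported in the error message

# Count spikes per (neuron index, time step) pair; any count > 1 triggers the ValueError above
steps = np.floor(LGNspk[:, 0] / dt).astype(int)
pairs = np.stack([LGNspk[:, 1].astype(int), steps], axis=1)
_, counts = np.unique(pairs, axis=0, return_counts=True)
print("(neuron, step) pairs with more than one spike:", np.count_nonzero(counts > 1))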

Hi @rth. Your test assumes that all spike times in the matrix are in ascending order, but there are, at the very least, a number of zero values that violate this assumption (and which of course all end up in the first time step):

>>> print(LGNspk[LGNspk[:, 1] == 0,0][:20])
[ 95856.62498666  96012.94998662  96281.97498656      0.
 209069.6000511  209500.9250515  209939.55005191 210185.02505214
 210664.12505258 210878.80005278 211067.90005296 211490.92505335
      0.         271149.67510892 273379.10011099 348083.62518057
 348457.67518091 348812.22518124 349201.42518161 349938.80018229]

Even after removing the zeros, there are still a number of places where the times are not in order:

>>> print(LGNspk[(LGNspk[:, 1] == 0) & (LGNspk[:, 0] != 0.), :][610:613, :])
[[1751156.30273884       0.        ]
 [1653799.40275253       0.        ]
 [1862511.0028033        0.        ]]

It is of course still possible that Brian does something wrong, but at first sight I’d say that Brian is correct in complaining about the provided spike times/indices.
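A check of this ordering assumption itself, per neuron, could look like the following sketch (plain NumPy, operating on the LGNspk array as loaded in the script above):

import numpy as np

# For each LGN neuron, flag it if its spike times do not appear in non-decreasing order
unordered = []
for idx in np.unique(LGNspk[:, 1]).astype(int):
    t = LGNspk[LGNspk[:, 1] == idx, 0]
    if np.any(np.diff(t) < 0):
        unordered.append(idx)
print("neurons with out-of-order spike times:", unordered)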

@mstimberg thank you for looking into my problem!
Oh, I thought I’d caught all the ‘strange’ behavior in the spike sorting.
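For anyone running into the same error, here is a sketch of a possible clean-up. It assumes the zero-time entries are export artifacts that can be dropped (only the data owner can confirm that) and sorts the remaining spikes by time before building the SpikeGeneratorGroup:

import numpy as np
from brian2 import SpikeGeneratorGroup, ms

with np.load("dLGNspk.npz") as npd:
    LGNspk = npd["arr_0"]

# Drop zero-time entries (assumed here to be artifacts) and sort by time, then by neuron index
clean = LGNspk[LGNspk[:, 0] > 0.0]
clean = clean[np.lexsort((clean[:, 1], clean[:, 0]))]  # last key (time) is the primary sort key

# Re-run the per-time-step duplicate check from above on `clean` before trusting it
nLGN = int(np.ceil(np.amax(clean[:, 1]))) + 1
lgninput = SpikeGeneratorGroup(nLGN, clean[:, 1].astype(int), clean[:, 0] * ms,
                               sorted=True)  # input is now sorted, so Brian can skip re-sorting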