Running simulation in microsecond precision

Description of problem

How can I increase the simulation precision from milliseconds to microseconds? Or is this against the nature of a spiking network, which is supposed to work only on millisecond scales?

Minimal code to reproduce problem

from brian2 import *

eqs = '''
dv/dt = (I-v)/tau : 1
I : 1
tau : second
'''

G = NeuronGroup(1, eqs, threshold='v>1', reset='v = 0', method='exact')
G.I = [2]
G.tau = [500]*us

H = NeuronGroup(2, eqs, threshold='v>1', reset='v = 0', method='exact')
H.I = [0, 0]
# In theory these two neurons have slightly different firing times
H.tau = [1000, 900]*us

# Creating a simple network: one neuron connects to two other neurons with the same weight
S = Synapses(G, H, model='w : 1', on_pre='v_post += w')
S.connect(i=0, j=[0, 1])
S.w = [0.5, 0.5]

net = Network(collect())
net.add(S)

H_spike_mon = SpikeMonitor(H)
net.add(H_spike_mon)
net.run(5000*us)

print(H_spike_mon.t)

What you have already tried

Dividing the recorded times by us didn't help; it seems the spike times were recorded with only one decimal of millisecond precision.
print(H_spike_mon.t/us)

Expected output (if relevant)

More precise firing times, with at least three decimals.
For example, <spikemonitor.t: array([1.599, 1.601, 3.199, 3.201, 4.799, 4.801]) * msecond>

Actual output (if relevant)

<spikemonitor.t: array([1.6, 1.6, 3.2, 3.2, 4.8, 4.8]) * msecond>

Full traceback of error (if relevant)

Changing the simulation time step

defaultclock.dt = 1*us

should do it, although this would be a very fine-grained simulation.
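To see why dt matters here: for dv/dt = (I - v)/tau starting from v = 0, the exact threshold-crossing time is t* = tau · ln(I / (I − 1)), but the simulator only checks the threshold on the dt grid, so the recorded spike time is rounded up to the next step. A quick back-of-the-envelope check in plain Python (no Brian2 needed; the numbers use the G parameters from the question):

```python
import math

I, tau = 2.0, 500e-6  # dimensionless drive and tau = 500 us, as in neuron G
# Exact crossing time of v(t) = I*(1 - exp(-t/tau)) through threshold 1
t_star = tau * math.log(I / (I - 1))

def first_spike_step(dt):
    """First time-grid point at which v > 1, i.e. the spike time a
    grid-based threshold test would record."""
    n = 1
    while I * (1 - math.exp(-n * dt / tau)) <= 1:
        n += 1
    return n * dt

print(t_star)                    # ~346.6 us: the exact crossing time
print(first_spike_step(100e-6))  # dt = 0.1 ms (default) -> 400 us
print(first_spike_step(1e-6))    # dt = 1 us -> 347 us
```

So with the default dt = 0.1 ms the spike is reported more than 50 us late, while dt = 1 us recovers the crossing time to within one microsecond.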

Thank you.
But I just realized that both neurons would still fire at the same time no matter how precise the simulation is. It's because the on_pre is set to v_post += w with w = 0.5. Even though the second neuron has a slightly faster discharge rate, the 0.5 increment still makes both neurons reach the threshold on the same presynaptic spike.
Anyway, now I know how to change the simulation time step.

Yeah, the instantaneous on_pre='v_post += w' can be thought of as a synapse model with an "infinitely fast" rise time, even though the two neurons' decay rates are different.

If you're interested in more nuanced synaptic integration (with finite rise times), you should look at "alpha" or "biexponential" synapses, which are implemented here:
https://brian2.readthedocs.io/en/stable/user/converting_from_integrated_form.html
You can read more about them in this textbook chapter, especially Figure 2.
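For reference, the "alpha" synapse replaces the instantaneous jump with a waveform shaped like the standard alpha function g(t) = (t/τ)·e^(1 − t/τ), which rises smoothly from zero and peaks at t = τ. A small numeric check in plain Python (illustrative only, not the Brian2 implementation):

```python
import math

def alpha(t, tau):
    """Standard alpha function: smooth finite rise, normalized peak of 1.0 at t = tau."""
    return (t / tau) * math.exp(1 - t / tau)

tau = 1e-3  # 1 ms time constant, chosen for illustration
samples = [alpha(k * tau / 4, tau) for k in range(9)]  # sample t = 0 .. 2*tau
peak = max(samples)
print(peak, samples.index(peak))  # peak of 1.0 at sample index 4, i.e. t = tau
```

With such a finite rise time, the two H neurons from the question would genuinely cross threshold at slightly different times, and a smaller dt would then actually reveal that difference.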
