Hi,
I am currently generating data with the same code for different models defined in json files (Morris-Lecar and two versions of Wang-Buzsaki (WB)). I generated data with the deterministic models and am now introducing noise. For Morris-Lecar, this is not a problem. For WB, I eventually get a "ZeroDivisionError: float division" error, usually caused by an underflow/overflow. My question is whether there is a good way to handle this, or whether there is something really obvious that I am missing.
(It is also a little odd: if I change the parameter sigma after reading in the json, I get problems even for sigma = 0, whereas before I often ran the code without any errors…)
Thanks in advance for any help!
Minimal code to reproduce problem
This is not the full code, just the relevant parts of my model. I could upload the rest, but it is spread across the whole file structure, so maybe there is already an obvious solution to this:
odes:
"dv/dt = (I + sigma * xi + IL + INa + IK)/(Cm)",
"dh/dt = phi * 5*(hAlpha * (1-h) - hBeta*h)",
"dn/dt = phi * 5*(nAlpha * (1-n) - nBeta*n)"
functions:
"hAlpha": "0.07 * exp(-(v/mV+58.0)/20.0)/ms",
"hBeta": "1.0/( 1.0 + exp(-0.1*(v/mV+28.0)) )/ms",
"mInf": "1/ exprel(-0.1*(v/mV+35.0)) /((1/ exprel(-0.1*(v/mV+35.0))) + 4.0* exp(-(v/mV+60.0)/18.0))",
"nAlpha": "0.1 * 1/ exprel(-0.1*(v/mV+34.0+bn))/ms",
"nBeta": "0.125* exp(-(v/mV+44.0+bn)/80.0)/ms"
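For reference, here is a minimal sketch of how these fragments could be put together into a runnable Brian2 model. All parameter values below (Cm, gL, gNa, gK, the reversal potentials, phi, bn, I, sigma, dt) are placeholders and not the values from my json files, and the definitions of IL, INa and IK are just the standard WB form, assumed here for completeness:

from brian2 import *

defaultclock.dt = 0.01*ms  # placeholder time step

# Placeholder parameters (standard WB-like values, NOT from the json files)
Cm = 1*uF/cm**2
gL, EL = 0.1*msiemens/cm**2, -65*mV
gNa, ENa = 35*msiemens/cm**2, 55*mV
gK, EK = 9*msiemens/cm**2, -90*mV
phi = 1.0
bn = 0.0
I = 1*uA/cm**2
sigma = 0.1*uA*ms**0.5/cm**2  # noise amplitude (placeholder)

eqs = '''
dv/dt = (I + sigma*xi + IL + INa + IK)/Cm : volt
dh/dt = phi * 5*(hAlpha*(1-h) - hBeta*h) : 1
dn/dt = phi * 5*(nAlpha*(1-n) - nBeta*n) : 1
IL = gL*(EL - v) : amp/meter**2
INa = gNa*mInf**3*h*(ENa - v) : amp/meter**2
IK = gK*n**4*(EK - v) : amp/meter**2
hAlpha = 0.07*exp(-(v/mV + 58.0)/20.0)/ms : Hz
hBeta = 1.0/(1.0 + exp(-0.1*(v/mV + 28.0)))/ms : Hz
mInf = (1/exprel(-0.1*(v/mV + 35.0))) / (1/exprel(-0.1*(v/mV + 35.0)) + 4.0*exp(-(v/mV + 60.0)/18.0)) : 1
nAlpha = 0.1/exprel(-0.1*(v/mV + 34.0 + bn))/ms : Hz
nBeta = 0.125*exp(-(v/mV + 44.0 + bn)/80.0)/ms : Hz
'''

group = NeuronGroup(1, eqs, method='euler')
group.v = -65*mV
mon = StateMonitor(group, 'v', record=0)
run(100*ms)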
What you have already tried
I used "euler" and "heun" as integration methods with dt = 0.001 * ms.
I checked where my error occurs with:
brian2.prefs.codegen.target = "numpy"
brian2.prefs.codegen.loop_invariant_optimisations = False
np.seterr(all="raise")
example1:
FloatingPointError: underflow encountered in exp
The above exception was the direct cause of the following exception: BrianObjectException Traceback (most recent call last) …
An exception occured during the execution of the ‘run’ block of code object neurongroup_2_stateupdater_codeobject. The error was raised in the following line: _h = (dt * (((_lio_2 * (h * exp(_lio_3 * v))) + (_lio_4 * exp(_lio_3 * v))) - ((_lio_5 * h) / (1.0 + (0.0608100626252179 * exp(_lio_6 * v)))))) + h (See above for original error message and traceback.)
example2:
FloatingPointError: underflow encountered in multiply
The above exception was the direct cause of the following exception: BrianObjectException
Traceback (most recent call last)…
An exception occured during the execution of the ‘run’ block of code object neurongroup_3_stateupdater_codeobject.
The error was raised in the following line:
_v = (((((1.0 * (cm ** 2)) * sigma) * xi) / uF) + (dt * (((((((((((((((- 82534.375) * (cm ** 2)) * h) * mV) * msiemens) / (((((((((((((((((((((((((((((((((((((((((((…
(See above for original error message and traceback.)
Hi @Maren, I was on holidays when you posted this, which is why I am only replying now. I think this is most likely an issue in Brian, where it replaces exprel although it shouldn't (see Revisit sympy processing of exprel · Issue #1350 · brian-team/brian2 · GitHub). Could you give a minimal runnable example (e.g. just your neuron model with a constant current input) that shows the issue, so that I can investigate? You did all the investigations I'd usually recommend already, though.
As mentioned in the GitHub issue I linked earlier, the way exprel is handled can change with subtle changes to the equations (just to be clear: this is considered a bug). You might try replacing e.g. 35.0 in your equations with 35 and see whether this changes anything, as illustrated below.
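Purely to illustrate the suggested change (this just drops the trailing .0 inside the exprel arguments of the mInf expression quoted above; the rest of the line is unchanged):

"mInf": "1/ exprel(-0.1*(v/mV+35)) /((1/ exprel(-0.1*(v/mV+35))) + 4.0* exp(-(v/mV+60.0)/18.0))"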
Thank you for your reply! I deleted all the unnecessary floats and looked both at the equations with only exp() and at the ones with exprel(). For some constant currents (at least the ones I tried, I did not test systematically), this helps. However, with noisier stimuli it usually breaks with the same errors as before, for both sets of equations. Also, for all the settings I tried (random seed, current deviation, etc.), it breaks at the same time with and without exprel(). I have included an example of the code below.
(I am using brian2 2.5.1 and sympy 1.12 on Debian GNU/Linux 11 (bullseye).)
Many thanks for the example, this is very useful. It looks as if the code generation using the Euler algorithm for stochastic equations generates some extremely convoluted code, which leads to the division by zero as a side effect. I will investigate this further, but I think for now the best solution is to remove the xi from the equations (so that, from Brian's point of view, they are "deterministic") and to manually create the random numbers for the noise. Note that this is exactly what the Euler algorithm is (or rather should be) doing, so it does not introduce any loss of accuracy. In your example, you can do this by replacing:
'''
...
dv/dt = (I + sigma * xi + IL + INa + IK)/(Cm) :volt
...
'''
by
'''
...
noise = sigma/sqrt(dt) * randn() : amp/meter**2 (constant over dt)
dv/dt = (I + noise + IL + INa + IK)/(Cm) :volt
...
'''
The (constant over dt) flag here means that a single random value is generated for each time step; this is what all the methods we currently support for stochastic equations do anyway. In some quick tests, the equations then simulate without issues, regardless of whether you use exprel or not.
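(For context, as a sketch of why no accuracy is lost: the Euler–Maruyama update for dv/dt = f(v) + sigma*xi is v(t+dt) = v(t) + f(v(t))*dt + sigma*sqrt(dt)*N(0,1). Multiplying the noise = sigma/sqrt(dt)*randn() term above by dt in the state update produces exactly the same sigma*sqrt(dt)*N(0,1) increment.)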
You could also add (constant over dt) to the definition of mInf (which again does not change anything when you use Euler, since the Euler algorithm only evaluates everything once per time step anyway); this would then allow you to use the exponential_euler integration method (see the sketch below). With this algorithm, simulations should be stable and accurate with larger time steps than what you could use for Euler.
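A rough sketch of what this could look like, assuming mInf is defined as a subexpression inside the model string (names and units as in your example):
'''
...
mInf = 1/exprel(-0.1*(v/mV+35.0)) / ((1/exprel(-0.1*(v/mV+35.0))) + 4.0*exp(-(v/mV+60.0)/18.0)) : 1 (constant over dt)
noise = sigma/sqrt(dt) * randn() : amp/meter**2 (constant over dt)
dv/dt = (I + noise + IL + INa + IK)/(Cm) : volt
...
'''
together with method='exponential_euler' when you create the group, e.g. NeuronGroup(N, eqs, method='exponential_euler').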