Float division / setting sigma to scale noise

Description of problem

Hi,
I am currently generating data using the same code for different models from JSON files (Morris-Lecar and two versions of Wang-Buzsaki (WB)). I generated data with the deterministic models and am now introducing noise. For Morris-Lecar, this is not a problem. For WB, I eventually get a “ZeroDivisionError: float division” error, usually due to an underflow/overflow. My question is whether there is a good way to handle this, or is there something really obvious that I am missing?
(It is also a little odd: if I change the parameter sigma after reading in the JSON, I get problems even for sigma = 0, whereas before I ran the code often without any errors…)
Thanks in advance for any help!

Minimal code to reproduce problem

This is not the full code, just the relevant parts of my model. I could upload everything, but it is spread across the whole file structure, and maybe there is already an obvious solution:
odes:
"dv/dt = (I + sigma * xi + IL + INa + IK)/(Cm)",
"dh/dt = phi * 5*(hAlpha * (1-h) - hBeta*h)",
"dn/dt = phi * 5*(nAlpha * (1-n) - nBeta*n)"
functions:
"hAlpha": "0.07 * exp(-(v/mV+58.0)/20.0)/ms",
"hBeta": "1.0/( 1.0 + exp(-0.1*(v/mV+28.0)) )/ms",
"mInf": "1/ exprel(-0.1*(v/mV+35.0)) /((1/ exprel(-0.1*(v/mV+35.0))) + 4.0* exp(-(v/mV+60.0)/18.0))",
"nAlpha": "0.1 * 1/ exprel(-0.1*(v/mV+34.0+bn))/ms",
"nBeta": "0.125* exp(-(v/mV+44.0+bn)/80.0)/ms"

What you have already tried

  1. I used “euler” and “heun” as integration methods with dt = 0.001 * ms
  2. I checked for where my error occurs with
    brian2.prefs.codegen.target = "numpy"
    brian2.prefs.codegen.loop_invariant_optimisations = False
    np.seterr(all="raise")
  3. I monitored v, n (and h); the error occurs when v is close to -35 mV (see also the related post “ZeroDivisionError when changing synaptic weights of large network with HH neurons”)
  4. I replaced exp() with exprel() as far as possible
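For context on item 4: replacing x/(exp(x) - 1) by 1/exprel(x) matters because the naive form hits 0/0 exactly where the monitoring shows the error (v near -35 mV, where the argument of mInf vanishes). A standalone sketch outside Brian, using the mInf expression from the model with scipy.special.exprel:

```python
import numpy as np
from scipy.special import exprel

# Naive form of the model's mInf: x/(exp(x)-1) is 0/0 at v = -35 mV,
# where x = -0.1*(v/mV + 35) vanishes.
def m_inf_naive(v):
    x = -0.1 * (v + 35.0)
    a = x / (np.exp(x) - 1.0)          # 0/0 -> nan (or an FP error with np.seterr)
    return a / (a + 4.0 * np.exp(-(v + 60.0) / 18.0))

# Stable form: 1/exprel(x) equals x/(exp(x)-1) but is computed
# without cancellation or division by zero near x = 0.
def m_inf_stable(v):
    a = 1.0 / exprel(-0.1 * (v + 35.0))
    return a / (a + 4.0 * np.exp(-(v + 60.0) / 18.0))

print(m_inf_naive(-35.0))    # nan
print(m_inf_stable(-35.0))   # well-defined, ~0.5
```

Away from the singular point the two forms agree to rounding error, so exprel only changes behaviour where the naive expression is ill-defined.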

Traceback of error

example1:
FloatingPointError: underflow encountered in exp
The above exception was the direct cause of the following exception: BrianObjectException Traceback (most recent call last) …

An exception occured during the execution of the ‘run’ block of code object neurongroup_2_stateupdater_codeobject. The error was raised in the following line: _h = (dt * (((_lio_2 * (h * exp(_lio_3 * v))) + (_lio_4 * exp(_lio_3 * v))) - ((_lio_5 * h) / (1.0 + (0.0608100626252179 * exp(_lio_6 * v)))))) + h (See above for original error message and traceback.)

example2:
FloatingPointError: underflow encountered in multiply
The above exception was the direct cause of the following exception: BrianObjectException
Traceback (most recent call last)…

An exception occured during the execution of the ‘run’ block of code object neurongroup_3_stateupdater_codeobject.
The error was raised in the following line:
_v = (((((1.0 * (cm ** 2)) * sigma) * xi) / uF) + (dt * (((((((((((((((- 82534.375) * (cm ** 2)) * h) * mV) * msiemens) / (((((((((((((((((((((((((((((((((((((((((((…
(See above for original error message and traceback.)
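As a side note on these tracebacks: underflow is normally silent in NumPy (the value just flushes to zero); it is the np.seterr(all="raise") setting from the debugging steps above that promotes it to the FloatingPointError shown here. A minimal illustration:

```python
import numpy as np

# Underflow is silent by default; np.seterr(all="raise") turns it into
# the FloatingPointError seen in the tracebacks.
old_settings = np.seterr(all="raise")
try:
    np.exp(np.float64(-750.0))   # result ~1e-326 is below the double range
except FloatingPointError as exc:
    message = str(exc)           # "underflow encountered in exp"
finally:
    np.seterr(**old_settings)    # restore the previous error settings
```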

Hi @Maren, I was on holidays when you posted this, so this is why I am only replying now. I think this is most likely an issue in Brian, where it replaces exprel, although it shouldn’t (see Revisit sympy processing of exprel · Issue #1350 · brian-team/brian2 · GitHub). Could you give a minimal runnable example (e.g. just your neuron model with a constant current input) that shows the issue so that I can investigate? You did all the investigations I’d usually recommend already, though :blush:

As mentioned in the GitHub issue I linked earlier, the way exprel is handled can change with subtle changes to the equations (just to be clear: this is considered a bug) – you might try to replace e.g. 35.0 in your equations by 35 and see whether this changes anything :grimacing:

Thank you for your reply! I deleted all unnecessary floats and looked at both the equations with only exp() and with exprel(). For some constant currents (at least the ones I tried; I did not test systematically), this helps. However, with noisier stimuli, it usually breaks with the same errors as before for both sets of equations. Also, for all the settings I tried (random seed, current deviation etc.), it breaks at the same time with and without exprel(). I give an example of the code below.

(I am using brian2 2.5.1 and sympy 1.12 on Debian GNU/Linux 11 (bullseye).)

example:

import brian2 as b2
from brian2.units import *
import numpy as np
import matplotlib.pyplot as plt

b2.start_scope()
b2.seed(42)
np.random.seed(42)

phi = 1
Cm = 1 * (uF / cm**2)
EK = -90 * mV
EL = -65 * mV
ENa = 55 * mV
I_DC = 0.211966872339 * uA / cm2
I_onset = 0.1601 * uA / cm2
bn = 0
gK = 9 * msiemens / cm**2
gL = 0.1 * msiemens / cm**2
gNa = 35 * msiemens / cm**2
v_peak = 0 * mV
sigma_norm = 12.383091926574707 * (cm2 / uA)
phase_noise = 0.2 * ms**0.5
sigma = phase_noise / sigma_norm

# with exprel
eqs = """
IK = gK*n**4*(EK-v): amp/ meter**2
IL = gL*(EL-v) : amp/ meter**2
INa= gNa*mInf**3 * h*(ENa-v) : amp/ meter**2
I = stimulus(t, i): amp/metre ** 2
hAlpha=0.07 * exp(-(v/mV+58)/20)/ms :Hz
hBeta=1/( 1 + exp(-0.1*(v/mV+28)) )/ms :Hz
mInf=1/ exprel(-0.1*(v/mV+35)) /((1/ exprel(-0.1*(v/mV+35))) + 4* exp(-(v/mV+60)/18)) :1
nAlpha=0.1 * 1/ exprel(-0.1*(v/mV+34+bn))/ms :Hz
nBeta=0.125* exp(-(v/mV+44+bn)/80)/ms :Hz
dv/dt = (I + sigma * xi + IL + INa + IK)/(Cm) :volt
dh/dt = phi * 5*(hAlpha * (1-h) - hBeta*h) :1
dn/dt = phi * 5*(nAlpha * (1-n) - nBeta*n) :1"""

integration_method = "euler"
N_neurons = 1
dt_sim = 0.001 * ms
duration = 500 * ms

neurons = b2.NeuronGroup(N_neurons, eqs, threshold="v > v_peak", refractory="v >= v_peak", method=integration_method, dt=dt_sim)

# initial states
neurons.v = -64.01756491 * mV
neurons.h = 0.7807915510055807
neurons.n = 0.08907801030754373

# generate step stimulus
deviation = 14 * 0.212 * uA / cm2
step_length = 0.2 * ms
number_of_values = int(duration / step_length)
values = I_onset + deviation * np.random.uniform(low=-1.0, high=1.0, size=(number_of_values, N_neurons))
N_dt = int(np.round(duration / dt_sim))
reps = N_dt // number_of_values  # np.repeat needs an integer repeat count
values = np.repeat(values, reps, axis=0)
stimulus = b2.TimedArray(values=values, dt=dt_sim)

monitored_variables = ["v", "h", "n", "I"]
states = b2.StateMonitor(neurons, monitored_variables, record=True, dt=dt_sim)
b2.run(duration)

Without exprel(), I use this set of equations:

eqs= """
IK = gK*n**4*(EK-v): amp/ meter**2
IL = gL*(EL-v) : amp/ meter**2
INa= gNa*mInf**3 * h*(ENa-v) : amp/ meter**2
I = stimulus(t, i) : amp/metre ** 2
hAlpha=0.07 * exp(-(v/mV+58)/20)/ms :Hz
hBeta=1/( 1 + exp(-0.1*(v/mV+28)) )/ms :Hz
mInf=(-0.1*(v/mV+35)/ ( exp(-0.1*(v/mV+35)) - 1 ) /(-0.1*(v/mV+35)/ ( exp(-0.1*(v/mV+35)) - 1 ) + 4* exp(-(v/mV+60)/18))) :1
nAlpha=-0.01*(v/mV+34+bn)/ ( exp(-0.1*(v/mV+34+bn)) - 1 )/ms :Hz
nBeta=0.125* exp(-(v/mV+44+bn)/80)/ms :Hz
dv/dt = (I + sigma * xi + IL + INa + IK)/(Cm) :volt
dh/dt = phi * 5*(hAlpha * (1-h) - hBeta*h) :1
dn/dt = phi * 5*(nAlpha * (1-n) - nBeta*n) :1"""

Many thanks for the example, this is very useful. It looks as if the code generation using the Euler algorithm for stochastic equations generates some extremely convoluted code, which leads to the division by zero as a side effect. I will investigate this further, but I think for now the best solution is to remove the xi from the equations (so that from Brian’s point of view, they are “deterministic”), and manually create random numbers for the noise. Note that this is exactly what the Euler algorithm is (or rather should be) doing, so this does not introduce any loss of accuracy. In your example, you can do this by replacing:

'''
...
dv/dt = (I + sigma * xi + IL + INa + IK)/(Cm) :volt
...
'''

by

'''
...
noise = sigma/sqrt(dt) * randn() : amp/meter**2 (constant over dt)
dv/dt = (I + noise + IL + INa + IK)/(Cm) :volt
...
'''

The (constant over dt) flag here means that it generates a single random value for each time step – this is what all the methods that we are currently supporting for stochastic equations do anyway. In some quick tests, it seems to simulate the equations without issues then, regardless of whether you use exprel or not.
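To see why the rewrite does not change the integration: an Euler–Maruyama step turns sigma * xi into an increment sigma * sqrt(dt) * randn(), and dt times the (constant over dt) variable sigma/sqrt(dt) * randn() is exactly the same quantity. A quick numerical check in plain NumPy, outside Brian:

```python
import numpy as np

rng = np.random.default_rng(42)
dt = 1e-3      # time step (arbitrary units, just for this check)
sigma = 0.2

z = rng.standard_normal()
# Euler-Maruyama handles sigma*xi by adding sigma*sqrt(dt)*z per step ...
increment_xi = sigma * np.sqrt(dt) * z
# ... which equals dt times the manual noise term sigma/sqrt(dt)*z
increment_manual = dt * (sigma / np.sqrt(dt)) * z
```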

You could also add (constant over dt) to the definition of mInf (which again doesn’t change anything when you use Euler, since the Euler algorithm only evaluates everything once per time step in any case), which would allow you to use the exponential_euler algorithm. With this algorithm, simulations should be stable and accurate with larger time steps than what you could use for Euler.
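The stability gain from exponential_euler can be seen on a toy gating equation (a hedged sketch with made-up rate constants, not the WB model): the equation is linear in h, which is exactly what exponential Euler exploits, so it stays bounded for any dt while forward Euler diverges once dt exceeds 2*tau.

```python
import numpy as np

# Toy gating equation dh/dt = alpha*(1-h) - beta*h, which relaxes
# towards h_inf with time constant tau = 1/(alpha+beta).
alpha, beta = 5.0, 50.0            # 1/ms (illustrative values only)
dt = 0.05                          # ms: too large for forward Euler here
tau = 1.0 / (alpha + beta)
h_inf = alpha / (alpha + beta)

h_euler = h_expeuler = 0.9
for _ in range(200):
    # forward Euler multiplies the deviation from h_inf by (1 - dt/tau) = -1.75,
    # so the solution oscillates and grows without bound
    h_euler = h_euler + dt * (alpha * (1 - h_euler) - beta * h_euler)
    # exponential Euler uses the exact solution of the linear equation
    # over one step and is stable for any dt
    h_expeuler = h_inf + (h_expeuler - h_inf) * np.exp(-dt / tau)

print(abs(h_euler))              # blown up
print(abs(h_expeuler - h_inf))   # ~0, settled at the fixed point
```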

Hope this helps!

This helps a lot, thank you!
