ZeroDivisionError: float division

Description of problem

I am getting the error mentioned in the title when running my simulation of a custom Hodgkin-Huxley network.
The simulation files are not attached due to size restrictions and dependencies on outside files. I can provide snippets of the code (e.g. the equations module and the group declarations below).

Minimal code to reproduce problem

The code is complex and has multiple dependencies on other files, e.g. neuron positions from saved NumPy arrays and configuration files. It also depends on initial states and network configurations. I apologize in advance for its size; I have attached my code in the following file: brian2_error.zip.py (615.3 KB)
NOTE: I was not allowed to upload .zip files so I renamed it to .py. To extract you will have to remove the .py extension and treat it as a normal zipped file.

I believe the error lies in the differential equations (i.e. HH_equations.py) for the inhibitory neurons (inh_inp_eqs and inh_eqs). However, in case you would like to run the model itself, the full code is provided.

What you have already tried

  1. Replaced the exp() calls with exprel(), as per Marcel’s suggestion in an older post (a minimal sketch of this rewrite follows this list).
  2. Monitored the values of m, n, h in the faulty group (the EC_inh group); the m value of neuron #313 jumps to a negative value at the end of the provided simulation (accessed through G_flat[7].m[313]).
  3. Tried reducing the integration time step to 0.01ms and running the simulation with the RK4 integration method instead of the exponential Euler, as per Marcel’s suggestion in older posts. This didn’t solve the issue.
  4. Explicitly replaced all integer divisions in my equations with floating-point divisions. This removes the warnings that Brian2 emits when it encounters an integer division, but the ZeroDivisionError still occurs. I don’t know if this is relevant.
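
For reference, here is a minimal sketch of the exprel() rewrite from item 1, written with the classic squid-axon rate constants rather than the actual values of my model:

from brian2 import *

# The classical HH alpha_m has a removable 0/0 singularity at v = 25*mV:
#   alpha_m = 0.1/mV * (25*mV - v) / (exp((25*mV - v)/(10*mV)) - 1) / ms
# Since exprel(x) = (exp(x) - 1)/x, the same rate can be written without
# the singular division:
eqs = Equations('''
dm/dt = alpha_m*(1 - m) - beta_m*m : 1
alpha_m = 1.0/exprel((25*mV - v)/(10*mV))/ms : Hz
beta_m = 4.0*exp(-v/(18*mV))/ms : Hz
v : volt
''')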

Also, I read the topics below:

These seemed very relevant, so I followed them, but with no success. I rewrote my equations (paying attention to the signs), lowered the time step, and changed my exp() calls to exprel(), but I still get the error.

Expected output (if relevant)

Actual output (if relevant)

Division by zero for a single neuron, caused by its m value going negative within a single timestep.

Full traceback of error (if relevant)

You will find attached the corresponding log file. Not for the faint of heart. I followed the trace myself but I couldn’t pinpoint the error. It looks like an integration method error, yet it doesn’t go away when I test other integration methods.
brian_debug_q7mz5oe4.log (3.0 MB)

Apologies for the long post. I tried to be as thorough as possible, but if you feel as if something is missing, I will gladly provide more information.

Thank you all in advance.

Hi @nvar. That’s a thorough report – you certainly “did your homework” :clap:. I don’t have time to run things myself right now, but here are a few techniques that can be really helpful for debugging this kind of problem (apologies that this is not really documented anywhere; I want to enable all of this automatically with some kind of debug mode in the future):

  • Use Python code generation instead of Cython: prefs.codegen.target = 'numpy'
  • Switch off some optimization that makes the link between code and equations less obvious: prefs.codegen.loop_invariant_optimisations = False
  • Make NumPy raise errors for all kinds of floating-point problems, including division by zero: np.seterr(all='raise')

If you then run your simulation, you should get an error pointing to the line where the problem occurs. If you run this in an interactive console (or a jupyter notebook etc.), you can then verify the values of all variables used in the equation and should be able to see where exactly the division by zero occurs.
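
Putting the three settings together (a sketch; replace the final comment with your own model setup):

import numpy as np
from brian2 import *

prefs.codegen.target = 'numpy'                       # pure-Python code generation
prefs.codegen.loop_invariant_optimisations = False   # generated code stays close to the equations
np.seterr(all='raise')                               # floating-point warnings become exceptions

# ... build your groups/synapses and call run() here; the traceback will
# then point at the exact line of generated code where the problem occurs.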

Hope that helps, best
Marcel


Oh, I realized that this might not be as helpful as I thought, since you seem to already know why the division by zero occurs… Another remark: if m becomes negative, then something has to be wrong with m_inf, alpha_m or beta_m. In principle, everything could be correct but alpha_m and beta_m are so big that you have a time constant tau_m smaller than your simulation time step dt. But since you already tried a dt of 0.01ms, this would still mean something in the equations/parameters is wrong, because the gating variable isn’t supposed to change that fast.
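
To illustrate the time-constant argument with made-up numbers (these are purely hypothetical, not from your model):

# For dm/dt = alpha_m*(1 - m) - beta_m*m, the effective time constant is
# tau_m = 1/(alpha_m + beta_m):
alpha_m = 80e3   # Hz -- hypothetical, unrealistically large
beta_m = 40e3    # Hz
tau_m = 1/(alpha_m + beta_m)
print(tau_m)     # ~8.3e-6 s, i.e. well below even dt = 0.01 ms

If tau_m ever drops below dt like this, an explicit integration step can overshoot the steady state m_inf and push m outside [0, 1].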


Thank you for the prompt reply! I have already done some tests and run the simulations in interactive mode, so whenever I reach an error point I get access to the monitors and the values of the faulty group. It seems to be an issue in the sodium current, which unfortunately I cannot pinpoint. I forgot to mention this in the post, but I realized that I get a RuntimeWarning: overflow encountered in exp whenever I try to print the values of the gating variables m or h. The following lines are output from another simulation that produces the same error, but they give some insight into what happens:

>>> state_mon_I_all[3][0].m[963,-10:]

array([ 0.69793683, 0.68324778, 0.66321509, 0.63916972, 0.61272521, 0.58552021, 0.55904854, 0.53453397, -7.07801432, -2.29532629])

The fact that the change occurs over a single timestep initially led me to think it is an integration method issue, which I am no longer sure is the problem, especially after having changed the timestep and the integration method.

But even so, thank you again for your time and your suggestions!

Update: I enabled the debugging preferences, as per your suggestion, and re-ran the code with NumPy instead of Cython, optimizations turned off, and NumPy error-raising for all floating-point problems. This surfaced a new error regarding the synaptic connection probabilities between two groups (the error is FloatingPointError: underflow encountered in exp), located at:

File "/.../model/setup.py", line 85, in connect_intra syn_IE.connect(p=str(p_conn[N_py+inhpop][pypop])+'*exp(-((x_soma_pre-x_soma_post)**2+(y_soma_pre-y_soma_post)**2+(z_soma_pre-z_soma_post)**2)/(2*(350*umetre)**2))') # connect I2E

After manually calculating the distances and scaling them by the probability 0.54 between the populations using a nested for-loop (which took 15 minutes to run), I found that the minimum and maximum values lie in (0.5398…, 0.5399…).
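
As an aside, the same check can be done in seconds with NumPy broadcasting instead of a nested loop (the coordinates below are random stand-ins, not my actual positions):

import numpy as np

rng = np.random.default_rng(0)
x_pre, y_pre, z_pre = rng.uniform(0, 1e-3, (3, 100))    # stand-in coordinates, in metres
x_post, y_post, z_post = rng.uniform(0, 1e-3, (3, 80))

sigma = 350e-6                                          # 350 umetre, in metres
d2 = ((x_pre[:, None] - x_post[None, :])**2
      + (y_pre[:, None] - y_post[None, :])**2
      + (z_pre[:, None] - z_post[None, :])**2)          # pairwise squared distances
p = 0.54 * np.exp(-d2 / (2 * sigma**2))                 # shape (100, 80)
print(p.min(), p.max())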

What I do not understand is how this can be causing an error. Any overflow would lead to a huge number being generated, especially since all numbers are in float64 format. Does this indicate that the overflow is not important, or that it has already been handled and NumPy is being over-sensitive?

For reference, my distance-based connectivity is based on the tutorial here (probabilistic synapses). Could it be an issue with the exponential function and the units? A distance difference still carries the umetre unit, which is then raised to the power of 2. Could that be causing the issue?
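
Checking the unit arithmetic in isolation (with an illustrative distance, not real data):

from brian2 import umetre

# The umetre**2 in numerator and denominator cancel, so the argument of
# exp() comes out dimensionless, as Brian2 requires:
arg = -(200*umetre)**2 / (2*(350*umetre)**2)
print(arg)   # a plain dimensionless number, about -0.163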

Thanks in advance!

Apologies, my suggestion wasn’t ideal. Underflows are usually not much of an issue: e.g. np.exp(-1000) will raise this error, since e^{-1000} is a very small number that cannot be represented as a float64. In your case, you do not care whether the probability of connecting two neurons that are very far apart is represented as a tiny number or simply as 0 (which is what happens due to the underflow).

So I’d suggest using np.seterr(all='raise', under='ignore') instead of the catch-all np.seterr(all='raise').
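
For example, a quick demonstration of the difference:

import numpy as np

np.seterr(all='raise')
# np.exp(-1000.0)            # would raise FloatingPointError: underflow encountered in exp

np.seterr(all='raise', under='ignore')
print(np.exp(-1000.0))       # prints 0.0 -- the harmless underflow is flushed to zero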

Sorry about the unnecessary complication…
Best,
Marcel

Are you sure that this calculation is correct? The probabilities should vary over a much wider range, I think.