# Using the "Siegert formula" for predicting rate in LIF networks

I’m working my way through the excellent textbook “Neuronal Dynamics” by Gerstner et al. to arrive at a closed-form solution for how the rates of LIF networks depend on the stimulus amplitude (given a bunch of other parameters).

Chapter 12 does a good job of building towards an answer to this question and arrives at something called “the Siegert formula” (eq. 8.54, eq. 12.26).
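For context, here is the formula in question as I read it (using what I believe is the standard notation: $\nu$ the stationary rate, $\tau_{\mathrm{ref}}$ the refractory period, $\tau_m$ the membrane time constant, $\mu$ and $\sigma$ the mean and standard deviation of the input, $V_r$ the reset potential and $\theta$ the threshold):

```latex
% Siegert / mean first-passage-time formula for the stationary LIF rate
\nu = \left[ \tau_{\mathrm{ref}} + \tau_m \sqrt{\pi}
      \int_{(V_r - \mu)/\sigma}^{(\theta - \mu)/\sigma}
      e^{x^2} \left( 1 + \operatorname{erf}(x) \right) \mathrm{d}x \right]^{-1}
```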

Currently I’m implementing this in Python and comparing the prediction to the results of Brian simulations. I’m using SciPy’s `integrate.quad` function to compute the integral… but I’m running into numerical issues for small noise values.

I think the issue arises from integrating an `exp(x^2)` term that explodes beyond numerical precision (or its reciprocal `1/exp(x^2)`, which underflows, for very small `x`), but I’m not sure.
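To illustrate what I mean (a minimal sketch; `naive_integrand` is just my name for a direct evaluation of the term inside the Siegert integral):

```python
import numpy as np
from scipy.special import erf

def naive_integrand(x):
    """Siegert integrand evaluated directly: exp(x^2) * (1 + erf(x))."""
    return np.exp(x**2) * (1.0 + erf(x))

with np.errstate(over="ignore"):
    print(naive_integrand(10.0))   # ~5.4e43 -- still finite, but already huge
    print(naive_integrand(30.0))   # exp(900) exceeds float64 range -> inf
```

With small noise values the integration limits `(V - mu) / sigma` become large in magnitude, so `quad` ends up sampling the integrand exactly in this overflowing range.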

It looks like there’s an alternate, approximate formulation of this relationship proposed here: “Noisy Softplus: an activation function that enables SNNs to be trained as ANNs”

and I just found some discussion in “A.1. Numerical Evaluation of the Siegert Formula” in “Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator”, where the authors mention:

> We here describe how to numerically calculate Equation (27), frequently called Siegert formula in the literature (for a recent textbook see Gerstner et al., 2014), which is not straightforward due to numerical instabilities in the integral.

Have other people in the Brian community tried this before? Is there an existing (preferably Python) implementation of this we could use “off the shelf”?

If I can replicate the Diesmann et al. solution, would that be useful to have as

• a short Python code snippet hosted on GitHub and referenced in the Brian documentation?
• a short (Jupyter) notebook demonstrating how to predict firing rates?
• a miscellaneous utility function added to the Brian codebase?

There’s some discussion of this over on the NEST GitHub:
“Fast and safe Siegert implementation #1811”
where AlexVanMeegan discusses an alternative to the Diesmann et al. proposal, which reformulates the integral instead of splitting it, avoiding the numerically unstable range.

After some discussion over there, it seems they concluded Alex’s solution was even better!
Unfortunately for us, it’s implemented in C++, but with that reference implementation available it should hopefully be pretty straightforward to port to Python.

I now have Alex’s approach implemented in Python (along with a more detailed derivation; there were some parts of the original I couldn’t quite follow).
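In case it’s useful before the notebook lands: the key identity, as I understand it, is that the unstable product `exp(x**2) * (1 + erf(x))` equals `erfcx(-x)`, the scaled complementary error function, which SciPy exposes as `scipy.special.erfcx`. A minimal sketch (function and parameter names are mine, not from the NEST code, and the full NEST implementation does additional regime-splitting that this sketch omits):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx

def lif_rate(mu, sigma, tau_m, tau_ref, v_reset, v_thresh):
    """Stationary LIF firing rate via the Siegert formula.

    Uses erfcx(-x) == exp(x**2) * (1 + erf(x)): for strongly mean-driven
    inputs (very negative integration limits) the naive product hits an
    inf * 0 cancellation, while erfcx stays well-behaved. The strongly
    subthreshold regime (large positive upper limit) still needs the
    extra handling discussed in the NEST issue.
    """
    lower = (v_reset - mu) / sigma
    upper = (v_thresh - mu) / sigma
    integral, _ = quad(lambda x: erfcx(-x), lower, upper)
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

# Example with made-up (SI-unit) parameters: 20 ms membrane time constant,
# 2 ms refractory period, 20 mV threshold, suprathreshold mean input.
print(lif_rate(mu=0.025, sigma=0.005, tau_m=0.02, tau_ref=0.002,
               v_reset=0.0, v_thresh=0.02))
```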

notebook coming soon


Thanks for sharing this; it should be very helpful to others!

I think it is a bit too specific to have it directly in the Brian package, but I see a couple of options (maybe we should open a new thread to discuss this in more general terms):

• if you are willing to “polish” this a bit and make it general, it could go into `brian2tools`
• If it is not too complex/specific (in particular the numerical calculation), it could go as an example into the Brian documentation
• It might be useful to have a more “low threshold” option to share bits and pieces like this. It could be a wiki, a link list on our website, or maybe just a thread here on the discussion forum where everyone replies with a link to something they did?
• If you demonstrate things with a Jupyter notebook, we could easily turn it into a blog post for the website (the blog posts are Jupyter notebooks as well)