Hi!

First - I’ve placed this in the Science section because it’s mainly about model choice, but it also revolves around implementation in Brian, so apologies if it’s misplaced.

I’m trying to reproduce a paper which uses a spike-response-like neuron model. In the paper, the membrane potential of an excitatory neuron i is given as

\begin{align} u_i(t) = \sum_{j} w^{inpE}_{ij} y_j(t) + \sum_{j} w^{EE}_{ij} z_j(t) - \sum_{j} w^{IE}_{ij} h_j(t) + E_{gen} + E_i, \end{align}

where the PSPs are given as

\begin{align} y(t) = \sum_{t_f} \epsilon(t - t_f) \end{align}

for presynaptic spike times \{t_f\}, and response kernel

\begin{align}
\epsilon(t) = K\left( e^{-t/\tau_f} - e^{-t/\tau_r}\right),
\end{align}

where the constant K is chosen so that the kernel peaks at 1. The quantities E_{gen} and E_i represent the generic excitability of the excitatory population and the random individual excitability of neuron i (drawn from a lognormal distribution), respectively.
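(For concreteness: setting d\epsilon/dt = 0 gives the peak time t^* = \frac{\tau_f \tau_r}{\tau_f - \tau_r}\ln(\tau_f/\tau_r), so K = 1/\left(e^{-t^*/\tau_f} - e^{-t^*/\tau_r}\right). A quick numerical check, with illustrative time constants rather than the paper's values:)

```python
import numpy as np

def kernel_constant(tau_f, tau_r):
    """K such that eps(t) = K*(exp(-t/tau_f) - exp(-t/tau_r)) peaks at 1."""
    # Peak time from d/dt eps = 0 (requires tau_f != tau_r)
    t_star = tau_f * tau_r / (tau_f - tau_r) * np.log(tau_f / tau_r)
    return 1.0 / (np.exp(-t_star / tau_f) - np.exp(-t_star / tau_r))

tau_f, tau_r = 20.0, 2.0  # ms; illustrative values only
K = kernel_constant(tau_f, tau_r)
t = np.linspace(0.0, 200.0, 200001)
eps = K * (np.exp(-t / tau_f) - np.exp(-t / tau_r))
print(eps.max())  # ~1.0
```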

Summed postsynaptic potentials z and h are defined identically, but for different presynaptic spike times (from excitatory and inhibitory afferents respectively).

The original model has a stochastic firing criterion (without an adaptive threshold), with an instantaneous firing rate given by the transfer function

\begin{align}
r(t) = r_0 \exp\left(u(t)/4\right).
\end{align}

The original paper was implemented in NEST, and I’m fairly certain they used the model `pp_psc_delta` given here. Model parameters are set to the mean values in this paper. Note that with the given transfer function, this model would correspond to the `pp_psc_delta` model with `c_1 = 0`.
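(For reference, the `pp_psc_delta` firing intensity — as I read the NEST documentation — is \lambda = c_1 V + c_2 e^{c_3 V}, so with `c_1 = 0`, choosing `c_2 = r_0` and `c_3 = 1/4` reproduces the transfer function above. A trivial check, with an illustrative value for r_0:)

```python
import numpy as np

# pp_psc_delta firing intensity (my reading of the NEST docs):
#   lambda(V) = c_1*V + c_2*exp(c_3*V)
c_1, c_2, c_3 = 0.0, 1.0, 0.25  # c_2 = r_0 (illustrative value), c_3 = 1/4
r_0 = c_2

V = np.linspace(-5.0, 5.0, 101)
hazard = c_1 * V + c_2 * np.exp(c_3 * V)
transfer = r_0 * np.exp(V / 4.0)  # r(t) = r_0*exp(u/4) from the paper
print(np.allclose(hazard, transfer))  # True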

In replicating the model in Brian, I’m using the equations given in the documentation:

```
dv/dt = (A*s - v + exc_gen + exc_i) / tau_f : 1
ds/dt = -s / tau_r : 1
```
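To convince myself the ODE form matches the kernel, I integrated these equations after a single unit spike: solving them analytically gives v(t) = A \frac{\tau_r}{\tau_f - \tau_r}\left(e^{-t/\tau_f} - e^{-t/\tau_r}\right), so A = \frac{\tau_f - \tau_r}{\tau_r} K with K the kernel constant from above. A sketch with illustrative time constants (not the paper's values):

```python
import numpy as np

tau_f, tau_r = 20.0, 2.0  # ms; illustrative, not the paper's values

# A chosen so that v peaks at 1 after a single spike (s -> s + 1)
t_star = tau_f * tau_r / (tau_f - tau_r) * np.log(tau_f / tau_r)
K = 1.0 / (np.exp(-t_star / tau_f) - np.exp(-t_star / tau_r))
A = (tau_f - tau_r) / tau_r * K

# Forward-Euler integration of dv/dt = (A*s - v)/tau_f, ds/dt = -s/tau_r
dt = 0.001               # ms
v, s = 0.0, 1.0          # a single presynaptic spike with w = 1 at t = 0
v_max = 0.0
for _ in range(int(100.0 / dt)):
    v += dt * (A * s - v) / tau_f
    s += dt * (-s / tau_r)
    v_max = max(v_max, v)
print(v_max)  # close to 1
```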

with presynaptic spike update

```
s_post += w
```

Here, `A` is the scaling factor (a function of the time constants \tau_f, \tau_r, given in the documentation) ensuring a peak PSP value of 1, and `exc_gen` and `exc_i` are the quantities E_{gen} and E_i.

Since thresholds in Brian can only be specified as explicit spike conditions, I used the fact (given in the NEST model documentation) that the instantaneous spike probability of the NEST model is

P(\text{spike}) = 1 - \exp(-r h) for instantaneous firing rate r and simulation time step h.
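In code, this conversion is just the following (r_0 and h are illustrative here; r_0 would come from the paper's parameters):

```python
import numpy as np

def spike_prob(u, r0, h):
    """Per-time-step spike probability for instantaneous rate r = r0*exp(u/4)."""
    r = r0 * np.exp(u / 4.0)
    return 1.0 - np.exp(-r * h)

# Illustrative numbers: r0 = 1 Hz, h = 1 ms
p = spike_prob(u=0.0, r0=1.0, h=0.001)
print(p)  # ~0.001, since 1 - exp(-x) ~ x for small x
```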

I matched this with the instantaneous probability of firing for a model with threshold \theta = V_{\text{th}} + \xi, for soft threshold V_{\text{th}} and noise \xi from a distribution with pdf \rho(\xi):

P(\text{spike}) = \int_{\mathbb{R}} H(V - V_{th} - \xi)\rho(\xi) d\xi
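Since H(V - V_{th} - \xi) just selects \xi < V - V_{th}, this integral is the CDF of \xi evaluated at V - V_{th}, which is what I match against 1 - \exp(-r_0 e^{v/4} h). A least-squares matching sketch for the Gaussian case (all numbers illustrative, not the paper's parameters):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

r0, h = 1.0, 0.001                       # illustrative rate constant and time step
v = np.linspace(-10.0, 40.0, 501)
p_target = 1.0 - np.exp(-r0 * np.exp(v / 4.0) * h)

# P(spike) = CDF of the threshold noise at (v - V_th); Gaussian case:
def gaussian_cdf(v, v_th, sigma):
    return norm.cdf((v - v_th) / sigma)

(v_th, sigma), _ = curve_fit(gaussian_cdf, v, p_target, p0=[25.0, 5.0])
print(v_th, sigma)  # fitted soft threshold and noise width
```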

Below, see the v - P(\text{spike}) curve from the original model (blue) vs. the v - P(\text{spike}) curves from a matched normal distribution (orange, dashed) and a matched skewnormal distribution (green, dashdot), along with the PDF estimated from the original model's v - P(\text{spike}) curve (treating that curve as a CDF).

The threshold is implemented in one of two ways: either I use a Gaussian noisy threshold with

`threshold = 'v > theta + sigma*randn()'`

(for \theta, \sigma of the matched normal distribution), or a skewnorm threshold implemented as a network operation, where threshold values are generated as variates from the skewnormal distribution matched to the estimated PDF shown above.

The model is not behaving as it should, and in going through the network model piece by piece to check for errors, I’ve noticed a few areas where I’m not certain about choices made in matching the model to the paper.

First, I’m not sure how the spike and reset mechanism of spike-response models changes the details of implementation in Brian. In my current implementation, the membrane potential is reset to

`v = exc_gen + exc_i`

after a spike. However in some sources (e.g. in the Jolivet et al. paper linked above), spike-response models are implied to not explicitly reset the membrane potential after a spike.

Second, I’ve assumed that the v-P(\text{spike}) curves are CDFs, and matched the threshold noise distributions to the distribution whose CDF matches the given v-P(\text{spike}) curve. This assumption might be wrong, but I don’t feel confident about why it might or might not be.
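One way I’ve sanity-checked this: for a noisy threshold that is redrawn on every time step (as with the `randn()` condition above), the empirical spike fraction at a fixed v equals the noise distribution’s CDF at v - V_{th}. A small check with V_{th} = 0 and illustrative skewnorm parameters:

```python
import numpy as np
from scipy.stats import skewnorm

a, loc, scale = 4.0, 0.0, 1.0  # illustrative skewnorm shape/location/scale
v = 1.0                        # a fixed membrane potential (V_th = 0 here)

# Redraw a threshold xi each step; the fraction of steps with v > xi should
# converge to the skewnorm CDF at v -- i.e. the v-P(spike) curve is a CDF.
xi = skewnorm.rvs(a, loc=loc, scale=scale, size=200_000, random_state=0)
p_emp = np.mean(v > xi)
p_cdf = skewnorm.cdf(v, a, loc=loc, scale=scale)
print(p_emp, p_cdf)  # these should agree to ~2-3 decimal places
```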

Finally, on the implementation level: are there any ways to implement a skewnorm-distributed threshold (or, in general, any non-normally distributed noisy threshold) without preventing the use of standalone mode? The current implementation through network operations does not allow it.

I hope this post hasn’t been too rambling - any comments or input on the steps I’ve taken to match the model would be greatly appreciated.

Thanks!