I’m using Brian to explore a model originally implemented in NEST, and I’m wondering whether a subtle difference in how the two simulators interpret connection probabilities might be contributing to some problems I’m having.
See the connection probability equations from the original paper:
Note: here both excitatory and inhibitory neurons are arranged together randomly (uniformly) in a grid with a fixed distance between neurons along each axis, so each neuron has three grid coordinates (x, y, z).
Parameter values determining connection probabilities for each group are shown in table 1, below:
The connection probability equation looks to be in exactly the form required for straightforward use with Brian. However, with the parameter values from the table above, connection probability estimates taken from the generated synaptic connectivity matrix come out many times higher than intended. These estimates come from a function that correctly recovers the average connectivity of synapses generated with p=c for constant c, so I don’t believe the measurement itself is at fault.
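For reference, a simplified sketch of what that estimate boils down to (`estimate_mean_p` is my own illustrative helper, not the exact function I use):

```python
# realized mean connection probability: the number of synapses created,
# divided by the number of possible pre/post pairs
def estimate_mean_p(syn, n_pre, n_post):
    return len(syn.i[:]) / (n_pre * n_post)
```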
Initially I thought little of it and just tuned the parameters manually to produce the desired connection probabilities, but I’d like to remove any unnecessary departures from the original model, in case they cause problems down the road.
I’ve come back to this issue a number of times and have read the relevant documentation for both simulators fairly extensively. However, I haven’t seen anything to suggest a clear difference in how the two interpret connection probabilities.
Hi @oliverdibb. Could you give a link to the original paper (or provide some more details, in particular how many neurons are distributed over the grid, and with what distances between them)? And ideally some minimal Brian code (no need for the correct neuron model, etc.) that shows the unexpected results?
At first sight, I don’t see anything that should be interpreted differently here. I would have expected something like:
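Here is a runnable sketch; `N`, `C_val` and `lambda_` are placeholder values, and the Gaussian fall-off in the `connect` call merely stands in for the paper’s actual p(d):

```python
from brian2 import *

N = 100          # placeholder group size
C_val = 0.3      # placeholder scaling constant
lambda_ = 0.25   # placeholder length constant

neurons = NeuronGroup(N, '''x : 1 (constant)
                            y : 1 (constant)
                            z : 1 (constant)''')
# stand-in positions; in the real model these would be the grid coordinates
neurons.x, neurons.y, neurons.z = 'rand()', 'rand()', 'rand()'

syn = Synapses(neurons, neurons)
# distance-dependent connection probability, evaluated for every pair
syn.connect(p='C_val * exp(-((x_pre - x_post)**2 + (y_pre - y_post)**2 '
              '+ (z_pre - z_post)**2) / lambda_**2)')
```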
The neurons are arranged in a 6 x 6 x 15 grid (432 excitatory, 108 inhibitory, 540 in total), and the positions are unit-free. Unfortunately, the connection probability expression you gave is exactly the one I’m using in my model!
I’ll get working on some minimal code to reproduce the connections; in the meantime, I thought I’d send the paper through in case it helps.
Now the case is pretty clear: Brian2 and the manual implementation agree. Either we have the same error in the interpretation of the formulas, or there’s a typo or some other missing information in the paper.
Did you find the original source code online? If not, we could ask the authors for it.
I agree with @schmitts that it looks like Brian is interpreting things correctly, and that there is either a bug in their code, or some information in the paper is missing or incorrect. E.g. if I divide the probability by 100 (i.e. let p(d) denote a probability in %), then the values are of a similar magnitude to those reported in the paper (around 5–6%). I don’t think we can resolve the mismatch without having access to their code.
Thank you for your replies. I got in touch with the author, and there is indeed a typo in the paper: $\lambda = 0.15$ was used, instead of the stated value $\lambda = 0.25$. Changing this corrects the erroneous mean probabilities.
I’ve also noticed that the refractory period is implemented differently from my version of the model. In theirs, the refractory period of each neuron is redrawn at every spike from a gamma distribution that is identical for all neurons in a group (for excitatory neurons $t \sim \Gamma(2, 5)$, for inhibitory neurons $t \sim \Gamma(2, 1.5)$). Initially, I implemented it as follows:
`refractory = "(-log(1 - rand()) - log(1 - rand())) / rate"` to draw from a gamma distribution with (shape, scale) = (2, 1/rate). This raises no errors, but testing the implementation raised some questions…
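As a quick sanity check outside Brian (with unit-free numbers), the inverse-transform trick does produce the intended distribution:

```python
import numpy as np

# the sum of two Exp(rate) draws follows Gamma(shape=2, scale=1/rate)
rng = np.random.default_rng(42)
rate = 1 / 5.0  # scale of 5, as for the excitatory group
u1, u2 = rng.random(100_000), rng.random(100_000)
samples = (-np.log(1 - u1) - np.log(1 - u2)) / rate
print(samples.mean(), samples.var())      # ~10 and ~50 (shape*scale, shape*scale**2)
print(rng.gamma(2, 5.0, 100_000).mean())  # numpy's own Gamma(2, 5) for comparison
```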
To test this, I gave a NeuronGroup a variable defined by exactly the same string expression as the one passed to refractory=, so that I could record it in a StateMonitor and confirm the average behaviour over the run. However, on running I get an error stating that rand() is a stateful function and hence can’t be used twice in a single string. The same error was not raised when the expression was used as the refractory argument of the NeuronGroup, but I assume the refractory string will still run into the same issue?
If this won’t work, I figured I could implement the refractory period as a variable and use a TimedArray refreshed every so often by a network operation. Does this sound like the right track, or am I missing something better?
This is a tricky issue, and one that is currently not handled very well. The reason we don’t allow two rand() calls in a single expression is that we process mathematical expressions with sympy, which has no concept of stateful functions: it considers -log(rand()) - log(rand()) to be equivalent to -2*log(rand()). Now, this makes sense for expressions that are part of equations, but general statements (e.g. a reset) or expressions (like a threshold, or the refractory condition) are not treated "mathematically", otherwise statements like u = u + d wouldn’t make any sense. Unfortunately, the current code generation pipeline is a bit convoluted, and we end up converting "normal" expressions with sympy in places where we probably shouldn’t.
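You can see the merging happen directly in sympy: the two calls produce identical expression nodes, which are collected as soon as the expression is constructed:

```python
from sympy import Function, log

rand = Function('rand')            # sympy has no notion of statefulness...
expr = -log(rand()) - log(rand())
print(expr)                        # ...so the identical terms merge: -2*log(rand())
```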
When you set refractory to your expression, it will actually not run into problems like the -2*log(rand()) “simplification” mentioned above, but it will not do what you expect it to do: the refractory expression is evaluated every time step, not only when a spike occurs. (This is necessary for refractoriness conditions like v > v_threshold, used e.g. in HH models, but is less obvious for expressions that evaluate to a time span.) Long story short, the correct way to implement it would be to calculate the refractory period as part of the reset, i.e.:
add a parameter like ref_time : second to the equations
set refractory = 'ref_time'
and add ref_time = (-log(rand()) - log(rand())) / rate to your reset definition.
This will draw a new random time span for each spike and then use this as the refractory time.
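In code, the three steps look like this (a sketch with stand-in LIF dynamics, since the actual model equations don’t matter here, and assuming the Γ scale of 5 is in ms):

```python
from brian2 import *

rate = 1/(5*ms)  # assuming the Gamma scale of 5 is in ms
eqs = '''dv/dt = -v / (10*ms) : 1 (unless refractory)
         ref_time : second'''
group = NeuronGroup(10, eqs,
                    threshold='v > 0.5',
                    refractory='ref_time',
                    # two rand() calls in one statement -- see below
                    reset='v = 0; ref_time = (-log(rand()) - log(rand())) / rate')
```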
Now, alas, this will run into the error about two rand() calls again… The workaround is to “hide” the fact that you are using the same function twice. One way to do this is to define two new functions with different names, which both refer to the same function under the hood. This can be as easy as defining:
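(the exact definition from the original post isn’t shown here; one way that matches the description is to alias Brian’s built-in function object)

```python
from brian2.core.functions import DEFAULT_FUNCTIONS

# two new names for the built-in rand(): the very same Function object
# under the hood, but the duplicate-use check now sees two different names
rand1 = DEFAULT_FUNCTIONS['rand']
rand2 = DEFAULT_FUNCTIONS['rand']
```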
and then using ref_time = (-log(rand1()) - log(rand2())) / rate in your reset.
To verify that this is doing what you think it is doing, I’d use a more straightforward check: a NeuronGroup that spikes every time step, except when it is refractory. Putting together what I said above, this would be:
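A sketch, reusing the rand1/rand2 aliases from above and again assuming the Γ(2, 5) scale is in ms:

```python
from brian2 import *
from brian2.core.functions import DEFAULT_FUNCTIONS

rand1 = DEFAULT_FUNCTIONS['rand']  # same aliasing trick as above
rand2 = DEFAULT_FUNCTIONS['rand']

rate = 1/(5*ms)  # Gamma(2, 5 ms), as for the excitatory group

# a single neuron that spikes on every time step unless it is refractory
group = NeuronGroup(1, 'ref_time : second',
                    threshold='True',
                    refractory='ref_time',
                    reset='ref_time = (-log(rand1()) - log(rand2())) / rate')
spike_mon = SpikeMonitor(group)
run(10*second)
```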
You can record its spikes with a SpikeMonitor, run it for a while, and then make a histogram of the ISIs (to get the ISIs in ms, use e.g. np.diff(spike_mon.t/ms)).
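For example (up to the discretisation by one time step, the ISIs equal the drawn refractory periods, so the histogram should match the gamma density):

```python
import numpy as np
import matplotlib.pyplot as plt

isis = np.diff(spike_mon.t/ms)  # a single neuron, so all spikes are its own
plt.hist(isis, bins=50, density=True)
plt.xlabel('ISI (ms)')
plt.ylabel('density')
plt.show()
```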