Synapse implementation

Thank you very much for your nice simulator and your kind support for users.
May I ask some questions related to the Brian simulator?

Following the provided instructions, we can implement synaptic models using Brian.

1. I would like to ask about the underlying motivation for the differential-equation formulation of synapses.
For example, Brian uses the following equations for an alpha synapse:

eqs = '''
dV/dt = (x - V)/tau : 1
dx/dt = -x/tau : 1
'''
on_pre = 'x += w'

Is this because Brian requires the exponential function to be expressed in another form?
That is, we cannot directly use an exponential function like "exp(-t/tau)", and must replace it with another representation (which is why we need a system of differential equations)?

2. Why do we use the "x" variable to add the synaptic weight, instead of V (or g)?

Taking the same equations as above, if we want to implement a conductance-based synapse, the equations would presumably be as below (just replacing V with g):

eqs = '''
dg/dt = (x - g)/tau : 1
dx/dt = -x/tau : 1
'''
on_pre = 'x += w'

I am a bit confused, because we don't write "on_pre = 'g += w'", even though we want to add synaptic conductance.
It may be a basic question, but I would like to clarify the underlying idea.

Thank you again in advance for your kind support.

For the first question, Marcel's answer in this post may help.

For the second question, as the linked post suggests, we can replace V with g_syn.
The implementation of an alpha synapse has already been asked about here.

There are also questions in the group about implementing conductance-based synapses with continuous updating.

I hope it helps.

Hi Ziaeemehr,

Thanks a lot for your comments. I read the earlier posts, but I am still a bit confused.
Is it because we need to replace the expression "exp(-t/tau)"?

For the second question, yes, I understand how to implement an alpha synapse. However, I still don't understand why we increase the value of x instead of g itself. Or does updating the value of x mean updating g indirectly?

I would be glad to know the reason.

Hi @ysj (I hope @Ziaeemehr doesn’t mind me stepping in).
If I understand your first question correctly, it is about why we have to write the synapse with two differential equations instead of writing something like g = g_max * (t - t_spike)/tau * exp(-(t - t_spike)/tau), i.e. an alpha function. We could actually do this, but this formulation would only work for a single spike. What we really need is something like g = g_max * \sum_{t_i} \alpha(t - t_i), i.e. we need to sum over all spikes.

To simulate and calculate this exactly, we would have to keep track of all spikes that ever arrived at this synapse, and as the simulation progresses we would have to sum up more and more terms, making the simulation slower and slower. Of course, most of these terms are very close to zero, so they do not really matter – in practice one would therefore discard spikes that were received a long time ago. But this still requires significantly more calculation and housekeeping than the dynamical-system formulation, where the value jumps up for every incoming spike and then decays according to the differential equations.

Just to make clear that there is no misunderstanding: during a simulation, Brian of course still has to either solve or numerically approximate these equations to advance the value from one time step to the next.
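
To illustrate the equivalence of the two formulations, here is a small pure-Python sketch (not Brian code; the time constant, weight, time step and spike times are made-up illustrative values). It shows that stepping the two-equation system with per-spike jumps reproduces exactly the explicit sum of alpha kernels over all past spikes:

```python
import math

tau = 5.0     # time constant (arbitrary illustrative value)
w = 1.0       # synaptic weight
dt = 0.1      # simulation time step
spikes = [1.0, 3.0, 7.0]              # made-up spike arrival times
spike_steps = {round(ts / dt) for ts in spikes}
n_steps = 150

# Formulation 1: the dynamical-system form --
# dg/dt = (x - g)/tau, dx/dt = -x/tau, with x += w on each spike.
# The per-step update below is the exact solution over one time step.
decay = math.exp(-dt / tau)
g, x = 0.0, 0.0
g_ode = []
for n in range(n_steps):
    if n in spike_steps:
        x += w                          # the 'on_pre' jump
    g_ode.append(g)
    g = (g + x * dt / tau) * decay      # exact update of g over dt
    x = x * decay                       # exact update of x over dt

# Formulation 2: explicit sum of alpha kernels over all past spikes --
# g(t) = w * sum_i (t - t_i)/tau * exp(-(t - t_i)/tau)
def alpha_kernel(s):
    return (s / tau) * math.exp(-s / tau) if s >= 0 else 0.0

g_sum = [w * sum(alpha_kernel(n * dt - ts) for ts in spikes)
         for n in range(n_steps)]

max_err = max(abs(a - b) for a, b in zip(g_ode, g_sum))
print(max_err)   # the two formulations agree up to floating-point error
```

Note that formulation 1 does constant work per time step, whereas formulation 2 has to loop over the full spike history at every time point.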

Oh, and note that we have considered offering support for describing synapses as a function of time instead of with differential equations: Arbitrary spike kernels · Issue #1170 · brian-team/brian2 · GitHub. The discussion did not go very far, though.

Regarding your question 2: according to the equations, x decays exponentially with time, and g decays exponentially towards x, i.e. changes in g depend on x. If you increased g directly with each incoming spike, x would stay 0 forever and g would simply decay exponentially over time. This would be a simple exponential synapse, and you could just remove x from the equations to implement it. But if you make x jump up instead, the functional form of g will be an alpha function of time.
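
To see the difference in shape, here is a small sketch (illustrative values for tau and w, not Brian code) comparing the analytical response to a single spike at t = 0 in the two cases: jumping x yields the alpha function, which rises from zero and peaks at t = tau, while jumping g yields a plain exponential that is largest immediately:

```python
import math

tau, w = 5.0, 1.0   # made-up time constant and weight

# Response to a single spike at t = 0:
# jumping x gives the alpha function  g(t) = w * (t/tau) * exp(-t/tau)
# jumping g gives a plain exponential g(t) = w * exp(-t/tau)
alpha = lambda t: w * (t / tau) * math.exp(-t / tau)
expo  = lambda t: w * math.exp(-t / tau)

print(alpha(0.0))   # 0.0 -- the alpha function rises from zero...
print(alpha(tau))   # ...and peaks at t = tau (value w/e, about 0.368)
print(expo(0.0))    # 1.0 -- the exponential jumps instantly to w
```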


Hi Marcel,

Thank you very much for your detailed explanation. You understood my question perfectly. To be honest, I didn't know the details of the calculation mechanism. To sum up, we need to use two differential equations to make the calculation more efficient, since using a single time-dependent equation would make the simulation slower and slower. Is that a correct understanding?

For the second question, I now understand the reason, and it makes sense! Thanks a lot for your kind and detailed explanation.

Not exactly: it is rather that we are using an "instantaneous" implementation (discrete jumps for each incoming spike, followed by continuous evolution based on differential equations) instead of an implementation that keeps a memory of all past events.

All this is closely related to filter design in the signal-processing world: if you look at the response of a filter (≈ synapse) to a series of pulses (≈ spikes), you could describe the response as a sum of stereotypical kernels that you "copy & paste" on top of each incoming pulse. But as a method for calculating the response at any time t, this is very inefficient, because you have to take all past pulses into account. For certain types of kernels/filters (in particular exponentials, alpha functions, …), you can instead use a different approach: have the response jump up for each incoming pulse and then decay according to one or more differential equations – i.e. the approach we are using here.
You can find more about this in textbooks on signal processing. If you want to convert between the two representations, you’ll have to have a look at the Z transform. Here’s a paper discussing this technique in the context of synapses: Employing the Z-Transform to Optimize the Calculation of the Synaptic Conductance of NMDA and Other Synaptic Channels in Network Simulations | Neural Computation | MIT Press
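
As a minimal sketch of this filter analogy (toy parameters and a made-up spike train, for illustration only), the two descriptions of a simple exponential kernel – "copy & paste a kernel on every pulse" versus a recursive update that needs no memory of past pulses – give exactly the same result:

```python
import math

tau, w, dt = 10.0, 0.5, 1.0       # made-up parameters
decay = math.exp(-dt / tau)
spikes = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0]   # toy spike train (1 = spike)

# Kernel-sum form: at each step, sum a kernel over ALL past pulses
# (work per step grows with the length of the history)
kernel_sum = []
for n in range(len(spikes)):
    g = sum(w * decay ** (n - k) for k in range(n + 1) if spikes[k])
    kernel_sum.append(g)

# Recursive (IIR-filter) form: decay, then jump on each pulse
# (constant work per step, no memory of past pulses needed)
recursive = []
g = 0.0
for s in spikes:
    g = g * decay + w * s
    recursive.append(g)

max_err = max(abs(a - b) for a, b in zip(kernel_sum, recursive))
print(max_err)   # the two descriptions agree up to floating-point error
```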

Hi Marcel,

Thank you very much for explaining the method used in Brian in more detail. I now understand more clearly why Brian adopted this approach. Thanks a lot also for the paper reference for further theory; I will have a look at it as well!
