Hi again. The “extreme” solution would indeed be to calculate the synaptic propagation yourself, but this would mean either writing a `network_operation` (rather straightforward, but you could no longer use C++ standalone mode or Brian2GeNN) or a function implementation in C++ (considerably less straightforward). But there is one technique that is very general and should help considerably: move computation from the synapses into the (pre-synaptic) neuron. Except for extremely sparse connectivity, you will have an order of magnitude more synapses than neurons, so every computation that is done “per synapse” and “per time step” is very costly and should be avoided if possible. Let’s have a look at some equations (more or less copied from your earlier posts):
```python
eqs_synapsmodel = '''
s_nmda_tot_post = w * u_f * x_d * s_nmda : 1 (summed)
ds_nmda/dt = -s_nmda/taus_nmda + alpha_nmda*x*(1 - s_nmda) : 1 (clock-driven)
dx/dt = -x/taux_nmda : 1 (clock-driven)
dx_d/dt = (1 - x_d)/tau_d : 1 (clock-driven)
du_f/dt = (U - u_f)/tau_f : 1 (clock-driven)
w : 1
'''
```
```python
eqs_onpre = '''
x += 1
u_f += U*(1 - u_f)
x_d *= (1 - u_f)
'''
```
These equations are solved for every synapse individually, but you can see that since `tau_d` etc. are constants, almost all of them depend only on the pre-synaptic neuron, i.e. they will have exactly the same value for all synapses coming from the same neuron. Simulating them individually is therefore quite a waste of resources. In the above equations, this means that you can move `s_nmda` (and likewise `x`, `x_d`, and `u_f`) to the equations of the pre-synaptic neuron. The related statements in `on_pre` would go into the `reset` of the pre-synaptic neuron. The summed variable has to stay in the `Synapses` and then needs to refer to `x_pre`, etc. Now, there is one complication: all this only works if there are no synaptic delays, since a neuron’s `reset` statement will update the variables immediately. There is a workaround to this limitation, at least if all synapses coming from the same neuron have the same delay: you can introduce a “fake synapse” from each neuron onto itself, which triggers the update instead of using the `reset` statement. You can then set a delay on this synapse so that it triggers the modifications with the appropriate delay.
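To make this concrete, here is a sketch of how the model definitions above could be restructured. This is only the equation strings, not a complete model; the constants (`taus_nmda`, `alpha_nmda`, `taux_nmda`, `tau_d`, `tau_f`, `U`) are assumed to be defined as before:

```python
# Sketch: per-synapse dynamics moved into the pre-synaptic NeuronGroup.
# The four ODEs now live in the neuron's model ...
neuron_eqs = '''
ds_nmda/dt = -s_nmda/taus_nmda + alpha_nmda*x*(1 - s_nmda) : 1
dx/dt = -x/taux_nmda : 1
dx_d/dt = (1 - x_d)/tau_d : 1
du_f/dt = (U - u_f)/tau_f : 1
'''

# ... and the former on_pre statements become the neuron's reset
# (or the on_pre of the delayed "fake synapse" connected with 'i == j').
# Note the order: u_f is updated before it is used for x_d, as before.
neuron_reset = '''
x += 1
u_f += U*(1 - u_f)
x_d *= (1 - u_f)
'''

# Only the weight and the summed variable stay per-synapse; the moved
# variables are now accessed via the _pre suffix:
synapse_eqs = '''
s_nmda_tot_post = w * u_f_pre * x_d_pre * s_nmda_pre : 1 (summed)
w : 1
'''
```

You would then append `neuron_eqs` to your existing neuron equations, use `neuron_reset` as the group’s `reset` (or as the `on_pre` of the self-connecting fake synapse, with its `delay` set), and build the `Synapses` from `synapse_eqs` alone.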
Finally, it might make sense to have a look at the output of `profiling_summary` first (note that using the `name` argument for objects will make this output more readable), to make sure you are not optimizing in the wrong place.