Delay for 'summed' variables in synapses

Hi. Unfortunately, that feature is still “on the list” and not implemented yet. We are severely limited in developer time, so seemingly simple features like this can take a long time to land (in particular if there are still syntax questions to finalize).
If you are using runtime mode, you can work around this limitation quite easily with a network operation. The following example uses two input neurons that connect to a second population of three neurons; each target neuron sums the two continuous input activities, but with a different delay for each of the two inputs:

from brian2 import *
import matplotlib.pyplot as plt
import numpy as np

max_delay = 60

group_1 = NeuronGroup(2, '''dx/dt = sin(2*pi*freq*t) / ms : 1
                            freq : Hz (constant)''')
group_1.freq = '(i + 1)*100*Hz'

group_2 = NeuronGroup(3, '''y : 1''')
S = Synapses(group_1, group_2, '''x_delayed : 1
                                  y_post = x_delayed : 1 (summed)     
                                  delay_in_steps : integer (constant)''')
S.connect()  # all-to-all
# per-target delays for the two presynaptic neurons, in simulation time steps
S.delay_in_steps[:, 0] = [5, 10]
S.delay_in_steps[:, 1] = [30, 60]
S.delay_in_steps[:, 2] = [50, 20]

buffer = np.zeros((len(group_1), max_delay))  # one row per input neuron, one column per stored time step
@network_operation()
def propagate_delayed_x():
    global buffer
    buffer[:, 0] = group_1.x[:]  # store the current x values in the first column
    # each synapse reads the value of its presynaptic neuron from the buffer,
    # shifted back according to its individual delay_in_steps
    S.x_delayed = buffer[S.i[:], S.delay_in_steps[:]-1]
    buffer = np.roll(buffer, 1, axis=1)  # advance for next step

mon_1 = StateMonitor(group_1, 'x', record=True)
mon_syn = StateMonitor(S, 'x_delayed', record=True)
mon_2 = StateMonitor(group_2, 'y', record=True)

run(20*ms)
print(buffer)
fig, axs = plt.subplots(3, 3, sharex=True, sharey='row')
for idx in range(3):
    if idx < 2:
        # x in group 1
        axs[idx, 0].plot(mon_1.t/ms, mon_1.x[idx])
    else:
        axs[idx, 0].axis('off')
    # delayed version in synapses
    axs[idx, 1].plot(mon_syn.t / ms, mon_syn[S[:, idx]].x_delayed.T)
    # sum of delayed x in group 2
    axs[idx, 2].plot(mon_2.t / ms, mon_2.y[idx])
plt.show()

[Plot: left column shows the two input signals x, middle column the delayed synaptic copies x_delayed for each target neuron, right column the summed activity y of each target neuron]

The implementation above is not the most efficient, since it copies the whole buffer array at every time step; instead, you could keep the buffer in place and index into it with modulo semantics (see e.g. the cylindrical array in “Vectorized Algorithms for Spiking Neural Network Simulation”, Neural Computation, MIT Press).
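
Along those lines, here is a rough sketch of the same network operation with a ring (“cylindrical”) buffer and modulo indexing, reusing the names from the example above (untested, but it should give the same x_delayed values without copying the buffer every step):

buffer = np.zeros((len(group_1), max_delay))
write_pos = 0  # column that holds the most recent x values

@network_operation()
def propagate_delayed_x():
    global write_pos
    buffer[:, write_pos] = group_1.x[:]
    # same indexing as above, but counted back from the moving write position
    # and wrapped around with modulo arithmetic instead of rolling the array
    read_pos = (write_pos - (S.delay_in_steps[:] - 1)) % max_delay
    S.x_delayed = buffer[S.i[:], read_pos]
    write_pos = (write_pos + 1) % max_delay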

For C++ standalone mode, you’d have to implement something similar in C++ code, directly referring to the names of the arrays that Brian uses internally; see User-defined functions in the documentation for some hints. Unfortunately I can’t help with this any further, since I am about to leave for a long summer break 🌞 But I hope the above gets you going at least!
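
In case it is useful as a starting point, this is roughly the general shape such a user-defined function could take (an untested skeleton only, with a hypothetical name delayed_x; the actual buffer handling and the names of Brian’s internal arrays, which you’d have to look up in the generated standalone code, are left as placeholders):

from brian2 import implementation, check_units

@implementation('cpp', '''
// Placeholder: a real implementation would keep a ring buffer of the
// presynaptic x values (e.g. as a static array in this support code),
// fill it from Brian's internal state array every time step, and return
// the value d steps in the past for presynaptic neuron i.
double delayed_x(double i, double d) {
    return 0.0;  // not functional as written
}
''')
@check_units(i=1, d=1, result=1)
def delayed_x(i, d):
    raise NotImplementedError('only implemented for C++ standalone')

# This could then replace the network operation, e.g. via something like
# S.run_regularly('x_delayed = delayed_x(i, delay_in_steps)')
# (the integer argument types may additionally have to be declared, see the
# "User-defined functions" documentation).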
