Set spikes inside network_operation decorator

Dear community,

I am currently facing an issue with the SpikeGeneratorGroup method set_spikes() inside a @network_operation. I initialized a SpikeGeneratorGroup with empty neuron indices and empty spike times and connected it one-to-one (j='i') to a simple NeuronGroup via Synapses. I need to set spikes during the simulation, so I use a @network_operation. I plot the spike monitor of my NeuronGroup using brian2tools. When I call set_spikes() outside of the @network_operation, the desired spikes appear in the plot; when I set those spikes inside the @network_operation function, no spikes appear.

from brian2 import *
import brian2tools as btool
import matplotlib.pyplot as plt  # needed for plt.show() below

neurons = 16
v_rest = 0. * mV
v_reset = +10. * mV
vt = +20. * mV
membrane_time_scale = 20. * ms
abs_refractory_period = 2.0 * ms

eq = """
dv/dt = (v_rest - v) / membrane_time_scale : volt (unless refractory)
"""

supervised_gen = SpikeGeneratorGroup(neurons, array([], dtype=int), array([]) * ms)
G = NeuronGroup(N=neurons, model=eq,
                threshold='v>vt', reset='v=v_reset',
                refractory=abs_refractory_period, method="linear")

ff_supervised = Synapses(supervised_gen, G, on_pre='v += 20.1*mV')
ff_supervised.connect(j='i')


@network_operation(dt=defaultclock.dt)
def update_active(t):
    for idx in range(neurons):
        # schedule a spike 1 ms in the future; divide by ms before wrapping
        # in array() so that the unit is not applied twice
        delayed_spike_time = array([(t + 1 * ms) / ms]) * ms
        supervised_gen.set_spikes(indices=array([idx]), times=delayed_spike_time)

# Set spikes without network_operation:
# supervised_gen.set_spikes(indices=array([5]), times=array([75]) * ms)
spike_mon = SpikeMonitor(G)
voltage_monitor = StateMonitor(G, 'v', record=range(neurons))

run(100 * ms)

btool.brian_plot(spike_mon)
plt.show()
btool.brian_plot(voltage_monitor)
plt.show()

I have already tried calculating neuron indices and spike times inside the @network_operation, storing them in a global list, and setting all spikes outside of the @network_operation. However, in my project each spike time has to be calculated from voltages, which in turn depend on previously set spikes. Moving the set_spikes() call outside of the @network_operation therefore contradicts my project concept.

I am really grateful for any help you can offer. Thank you in advance.

Kind regards,
Georg

Hi @georg. We should certainly make this clearer in the documentation, but when it states that "The spikes that will be generated by SpikeGeneratorGroup can be changed between runs with the set_spikes method", this is meant to imply that it can only be used between runs, not in a @network_operation. The reasons are a bit technical, but to give the general idea: at the beginning of a run, SpikeGeneratorGroup does a number of preparations; in particular, it verifies and sorts the provided spike indices and times and converts the times into integer time steps. This is part of the optimizations that make SpikeGeneratorGroup reasonably fast. Without these optimizations, the SpikeGeneratorGroup would have to go through all entries at each time step and check whether any of its neurons should spike.
If you call set_spikes in a network_operation, it will update the indices/spike times, but you would only see the effect when you start another run and thereby trigger a new preparation phase. Within the same run, it will continue to use the previously calculated values (e.g. the integer time steps instead of the new spike times).

Could you give some more information about what you want to achieve, i.e. how your spike times should depend on other things in your network? Maybe there’s a solution that works without setting things in a network_operation. Also note that in your code above (which I assume is just meant as an illustration), you 1) overwrite the indices/times on each loop iteration, so that at the end of the network_operation only a single neuron (the last one) is set up to spike, and 2) each call of the network_operation overwrites the indices/times from the previous time step, so that the SpikeGeneratorGroup will never spike at all, since the next spike time is always 1 ms in the future.
If you only need to update the spike times at a low rate, it might be feasible to do multiple runs:

for _ in range(iterations):
    run(10*ms)
    supervised_gen.set_spikes(...)  # set for next round
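
Spelled out with the names from your code, a minimal sketch of this pattern (the spike-time rule here, every neuron firing 1 ms into the next sub-run, is just a placeholder):

sub_duration = 10 * ms
for i in range(10):  # 10 x 10 ms = 100 ms in total
    run(sub_duration)
    # placeholder rule: make every neuron spike 1 ms into the next sub-run
    next_spike = (i + 1) * sub_duration + 1 * ms
    supervised_gen.set_spikes(indices=arange(16), times=ones(16) * next_spike)

Since set_spikes is called between runs here, the new spikes are picked up by the preparation phase of the following run.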

Hi @mstimberg ,

Thank you for your quick response and the detailed answer. Since I need to simulate at least 1 min of biological time and also apply hyperparameter tuning, the iterative run(10*ms) solution takes an extreme amount of computation time.

> Could you give some more information about what you want to achieve, i.e. how your spike times should depend on other things in your network?

My program calculates a loss every 10 ms. Roughly speaking, this loss is the difference between the experimental voltage at time t and the mean voltage of my network at time t. Since my network uses STDP, it receives spikes from a SpikeGeneratorGroup (as in the illustrative code), but with a time delay that depends on the loss. In summary, the network is recurrent and adapts its synaptic weights to reproduce an experimental voltage trace.

Thanks, that makes things a bit clearer. In principle, you can calculate the mean voltage of a network and its difference to some target value with Brian’s built-in methods without needing a network_operation. Something like this:

group = NeuronGroup(..., '''dv/dt = ...''', ...)
target_v = TimedArray(...)  # recorded data
loss = NeuronGroup(1, '''average_v : volt
                         loss = abs(average_v - target_v(t)) : volt''')
calc_average = Synapses(group, loss, 'average_v_post = v_pre/N_pre : volt (summed)')
calc_average.connect()

The “summed” operation in the synapses will add up all the voltages and divide them by the number of neurons. The loss variable then stores the absolute difference of the mean from the target value.
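
As a self-contained sketch of this idea, with a made-up sine wave standing in for the recorded data and arbitrary constants throughout:

from brian2 import *

# Hypothetical target trace (100 ms of a 5 mV sine wave)
target_v = TimedArray(5 * sin(2 * pi * arange(1000) / 100) * mV, dt=0.1 * ms)

group = NeuronGroup(16, 'dv/dt = -v / (20*ms) : volt')
group.v = 'rand() * 10 * mV'

# A single "neuron" that only holds the mean voltage and the loss
loss = NeuronGroup(1, '''average_v : volt
                         loss = abs(average_v - target_v(t)) : volt''')

# (summed) adds up v_pre/N_pre over all incoming synapses each time step
calc_average = Synapses(group, loss, 'average_v_post = v_pre/N_pre : volt (summed)')
calc_average.connect()

loss_mon = StateMonitor(loss, 'loss', record=0)
run(100 * ms)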

Now, the more difficult part is that your model would like to change the delays. Unfortunately, changing delays during a simulation is not allowed at the moment. Some papers that model “delay learning” have used a trick that might be a possible alternative, though: each synapse is represented by several synapses with different delays. Instead of changing delays, you select a delay by setting the weights of all synapses with incorrect delays to zero. Could something like this work for you?
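
For concreteness, here is a minimal sketch of that trick for a single source-target pair, assuming 50 candidate delays on the default 0.1 ms grid (the chosen index 23, i.e. a 2.3 ms delay, is an arbitrary example):

from brian2 import *

n_delays = 50  # candidate delays 0, 0.1, ..., 4.9 ms
source = SpikeGeneratorGroup(1, [0], [1] * ms)
target = NeuronGroup(1, 'dv/dt = -v / (10*ms) : 1')

syn = Synapses(source, target, 'w : 1', on_pre='v += w')
syn.connect(n=n_delays)  # n_delays parallel synapses for the single pair
syn.delay = arange(n_delays) * 0.1 * ms

# "choosing" a delay = silencing all synapses except the chosen one
weights = zeros(n_delays)
weights[23] = 1.0
syn.w = weights

mon = StateMonitor(target, 'v', record=0)
run(10 * ms)  # v jumps once, at roughly 1 ms (spike time) + 2.3 ms (delay)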

The suggested option with “several synapses with different delays” is unfortunately not viable for me, because with it I can only simulate delays on a discrete scale, whereas my STDP learning needs a continuous scale.
It turned out that the iterative run(10*ms) workaround works, but so slowly that my planned hyperparameter tuning would take far too much computation time. It looks like I will have to do without the tuning.

Well, technically speaking, Brian only supports discrete delays anyway, since everything is bound to the grid of the simulation time step. It rather depends on what range and resolution of delays you need: e.g., if you want all delays between 0 and 5 ms with the default simulation time step of 0.1 ms, you’d need 50 separate synapses to cover all the possible delays.
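
As a quick check of that count:

from brian2 import *

# number of grid-representable delays between 0 and 5 ms at the default dt
n_delays = int(round(5 * ms / defaultclock.dt))  # -> 50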

Could you maybe share some code how you are doing the iteration? There might be ways to make this run faster.

This is the iterative run(x*ms) part:

taupre = 20 * ms
taupost = taupre

all_losses = []
sub_duration = 10  # in ms
steps = int(duration / sub_duration)  # total duration (in ms), defined earlier
electrodes = 16
for run_step in range(steps):
    current_time = (run_step + 1) * sub_duration
    run(sub_duration * ms, report='text')
    generator_indices = []
    generator_spike_times = []
    for idx in range(electrodes):
        mu_ref = referenced_voltage(G, idx)  # mean electrode voltage referenced to mean voltage of all electrodes
        data = orig_data[current_time, idx]
        loss = mu_ref - data
        abs_loss = abs(loss)
        all_losses.append(abs_loss)
        delay = (taupre + taupost) * (abs_loss / np.amax(all_losses))
        delayed_spike_time = current_time + (delay / ms)  # remove unit
        generator_indices.append(idx)
        generator_spike_times.append(delayed_spike_time)

    supervised_gen.set_spikes(indices=array(generator_indices), times=array(generator_spike_times) * ms)

… with G as my NeuronGroup and supervised_gen as my SpikeGeneratorGroup. Roughly speaking, it is a loss evaluation translated into a time delay for the spike.

Yes, increasing the value of sub_duration would lower the computation time, but it would also result in fewer “spike delay” evaluations (which is not wanted).

Again, thanks for your help!

Ok, this all looks reasonable in general; I’m afraid I don’t see how to achieve a major speed-up. Each call to run adds a fixed overhead (in my test, something on the order of 50 ms), despite all the caching etc. that we are doing. I guess in your case you need thousands of iterations? In the long term, we plan to reorganize things in a way that would make use cases like this faster, but those are really long-term plans.
There’s a small optimization you can do, but it won’t change anything fundamentally: instead of using run, construct a Network object before the loop with net = Network(collect()), and then use net.run(...) in the loop.
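
In sketch form, applied to the loop above:

net = Network(collect())  # gather all Brian objects once, before the loop
for run_step in range(steps):
    net.run(sub_duration * ms, report='text')
    # ... compute the losses and call supervised_gen.set_spikes(...) as before

This avoids the object collection that a bare run() repeats on every call, but as said, each run still carries a fixed overhead.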