I need to train a model on a large number of samples, e.g. MNIST data, and run it in standalone mode. Before running the model, I store the image data in a TimedArray, define the input Poisson spiking neurons accordingly, and proceed to the run, i.e.
ta_img = TimedArray(img_array, dt=simulation_duration)
input_neurons.run_regularly('rate = ta_img(t, i)*Hz', dt=simulation_duration)
run(self.simulation_duration*(idx_end-idx_start))  # idx_start and idx_end are the indices of the first and last sample
device.build(directory=self.standalone_code_name, compile=True, run=True, debug=False)
For example, to present 10K samples for 5 s each, (idx_end-idx_start) = 10000 and simulation_duration = 5 s.
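For readers following along, the TimedArray lookup above can be sketched in plain NumPy: at time t, ta_img(t, i) returns row floor(t/dt) of img_array, so each sample drives the input rates for one simulation_duration window. The array shape and values below are made-up placeholders:

```python
import numpy as np

# Hypothetical numbers matching the example above: 10K samples, 5 s each.
n_samples, n_pixels = 10_000, 784       # e.g. flattened 28x28 MNIST images
simulation_duration = 5.0               # seconds per sample
img_array = np.random.rand(n_samples, n_pixels)  # placeholder rate values

def ta_lookup(t, i):
    """Mimic TimedArray(img_array, dt=simulation_duration): at time t,
    neuron i is driven by row floor(t / dt) of img_array."""
    row = int(t // simulation_duration)
    return img_array[row, i]

# During sample 99 (t in [495 s, 500 s)), neuron 0 sees img_array[99, 0]:
assert ta_lookup(497.0, 0) == img_array[99, 0]
```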
During the run, e.g. for sample 99, if one finds that the output does not meet some condition (e.g. the number of spikes is too small) and wants to rerun sample 99 with a different input spike rate without stopping the entire run, is there a simple way to do it?
This is very related to my comment here: Multiple run in standalone mode - #9 by mstimberg
If you need this kind of logic, then using runtime mode is much more straightforward (you could either stop the simulation or implement the logic in a network_operation). If you want to do it in C++ standalone mode, then the only way I can think of is via C++ code that implements the logic. Instead of rate = ta_img(t, i)*Hz you'd use something like rate = my_func(t, i)*Hz and implement my_func in C++ so that it looks at the recent firing rate and decides whether to present the next stimulus or the same stimulus with a higher firing rate. This solution is not very "elegant", though, because you'd have to access some Brian internals such as the names of arrays in the generated code. If you want to go down that road, have a look at this thread: User-defined functions
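The decision logic described here can be sketched in plain Python, independent of how it would be wired into the generated C++ code. All names below (spike_threshold, rate_boost, run_sample) are hypothetical, and run_sample is a stand-in for one Brian2 run of a single sample:

```python
# Hypothetical sketch of the "retry with a higher rate" logic that a
# C++ my_func would need to implement.
def present_samples(samples, run_sample, spike_threshold=5,
                    rate_boost=1.5, max_retries=3):
    """Present each sample; if too few spikes, retry with a boosted rate."""
    results = []
    for rate in samples:
        attempt, r = 0, rate
        while True:
            n_spikes = run_sample(r)   # stands in for one simulation run
            if n_spikes >= spike_threshold or attempt >= max_retries:
                break
            r *= rate_boost            # same stimulus, higher firing rate
            attempt += 1
        results.append((r, n_spikes))
    return results

# Stub: pretend the number of spikes equals the (truncated) rate.
rates_and_spikes = present_samples([2.0, 10.0], run_sample=lambda r: int(r))
# -> [(6.75, 6), (10.0, 10)]: the weak sample was boosted three times.
```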
Thanks Marcel. One follow-up question. The model is run using the above command, assuming 10 samples with each sample running for 10 s.
If every sample runs only once, then every sample runs for 10 s. In case one sample needs to run, e.g., 5 times, the pre-allocated total run time, i.e. 10*10 s, will not be enough, and I will need 15*10 s to go through all samples. Is there a way to automatically adjust the total run time in this case?
In standalone mode, this isn't possible at the moment, I'm afraid. Your only solution would be to have some extra time at the end of your simulation. If it does not get used fully, you'd simply not present anything and waste some time…
In runtime mode, you could of course chain run calls to only run for the time needed, or you could call stop() from a network_operation.
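The chained-run pattern can be sketched abstractly as follows. Here run_chunk is a hypothetical stand-in for Brian2's run(); in real runtime-mode code the spike count would come from a SpikeMonitor, and the early break is where stop() (or simply not issuing further run calls) would end the simulation:

```python
# Plain-Python sketch of chaining runs in runtime mode: keep running
# short chunks for one sample until enough spikes have accumulated.
def chained_runs(run_chunk, chunk_duration=1.0, spike_target=5,
                 max_chunks=10):
    """run_chunk(duration) stands in for run(); returns spikes per chunk."""
    total_spikes, elapsed = 0, 0.0
    for _ in range(max_chunks):
        total_spikes += run_chunk(chunk_duration)
        elapsed += chunk_duration
        if total_spikes >= spike_target:
            break  # in real code: stop here instead of using a fixed budget
    return total_spikes, elapsed

# Stub: 2 spikes per chunk -> the target of 5 is reached after 3 chunks.
print(chained_runs(lambda d: 2))  # (6, 3.0)
```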
Thanks. Let me give this a try in standalone mode.