Execution speed of the network

OK, I see. This is in general less efficient than having a single long run call (you can often replace the separate runs with TimedArray and run_regularly, for example), but if your individual runs are long (as in your example), it won't make much of a difference. It isn't compatible with standalone mode, though (although a feature in Brian's current development version would allow a variant of this).
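
Just to sketch what I mean (a minimal example with made-up rates and durations, not taken from your code): a TimedArray lets you switch the input within a single run call, e.g. by changing the rate of an input group every second:

from brian2 import *
import numpy as np

# Made-up rates: one value per "stimulus", switching every second
rates = TimedArray(np.array([10, 50, 20]) * Hz, dt=1*second)
# Poisson-like input group whose firing rate follows the TimedArray
G_input = NeuronGroup(1, 'rate = rates(t) : Hz', threshold='rand() < rate*dt')
run(3*second)  # a single run call covers all three stimuli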

I am a bit confused by your code, though. You do not feed the spikes of the image in as events, but rather make the neuron representing each pixel fire continuously for 30 s? And most importantly, you seem to be running one simulation per event (not per image), and these simulations “stack up”? I.e., in the first simulation one neuron is active all the time, in the second simulation two neurons are active, etc.? This seems wrong, no?
If you wanted to actually feed in events at certain times, you could use a SpikeGeneratorGroup.
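For reference, it could look like this (the indices and spike times are made up, just to show the syntax; the group would replace your current input layer):

from brian2 import *

# Made-up events: input neuron 0 spikes at 10 ms and 30 ms, neuron 2 at 20 ms
indices = [0, 2, 0]
times = [10, 20, 30] * ms
G_inputlayer = SpikeGeneratorGroup(3, indices, times)
# G_inputlayer can then be connected to the hidden layer with Synapses,
# just like any other spike source.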

Regarding the synapses, the problem in your code is that

S_inputlayer_hiddenlayer.W = Whid[count_1][count]

will set the value of all weights in each loop iteration. In the end, all weights will therefore be the same. You could fix this with something like S_inputlayer_hiddenlayer.W[-1] = Whid[count_1][count] (which sets only the most recently created weight each time), but for a weight matrix you should really use the approach described in the documentation. I am not 100% sure about the transpose in the last line, but I think the code you posted can be replaced by:

S_inputlayer_hiddenlayer = Synapses(G_inputlayer, G_hiddenlayer, 'W : 1', on_pre='v_post += W')
S_inputlayer_hiddenlayer.connect()  # connect all-to-all
S_inputlayer_hiddenlayer.W[:] = np.array(Whid).T.flatten()  # set weights from the matrix
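
If you want to check the orientation (i.e. whether the transpose is actually needed), you can compare a few entries after setting the weights. Assuming Whid is indexed as Whid[hidden][input] (which is what the transpose implies), something like:

print(S_inputlayer_hiddenlayer.W[0, 1])  # synapse from input neuron 0 to hidden neuron 1
print(Whid[1][0])  # should be the same value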

Setting up the connections this way should be orders of magnitude faster, and it also allows you to use a single Synapses object (which should make the simulation itself much faster as well).

Let me know whether this works for you!