# Description of problem

Hello all,
I am new to both SNNs and the Brian2 simulator, and to neuroscience in general.
I am currently working on an implementation of an electrical (artificial) Morris-Lecar model with a binary STDP mechanism, in order to address a simple computer-vision task that can hopefully be solved with our analog neurons.
My input data were recorded with a DVS spiking camera as videos of shape (#frames=1500, h=240, w=180); each cell contains the number of spikes acquired by the camera at a given location within a given time frame. I downsampled and reduced the data to (100, 8, 6) to save RAM, and normalized it between 0 and 1.
The input to my analog neurons needs to be a voltage, which is then translated into a current. When an excitation is presented, I need the input spikes from that excitation to act as a continuous input rather than as individual spike events, and to have some width: the excitation must last long enough to raise the membrane potential to the switching threshold.

I have read several discussions here, which were rich and useful for avoiding many mistakes. I tried flattening my downsampled array to shape (100, 6*8) and feeding my neurons with a 2D TimedArray, but that is not quite what I need. Perhaps I should use rate coding or temporal coding methods instead, but I don't know how! Should I use a SpikeGeneratorGroup with the original data recorded by the camera, before the spikes are accumulated into counts? Do you have any ideas or suggestions?

# Minimal code to reproduce problem

```python

### downsampling, reshape and reduce

# strided downsampling: keep every 30th pixel in each dimension
# (np.maximum of two identical slices is just the slice itself)
d_data = data[:, :h:30, :ww:30]
print("original shape:", data.shape)
print("Downsampled shape:", d_data.shape)
# original shape: (1500, 180, 240)
# Downsampled shape: (1500, 6, 8)

### normalize

d_data = d_data.astype(float)
nd_data = d_data / 255.0

# Reshape the data to a 2D array where each row represents one frame, for use in the TimedArray

input_data = nd_data.reshape(nd_data.shape[0], -1)
print("input data shape is:", input_data.shape)
# input data shape is: (1500, 48)

### convert the input data into a TimedArray function which has format that can be used as input current in the neuron model

input_array = TimedArray(input_data * mV, dt = 1 * ms)
num_neurons = input_data.shape[1] # equal to the number of pixels in each frame
print(num_neurons)

### Morris-Lecar neuron equations from the electrical model (sketch)

eqs = '''
dV/dt = Ig/Cm : volt
Ig = It + INa - IK + Isyn : amp
# ... further equations ...
It = ... : amp  # equation involving Vg
Vg = Vd - input_array(t, i) : volt
'''

### monitors

defaultclock.dt = 0.001 * ms
run(40 * ms, report='text')
```

Thank you for your help, and sorry if my question is not very clear!

Hi @WajdAli

It is hard to understand your problem without a minimal working example, but I think a TimedArray should indeed work. I prefer using a SpikeGeneratorGroup because it requires less preprocessing: if you can get your event data in the format (timestamp, x, y, polarity), you already have everything you need; you just need to decide what to do with the polarity and map the 2D pixel coordinates to a 1D array of neuron indices.

I also recommend using the software from DV or Prophesee when processing event data like this (downsampling, accumulation, etc.). Doing it manually is a nice exercise, but most of what you need is already implemented there.

Finally, note that your dt is in µs! That will make your simulation slow, and I am not sure you need that much temporal resolution, so downsampling your data in time is a good idea. Then you can increase your dt a bit.
