How can I get the population firing rate from a spiking HH network during simulation?

Hello! As the title suggests, I am trying to get the population firing rate of an HH spiking network and use it as a modulatory input elsewhere during the simulation.

To back up a little: I have a network of pyramidal-interneuron (E-I) populations of HH neurons, connected (both within and between populations) through synapses. The network is theta-modulated, and the theta oscillation is produced by Kuramoto oscillators (modeled as a NeuronGroup, following the information here: 2D neural propagation - Michael S. Clayton).

I am trying to model theta reset, which means I need a way to connect the Kuramoto oscillators to the input group of the network, while the oscillators are also driven by the output firing rate of a spiking neuron group in the network.

I know I can use a monitor to record information from the network during a simulation; however, I don't see how to access that information at runtime. In my own test code, I would run the Euler or RK4 integration loop myself and apply an averaging filter over the past X spikes to get a firing rate from the spike train, but I cannot do that when using Brian's run() function.

*** What I’ve Tried So Far ***
I am currently testing what happens if I create a simple single-neuron group that works as a low-pass filter: it takes spikes as input, and I use an RC-style differential equation to smooth the spikes into a rate. I doubt this is the best I can do, so any suggestions are very much welcome!

Thank you in advance, happy to be part of the community!

P.S. I am (arguably) better at explaining things via images, so here's a snip of my network:

[image: network schematic of the theta-modulated E-I populations and Kuramoto oscillators]


Hi everyone!

This is an update to the previous post, as I have tried a few more things. For the connections going towards the Kuramoto oscillators I did two things: first, I created an intermediate group containing a single neuron that acts as a low-pass filter, connected all the spiking neurons of interest to it via synapses, and got something that looks like a rate (not 100% sure about this step); second, I linked the output "rate" to the R variable of interest in the oscillator group.
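In Brian2 terms, the linking step looks roughly like this (a minimal sketch; the group names, equations, and time constant are placeholders, not my actual model):

from brian2 import *

# A single "filter" neuron whose Y traces the population rate
filter_group = NeuronGroup(1, 'dY/dt = -Y/(10*ms) : Hz')
# Each oscillator reads the same scalar rate through a linked variable
oscillators = NeuronGroup(10, '''dTheta/dt = R : 1
                                 R : Hz (linked)''')
oscillators.R = linked_var(filter_group, 'Y')  # a size-1 source is shared by all targets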

The above should take care of the incoming connections. For the outgoing connections, I am struggling with something that is simple on paper but not obvious in Brian2: I need to generate a single rhythm from the oscillators, which means getting the phases of the individual oscillators and combining them into a single variable, which I will then link to the top-right E group.

Any ideas are more than welcome. If you have any suggestions on either part (incoming or outgoing connections), kindly let me know.

Thank you again!

Hi nvar,
this doesn't address your more specific questions, but I suspect Brian2's network_operation will be useful for flexibly processing signals in the network within a single run.
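For example, something along these lines (a minimal sketch; the group and the recorded quantity are placeholders):

from brian2 import *

G = NeuronGroup(100, 'dv/dt = -v/(10*ms) : 1')
values = []  # plain Python storage, filled while the simulation runs

@network_operation(dt=1*ms)
def process(t):
    values.append(float(mean(G.v)))  # arbitrary Python code runs every step

run(50*ms)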

I’ve also tried using a neuron group as a real-time exponential filter for firing rate and it seemed to work pretty well:

eqs_FiringRateFilter = '''
dY/dt = -Y/tauFr : 1/second
'''
filter_params = {'tauFr': 10*ms}  # filter time constant (example value)
filter = NeuronGroup(1, eqs_FiringRateFilter, namespace=filter_params)
# filter_in is the spiking group whose population rate we want to estimate
filterSynapses = Synapses(filter_in, filter, on_pre='Y_post += 1/tauFr', namespace=filter_params)

but there may be more elegant ways to do this.

-Adam


For the "theta drive" piece, are you looking for a way to average the phase over the population? If you can get the vector of individual phases, it should be doable, i.e.

r e^{i\psi} = \frac{1}{N} \sum_{j=1}^{N} e^{i\theta_j}, \qquad \psi = \angle \left( \frac{1}{N} \sum_{j=1}^{N} e^{i\theta_j} \right)

(not really my area of expertise, I pieced this together from Wikipedia)
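In NumPy this would be something like the following (a sketch, with a made-up phase array):

import numpy as np

theta = np.random.uniform(0, 2*np.pi, 50)  # example phase vector
z = np.mean(np.exp(1j * theta))            # complex order parameter
r, psi = np.abs(z), np.angle(z)            # coherence r and mean phase psi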


I was writing a reply and got interrupted, only to get scooped by @adam :laughing:
He’s completely right: network_operation is your solution if there’s anything that you cannot express with equations (e.g. a somewhat artificial measure of firing rate that just looks at the last x timesteps), and using a filter this way to get a firing rate is exactly what I’d have recommended.

He's also right about extracting the average phase (and overall phase coherence, if needed): you can do this by interpreting each oscillator's phase as a vector and summing these vectors to get the average. We do not support complex numbers in Brian, but you can instead use the Cartesian representation with x and y coordinates. After summing/averaging these vectors, you can transform the result back into an angle with arctan (it would be a bit easier with arctan2, but we don't support this at the moment – we should, though!). See below for an example showing this approach in practice. Let me know if anything is unclear.

from brian2 import *

oscillators = NeuronGroup(5, '''
omega : Hz (constant)
dtheta/dt = omega : 1''')
# Random frequency, random initial phase
oscillators.omega = '100*Hz + rand()*200*Hz'
oscillators.theta = 'rand()*2*pi'

population = NeuronGroup(1, '''
x : 1
y : 1
coherence = sqrt(x**2 + y**2) : 1
phase = arctan(y/x) + int(x<0 and y>0)*pi - int(x<0 and y<0)*pi: 1 
''')
population.x = 1e-9  # avoid division by zero

average = Synapses(oscillators, population,
                   '''
                   x_post = cos(theta_pre)/N_incoming : 1 (summed)
                   y_post = sin(theta_pre)/N_incoming : 1 (summed)
                   ''')
average.connect()

mon_osc = StateMonitor(oscillators, 'theta', record=True)
mon_avg = StateMonitor(population, ['coherence', 'phase'], record=True)

run(100*ms)

fig, axs = plt.subplots(3, 1, sharex=True)
axs[0].plot(mon_osc.t/ms, (mon_osc.theta.T + pi) % (2*pi) - pi)  # wrap to [-π, π]
axs[0].set_ylabel('oscillator phase')
axs[1].plot(mon_avg.t/ms, mon_avg.coherence[0])
axs[1].set_ylabel('phase coherence')
axs[2].plot(mon_avg.t/ms, mon_avg.phase[0])
axs[2].set_ylabel('average phase')
axs[2].set_xlabel('time (ms)')
plt.show()

[figure: oscillator phases, phase coherence, and average phase over time]


Wow thank you both for your input! I have exactly the same setup as @adam (well, the variable names are different, of course), but knowing that someone more experienced has used this already puts me a bit at ease.

@mstimberg I understand your attached code, thank you for sharing! There are a few pieces here and there that I have not seen before (e.g. the (summed) keyword in the equation definitions), but overall it looks much nicer (and neater) than my own implementation.

As for the network_operation "trick" that @mstimberg mentioned, I found out about it just today. My version is very hacky, but it works so far, and I'll leave it here for future reference (or for laughs).

from brian2 import *

# Kuramoto oscillators
kuramoto_eqs_stim = '''
    dTheta/dt = ((omega + (kN * PIF) - I_stim*X*sin(Theta)) * second**-1) : 1
    PIF = .5 * (sin(ThetaPreInput - Theta)) : 1
    Vm = sin(Theta)*mV : volt
    ThetaPreInput : 1
    omega : 1
    kN : 1
    I_stim : amp
    X = pulse_train(t) : amp**-1
'''

# parameters
duration = 5*second
defaultclock.dt = 1*ms

# Inputs setup
dt_stim = 1*ms
I0 = 10*amp
tv = linspace(0, duration/second, int(duration/(dt_stim))+1)
xstim = 1. * logical_and(tv>3, tv<3.1)
pulse_train = TimedArray(xstim*amp**-1, dt=dt_stim)

# Oscillators
seed(42)
N = 50
f0 = 4 # center freq [Hz]
sigma = 0.5 # normal std

net_kur = Network()
G = NeuronGroup(N, kuramoto_eqs_stim, threshold='True', method='euler', name='Kuramoto_N_%d' %N)
G.Theta = '2*pi*rand()' # uniform U~[0, 2π]
G.omega = '2*pi*(f0+sigma*randn())' # normal N~(f0,σ)
G.kN = 10
G.I_stim = I0

# hacks start here
G.namespace['order_param'] = zeros(int(duration/defaultclock.dt), dtype='complex')
G.namespace['cnt'] = 0

# synapses
S = Synapses(G, G, on_pre = '''ThetaPreInput_post = Theta_pre''', method='euler')
S.connect(condition='i!=j')

# monitors
M = StateMonitor(G, ['Theta'], record=True)

@network_operation(dt=1*ms)
def update_active():
    G.namespace['order_param'][G.namespace['cnt']] = 1/G.N * sum(exp(1j*G.Theta)) # order parameter r(t)
    G.namespace['cnt'] += 1
    
# add the above to the network
net_kur.add(G)
net_kur.add(S)
net_kur.add(M)
net_kur.add(update_active)


# run the simulation
net_kur.run(duration, report='text', report_period=10*second, profile=True)
print("Simulation done")

# plot the results
fig, axs = subplots(2,1)
axs[0].plot(M.t/second, mean(sin(M.Theta), axis=0))
axs[0].plot(M.t/second, sin(imag(G.namespace['order_param'])), '--')
axs[1].plot(M.t/second, abs(G.namespace['order_param']), '--', label='N=%d'%N)

# labels
axs[0].set_ylabel("Ensemble Theta Rhythm")
axs[1].set_ylabel("Kuramoto Order Parameter")
axs[1].set_xlabel("Time [s]")
axs[0].set_ylim([-1,1])
axs[1].set_ylim([0,1])
axs[0].axvline(x=3, c="red", linewidth=2, zorder=0)  # stimulation onset at t = 3 s
axs[1].axvline(x=3, c="red", linewidth=2, zorder=0)

# make things pretty
legend()
grid()

# show
show()

I worked around the complex-number restriction by defining the vector of interest as a NumPy array with a complex data type. The vector is kept in the group's namespace, which, if I'm not mistaken, is exposed to the run() workspace at simulation time. It's a very crude way to do things, but it produces an interesting result, shown below.

[figure: ensemble theta rhythm and Kuramoto order parameter, with the stimulation onset marked at t = 3 s]

Thank you both again for your time and detailed explanations! Your advice was extremely useful, and I learned a few new tricks!


Hi @nvar, glad to hear that my example was helpful, and thanks a lot for sharing your code!
There's nothing wrong with your example (from a cursory look, at least). It can also be useful to first use a network_operation approach to test things out, and then convert it into a solution based on equations. The advantage of not using network_operation is that 1) it will usually run faster, and 2) you can make use of the C++ standalone mode, which speeds things up even further, in particular for small networks.
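(For reference, standalone mode just needs a device switch before the network is built; a minimal sketch:)

from brian2 import *

set_device('cpp_standalone')  # generate, compile and run the whole model as C++

# ... define NeuronGroups, Synapses and monitors as usual, then:
# run(duration)  # the run is executed by the compiled binary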
Also note that this is not exactly how the namespace is intended to be used: it is meant for external, scalar-valued constants (and references to functions). In principle, the code generation machinery could e.g. decide to hardcode the value of cnt into the generated code (since it is not supposed to change during the run), so changes to it during the run would be ignored. If you want to store a single, changing value in a NeuronGroup, you could use cnt : integer (shared) as part of the equations definition.
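A quick sketch of what that would look like (the group and its equations are just illustrative):

from brian2 import *

G = NeuronGroup(50, '''dv/dt = -v/(10*ms) : 1
                       cnt : integer (shared)''')

@network_operation(dt=1*ms)
def bump_counter():
    G.cnt += 1  # a single value, shared by the whole group

run(10*ms)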
For order_param, things work in your case, but only because you never use the order_param variable anywhere from within the model. If you’d try to refer to it from equations, etc., things would fail. But since you are only using it in the network_operation, you don’t have to put it into the namespace in the first place. Just doing something along the lines of

order_param = zeros(int(duration/defaultclock.dt), dtype='complex')
# ...
@network_operation(dt=1*ms)
def update_active():
    order_param[...] = ...

would work as well.
Final remark: If your pulse train is only a single pulse, you could also directly write:

'''
...
X = int(t>3*second and t<3.1*second)*amp**-1 : amp**-1
'''

without doing the roundtrip over the TimedArray.


Thank you for the information, I had no idea about the C++ standalone mode (and I am still trying to make it work). However, there is something I do not completely understand. You mentioned that the order_param variable would cause issues if I were to use it anywhere in the model, e.g. in equations. Why would that fail, and in which way? Does this mean that it is not possible to make something like a cyclic buffer that acts as a moving-average filter and use it at runtime?

In a sense, I assumed that namespaces work similarly to C++, and that whatever is in the "run" namespace would be accessible by every group/object. In that light, I figured that accessing the order_param variable would be impossible due to the different namespaces, correct? If so, would it be possible to circumvent this by forcing a variable into the "run" namespace?

All of the above comes from the fact that I was initially trying to create a moving-average filter that translates spikes into rates, so that I can use the rates as an input to the oscillators during the simulation. Obviously my approach has a number of flaws, and I have no idea how to access past spikes in Brian2. In Python, I would use a cyclic buffer holding N samples to calculate the average while the simulation runs. I thought I'd implement the filter using a network_operation, but given your previous reply regarding namespaces, I am unsure whether that is possible. Any advice is welcome.

Regarding your final remark, you are absolutely right; the code above is a simplification of a large-scale spike train, and I didn't go through the cleanup process but rather imported it as-is.

Thank you again for the support and the help!

Hi again. The namespace in Brian is used to resolve anything that is referred to in equations (and other statements like resets, on_pre, …) but is not a variable of the model, i.e. not defined within the equations themselves. These external values are always expected to be constant over the run, and to be scalar values. Within "Brian code" (e.g. equations), there is no syntax to access arrays; you are not allowed to write variable[...]. Of course, internally a lot of things are stored as arrays, but all the indexing is handled internally. This is not much of a constraint in general, since most things for which you need arrays store either one value per neuron (and can therefore be stored as a parameter in a NeuronGroup) or one value per synapse (and can therefore be stored in Synapses). For things like "memory of the past" (such as your average recent activity), one would usually use some kind of continuous trace variable, like the exponential filter we discussed earlier.
Now, some kind of buffer data structure would be handy sometimes, and we would like to support it (see Introduce a 2d array data structure for buffering Ā· Issue #467 Ā· brian-team/brian2 Ā· GitHub). Unfortunately, at the moment this is not possible using the built-in machinery.
Whenever you run into fundamental limitations of Brian's code generation framework, you have two options: 1) use a network_operation and write a Python function that does what you'd like to do (you wouldn't necessarily have to write a buffer data structure yourself; you could, for example, look up the most recent values in a SpikeMonitor), or 2) write a user-defined function in the target language that provides the functionality you are interested in.

Option 1 is the easier and more straightforward one, but it is not compatible with the C++ standalone mode. Option 2 is more complex (and not super well documented), but it can be compatible with standalone mode if you know how to write C++ code and take care to use the array names that are used internally in Brian's generated code. You can find some hints in this discussion: User-defined functions
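As a sketch of option 1 (the group, the window length, and the rate definition are just for illustration):

from brian2 import *
import numpy as np

G = PoissonGroup(100, rates=20*Hz)  # stand-in for the spiking population
spike_mon = SpikeMonitor(G)
window = 10*ms                      # arbitrary averaging window
rates = []

@network_operation(dt=1*ms)
def estimate_rate(t):
    # count the spikes recorded in the last `window` across the population
    n_recent = np.sum(spike_mon.t > t - window)
    rates.append(n_recent / (G.N * window))

run(100*ms)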

So, long story short: you can calculate an average of the spiking activity over the last N time steps in a way that is compatible with C++ standalone mode, but it is quite a bit of work. My recommendation would be to instead use a continuous activity trace like the one we discussed earlier, which is arguably also more "biologically realistic".


Thank you for taking the time to elaborate more on these topics. It makes much more sense now.

I will take your advice and work with what I have right now. Thank you again for taking all this time to explain these topics!
