Fitting multi-population models to experimental firing rates

Hi everyone :blush:,

I wasn’t exactly sure which topic to place this question under, but I am curious whether anyone has used optimization techniques in the following context, or has good recommendations for libraries or other tools that could be employed in this process.

I have a cortical circuit composed of multiple populations of neurons which are recurrently connected. The connections between the different populations, as well as the parameters of the populations themselves, are based on experimental data. In addition, each population receives background input in the form of a Poisson Group connected 1-to-1 (i.e. the Poisson Group has the same number of neurons as the population it connects to), with a particular firing rate (in Hz); the firing rate is distinct for each population. Depending on the firing rate and/or synaptic strength of the background activity, I can obtain a specific spontaneous firing regime across my circuit.

For each population in my circuit, I have a target spontaneous mean firing rate based on experimental recordings. Ideally, I would like to fit the background activity (the firing rates of the Poisson Groups) so that the circuit reproduces the experimentally measured mean firing rates in spontaneous conditions. I have done this manually in the past, as I had a good understanding of how the populations interact and what effect changing one of the background firing rates has on the overall network. However, this does not seem like a sustainable way of working in general, and I am curious whether anyone has used tools to perform such an optimization in the context of (large) network models composed of multiple populations (e.g. of LIF neurons).
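
For concreteness, the background drive for a single population looks roughly like the following (a minimal Brian 2 sketch with a made-up LIF model and placeholder parameters, not my actual circuit):

from brian2 import NeuronGroup, PoissonGroup, Synapses, mV, ms, Hz

N = 1000  # number of neurons in this population (placeholder)
pop = NeuronGroup(N, 'dv/dt = -v / (10*ms) : volt', threshold='v > 15*mV',
                  reset='v = 0*mV', refractory=2*ms, method='exact')
bg_rate = 8*Hz  # distinct for each population; this is what I would like to fit
bg_input = PoissonGroup(N, rates=bg_rate)
bg_syn = Synapses(bg_input, pop, on_pre='v += 0.5*mV')
bg_syn.connect(j='i')  # 1-to-1: background neuron i drives neuron i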

When reading papers describing multi-population networks, in my experience the authors usually gloss over the exact technique they used to optimize the background activity with a phrase such as ‘we optimized the background activity/weights…’, without publishing code or going into much detail on the method used. Perhaps there are papers describing this in detail or making code for such procedures available, but I haven’t really found anything useful yet.

I am aware that I could implement a custom optimization algorithm from scratch and use it to find the right set of parameters. However, this seems like a frequently encountered problem in computational neuroscience, so I would expect there to be established libraries or pipelines for solving it. If anyone has used or can suggest tools that have been or could be applied in this context, I would really appreciate hearing more about them. I also believe that aggregating a list of such tools could be very useful for the community, as I have encountered other researchers dealing with similar problems.

Many thanks in advance.

Best wishes,
Rares


Hi @raresdorcioman . Very good question (and I think this is the right category for it :blush: ). I am aware of a number of tools for parameter fitting in neuroscience, but these are usually oriented towards fitting (biologically detailed) single-cell models (e.g. GitHub - BlueBrain/BluePyOpt: Blue Brain Python Optimisation Library, or, well, https://brian2modelfitting.readthedocs.org). I am not aware of any tool specifically focussing on fitting the parameters of network models. Incidentally, this is an area that @sanjayankur31 recently suggested discussing as part of the OCNS Software Working Group (Home | INCF/OCNS Software WG)!

I’d be happy to be corrected by others, but I think the best approach currently is to use a general-purpose algorithm like differential evolution (e.g. from scipy) and wrap your simulation in a small function, something along the lines of:

import numpy as np

def evaluate_network(background_rates, target_rates):
    # Set up your network with the given background rates
    # ...
    # Simulate the network
    # ...
    # Calculate the mean rate for each population
    mean_rates = ...
    # Return the error, here the mean squared error between target and actual rates
    return np.mean((mean_rates - target_rates)**2)

You can then plug this function into a general-purpose optimizer, e.g. the scipy one I linked earlier:

from scipy.optimize import differential_evolution

bounds = [(0, 50)] * n_populations  # list with (min, max) in Hz for each background rate
result = differential_evolution(evaluate_network, bounds,
                                args=(target_rates,))
best_rates = result.x  # the optimized background rates, from the returned OptimizeResult

Of course there are many options to tweak the algorithm, but this should give you the basic idea. The way I’ve written the simulation function above would not be compatible with Brian’s C++ standalone mode, and might therefore spend more time on the simulation than necessary, although whether this matters depends on the size of your network. In the same vein, this way of writing things could also lead to unnecessary re-compilations, but whether that matters depends on how long each simulation takes. Happy to give more advice on these questions, but that might rather be a question for the Support category :stuck_out_tongue:
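
For example, in runtime mode one way to avoid rebuilding (and re-compiling) the network for every evaluation is to construct it once, store its initial state, and only change the Poisson rates between runs. Here is a rough sketch of such a variant of evaluate_network, assuming a hypothetical setup_network helper that returns the network, its Poisson groups, and one rate monitor per population:

import numpy as np
from brian2 import Hz, second

net, poisson_groups, rate_monitors = setup_network()  # hypothetical helper
net.store('initial')  # snapshot of the freshly initialized state

def evaluate_network(background_rates, target_rates):
    net.restore('initial')  # reset membrane potentials, clocks and monitors
    for group, rate in zip(poisson_groups, background_rates):
        group.rates = rate * Hz  # only the rates change, nothing needs to be re-compiled
    net.run(1 * second)
    mean_rates = np.array([mon.rate[:].mean() / Hz for mon in rate_monitors])
    return np.mean((mean_rates - target_rates)**2)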


Hi @mstimberg,

Thanks a lot for your suggestion. Indeed, for single-neuron models I managed to use brian2modelfitting, which actually worked quite well :slight_smile: Glad to hear that fitting network models is an ongoing discussion.

I will try a general-purpose algorithm in this case and see how it goes. There are 17 parameters to fit in total, and each simulation takes about 5-10 minutes, so it will probably take a while, but it also seems that this is the best option for now. Thank you for the starting point. I will most likely post something in the Support category when things start to break :sweat_smile:

Best wishes,
Rares

Hi @raresdorcioman ,
For many years, I’ve been using the inspyred library, which offers a good set of evolutionary optimization algorithms. In particular, I have been using the genetic algorithm with non-dominated sorting (NSGA2). It works pretty well for a small number of fitness functions (say, 10 or fewer). However, an increase in the number of fitness dimensions makes NSGA2 impractical, mostly because it builds a Pareto front and suffers from “the curse of dimensionality”, i.e. you need too many evaluations to fill the front. For a large fitness space, a genetic algorithm with Krayzman’s adaptive multi-objective optimization (KAMOGA) is therefore a better choice. KAMOGA is implemented as an extension to inspyred.
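
To give an idea of what this looks like in code, here is a rough sketch of how NSGA2 from inspyred could be wired up for this kind of problem (run_circuit and target_rates are hypothetical placeholders for your simulation and experimental targets):

from random import Random
import inspyred

target_rates = ...  # experimental mean rates, one per population (placeholder)

def generate(random, args):
    # one candidate = one set of background rates, in Hz
    return [random.uniform(0.0, 50.0) for _ in range(args['num_params'])]

def evaluate(candidates, args):
    fitness = []
    for c in candidates:
        rates = run_circuit(c)  # hypothetical: simulate and return the mean rates
        errors = [abs(r - t) for r, t in zip(rates, target_rates)]
        fitness.append(inspyred.ec.emo.Pareto(errors))  # one objective per population
    return fitness

prng = Random()
ea = inspyred.ec.emo.NSGA2(prng)
ea.variator = [inspyred.ec.variators.blend_crossover,
               inspyred.ec.variators.gaussian_mutation]
ea.terminator = inspyred.ec.terminators.generation_termination
final_pop = ea.evolve(generator=generate, evaluator=evaluate,
                      pop_size=20, maximize=False,
                      bounder=inspyred.ec.Bounder(0.0, 50.0),
                      max_generations=30, num_params=17)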
