What is generally considered during simulation?

Hello,
I would like to ask about the general protocol for simulations in computational neuroscience that are used to verify biological experiments.

In particular, what should I consider when simulating a population model in order to compare the model with the real world? It would also be fine to recommend books or papers that present a well-performed simulation based on clear assumptions.

Sorry for the informal question; I would appreciate it if you could tell me what is generally considered during simulation to narrow the gap between simulations and the real world.

Thank you so much.

Hi @Bolloknoon. What makes a good model and how one can judge the fit between a model and the real world is indeed a very general and tricky question. I don't think researchers agree on a single answer.

Personally, I have become weary of the approach that tries to put all the available (anatomical, physiological, …) data into a model and then show/verify that it matches biology, e.g. as in the core of the Blue Brain and Human Brain projects. As a somewhat representative example, see the paper by Potjans & Diesmann (2014): this is definitely solid modelling work, based on an impressive and exhaustive search for biological connection parameters in the literature. If you look at the comparison to biological results, however, it is somewhat underwhelming; the main result is that spontaneous firing rates in the different layers are similar. Again, I don't want to criticize this particular paper; it is simply very hard to get your model to reproduce more than very general statistics (say, mean firing rates, firing-rate fluctuations, correlations between neural activity).
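To make the "general statistics" part concrete, here is a minimal sketch in plain NumPy (made-up Poisson spike trains and illustrative statistic choices; this is not how Potjans & Diesmann did their comparison) of the kind of summary comparison one typically runs between a model and recordings:

```python
import numpy as np

def summary_stats(spike_trains, t_total, bin_size=0.05):
    """Mean firing rate, firing-rate fluctuation, and mean pairwise
    correlation of binned spike counts. `spike_trains` is a list of
    spike-time arrays in seconds, one per neuron."""
    bins = np.arange(0.0, t_total + bin_size, bin_size)
    counts = np.array([np.histogram(st, bins)[0] for st in spike_trains])
    rates = counts.sum(axis=1) / t_total        # per-neuron rate (Hz)
    fluct = counts.std(axis=1) / bin_size       # rate fluctuation (Hz)
    corr = np.corrcoef(counts)                  # pairwise count correlations
    iu = np.triu_indices(len(spike_trains), k=1)
    return rates.mean(), fluct.mean(), np.nanmean(corr[iu])

# Hypothetical usage with synthetic spike trains for "model" and "data"
rng = np.random.default_rng(0)
model = [np.sort(rng.uniform(0, 10, rng.poisson(50))) for _ in range(20)]
data  = [np.sort(rng.uniform(0, 10, rng.poisson(45))) for _ in range(20)]
print("model:", summary_stats(model, t_total=10.0))
print("data: ", summary_stats(data,  t_total=10.0))
```

If even a comparison at this coarse level is hard to get right, you can imagine how hard it is to match anything more specific.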
I personally find it more interesting either to create models that try to answer a specific question, e.g. explain some aspect of biology, without necessarily being "realistic" in all other aspects, or to study very simple systems with a well-defined role; as e.g. Eve Marder's work (one of her review papers) shows, you can study and model a small pattern-generating circuit for decades and still not run out of things to explore.

But given that nowadays I mostly work on the tools for modelling and not so much on modelling itself, maybe my opinion shouldn't count that much :blush: Interested to hear what others think!

PS: I moved your question to the #science section, since it isn't really a specific question about Brian itself; we prefer the #support category for questions that have a definitive answer.


May I give a bit of a different perspective on the subject, one that is almost completely opposite to @mstimberg's feeling and to the whole field's tendency? Many of my colleagues will strongly disagree with my opinion below, but … we don't have a dislike button in this friendly and democratic forum :wink:.

Over more than 20 years of modeling neurons, networks, and brain dynamics, I gradually moved from very formal, phenomenological models to very detailed ones with lots of channels, calcium dynamics, and sometimes models of internal neuronal machinery such as diffusion in the endoplasmic reticulum, calcium-induced calcium release, and so on. Why? Because I am sick of guessing and then browsing biological papers hoping to find any justification for my guess. In general, it went like this: if I found something that wasn't in favor of my choice, I ignored it. If I could find something to justify my choice, I showed that and called my model biologically plausible. :face_vomiting:

You may be lucky, and your guess may yield a valid prediction, as in a couple of my papers. Still, in the back of my mind, there is always a deep doubt: "this model is too simple to fully capture the real processes, and you actually don't know how strongly you have fooled yourself."

Well, the philosophy here is quite simple: neuroscience is the first science where we know for __sure__ that the complexity of the studied object is EQUAL to the complexity of the researcher! We study the brain with our brains!
So if someone says, "Aha, I know how the brain works!", this person should have a much, much bigger brain to accumulate all the data and integrate it into a single coherent theory. I think a standard SfN meeting proves the impossibility of this pretty well: in one corner you can find a model of, say, the visual cortex and, in another, a model of a different part of the brain, and both of them ignore the coexistence of the other parts and the fact that all components work together to process information and act in the environment. Let me cite my PhD adviser and amazing experimentalist Lubove Porladchikova, who commented on my proposal of some "phenomena which we should find in cortical networks":

"Of course, you can find it! But you will probably find something opposite too, because you can find there whatever you can imagineā€¦ " ā€¦ the complexity is bigger than our imagination. :exploding_head:

Although I quite often build rate models, I consider them no more than "back-of-the-envelope calculations". I have come to the conclusion that simplification is my main enemy, standing between me and the real brain. So if I cannot accumulate all the data in my brain, why not accumulate it in computational models? Citing Bert Sakmann:

I have all this data – cell types, firing properties, connectivity, dendritic excitability, synaptic dynamics, … But I don't understand it. I need to model it.

My overall personal goal is to look at information processing through the lens of biological neurons and networks. Detailed biophysical models allow us to replicate this lens with great accuracy and, based on the replica, to study how a specific network performs as an information machine. It is hard, but sometimes it yields pretty fruitful results. Here is a recent example.
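For anyone curious what the "information machine" view looks like in practice, here is a minimal, deliberately naive sketch (my own illustration, not the method from the linked paper): a plug-in estimate of the mutual information between a discrete stimulus and a binned spike-count response. Note that plug-in estimators are biased for small samples; this only illustrates the idea.

```python
import numpy as np

def mutual_information(stimuli, responses):
    """Plug-in estimate of I(S;R) in bits from paired discrete samples.
    Biased for small sample sizes; shown only to illustrate the idea."""
    s_vals, s_idx = np.unique(stimuli, return_inverse=True)
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    joint = np.zeros((len(s_vals), len(r_vals)))
    for i, j in zip(s_idx, r_idx):
        joint[i, j] += 1
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)   # marginal P(S)
    pr = joint.sum(axis=0, keepdims=True)   # marginal P(R)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

# Hypothetical usage: stimulus class vs. spike count per trial
rng = np.random.default_rng(1)
stim = rng.integers(0, 4, 1000)          # four stimulus classes
resp = rng.poisson(2 + 3 * stim)         # counts that depend on the stimulus
print(f"I(S;R) ~ {mutual_information(stim, resp):.2f} bits")
```

With a detailed biophysical replica, `resp` would come from the simulated network rather than a toy Poisson process, but the question asked of the data is the same.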
