Excitatory Post Synaptic Membrane Potentials

Description of problem

I am new to Brian 2 and to Python in general, so this may be very basic. I obtained a list of synaptic weights based on the time gaps between pre- and post-synaptic firings from the STDP tutorial in the documentation. I am trying to apply each synaptic weight value from that list to a separate synapse and output the post-synaptic membrane potential after some duration. After every firing, the post-synaptic membrane potential should increase by the weight at the iteration the loop is currently on. Then I want to add the final membrane potential to a list, and iterate through the synaptic weights list until all values have been simulated through a separate neuron.

Minimal code to reproduce problem

Here’s what I have tried so far; there is probably a better way to go about it, but I don’t really know what I’m doing yet.

delta_w_values = [0.00082085, 0.00086726, 0.00091173, 0.00095847, 0.00100761]
# Obtained from an STDP simulation
Final_Membrane_Potentials = []

for del_w in delta_w_values:
    P = NeuronGroup(1, 'dv/dt=(1-v)/(tau) : 1', method='exact', threshold='v > -55', reset='v = -70', refractory=2 * ms)
    Q = NeuronGroup(1, 'dv/dt=(1-v)/(tau) : 1', method='exact', refractory=2 * ms)
    P.v = -70  # Initializing neuron P voltage
    Q.v = -70  # Initializing neuron Q voltage
    S = Synapses(P, Q, on_pre='v += del_w', delay=2 * ms)
    M = StateMonitor(Q, 'v', record=True)
    run(1000 * ms)
    Final_Membrane_Potentials.append(M.v[-1])

print(Final_Membrane_Potentials)
This is the result:

[array([-70. , -69.2935382 , -68.5941058 , …, 1.00324561,
1.00321331, 1.00318134]), array([-70. , -69.2935382 , -68.5941058 , …, 1.00342911,
1.00339499, 1.00336121]), array([-70. , -69.2935382 , -68.5941058 , …, 1.00360494,
1.00356907, 1.00353356]), array([-70. , -69.2935382 , -68.5941058 , …, 1.00378975,
1.00375204, 1.00371471]), array([-70. , -69.2935382 , -68.5941058 , …, 1.00398405,
1.00394441, 1.00390516])]

Why do I get this instead of just one final membrane potential for each delta w value, so five values?

Thanks in advance for the help!

Hi @MasonBruce. Could you give some more details on what your aim is, i.e. what you want to look at? There are a few issues with your model, and I am not sure that the value you want to look at is very meaningful with this model.

Regarding this specific question: when you ask for M.v[-1], you are asking for “all recorded values for the last neuron that you recorded from”. The confusing thing here is that you are only recording from a single neuron in the first place, but the monitor still records a 2-D matrix (since in general you can record from many neurons). In your case, you’d need to access M.v[0, -1] or M.v[0][-1], i.e. the “last value recorded for the first (and only) neuron”. That said, if you are only interested in the last value, then you don’t need a StateMonitor. You can get the final value at the end of the simulation via the NeuronGroup’s v attribute: Q.v[0].
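The indexing pitfall can be illustrated without Brian at all, using a plain NumPy array of the shape a StateMonitor produces (toy values below, not from the actual simulation):

```python
import numpy as np

# A StateMonitor that records a single neuron still stores a 2-D array
# of shape (n_recorded_neurons, n_timesteps).
recorded = np.array([[-70.0, -69.3, -68.6, -68.0]])  # toy trace

print(recorded[-1])      # the full row of the *last recorded neuron* -> an array
print(recorded[0, -1])   # last value of the first (and only) neuron -> -68.0
print(recorded[0][-1])   # equivalent two-step indexing -> -68.0
```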

Regarding your neuron models: there seems to be a mix here between an abstract neuron model where v goes towards the value 1 (according to your equations), and initial values like -70 and -55 for your initialization and threshold, which seem to refer to “realistic” values in millivolt. You should use either one or the other, i.e.

  1. either an abstract model where e.g. “0” means the resting potential of the cell, “1” is the threshold of the cell, and a constant current drives the membrane potential towards a super-threshold value, say, “2”:
P = NeuronGroup(1, 'dv/dt = (2 - v)/tau : 1', threshold='v > 1', reset='v = 0')
  2. or a model that uses biophysical quantities, e.g.
E_L = -70*mV   # leak and reset potential
V_th = -55*mV  # threshold
I_const = 30*mV  # constant input current
P = NeuronGroup(1, 'dv/dt = (E_L - v + I_const)/tau : volt', threshold='v > V_th', reset='v = E_L')
P.v = E_L

Note that the two formulations describe the same model, it’s just a change of variables. You can convert between the two descriptions v_a (abstract model in 1) and v_b (“biological” model in 2) via
v_a = (v_b - E_L)/(V_th - E_L)
(i.e. v_a is shifted so that 0 refers to the leak/resting potential and rescaled so that 1 refers to the distance between rest and threshold).
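As a quick sanity check of that mapping (a throwaway sketch, using the E_L and V_th values from above in millivolt):

```python
# Convert between the "biological" membrane potential v_b and the
# abstract variable v_a = (v_b - E_L) / (V_th - E_L), and back.
E_L = -70.0   # leak/resting potential (mV)
V_th = -55.0  # threshold (mV)

def to_abstract(v_b):
    return (v_b - E_L) / (V_th - E_L)

def to_biological(v_a):
    return E_L + v_a * (V_th - E_L)

print(to_abstract(-70.0))   # rest maps to 0.0
print(to_abstract(-55.0))   # threshold maps to 1.0
print(to_biological(2.0))   # abstract drive target "2" maps to -40.0 mV
```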

Another minor remark: your second neuron (Q) doesn’t spike since it does not have a threshold definition – declaring a refractory period therefore does not have any effect.

Thank you for the help!

" That said, if you are only interested in the last value , then you don’t need a StateMonitor . You can get the final value at the end of the simulation via the NeuronGroup ’s v attribute: Q.v[0] ."

Yes, for this first model I am only interested in the last value, so thank you for explaining that Q.v[0] would work more simply.

Thanks for clarifying the difference between abstract and biophysical models; I will probably implement both separately in the future to see which one is more suitable for my project.

" Another minor remark: your second neuron (Q ) doesn’t spike since it does not have a threshold definition – declaring a refractory period therefore does not have any effect."

And regarding the fact that neuron Q didn’t have a threshold or refractory period: I thought this might be a problem, but I was trying to avoid Q being reset back to -70 mV after every fire, so that I could record the membrane potential’s accumulation of synaptic weights.

For context, I am an undergraduate student and I received a research grant from my school to test a hypothesis I have regarding binaural beats and improvements in memory, attention, and learning in the Brian 2 platform. My hypothesis is that the underlying mechanism for why binaural beat stimulation seems to improve performance on learning and memory assessments is, at least partially, that it stimulates the downstream pre-synaptic neuron to fire and trigger the post-synaptic neuron with a time gap between them commensurate with the frequency of the stimulation, and this promotes spike-timing-dependent plasticity. For example, in my model at least, binaural beats applied at 50 Hz would cause the action potential of Neuron A to trigger an action potential in Neuron B at a rate of 50 Hz (50 times per second, one pre-then-post firing every 20 ms). I thought this significant because a time gap of 20 ms between pre- and post-synaptic firings (in the example) would increase the excitatory post-synaptic potential (EPSP), based on the spike-timing-dependent plasticity graph from the tutorial. So, the fast pre-before-post stimulation would initiate STDP through increased EPSP (measured in volts) and long-term potentiation, leading to better memory. Based on the graph, shorter time gaps would increase synaptic weights (w) more than longer gaps, and post-before-pre firing would decrease the weights and EPSP.

My goal is to illustrate my idea with a simulation here, and then spend the next few months designing other simulations aimed at falsifying my hypothesis. What do you think would be the best way to approach this? I’ve been going through the Brian 2 documentation for a couple of weeks and realized I couldn’t figure out how to execute my idea without always getting errors, so I came here.
My code above aimed at using the synaptic weights array obtained from the STDP tutorial and applying the weights to the post-synaptic neuron, to obtain a final accumulation of synaptic weights upon repeated firings at different time gaps (equivalent to the frequencies applied). Is there an easier way to get the neurons to fire at a desired rate (50 Hz, 30 Hz, etc.), apply the corresponding synaptic weight change per fire from the STDP graph, and then obtain the final accumulation of the added synaptic weights? Or is there another way to measure excitatory post-synaptic membrane potential that I don’t know about? If I used a Poisson neuron and specified the desired frequency, would Brian 2 understand that I want it to fire pre-to-post at that rate, and could I obtain the weights somehow? For this simulation, I merely want to illustrate my idea; later I will work on running more realistic simulations aimed at disproving it.

Thanks again for your help!

Hi again, and sorry for the somewhat late reply. I think I get parts of what you want to do, but others are not yet clear to me. In particular, I am a bit confused about the “accumulation of synaptic weights”. Let’s for a moment assume that the pre and post neurons regularly fire with a certain pre–post gap that increases the weight. If you don’t let the post-synaptic neuron fire due to this synaptic input, then what do you need a simulation for? If all the gaps are identical, and each of the gaps increases the weight by a fixed value, then the total weight change would be number of gaps × weight change (for an additive STDP rule, such as the one in the tutorial). Your current code does not quite measure the EPSP either; it rather measures the membrane potential at some time after many spikes. For a plasticity experiment, what you’d rather do is run whatever protocol induces your plasticity and then have a single spike at a fixed time to measure the EPSP. But as I said earlier, I don’t quite see how this would be an interesting quantity to measure here.
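That back-of-the-envelope calculation can be sketched in a few lines (the numbers are hypothetical illustrations; the exponential window shape and the dApre/taupre values follow the STDP tutorial):

```python
import numpy as np

# With an additive STDP rule and identical pre-before-post gaps, the total
# weight change is just n_pairs * (weight change per pair).
taupre = 20e-3   # STDP time constant in seconds, as in the tutorial
dApre = 0.02     # weight change for a zero pre-post gap, as in the tutorial
gap = 20e-3      # hypothetical 20 ms pre-before-post gap (a 50 Hz beat)

dw_per_pair = dApre * np.exp(-gap / taupre)  # exponential STDP window
n_pairs = 50     # e.g. one pairing every 20 ms over one second
total_dw = n_pairs * dw_per_pair
print(total_dw)  # total weight change, no simulation needed
```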

I’d be happy to help you with more specific details of the simulation, but maybe a few general building blocks to get you started:

  • You can create artificial neurons that spike with a specific rate, by using dx/dt = rate : 1 and threshold='x>1', reset='x=0' as the model (this is mostly equivalent to a more realistic neuron model with a constant input current, but easier to tweak to get the firing rate you want)
  • instead of looping over situations, it is much more efficient to simulate many different situations at once
  • to have a simple measure of synaptic changes without the post-synaptic neuron being influenced by the synaptic input, you can simply remove the effect of the synapse on the post-synaptic neuron, but otherwise keep the synaptic plasticity model.

Here’s a basic simulation like that, with pre–post pairs that fire at 200/250 Hz, 200/220 Hz and 200/210 Hz (i.e. with beats of 50 Hz, 20 Hz and 10 Hz), using the STDP rule from the tutorial to change the weights (but the synapses don’t actually affect the post-synaptic neurons). Pre (blue) and post (orange) spikes are plotted on the left, weight evolution on the right:

from brian2 import *
import matplotlib.pyplot as plt
import numpy as np

neurons = NeuronGroup(6, '''dx/dt = rate : 1
                            rate : 1/second (constant)''',
                      threshold='x>1', reset='x=0', method='euler')
pre = neurons[:3]
post = neurons[3:]
pre.rate = 200/second
post.rate = [250, 220, 210]/second

taupre = taupost = 20*ms
gmax = 1  # weights are allowed to vary between 0 and 1
dApre = .02
dApost = -dApre * taupre / taupost * 1.05
synapses = Synapses(pre, post,
                    '''w : 1
                       dApre/dt = -Apre / taupre : 1 (event-driven)
                       dApost/dt = -Apost / taupost : 1 (event-driven)''',
                    on_pre='''Apre += dApre
                              w = clip(w + Apost, 0, gmax)''',
                    on_post='''Apost += dApost
                               w = clip(w + Apre, 0, gmax)''')
synapses.connect(j='i')  # one-to-one connectivity
synapses.w = 0.5

pre_spikes = SpikeMonitor(pre)
post_spikes = SpikeMonitor(post)
weight_mon = StateMonitor(synapses, 'w', record=True)


run(1*second)  # duration not stated in the original post; one second as an example

fig, axs = plt.subplots(3, 2, sharex=True, figsize=(9, 3), layout='constrained')
for idx in range(3):
    axs[idx, 0].plot(pre_spikes.t[pre_spikes.i == idx]/ms, np.zeros(pre_spikes.count[idx]), '|', color='C0')
    axs[idx, 0].plot(post_spikes.t[post_spikes.i == idx]/ms, np.ones(post_spikes.count[idx]), '|',color='C1')
    axs[idx, 0].set_yticks([])
    axs[idx, 0].spines[['left', 'right', 'top']].set_visible(False)
    axs[idx, 1].plot(weight_mon.t/ms, weight_mon.w[idx])
    axs[idx, 1].set_ylim(0, 1)
axs[-1, 0].set_xlabel('Time (ms)')
axs[-1, 1].set_xlabel('Time (ms)')
plt.show()


Not the most meaningful simulation/plot, but hopefully it gets you going :slight_smile: