Classification based on liquid state machine and ReSuMe learning algorithm

Hello Brian2 users,

My topic is a classification task based on a spiking neural network. My current idea is to build a simple liquid state machine first and combine it with the ReSuMe learning algorithm. My network structure can be divided into an input layer, a liquid layer, an output layer and a teacher layer, where the teacher layer produces the desired output time series. But I’m having trouble creating a synapse from the liquid layer to the output layer that implements the ReSuMe algorithm.

S_liquid_to_output = Synapses(neurons_liquid, neurons_output,
                              """w : 1
                              dApre/dt = -Apre / taupre : 1 (event-driven)
                              dApost/dt = -Apost / taupost : 1 (event-driven)""",
                              on_pre="""ge += w
                                        Apre += dApre
                                        w = clip(w + Apost, 0, gmax)""",
                              on_post="""Apost += dApost
                                         w = clip(w + Apre, 0, gmax)""")

This is a simple synaptic implementation of STDP.

Is there anyone with experience in this field who can answer my question? Thank you so much!!!

I have built a complete liquid state machine network and a function that uses the ReSuMe rule to train the synaptic weights from the liquid layer to the output layer. The detailed code of the function is as follows:

# Function to compute the exponential decay kernel Wd for the ReSuMe learning rule
def resume_kernel_d(sd):
    Ad = 0.1  # Learning rate constant
    taud = 20  # Learning time constant
    if sd > 0:
        return Ad * np.exp(-sd / taud)
    else:
        return 0


# Function to compute the exponential decay kernel Wl for the ReSuMe learning rule
def resume_kernel_l(sl):
    Al = 0.1  # Learning rate constant
    taul = 20  # Learning time constant
    if sl > 0:
        return -Al * np.exp(-sl / taul)
    else:
        return 0


# Function to update synaptic weights from liquid layer to output layer using the ReSuMe learning rule
def resume_output_weights():
    ad = 0.0001  # non-Hebbian weight term (desired spike trains)
    al = 0.0001  # non-Hebbian weight term (actual spike trains)

    # Initialize weight changes
    dw = np.zeros((N_output, N_liquid))

    # Update weights using ReSuMe rule
    for j in range(N_output):  # Loop over output neurons
        Sl = np.array(spikemon_output.spike_trains()[j] / ms)
        print(f"Output Neuron{j} Sl:", Sl)
        Sd = np.unique(desired_spike_trains[j] / ms)
        print(f"Desired Neuron{j} Sd:", Sd)

        for i in range(N_liquid):  # Loop over liquid neurons
            Sin = np.array(spikemon_liquid.spike_trains()[i] / ms)
            dw_tmp = 0

            # Desired output contribution to the weight change
            for td in Sd[:5]:
                dw_tmp += ad
                for sd in Sin - td:
                    dw_tmp += resume_kernel_d(sd)

            # Actual output contribution to the weight change
            for tl in Sl[:5]:
                dw_tmp -= al
                for sl in Sin - tl:
                    dw_tmp += resume_kernel_l(sl)

            # Update the weight change
            dw[j, i] = dw_tmp

    return dw / float(N_liquid)
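For reference, the weight update this function is meant to approximate is, as far as I understand Ponulak's ReSuMe formulation (with $S_d(t)$, $S_l(t)$ and $S_{in}(t)$ the desired, actual and presynaptic input spike trains, $a$ the non-Hebbian term and $W(s)$ the exponential kernel above):

$$\frac{d}{dt}w(t) = \left[S_d(t) - S_l(t)\right]\left[a + \int_0^{\infty} W(s)\, S_{in}(t-s)\, ds\right]$$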

But no matter how I adjust the expected spike trains or other parameters of the network, the synaptic weights do not converge, and the final output spike trains are far from the expected ones.
So if anyone has used similar algorithms or has relevant experience, please contact me. Thank you so much!!!

Hey @jrsydtm

Next time can you please post your entire code? Just like you did with

# Initialize weight changes
dw = np.zeros((N_output, N_liquid))

# Update weights using ReSuMe rule
...

That makes it easier for anyone who’s trying to understand your problem.

Now, AFAIK with ReSuMe you need to update your weights while your simulation is running, right? But in your code you are using a SpikeMonitor, which means you ran your simulation and then used the collected spikes to create dw. If that is the case, I don’t think this is going to work. Maybe you’re using run_regularly or network operations, which would work, but that’s not clear from your code.
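If you want to keep your NumPy update function, a network operation would let you apply it while the simulation is running. This is just a sketch, assuming your liquid-to-output Synapses object is called S_liquid_output and gmax is the weight bound; the 100 ms interval is an arbitrary choice:

from brian2 import network_operation, ms
import numpy as np

@network_operation(dt=100 * ms)  # executed every 100 ms of simulated time
def apply_resume():
    # recompute dw from the spikes collected so far and apply it online
    dw = resume_output_weights()
    S_liquid_output.w[:] = np.clip(S_liquid_output.w[:] + dw.flatten(), 0, gmax)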

I also suggest that you create another state variable to represent the kernel. Calculating the kernel every time step will make your code slower.
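For example, the exponential kernel can live in the synapse equations as a decaying trace that Brian integrates lazily at spike times; a rough sketch (Ain is a name I made up):

syn_eqs = """
w : 1
dAin/dt = -Ain / taud : 1 (event-driven)
"""
on_pre = """
ge_post += w
Ain += 1  # each presynaptic (liquid) spike leaves a trace
"""
# at any later spike, Ain already equals the summed kernel
# exp(-(t - t_pre)/taud) over past presynaptic spikes,
# so you never have to evaluate the exponential yourself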

The tricky part here is that a spike coming from another input (the desired spike) should change the weight between liquid and output. In this case you could use a custom event, so you’d have something like

neu_eqs = '''...
           x : 1
           ...'''
group = NeuronGroup(..., events={'custom_event': 'x > 0'})

Now your synapse can trigger another pathway when you have the event on the postsynaptic neuron

synapse = Synapses(...,
                   on_post={...,
                            'custom_pathway': '...update w...'},
                   on_event={...,
                             'custom_pathway': 'custom_event'})

You just need another synapse group (desired spikes) to set x and possibly reset it to zero at the end of the time step. Looks complicated but I couldn’t think of a simpler way of doing it.
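A sketch of that last part, assuming group is the NeuronGroup from above and teacher_group is the SpikeGeneratorGroup with the desired spikes:

# the teacher synapse only raises the flag x, it transmits no current
S_teacher = Synapses(teacher_group, group, on_pre='x_post = 1')
S_teacher.connect(j='i')  # one teacher neuron per output neuron

# clear the flag at the end of every time step
group.run_regularly('x = 0', when='end')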

Thank you very much for your answer! As you said, my current version of the code does adjust the weights after the simulation runs. The complete code for network construction and training is as follows:

# Function to calculate distance-based connection probability
def distance_probability(pos1, pos2, sigma=0.1):
    distance = np.linalg.norm(pos1 - pos2)
    return np.exp(-(distance**2) / (2 * sigma**2))


# Function to calculate Euclidean distance between two spike trains
def euclidean_distance(spike_train1, spike_train2):
    # Ensure both spike trains have the same length by padding with zeros
    length = max(len(spike_train1), len(spike_train2))
    spike_train1 = np.pad(spike_train1, (0, length - len(spike_train1)), "constant")
    spike_train2 = np.pad(spike_train2, (0, length - len(spike_train2)), "constant")
    return np.sqrt(np.sum((spike_train1 - spike_train2) ** 2))


# Function to save weights
def save_weights():
    return {
        "S_input_liquid": S_input_liquid.w[:].copy(),
        "S_liquid_internal": S_liquid_internal.w[:].copy(),
        "S_liquid_output": S_liquid_output.w[:].copy(),
    }


# Function to load weights
def load_weights(weights):
    S_input_liquid.w[:] = weights["S_input_liquid"]
    S_liquid_internal.w[:] = weights["S_liquid_internal"]
    S_liquid_output.w[:] = weights["S_liquid_output"]


# Plot weight changes over iterations
def plot_weight_changes(weights, title):
    weights = np.array(weights)
    plt.figure(figsize=(12, 6))
    plt.plot(weights)
    plt.title(title)
    plt.xlabel('Iteration')
    plt.ylabel('Weight')
    plt.show()


# Function to compute the exponential decay kernel Wd for the ReSuMe learning rule
def resume_kernel_d(sd):
    Ad = 0.1  # Learning rate constant
    taud = 20  # Learning time constant
    if sd > 0:
        return Ad * np.exp(-sd / taud)
    else:
        return 0


# Function to compute the exponential decay kernel Wl for the ReSuMe learning rule
def resume_kernel_l(sl):
    Al = 0.1  # Learning rate constant
    taul = 20  # Learning time constant
    if sl > 0:
        return -Al * np.exp(-sl / taul)
    else:
        return 0
    

# Function to update synaptic weights from liquid layer to output layer using the ReSuMe learning rule
def resume_output_weights():
    ad = 0.0001  # non-Hebbian weight term (desired spike trains)
    al = 0.0001  # non-Hebbian weight term (actual spike trains)

    # Initialize weight changes
    dw = np.zeros((N_output, N_liquid))

    # Update weights using ReSuMe rule
    for j in range(N_output):  # Loop over output neurons
        Sl = np.array(spikemon_output.spike_trains()[j] / ms)
        print(f"Output Neuron{j} Sl:",Sl)
        Sd = np.unique(desired_spike_trains[j] / ms)
        print(f"Desired Neuron{j} Sd:",Sd)

        for i in range(N_liquid):  # Loop over liquid neurons
            Sin = np.array(spikemon_liquid.spike_trains()[i] / ms)
            dw_tmp = 0

            # desired output contribution to the weight change
            for td in Sd[-5:]:
                dw_tmp += ad
                for sd in Sin - td:
                    dw_tmp += resume_kernel_d(sd)

            # actual output contribution to the weight change
            for tl in Sl[-5:]:
                dw_tmp -= al
                for sl in Sin - tl:
                    dw_tmp += resume_kernel_l(sl)
        
            # Update the weight change
            dw[j, i] = dw_tmp
    
    return dw / float(N_liquid)


# Define basic parameters
np.random.seed(42)
N_input = X_train.shape[1]
N_liquid = 2000
N_output = len(np.unique(y_train))
single_example_time = 1 * second
num_iterations = 1
run_time = num_iterations * single_example_time

# Neuron model parameters
Ee = 0.0 * mV
El = -74.0 * mV
vr = -60.0 * mV
vt = -54.0 * mV
taum = 100.0 * ms
taue = 65.0 * ms

# STDP parameters
taupre = 20.0 * ms
taupost = 20.0 * ms
gmax = 0.01
dApre = 0.01
dApost = -dApre * taupre / taupost * 1.05
dApost *= gmax
dApre *= gmax

# Define neuron equations
eqs_liquid = """
dv/dt = (ge * (Ee - v) + El - v)/taum : volt
dge/dt = -ge/taue : 1
"""
eqs_output = """
dv/dt = (ge * (Ee - v) + El - v)/(50 * ms) : volt
dge/dt = -ge/(1 * ms) : 1
"""

# Create input groups
input_neurons = PoissonGroup(N_input, 0 * Hz)
# Create teacher groups
teacher_neurons = SpikeGeneratorGroup(N_output, [], [] * ms)
# Define liquid neuron groups
liquid_neurons = NeuronGroup(N_liquid, eqs_liquid, threshold="v > vt", reset="v = vr", method="euler")
# Define output neuron groups
output_neurons = NeuronGroup(N_output, eqs_output, threshold="v > vt", reset="v = vr", method="euler")

# Define synapses between input layer and liquid layer
S_input_liquid = Synapses(input_neurons, liquid_neurons, "w : 1", on_pre="ge += w")
S_input_liquid.connect()
S_input_liquid.w = "rand() * gmax"

# Define synapses within the liquid layer based on distance
S_liquid_internal = Synapses(liquid_neurons, liquid_neurons, "w : 1", on_pre="ge_post += w")
# Assign positions to liquid neurons in 3D
positions = np.random.rand(N_liquid, 3)  # Uniformly distributed positions in a 3D space
# Connect based on distance probability in 3D
for i in range(N_liquid):
    for j in range(N_liquid):
        if i != j:
            prob = distance_probability(positions[i], positions[j])
            if np.random.rand() < prob:
                S_liquid_internal.connect(i=i, j=j)
S_liquid_internal.w = "rand() * gmax"

# Define synapses between liquid layer and output layer
S_liquid_output = Synapses(liquid_neurons, output_neurons, "w : 1", on_pre="ge_post += w")
S_liquid_output.connect()
S_liquid_output.w = "rand() * gmax"

# Create monitors
spikemon_input = SpikeMonitor(input_neurons)
spikemon_teacher = SpikeMonitor(teacher_neurons)
spikemon_liquid = SpikeMonitor(liquid_neurons)
spikemon_output = SpikeMonitor(output_neurons)

statemon_liquid = StateMonitor(liquid_neurons, ['v', 'ge'], record=True, dt=1 * ms)
statemon_output = StateMonitor(output_neurons, ['v', 'ge'], record=True, dt=1 * ms)

# statemon_input_liquid = StateMonitor(S_liquid_internal, "w", record=True)
# statemon_liquid_output = StateMonitor(S_liquid_output, "w", record=True)

# Define desired spike trains
desired_spikes = [750, 800, 850, 900, 950] * ms

# Store the initial state of the network
store()

# Lists to store weights over iterations
input_liquid_weights = []
liquid_internal_weights = []
liquid_output_weights = []

# Record the initial weights
input_liquid_weights.append(S_input_liquid.w[:].copy())
liquid_internal_weights.append(S_liquid_internal.w[:].copy())
liquid_output_weights.append(S_liquid_output.w[:].copy())

# Lists to store Euclidean distances for each output neuron
distances = {i: [] for i in range(N_output)}
iterations = []

for iteration in range(num_iterations):
    print(f"Iteration {iteration + 1}/{num_iterations}")
    # Restore the original state of the network
    restore()

    # Assign values to the spike rates of input neurons
    input_spike_rates = X_train.iloc[0, :].values.astype(float) * 108.0
    input_neurons.rates = input_spike_rates * Hz

    # Assign desired spike trains of teacher neurons
    teacher = y_train.iloc[0].astype(int)
    teacher_neurons.set_spikes([teacher] * len(desired_spikes), desired_spikes)
    desired_spike_trains = np.zeros((N_output, len(desired_spikes))) * ms
    desired_spike_trains[teacher,:] = desired_spikes

    # Load weights from previous iteration
    if iteration > 0:
        load_weights(saved_weights)

    # Run the simulation
    run(single_example_time, report="text")

    # Update weights using ReSuMe
    dw = resume_output_weights()
    dw_flattened = dw.flatten()
    # print("dw_flattened shape:", dw_flattened.shape)
    # # Ensure shapes match
    # assert S_liquid_output.w[:].shape == dw_flattened.shape, "Shapes do not match after flattening dw"
    S_liquid_output.w[:] += dw_flattened
    S_liquid_output.w[:] = np.clip(S_liquid_output.w[:], 0, gmax)  # Limit the weights to [0, gmax]

    # Record the weights
    input_liquid_weights.append(S_input_liquid.w[:].copy())
    liquid_internal_weights.append(S_liquid_internal.w[:].copy())
    liquid_output_weights.append(S_liquid_output.w[:].copy())

    # Save current weights
    saved_weights = save_weights()

    # Calculate Euclidean distance
    for k in range(N_output):
        output_spike_trains = np.array(spikemon_output.spike_trains()[k] / ms)  # Remove units
        desired_single_spike_trains = desired_spikes / ms  # Remove units
        
        distance = euclidean_distance(output_spike_trains[:5], desired_single_spike_trains[:5])
        distances[k].append(distance)

        # Debugging info
        print(f"Output neuron{k}:")
        print("output spike trains:", output_spike_trains)
        print("desired spike trains", desired_single_spike_trains)
        print(distance)
    
    iterations.append(iteration)

But my previous version of the code did modify the weights while running the simulation. I also attached this version of the code below.

# Define basic parameters
np.random.seed(42)
N_input = X_train.shape[1]
N_liquid = 2000
N_output = len(np.unique(y_train))
single_example_time = 1 * second

# Neuron model parameters
Ee = 0.0 * mV
El = -74.0 * mV
v_reset = -60.0 * mV
v_thresh = -54.0 * mV
taum = 100.0 * ms
taue = 50.0 * ms

# STDP parameters
taupre = 20.0 * ms
taupost = 20.0 * ms
gmax = 0.01
dApre = 0.01
dApost = -dApre * taupre / taupost * 1.05
dApost *= gmax
dApre *= gmax

# ReSuMe parameters
taul = 20.0 * ms
taud = 20.0 * ms
dAd = 0.01
dAl = 0.01
dAl *= gmax
dAd *= gmax

# Define liquid neuron equations
eqs_neurons_liquid = """
dv/dt = (ge * (Ee - v) + El - v) / taum_l : volt
dge/dt = -ge / taue_l : 1
"""
# Define output neuron equations
eqs_neurons_output = """
dv/dt = (ge * (Ee - v) + El - v) / taum_o : volt
dge/dt = -ge / taue_o : 1
"""

# Create input groups
input_groups = PoissonGroup(N_input, 0 * Hz)
# Create teacher groups
teacher_groups = SpikeGeneratorGroup(N_output, [], [] * ms)
# Define liquid neuron groups
neurons_liquid = NeuronGroup(
    N_liquid,
    eqs_neurons_liquid,
    threshold="v > v_thresh",
    reset="v = v_reset",
    method="euler",
)
# Define output neuron groups
neurons_output = NeuronGroup(
    N_output,
    eqs_neurons_output,
    threshold="v > v_thresh",
    reset="v = v_reset",
    method="euler",
)

# Define synapses between input layer and liquid layer
S_input_to_liquid = Synapses(
    input_groups,
    neurons_liquid,
    """
    w : 1
    dApre/dt = -Apre / taupre : 1 (event-driven)
    dApost/dt = -Apost / taupost : 1 (event-driven)
    """,
    on_pre="""ge += w
              Apre += dApre
              w = clip(w + Apost, 0, gmax)""",
    on_post="""Apost += dApost
               w = clip(w + Apre, 0, gmax)""",
)
S_input_to_liquid.connect()
S_input_to_liquid.w = "rand() * gmax"

# Define synapses within the liquid layer based on distance
S_liquid_internal = Synapses(
    neurons_liquid, neurons_liquid, "w : 1", on_pre="ge_post += w"
)
# Assign positions to liquid neurons in 3D
positions = np.random.rand(N_liquid, 3)  # Uniformly distributed positions in a 3D space
# Connect based on distance probability in 3D
for i in range(N_liquid):
    for j in range(N_liquid):
        if i != j:
            prob = distance_probability(positions[i], positions[j])
            if np.random.rand() < prob:
                S_liquid_internal.connect(i=i, j=j)
S_liquid_internal.w = "rand() * gmax"

# Define synapses between liquid layer and output layer
S_liquid_to_output = Synapses(
    neurons_liquid,
    neurons_output,
    """
    w : 1
    dAl/dt = -Al / taul : 1 (event-driven)
    dAd/dt = -Ad / taud : 1 (event-driven)
    """,
    on_pre="""ge_post += w""",
    on_post="""Al += dAl; w = clip(w - Al, 0, gmax)""",
)
S_liquid_to_output.connect()
S_liquid_to_output.w = "rand() * gmax"

# Define synapses between liquid layer and teacher layer
S_liquid_to_teachers = Synapses(
    neurons_liquid,
    teacher_groups,
    " ",
    on_post="""Ad += dAd; w = clip(w + Ad, 0, gmax)""",
)
S_liquid_to_teachers.connect()
S_liquid_to_teachers.variables.add_reference("w", S_liquid_to_output, "w")
S_liquid_to_teachers.variables.add_reference("Al", S_liquid_to_output, "Al")
S_liquid_to_teachers.variables.add_reference("Ad", S_liquid_to_output, "Ad")

# Create monitors
spikemon_teacher = SpikeMonitor(teacher_groups)
spikemon_output = SpikeMonitor(neurons_output)

state_monitor_output = StateMonitor(neurons_output, True, record=True, dt=1 * second)
state_monitor_liquid_to_output = StateMonitor(
    S_liquid_to_output, "w", record=True, dt=1 * second
)

# Lists to store Euclidean distances for each output neuron
distances = {i: [] for i in range(N_output)}
iterations = []

# Define desired spike trains
desired_spikes = [0, 100, 200, 300, 400, 500, 600, 700, 800, 900] * ms

# Run multiple iterations of training
num_iterations = 1000
input_groups.rates = 0 * Hz
run(0 * second)

for iteration in range(num_iterations):
    input_spike_rates = X_train.iloc[0, :].values.astype(float) * 108.0
    input_groups.rates = input_spike_rates * Hz

    teacher = y_train.iloc[0].astype(int)
    teacher_spikes = desired_spikes + 1200 * iteration * ms
    teacher_groups.set_spikes([teacher] * len(teacher_spikes), teacher_spikes)

    run(single_example_time, report="text")

    # Calculate Euclidean distance
    for k in range(N_output):
        output_spike_trains = np.array(
            spikemon_output.spike_trains()[k] / ms
        )  # Convert to ms and remove units
        desired_spike_trains = desired_spikes / ms  # Convert to ms and remove units
        # Normalize spike times so that each sequence starts from 0
        if len(output_spike_trains) > 0:
            output_spike_trains -= output_spike_trains[0]
        output_spike_trains -= iteration * 1200
        actual_output_spike_trains = output_spike_trains[output_spike_trains > 0]
        print("output spike trains:", actual_output_spike_trains)
        print("desired spike trains", desired_spike_trains)
        distance = euclidean_distance(actual_output_spike_trains, desired_spike_trains)
        distances[k].append(distance)
    iterations.append(iteration)

But no matter which version I use, I can’t get a satisfactory result; the output is far from my expectation. So I would like to know whether the problem lies in my parameter settings or in my code logic.

Thank you very much again for taking the time out of your busy schedule to answer my questions.

Great, that’s way better :slight_smile:

Now I see what you were doing with the monitor. I will go on with the second code you provided, since the first one seems to work only in batches; the second is more appropriate.

You’re potentiating weights from S_liquid_to_teachers. This should be happening with w in S_liquid_to_output. You need to update this weight every time the “desired-spikes” neuron fires, which is another group (teacher_groups in your code). And you don’t need to connect liquid neurons to teacher neurons; teacher neurons are supposed to be your “label”, independent from the liquid state machine. So I don’t understand why you’re doing this. My suggestion is to disconnect liquid and teachers and use custom events. It should be something like

eqs_neurons_output = """
dv/dt = (ge * (Ee - v) + El - v) / taum_o : volt
dge/dt = -ge / taue_o : 1
x : 1
"""
...
neurons_output = NeuronGroup(
    N_output,
    eqs_neurons_output,
    threshold="v > v_thresh",
    reset="v = v_reset",
    method="euler",
    events={'custom_event': 'x > 0'}
)
...
S_liquid_to_output = Synapses(
    neurons_liquid,
    neurons_output,
    """
    w : 1
    dAl/dt = -Al / taul : 1 (event-driven)
    dAd/dt = -Ad / taud : 1 (event-driven)
    """,
    on_pre="""ge_post += w""",
    on_post={'post': 'Al += dAl; w = clip(w - Al, 0, gmax)',
             'custom_pathway':  'Ad += dAd; w = clip(w + Ad, 0, gmax)'},
    on_event={'custom_pathway': 'custom_event'}
)

Note that X_train and y_train are missing, so I couldn’t test your script.

Thank you very much for your suggestion. I need to learn how to use custom events first. I pasted the first row of my data here to form a standalone code file, as shown below.

from brian2 import *
import numpy as np
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

# Initialize the simulation environment
start_scope()


# Function to calculate distance-based connection probability
def distance_probability(pos1, pos2, sigma=0.1):
    distance = np.linalg.norm(pos1 - pos2)
    return np.exp(-(distance**2) / (2 * sigma**2))


# Function to calculate Euclidean distance between two spike trains
def euclidean_distance(spike_train1, spike_train2):
    # Ensure both spike trains have the same length by padding with zeros
    length = max(len(spike_train1), len(spike_train2))
    spike_train1 = np.pad(spike_train1, (0, length - len(spike_train1)), "constant")
    spike_train2 = np.pad(spike_train2, (0, length - len(spike_train2)), "constant")
    return np.sqrt(np.sum((spike_train1 - spike_train2) ** 2))


# Function to save weights
def save_weights():
    return {
        "S_input_liquid": S_input_liquid.w[:].copy(),
        "S_liquid_internal": S_liquid_internal.w[:].copy(),
        "S_liquid_output": S_liquid_output.w[:].copy(),
    }


# Function to load weights
def load_weights(weights):
    S_input_liquid.w[:] = weights["S_input_liquid"]
    S_liquid_internal.w[:] = weights["S_liquid_internal"]
    S_liquid_output.w[:] = weights["S_liquid_output"]


# Plot weight changes over iterations
def plot_weight_changes(weights, title):
    weights = np.array(weights)
    plt.figure(figsize=(12, 6))
    plt.plot(weights)
    plt.title(title)
    plt.xlabel('Iteration')
    plt.ylabel('Weight')
    plt.show()


# Define basic parameters
np.random.seed(42)
N_input = 34
N_liquid = 2000
N_output = 4
single_example_time = 1 * second
num_iterations = 100
run_time = num_iterations * single_example_time

# Neuron model parameters
Ee = 0.0 * mV
El = -74.0 * mV
vr = -60.0 * mV
vt = -54.0 * mV
taum = 100.0 * ms
taue = 75.0 * ms

# STDP parameters
taupre = 20.0 * ms
taupost = 20.0 * ms
gmax = 0.01
dApre = 0.01
dApost = -dApre * taupre / taupost * 1.05
dApost *= gmax
dApre *= gmax

# ReSuMe parameters
taul = 20.0 * ms
taud = 20.0 * ms
dAd = 0.01
dAl = 0.01
dAl *= gmax
dAd *= gmax

# Define neuron equations
eqs_liquid = """
dv/dt = (ge * (Ee - v) + El - v)/taum : volt
dge/dt = -ge/taue : 1
"""
eqs_output = """
dv/dt = (ge * (Ee - v) + El - v)/(50 * ms) : volt
dge/dt = -ge/(1 * ms) : 1
"""

# Create input groups
input_neurons = PoissonGroup(N_input, 0 * Hz)
# Create teacher groups
teacher_neurons = SpikeGeneratorGroup(N_output, [], [] * ms)
# Define liquid neuron groups
liquid_neurons = NeuronGroup(N_liquid, eqs_liquid, threshold="v > vt", reset="v = vr", method="euler")
# Define output neuron groups
output_neurons = NeuronGroup(N_output, eqs_output, threshold="v > vt", reset="v = vr", method="euler")

# Define synapses between input layer and liquid layer
S_input_liquid = Synapses(input_neurons, liquid_neurons,
                          """w : 1
                          dApre/dt = -Apre / taupre : 1 (event-driven)
                          dApost/dt = -Apost / taupost : 1 (event-driven)""",
                          on_pre="""ge += w
                                    Apre += dApre
                                    w = clip(w + Apost, 0, gmax)""",
                          on_post="""Apost += dApost
                                     w = clip(w + Apre, 0, gmax)""",
                         )
S_input_liquid.connect()
S_input_liquid.w = "rand() * gmax"

# Define synapses within the liquid layer based on distance
S_liquid_internal = Synapses(liquid_neurons, liquid_neurons, "w : 1", on_pre="ge_post += w")
# Assign positions to liquid neurons in 3D
positions = np.random.rand(N_liquid, 3)  # Uniformly distributed positions in a 3D space
# Connect based on distance probability in 3D
for i in range(N_liquid):
    for j in range(N_liquid):
        if i != j:
            prob = distance_probability(positions[i], positions[j])
            if np.random.rand() < prob:
                S_liquid_internal.connect(i=i, j=j)
S_liquid_internal.w = "rand() * gmax"

# Define synapses between liquid layer and output layer
S_liquid_output = Synapses(liquid_neurons, output_neurons,
                           """w : 1
                           dAl/dt = -Al / taul : 1 (event-driven)
                           dAd/dt = -Ad / taud : 1 (event-driven)""",
                           on_pre="""ge_post += w""",
                           on_post="""Al += dAl; w = clip(w - Al, 0, gmax)""",
                           )
S_liquid_output.connect()
S_liquid_output.w = "rand() * gmax"

# Define synapses between liquid layer and teacher layer
S_liquid_to_teachers = Synapses(liquid_neurons, teacher_neurons, on_post="""Ad += dAd; w = clip(w + Ad, 0, gmax)""")
S_liquid_to_teachers.connect()
S_liquid_to_teachers.variables.add_reference("w", S_liquid_output, "w")
S_liquid_to_teachers.variables.add_reference("Al", S_liquid_output, "Al")
S_liquid_to_teachers.variables.add_reference("Ad", S_liquid_output, "Ad")


# Create monitors
spikemon_input = SpikeMonitor(input_neurons)
spikemon_teacher = SpikeMonitor(teacher_neurons)
spikemon_liquid = SpikeMonitor(liquid_neurons)
spikemon_output = SpikeMonitor(output_neurons)

statemon_liquid = StateMonitor(liquid_neurons, ['v', 'ge'], record=True, dt=1 * ms)
statemon_output = StateMonitor(output_neurons, ['v', 'ge'], record=True, dt=1 * ms)

statemon_input_liquid = StateMonitor(S_liquid_internal, "w", record=True, dt=1 * ms)
statemon_liquid_output = StateMonitor(S_liquid_output, "w", record=True, dt=1 * ms)

# Define desired spike trains
desired_spikes = [0, 100, 200, 300, 400, 500, 600, 700, 800, 900] * ms

# Store the initial state of the network
store()

# Lists to store weights over iterations
input_liquid_weights = []
liquid_internal_weights = []
liquid_output_weights = []

# Record the initial weights
input_liquid_weights.append(S_input_liquid.w[:].copy())
liquid_internal_weights.append(S_liquid_internal.w[:].copy())
liquid_output_weights.append(S_liquid_output.w[:].copy())

# Lists to store Euclidean distances for each output neuron
distances = {i: [] for i in range(N_output)}
iterations = []

for iteration in range(num_iterations):
    print(f"Iteration {iteration + 1}/{num_iterations}")
    # Restore the original state of the network
    restore()

    # Assign values to the spike rates of input neurons
    input_spike_rates = [
    0.545045, 0.418259, 0.529397, 0.4412, 0.235756, 0.593037, 0.3422,
    0.243706, 0.25275, 0.409673, 0.249268, 0.605933, 0.400799, 0.452672,
    0.468278, 0.437417, 0.592432, 0.5261, 0.549521, 0.445755, 0.594073,
    0.568477, 0.386146, 0.604974, 0.1911, 0.208305, 0.192839, 0.19107,
    0.190039, 0.204621, 0.155896, 0.280053, 0.353998, 0.558399
    ]
    # Scale each element in the list by 108.0
    scaled_input_spike_rates = [rate * 108.0 for rate in input_spike_rates]
    input_neurons.rates = scaled_input_spike_rates * Hz

    # Assign desired spike trains of teacher neurons
    teacher = 0
    teacher_spikes = desired_spikes
    teacher_neurons.set_spikes([teacher] * len(teacher_spikes), teacher_spikes)

    # Load weights from previous iteration
    if iteration > 0:
        load_weights(saved_weights)

    # Run the simulation
    run(single_example_time, report="text")

    # Record the weights
    input_liquid_weights.append(S_input_liquid.w[:].copy())
    liquid_internal_weights.append(S_liquid_internal.w[:].copy())
    liquid_output_weights.append(S_liquid_output.w[:].copy())

    # Save current weights
    saved_weights = save_weights()

    # Calculate Euclidean distance
    for k in range(N_output):
        output_spike_trains = np.array(spikemon_output.spike_trains()[k] / ms)  # Remove units
        desired_single_spike_trains = desired_spikes / ms  # Remove units
        
        distance = euclidean_distance(output_spike_trains[:5], desired_single_spike_trains[:5])
        distances[k].append(distance)

        # Debugging info
        print(f"Output neuron{k}:")
        print("output spike trains:", output_spike_trains)
        print("desired spike trains", desired_single_spike_trains)
        print(distance)
    
    iterations.append(iteration)

# Plot voltage change of liquid neurons
figure()
for i in range(N_liquid):
    plot(statemon_liquid.t / ms, statemon_liquid.v[i], label=f"Neuron {i}")
xlabel("Time (ms)")
ylabel("Voltage (mV)")
title("Voltage Change of Liquid Neuron")
legend()
show()
# Plot ge change of liquid neurons
figure()
for i in range(N_liquid):
    plot(statemon_liquid.t / ms, statemon_liquid.ge[i], label=f"Neuron {i}")
xlabel("Time (ms)")
ylabel("ge")
title("ge of Liquid Neuron")
legend()
show()
# Plot voltage change of output neurons
figure()
for i in range(N_output):
    plot(statemon_output.t / ms, statemon_output.v[i], label=f"Neuron {i}")
xlabel("Time (ms)")
ylabel("Voltage (mV)")
title("Voltage Change of Output Neuron")
legend()
show()
# Plot ge change of output neurons
figure()
for i in range(N_output):
    plot(statemon_output.t / ms, statemon_output.ge[i], label=f"Neuron {i}")
xlabel("Time (ms)")
ylabel("ge")
title("ge of Output Neuron")
legend()
show()

# Plot synaptic connectivity and weights
fig, axs = plt.subplots(6, 1, figsize=(12, 24))
axs[0].plot(S_input_liquid.w, ".k")
axs[0].set_ylabel("Weight")
axs[0].set_xlabel("Synapse index")
axs[0].set_title("Synaptic Connectivity: Input to Liquid Layer")
axs[1].hist(S_input_liquid.w, 20)
axs[1].set_xlabel("Weight")
axs[1].set_title("Weight Distribution: Input to Liquid Layer")
axs[2].plot(S_liquid_internal.w, ".k")
axs[2].set_ylabel("Weight")
axs[2].set_xlabel("Synapse index")
axs[2].set_title("Synaptic Connectivity: Liquid Internal Layer")
axs[3].hist(S_liquid_internal.w, 20)
axs[3].set_xlabel("Weight")
axs[3].set_title("Weight Distribution: Liquid Internal Layer")
axs[4].plot(S_liquid_output.w, ".k")
axs[4].set_ylabel("Weight")
axs[4].set_xlabel("Synapse index")
axs[4].set_title("Synaptic Connectivity: Liquid to Output Layer")
axs[5].hist(S_liquid_output.w, 20)
axs[5].set_xlabel("Weight")
axs[5].set_title("Weight Distribution: Liquid to Output Layer")
plt.tight_layout()
plt.show()

# Plot the weight changes
plot_weight_changes(input_liquid_weights, "Input to Liquid Layer Weights Over Iterations")
plot_weight_changes(liquid_output_weights, "Liquid to Output Layer Weights Over Iterations")

# Plot Euclidean distances over iterations
figure()
for k in range(N_output):
    plot(iterations, distances[k], label=f"Neuron {k}")
xlabel("Iteration")
ylabel("Euclidean Distance")
title("Euclidean Distance Between Output and Teacher Spike Trains Over Iterations")
legend()
show()

I chose to connect synapses between the teacher layer and the liquid layer so that the synaptic weights can be modified whenever a teacher neuron fires. It is worth noting that these synapses do not transmit ge in either direction. This is how I want to implement the logic of the ReSuMe learning rule.