Runtime error in my StateMonitor with brian2cuda

Description of problem

I am trying to run brian2cuda on my neuron model, but I am getting a parallelisation error coupled with an error in my state monitors. The error doesn't seem to be caused by a compatibility problem, since a simpler demo file works fine, and it isn't inherent to the code either, because without brian2cuda the code compiles and runs to completion. I was wondering if I could get some advice as to what could be going wrong in my code. Thank you in advance for your help.

Minimal code to reproduce problem

This is my brian2cuda setup:

```python
print(brian2cuda.__version__)
print(brian2cuda.cuda_prefs)

set_device("cuda_standalone")
prefs.devices.cuda_standalone.cuda_backend.cuda_path = 'micromamba/envs/cuda_env'
```

This is the code that contains the error in question:

```python
def _create_State_Monitors(num_group):
  stateMonitors = []
  for i in num_group:
    state = StateMonitor(i, 'v', record = True)
    stateMonitors.append(state)
  return stateMonitors
```

What you have already tried

I have tried running the code without brian2cuda; the program doesn't have this error and runs to completion. I also ran a simplified program with just a simple neuron model, and it compiled and ran as well. Here is the simplified code:

```python
import brian2cuda
import brian2
from brian2 import *

set_device('cuda_standalone')
prefs.devices.cuda_standalone.cuda_backend.cuda_path = '/home/stefano/micromamba/envs/cuda_env'

N = 1000
tau = 10*ms
eqs = '''
dv/dt = (v0 - v) / tau : volt (unless refractory)
v0 : volt
'''
group = NeuronGroup(N, eqs, threshold='v > 10*mV', reset='v = 0*mV',
                    refractory=5*ms, method='exact')

run(50*ms)
```

Full traceback of error (if relevant)

```
Traceback (most recent call last):
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2cuda/cuda_generator.py", line 403, in translate_one_statement_sequence
    raise ParallelisationError()
brian2cuda.cuda_generator.ParallelisationError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/core/network.py", line 1003, in before_run
    obj.before_run(run_namespace)
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/groups/group.py", line 1266, in before_run
    self.create_code_objects(run_namespace)
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/groups/group.py", line 1259, in create_code_objects
    code_object = self.create_default_code_object(run_namespace)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/groups/group.py", line 1240, in create_default_code_object
    self.codeobj = create_runner_codeobj(
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/codegen/codeobject.py", line 484, in create_runner_codeobj
    return device.code_object(
           ^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2cuda/device.py", line 423, in code_object
    codeobj = super(CUDAStandaloneDevice, self).code_object(owner, name, abstract_code, variables,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/devices/cpp_standalone/device.py", line 704, in code_object
    codeobj = super().code_object(
              ^^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/devices/device.py", line 324, in code_object
    scalar_code, vector_code, kwds = generator.translate(
                                     ^^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/codegen/generators/base.py", line 303, in translate
    translated = self.translate_statement_sequence(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/codegen/generators/base.py", line 146, in translate_statement_sequence
    vector_code[name] = self.translate_one_statement_sequence(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2cuda/cuda_generator.py", line 412, in translate_one_statement_sequence
    lines += self.translate_to_read_arrays(read, write, indices)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2cuda/cuda_generator.py", line 349, in translate_to_read_arrays
    line = line + self.get_array_name(var) + '[' + index_var + '];'
                  ^^^^^^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2cuda/cuda_generator.py", line 277, in get_array_name
    return device.get_array_name(var, access_data=True, prefix=prefix)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2cuda/device.py", line 182, in get_array_name
    array_name = self.arrays[var]
                 ~~~~~~~~~~~^^^^^
KeyError: <ArrayVariable(dimensions=metre ** 2 * kilogram * second ** -3 * amp ** -1, dtype=float64, scalar=False, constant=False, read_only=False)>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "Kopsich_Neuron_Model-1/Scripts/FullyAbstractedKopsichModel.py", line 44, in <module>
    net.run(runtime, report = 'text')
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/core/base.py", line 344, in device_override_decorated_function
    return getattr(curdev, name)(*args, **kwds)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2cuda/device.py", line 1546, in network_run
    net.before_run(namespace)
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/core/base.py", line 346, in device_override_decorated_function
    return func(*args, **kwds)
           ^^^^^^^^^^^^^^^^^^^
  File "micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/core/network.py", line 1005, in before_run
    raise BrianObjectException(
brian2.core.base.BrianObjectException: Error encountered with object named 'statemonitor'.
Object was created here (most recent call only, full details in debug log):
 line 6, in _create_State_Monitors
    state = StateMonitor(i, 'v', record = True)
```

Hi @spoma33, thank you for your report. Unfortunately, I'm having a hard time understanding what happened here – I'd say it's most likely a bug in brian2cuda! From the error message, it seems to have trouble finding the internal record of the v variable, but I don't quite see why. I don't think I can debug this further without code that actually reproduces the problem. I don't think the _create_State_Monitors function is to blame; at least when I use it in an extended version of your simple example, everything seems to work fine. You didn't mention whether the groups you record from are actual groups or subgroups, so I tried both:

```python
set_device('cuda_standalone')

N = 1000
tau = 10*ms
eqs = '''
dv/dt = (v0 - v) / tau : volt (unless refractory)
v0 : volt
'''
group1 = NeuronGroup(N, eqs, threshold='v > 10*mV', reset='v = 0*mV',
                     refractory=5*ms, method='exact')
group2 = NeuronGroup(N, eqs, threshold='v > 10*mV', reset='v = 0*mV',
                     refractory=5*ms, method='exact')

net = Network(group1, group2)
net.add(_create_State_Monitors([group1, group2]))
net.add(_create_State_Monitors([group1[:N//2], group1[N//2:]]))
net.run(50*ms)
```

This runs without any error, so I don’t know what I should do next to try to reproduce the issue.

Two recommendations for future reports of this kind:

  1. It was a very good reflex to print the brian2cuda version and the preferences, but it would be helpful to also include the output :wink: Regarding the preferences, they are unfortunately not that easy to print (you are printing the module that defines them, not the actual preferences) – but if you are only changing the cuda_path, this should not be related to this issue.
  2. When you include code, please wrap it in triple backticks as in the following example; otherwise it is hard to read and to copy & paste:
```
# A comment
print("Python code")
```
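As a side note on the first point: to my knowledge, preferences can also be written down in a plain-text preference file instead of being set in the script. A minimal sketch, assuming Brian 2's standard preference-file format and that a file named `brian_preferences` in the working directory is picked up:

```
# brian_preferences -- assumed file name and location (working directory)
devices.cuda_standalone.cuda_backend.cuda_path = '/home/stefano/micromamba/envs/cuda_env'
```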

Thanks :pray: !

Oh okay, I will keep that in mind when I ask again. If I have a large block of code, should I wrap smaller subsets of up to a certain number of lines in backticks, or should I just wrap the whole thing once?

Also, do you know of any websites where I could find out more about the interaction between brian2cuda and brian2, to see if there is something wrong with my code's setup for brian2cuda?

It doesn't matter much, but I'd split it up into smaller chunks if you want to "walk us through" the code by commenting on each chunk, and use one big chunk otherwise. If the code gets too big, it will be put into a scrollable box automatically, so don't worry about pasting a long source file – it will not take up the whole screen. If you want to get fancy, you can also use something like

<details>
<summary>The code</summary>

```
print("my code")
```

</details>

which gets rendered as

The code
print("my code")

(Note that the empty line after `</summary>` is necessary for the syntax highlighting to work – HTML in Markdown is sometimes a bit fiddly.)

I don’t know of any other place – this very discussion forum is all we got, I’m afraid. Please note that brian2cuda is not as stable and widely used as the rest of Brian, this is why we still consider it an “alpha” version.

Okay, I was able to create a simplified model that produces an error similar to the original one.

My current brian2 version is 2.7.1, brian2cuda is 1.0a6, gcc is 13.3, g++ is 13.3, and nvcc is 11.6

The error message is:

```
INFO     CUDA installation directory given via preference prefs.devices.cuda_standalone.cuda_backend.cuda_path=/home/stefano/micromamba/envs/cuda_env [brian2cuda.utils.gputools]
ERROR    Brian 2 encountered an unexpected error. If you think this is a bug in Brian 2, please report this issue either to the discourse forum at http://brian.discourse.group/, or to the issue tracker at https://github.com/brian-team/brian2/issues. Please include this file with debug information in your report: /tmp/brian_debug_9q05_vrm.log Additionally, you can also include a copy of the script that was run, available at: /tmp/brian_script_v4ubv1xz.py You can also include a copy of the redirected std stream outputs, available at '/tmp/brian_stdout_tl7uvxm1.log' and '/tmp/brian_stderr_nezqgstg.log'. Thanks! [brian2]
Traceback (most recent call last):
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/core/network.py", line 1003, in before_run
    obj.before_run(run_namespace)
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/synapses/synapses.py", line 174, in before_run
    super().before_run(run_namespace)
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/groups/group.py", line 1266, in before_run
    self.create_code_objects(run_namespace)
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/groups/group.py", line 1259, in create_code_objects
    code_object = self.create_default_code_object(run_namespace)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/groups/group.py", line 1240, in create_default_code_object
    self.codeobj = create_runner_codeobj(
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/codegen/codeobject.py", line 484, in create_runner_codeobj
    return device.code_object(
           ^^^^^^^^^^^^^^^^^^^
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2cuda/device.py", line 423, in code_object
    codeobj = super(CUDAStandaloneDevice, self).code_object(owner, name, abstract_code, variables,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/devices/cpp_standalone/device.py", line 704, in code_object
    codeobj = super().code_object(
              ^^^^^^^^^^^^^^^^^^^^
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/devices/device.py", line 324, in code_object
    scalar_code, vector_code, kwds = generator.translate(
                                     ^^^^^^^^^^^^^^^^^^^^
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/codegen/generators/base.py", line 303, in translate
    translated = self.translate_statement_sequence(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/codegen/generators/base.py", line 150, in translate_statement_sequence
    kwds = self.determine_keywords()
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2cuda/cuda_generator.py", line 723, in determine_keywords
    array_name = device.get_array_name(var)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2cuda/device.py", line 182, in get_array_name
    array_name = self.arrays[var]
                 ~~~~~~~~~~~^^^^^
KeyError: <ArrayVariable(dimensions=amp, dtype=float64, scalar=False, constant=False, read_only=False)>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/stefano/Kopsich_Neuron_Model-1/Scripts/brian2cudaTestRun.py", line 27, in <module>
    net.run(50*ms)
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/core/base.py", line 344, in device_override_decorated_function
    return getattr(curdev, name)(*args, **kwds)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2cuda/device.py", line 1546, in network_run
    net.before_run(namespace)
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/core/base.py", line 346, in device_override_decorated_function
    return func(*args, **kwds)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/stefano/micromamba/envs/cuda_env/lib/python3.12/site-packages/brian2/core/network.py", line 1005, in before_run
    raise BrianObjectException(
brian2.core.base.BrianObjectException: Error encountered with object named 'AxoAxonic_Pyramidal_synapse_summed_variable_Isyn_AxoAxonic_post'.
Object was created here (most recent call only, full details in debug log):
  File '/home/stefano/Kopsich_Neuron_Model-1/Scripts/CreateSynapse.py', line 104, in group_synapse
    s = Synapses(ng_pre, ng_post, model = model, method = 'euler',

An error occurred when preparing an object. (See above for original error message and traceback.)
```

The main file's code:

```python
import create_Neurons
from SpikeMonitorer import plot_SpikeMonitors
import CreateSynapse
from brian2 import *
from create_Neurons import pop_names
import brian2cuda
import brian2

print(brian2cuda.__version__)
print(brian2.__version__)
set_device('cuda_standalone')
prefs.devices.cuda_standalone.cuda_backend.cuda_path = '/home/stefano/micromamba/envs/cuda_env'

net = Network()

num_synapse, num_group = CreateSynapse.group_synapse()

net.add(num_group)
net.add(num_synapse)

net.run(50*ms)
```
The synapse code
from brian2 import *
from create_Neurons import _create_Neurons, getNeuronParams
import numpy as np
pop_names = ["Pyramidal","AxoAxonic","Basket","BasketCCK","Bistratified","Ivy","MFAOrden","QuadDLM","Granule"]
num_group = _create_Neurons()
neuron_params = getNeuronParams()
synaptic_params = {
    (num_group[0], num_group[0]) : {"wl" : 0.0, "wm" :  0.55, "wu" :  1.55, "g": 0.3, "ts": 10.22 * ms, "tau_x" : 318.51 * ms, "tau_u": 21.45 * ms, "U": 0.28, "p" : 0.025, "delayer" : 2, "tau_uInv" : (1000/21.45) * Hz, "tau_xInv" : (1000/318.51) * Hz, "tsInv" : (1000/10.22) * Hz},
    (num_group[0], num_group[1]) : {"wl" : 0.0, "wm" :  0.7, "wu" :  1.7, "g": 0.65, "ts" : 4.92 * ms, "tau_x": 630.73 * ms, "tau_u": 26.26 * ms, "U": 0.26, "p" : 0.015, "delayer" : 2, "tau_uInv" : (1000/26.26) * Hz, "tau_xInv" : (1000/630.73) * Hz, "tsInv" : (1000/4.92) * Hz},
    (num_group[0], num_group[2]) : {"wl" : 0.0, "wm" :  1.45, "wu" :  2.45, "g": 1.7, "ts": 3.97 * ms, "tau_x": 691.42 * ms, "tau_u": 21.16 * ms, "U": 0.12, "p" : 0.020, "delayer" : 2, "tau_uInv" : (1000/21.16) * Hz, "tau_xInv" : (1000/691.42) * Hz, "tsInv" : (1000/3.97) * Hz},
    (num_group[0], num_group[3]) : {"wl" : 0.0, "wm" :  1.0, "wu" :  2.0, "g" :  0.85, "ts" : 4.29 * ms, "tau_x" : 530.40 * ms, "tau_u" : 22.45 * ms, "U" : 0.2, "p" : 0.017, "delayer" : 2, "tau_uInv" : (1000/22.45) * Hz, "tau_xInv" : (1000/530.40) * Hz, "tsInv" : (1000/4.29) * Hz},
    (num_group[0], num_group[4]) : {"wl" : 0.0, "wm" :  0.7, "wu" :  1.7, "g" :  0.62, "ts" : 5.37 * ms, "tau_x" : 569.15 * ms, "tau_u" : 23.85 * ms, "U" : 0.26, "p" : 0.016, "delayer" : 2, "tau_uInv" : (1000/23.85) * Hz, "tau_xInv" : (1000/569.15) * Hz, "tsInv" : (1000/5.37) * Hz},
    (num_group[0], num_group[5]) : {"wl" : 0.0, "wm" :  1.35, "wu" :  2.35, "g" :  1.77, "ts" : 5.67 * ms, "tau_x" : 552.27 * ms, "tau_u" : 26.73 * ms, "U" : 0.12, "p" : 0.025, "delayer" : 2, "tau_uInv" : (1000/26.73) * Hz, "tau_xInv" : (1000/552.27) * Hz, "tsInv" : (1000/5.67) * Hz},
    (num_group[0], num_group[6]) : {"wl" : 0.0, "wm" :  1.25, "wu" :  2.25, "g" :  1.10, "ts" : 5.95 * ms, "tau_x" : 444.99 * ms, "tau_u" : 29.01 * ms, "U" : 0.15, "p" : 0.021, "delayer" : 2, "tau_uInv" : (1000/29.01) * Hz, "tau_xInv" : (1000/444.99) * Hz, "tsInv" : (1000/5.95) * Hz},
    (num_group[0], num_group[7]) : {"wl" : 0.0, "wm" :  1.25, "wu" :  2.25, "g" :  1.09, "ts" : 5.82 * ms, "tau_x" : 453.29 * ms, "tau_u" : 27.16 * ms, "U" : 0.15, "p" : 0.013, "delayer" : 2, "tau_uInv" : (1000/27.16) * Hz, "tau_xInv" : (1000/453.29) * Hz, "tsInv" : (1000/5.82) * Hz},
    (num_group[1], num_group[0]) : {"wl" : 0.0,"wm" : 1.45, "wu" :  2.45, "g" :  2.71, "ts" : 7.62 * ms, "tau_x" : 361.03 * ms, "tau_u" : 12.93 * ms, "U" : 0.13, "p" : 0.15, "delayer" : 1, "tau_uInv" : (1000/12.93) * Hz, "tau_xInv" : (1000/361.03) * Hz, "tsInv" : (1000/7.62) * Hz},
    (num_group[2], num_group[0]) : {"wl" : 0.0, "wm" :  1.45, "wu" :  2.45, "g" :  2.28, "ts" : 7.64 * ms, "tau_x" : 384.34 * ms, "tau_u" : 16.74 * ms, "U" : 0.13, "p" : 0.15, "delayer" : 1, "tau_uInv" : (1000/16.74) * Hz, "tau_xInv" : (1000/384.34) * Hz, "tsInv" : (1000/7.64) * Hz},
    (num_group[2], num_group[1]) : {"wl" : 0.0, "wm" :  1.3, "wu" :  2.3, "g" :  2.63, "ts" : 3.80 * ms, "tau_x" : 725.03 * ms, "tau_u" : 23.21 * ms, "U" : 0.19, "p" : 0.025, "delayer" : 1, "tau_uInv" : (1000/23.21) * Hz, "tau_xInv" : (1000/752.03) * Hz, "tsInv" : (1000/3.80) * Hz},
    (num_group[2], num_group[2]) : {"wl" : 0.0, "wm" :  0.55, "wu" :  1.55, "g" :  1.80, "ts" : 3.01 * ms, "tau_x" : 689.51 * ms, "tau_u" : 11.19 * ms, "U" : 0.39, "p" : 0.005, "delayer" : 1, "tau_uInv" : (1000/11.19) * Hz, "tau_xInv" : (1000/689.51) * Hz, "tsInv" : (1000/3.01) * Hz},
    (num_group[2], num_group[3]) : {"wl" : 0.0, "wm" :  1.0, "wu" :  2.0, "g" :  1.69, "ts" : 4.21 * ms, "tau_x" : 636.76 * ms, "tau_u" : 16.72 * ms, "U" : 0.24, "p" : 0.005, "delayer" : 1, "tau_uInv" : (1000/16.72) * Hz, "tau_xInv" : (1000/636.76) * Hz, "tsInv" : (1000/4.21) * Hz},
    (num_group[2], num_group[4]) : {"wl" : 0.0, "wm" :  1.3, "wu" :  2.3, "g" :  2.30, "ts" : 4.72 * ms, "tau_x" : 680.33 * ms, "tau_u" : 16.72 * ms, "U" : 0.18, "p" : 0.025, "delayer" : 1, "tau_uInv" : (1000/16.72) * Hz, "tau_xInv" : (1000/680.33) * Hz, "tsInv" : (1000/4.72) * Hz},
    (num_group[2], num_group[6]) : {"wl" : 0.0, "wm" :  0.75, "wu" :  1.75, "g" :  1.36, "ts" : 5.23 * ms, "tau_x" : 581.94 * ms, "tau_u" : 19.60 * ms, "U" : 0.30, "p" : 0.005, "delayer" : 1, "tau_uInv" : (1000/19.60) * Hz, "tau_xInv" : (1000/581.94) * Hz, "tsInv" : (1000/5.23) * Hz},
    (num_group[2], num_group[7]) : {"wl" : 0.0, "wm" :  0.75, "wu" :  1.75, "g" :  1.31, "ts" : 5.16 * ms, "tau_x" : 589.20 * ms, "tau_u" : 19.31 * ms, "U" : 0.31, "p" : 0.005, "delayer" : 1, "tau_uInv" : (1000/19.31) * Hz, "tau_xInv" : (1000/589.20) * Hz, "tsInv" : (1000/5.16) * Hz},
    (num_group[3], num_group[0]) : {"wl" : 0.0, "wm" :  0.65, "wu" :  1.65, "g" :  1.89, "ts" : 9.10 * ms, "tau_x" : 376.67 * ms, "tau_u" : 13.76 * ms, "U" : 0.08, "p" : 0.15, "delayer" : 1, "tau_uInv" : (1000/13.76) * Hz, "tau_xInv" : (1000/376.67) * Hz, "tsInv" : (1000/9.10) * Hz},
    (num_group[3], num_group[1]) : {"wl" : 0.0, "wm" :  1.3, "wu" :  2.3, "g" :  1.94, "ts" : 5.44 * ms, "tau_x" : 477.43 * ms, "tau_u" : 18.50 * ms, "U" : 0.12, "p" : 0.025, "delayer" : 1, "tau_uInv" : (1000/18.50) * Hz, "tau_xInv" : (1000/477.73) * Hz, "tsInv" : (1000/5.44) * Hz},
    (num_group[3], num_group[2]) : {"wl" : 0.0, "wm" :  0.55, "wu" :  1.55, "g" :  0.96, "ts" : 4.69 * ms, "tau_x" : 505.12 * ms, "tau_u" : 14.86 * ms, "U" : 0.28, "p" : 0.005, "delayer" : 1, "tau_uInv" : (1000/14.86) * Hz, "tau_xInv" : (1000/505.12) * Hz, "tsInv" : (1000/4.69) * Hz},
    (num_group[3], num_group[3]) : {"wl" : 0.0, "wm" :  1.0, "wu" :  2.0, "g" :  0.97, "ts" : 4.89 * ms, "tau_x" : 283.28 * ms, "tau_u" : 23.38 * ms, "U" : 0.12, "p" : 0.005, "delayer" : 1, "tau_uInv" : (1000/23.38) * Hz, "tau_xInv" : (1000/283.38) * Hz, "tsInv" : (1000/4.89) * Hz},
    (num_group[3], num_group[4]) : {"wl" : 0.0, "wm" :  1.3, "wu" :  2.3, "g" :  1.78, "ts" : 5.97 * ms, "tau_x" : 478.31 * ms, "tau_u" : 15.25 * ms, "U" : 0.13, "p" : 0.025, "delayer" : 1, "tau_uInv" : (1000/15.25) * Hz, "tau_xInv" : (1000/478.31) * Hz, "tsInv" : (1000/5.97) * Hz},
    (num_group[3], num_group[6]) : {"wl" : 0.0, "wm" :  0.75, "wu" :  1.75, "g" :  1.02, "ts" : 6.54 * ms, "tau_x" : 421.42 * ms, "tau_u" : 17.84 * ms, "U" : 0.21, "p" : 0.005, "delayer" : 1, "tau_uInv" : (1000/17.84) * Hz, "tau_xInv" : (1000/421.42) * Hz, "tsInv" : (1000/6.54) * Hz},
    (num_group[3], num_group[7]) : {"wl" : 0.0, "wm" :  0.75, "wu" :  1.75, "g" :  1.00, "ts" : 6.48 * ms, "tau_x" : 398.15 * ms, "tau_u" : 17.34 * ms, "U" : 0.22, "p" : 0.025, "delayer" : 1, "tau_uInv" : (1000/17.34) * Hz, "tau_xInv" : (1000/389.15) * Hz, "tsInv" : (1000/6.48) * Hz},
    (num_group[4], num_group[0]) : {"wl" : 0.0, "wm" :  1.45, "wu" :  2.45, "g" :  2.08, "ts" : 7.49 * ms, "tau_x" : 481.15 * ms, "tau_u" : 16.61 * ms, "U" : 0.12, "p" : 0.028, "delayer" : 1, "tau_uInv" : (1000/16.61) * Hz, "tau_xInv" : (1000/481.15) * Hz, "tsInv" : (1000/7.49) * Hz},
    (num_group[4], num_group[1]) : {"wl" : 0.0, "wm" : 1.3 , "wu" :  2.3, "g" :  2.15, "ts" : 4.57 * ms, "tau_x" : 686.28 * ms, "tau_u" : 19.16 * ms, "U" : 0.17, "p" : 0.007, "delayer" : 1, "tau_uInv" : (1000/19.16) * Hz, "tau_xInv" : (1000/686.28) * Hz, "tsInv" : (1000/4.57) * Hz},
    (num_group[4], num_group[2]) : {"wl" : 0.0 , "wm" :  0.55 , "wu" :  1.55, "g" :  1.10, "ts" : 3.86 * ms, "tau_x" : 695.21 * ms, "tau_u" : 14.60 * ms, "U" : 0.37, "p" : 0.009, "delayer" : 1, "tau_uInv" : (1000/14.60) * Hz, "tau_xInv" : (1000/695.21) * Hz, "tsInv" : (1000/3.86) * Hz},
    (num_group[4], num_group[3]) : {"wl" : 0.0 , "wm" :  1.0 , "wu" :  2.0, "g" :  1.44, "ts" : 4.58 * ms, "tau_x" : 592.19 * ms, "tau_u" : 17.69 * ms, "U" : 0.22, "p" : 0.004, "delayer" : 1, "tau_uInv" : (1000/17.69) * Hz, "tau_xInv" : (1000/592.19) * Hz, "tsInv" : (1000/4.58) * Hz},
    (num_group[4], num_group[4]) : {"wl" : 0.0 , "wm" :  1.3 , "wu" :  2.3, "g" :  2.01, "ts" : 4.58 * ms, "tau_x" : 775.04 * ms, "tau_u" : 13.60 * ms, "U" : 0.17, "p" : 0.033, "delayer" : 1, "tau_uInv" : (1000/13.60) * Hz, "tau_xInv" : (1000/775.04) * Hz, "tsInv" : (1000/4.58) * Hz},
    (num_group[4], num_group[5]) : {"wl" : 0.0 , "wm" : 0.65 , "wu" :  1.65, "g" :  1.34, "ts" : 5.33 * ms, "tau_x" : 649.83 * ms, "tau_u" : 18.17 * ms, "U" : 0.30, "p" : 0.004, "delayer" : 1, "tau_uInv" : (1000/18.17) * Hz, "tau_xInv" : (1000/649.83) * Hz, "tsInv" : (1000/5.33) * Hz},
    (num_group[4], num_group[6]) : {"wl" : 0.0 , "wm" :  1.0 , "wu" :  2.0, "g" :  1.57, "ts" : 5.54 * ms, "tau_x" : 605.25 * ms, "tau_u" : 18.30 * ms, "U" : 0.29, "p" : 0.009, "delayer" : 1, "tau_uInv" : (1000/18.30) * Hz, "tau_xInv" : (1000/605.25) * Hz, "tsInv" : (1000/5.54) * Hz},
    (num_group[4], num_group[7]) : {"wl" : 0.0 , "wm" :  0.75 , "wu" :  1.75, "g" :  1.12, "ts" : 5.53 * ms, "tau_x" : 594.33 * ms, "tau_u" : 17.89 * ms, "U" : 0.30, "p" : 0.008, "delayer" : 1, "tau_uInv" : (1000/17.89) * Hz, "tau_xInv" : (1000/594.33) * Hz, "tsInv" : (1000/5.53) * Hz},
    (num_group[5], num_group[0]) : {"wl" : 0.0 , "wm" :  1.45 , "wu" :  2.45, "g" :  2.23, "ts" : 9.01 * ms, "tau_x" : 439.50 * ms, "tau_u" : 23.01 * ms, "U" : 0.12, "p" : 0.072, "delayer" : 1, "tau_uInv" : (1000/23.01) * Hz, "tau_xInv" : (1000/439.50) * Hz, "tsInv" : (1000/9.01) * Hz},
    (num_group[5], num_group[1]) : {"wl" : 0.0 , "wm" :  1.3 , "wu" :  2.3, "g" :  2.29, "ts" : 5.67 * ms, "tau_x" : 651.64 * ms, "tau_u" : 25.51 * ms, "U" : 0.17, "p" : 0.004, "delayer" : 1, "tau_uInv" : (1000/25.51) * Hz, "tau_xInv" : (1000/651.64) * Hz, "tsInv" : (1000/5.67) * Hz},
    (num_group[5], num_group[2]) : {"wl" : 0.0 , "wm" :  0.55 , "wu" :  1.55, "g" :  1.16, "ts" : 4.75 * ms, "tau_x" : 665.16 * ms, "tau_u" : 19.12 * ms, "U" : 0.37, "p" : 0.016, "delayer" : 1, "tau_uInv" : (1000/19.12) * Hz, "tau_xInv" : (1000/665.16) * Hz, "tsInv" : (1000/4.75) * Hz},
    (num_group[5], num_group[3]) : {"wl" : 0.0 , "wm" :  1.0 , "wu" :  2.0, "g" :  1.54, "ts" : 5.40 * ms, "tau_x" : 614.01 * ms, "tau_u" : 20.98 * ms, "U" : 0.23, "p" : 0.011, "delayer" : 1, "tau_uInv" : (1000/20.98) * Hz, "tau_xInv" : (1000/614.01) * Hz, "tsInv" : (1000/5.40) * Hz},
    (num_group[5], num_group[4]) : {"wl" : 0.0 , "wm" :  1.3 , "wu" :  2.3, "g" :  2.16, "ts" : 6.24 * ms, "tau_x" : 660.48 * ms, "tau_u" : 22.69 * ms, "U" : 0.17, "p" : 0.017, "delayer" : 1, "tau_uInv" : (1000/22.69) * Hz, "tau_xInv" : (1000/660.48) * Hz, "tsInv" : (1000/6.24) * Hz},
    (num_group[5], num_group[5]) : {"wl" : 0.0 , "wm" :  0.65 , "wu" :  1.65, "g" :  1.34, "ts" : 5.51 * ms, "tau_x" : 675.54 * ms, "tau_u" : 17.72 * ms, "U" : 0.31, "p" : 0.004, "delayer" : 1, "tau_uInv" : (1000/17.72) * Hz, "tau_xInv" : (1000/675.54) * Hz, "tsInv" : (1000/5.51) * Hz},
    (num_group[5], num_group[6]) : {"wl" : 0.0 , "wm" :  0.75 , "wu" :  1.75, "g" :  1.27, "ts" : 6.96 * ms, "tau_x" : 578.90 * ms, "tau_u" : 28.45 * ms, "U" : 0.30, "p" : 0.017, "delayer" : 1, "tau_uInv" : (1000/28.45) * Hz, "tau_xInv" : (1000/578.90) * Hz, "tsInv" : (1000/6.96) * Hz},
    (num_group[5], num_group[7]) : {"wl" : 0.0 , "wm" :  0.75 , "wu" :  1.75, "g" :  1.18, "ts" : 6.89 * ms, "tau_x" : 563.47 * ms, "tau_u" : 26.15 * ms, "U" : 0.30, "p" : 0.002, "delayer" : 1, "tau_uInv" : (1000/26.15) * Hz, "tau_xInv" : (1000/563.47) * Hz, "tsInv" : (1000/6.89) * Hz},
    (num_group[6], num_group[0]) : {"wl" : 0.0 , "wm" :  1.45 , "wu" :  2.45, "g" :  1.97, "ts" : 7.15 * ms, "tau_x" : 496.05 * ms, "tau_u" : 20.62 * ms, "U" : 0.12, "p" : 0.042, "delayer" : 1, "tau_uInv" : (1000/20.62) * Hz, "tau_xInv" : (1000/496.05) * Hz, "tsInv" : (1000/7.15) * Hz},
    (num_group[6], num_group[1]) : {"wl" : 0.0 , "wm" :  1.3 , "wu" :  2.3, "g" :  2.12, "ts" : 4.55 * ms, "tau_x" : 762.60 * ms, "tau_u" : 21.45 * ms, "U" : 0.16, "p" : 0.004, "delayer" : 1, "tau_uInv" : (1000/21.45) * Hz, "tau_xInv" : (1000/762.60) * Hz, "tsInv" : (1000/4.55) * Hz},
    (num_group[6], num_group[2]) : {"wl" : 0.0 , "wm" :  0.55 , "wu" :  1.55, "g" :  1.08, "ts" : 3.90 * ms, "tau_x" : 759.12 * ms, "tau_u" : 15.70 * ms, "U" : 0.36, "p" : 0.007, "delayer" : 1, "tau_uInv" : (1000/15.70) * Hz, "tau_xInv" : (1000/759.12) * Hz, "tsInv" : (1000/3.90) * Hz},
    (num_group[6], num_group[3]) : {"wl" : 0.0 , "wm" :  1.0 , "wu" :  2.0, "g" :  1.42, "ts" : 4.32 * ms, "tau_x" : 693.92 * ms, "tau_u" : 17.08 * ms, "U" : 0.22, "p" : 0.005, "delayer" : 1, "tau_uInv" : (1000/17.08) * Hz, "tau_xInv" : (1000/693.92) * Hz, "tsInv" : (1000/4.32) * Hz},
    (num_group[6], num_group[4]) : {"wl" : 0.0 , "wm" :  1.3 , "wu" :  2.3, "g" :  2.00, "ts" : 4.96 * ms, "tau_x" : 776.57 * ms, "tau_u" : 17.27 * ms, "U" : 0.17, "p" : 0.005, "delayer" : 1, "tau_uInv" : (1000/17.27) * Hz, "tau_xInv" : (1000/776.57) * Hz, "tsInv" : (1000/4.96) * Hz},
    (num_group[6], num_group[5]) : {"wl" : 0.0 , "wm" :  0.65 , "wu" :  1.65, "g" :  1.35, "ts" : 5.39 * ms, "tau_x" : 712.27 * ms, "tau_u" : 21.22 * ms, "U" : 0.30, "p" : 0.003, "delayer" : 1, "tau_uInv" : (1000/21.22) * Hz, "tau_xInv" : (1000/712.27) * Hz, "tsInv" : (1000/5.39) * Hz},
    (num_group[6], num_group[6]) : {"wl" : 0.0 , "wm" :  0.75 , "wu" :  1.75, "g" :  1.16, "ts" : 5.53 * ms, "tau_x" : 642.10 * ms, "tau_u" : 22.52 * ms, "U" : 0.29, "p" : 0.002, "delayer" : 1, "tau_uInv" : (1000/22.52) * Hz, "tau_xInv" : (1000/642.10) * Hz, "tsInv" : (1000/5.53) * Hz},
    (num_group[6], num_group[7]) : {"wl" : 0.0 , "wm" :  0.75 , "wu" :  1.75, "g" :  1.10, "ts" : 5.52 * ms, "tau_x" : 637.95 * ms, "tau_u" : 21.01 * ms, "U" : 0.29, "p" : 0.004, "delayer" : 1, "tau_uInv" : (1000/21.01) * Hz, "tau_xInv" : (1000/637.956) * Hz, "tsInv" : (1000/5.52) * Hz},
    (num_group[7], num_group[0]) : {"wl" : 0.0 , "wm" :  1.45 , "wu" :  2.45, "g" :  1.72, "ts" : 9.11 * ms, "tau_x" : 382.14 * ms, "tau_u" : 24.79 * ms, "U" : 0.11, "p" : 0.119, "delayer" : 1, "tau_uInv" : (1000/24.79) * Hz, "tau_xInv" : (1000/382.14) * Hz, "tsInv" : (1000/9.11) * Hz},
    (num_group[7], num_group[1]) : {"wl" : 0.0 , "wm" :  1.3 , "wu" :  2.3, "g" :  1.91, "ts" : 5.17 * ms, "tau_x" : 635.01 * ms, "tau_u" : 22.34 * ms, "U" : 0.15, "p" : 0.005, "delayer" : 1, "tau_uInv" : (1000/22.34) * Hz, "tau_xInv" : (1000/635.01) * Hz, "tsInv" : (1000/5.17) * Hz},
    (num_group[7], num_group[2]) : {"wl" : 0.0 , "wm" :  0.55 , "wu" :  1.55, "g" :  1.00, "ts" : 4.29 * ms, "tau_x" : 663.25 * ms, "tau_u" : 16.42 * ms, "U" : 0.34, "p" : 0.067, "delayer" : 1, "tau_uInv" : (1000/16.42) * Hz, "tau_xInv" : (1000/663.25) * Hz, "tsInv" : (1000/4.29) * Hz},
    (num_group[7], num_group[3]) : {"wl" : 0.0 , "wm" :  1.0 , "wu" :  2.0, "g" :  1.31, "ts" : 4.83 * ms, "tau_x" : 596.50 * ms, "tau_u" : 17.78 * ms, "U" : 0.21, "p" : 0.050, "delayer" : 1, "tau_uInv" : (1000/17.78) * Hz, "tau_xInv" : (1000/596.50) * Hz, "tsInv" : (1000/4.83) * Hz},
    (num_group[8], num_group[0]) : {"wl" : 0.0,  "wm" : 1.0 , "wu" :  2.0, "g" :  1.83, "ts" : 7.43 * ms, "tau_x" : 347.44 * ms, "tau_u" : 42.00 * ms, "U" : 0.16, "p" : 0.002, "delayer" : 10, "tau_uInv" : (1000/42) * Hz, "tau_xInv" : (1000/347.44) * Hz, "tsInv" : (1000/7.43) * Hz},
    (num_group[8], num_group[1]) : {"wl" : 0.0 , "wm" :  (1/0.16) , "wu" :  (1 + (0.16)), "g" :  1.94, "ts" : 4.13 * ms, "tau_x" : 702.40 * ms, "tau_u" : 36.83 * ms, "U" : 0.17, "p" : 0.001, "delayer" : 10, "tau_uInv" : (1000/36.83) * Hz, "tau_xInv" : (1000/702.40) * Hz, "tsInv" : (1000/4.13) * Hz},
    (num_group[8], num_group[2]) : {"wl" : 0.0 , "wm" :  (1/0.17) , "wu" :  (1 + (0.17)), "g" :  1.50, "ts" : 3.58 * ms, "tau_x" : 744.66 * ms, "tau_u" : 30.29 * ms, "U" : 0.19, "p" : 0.001, "delayer" : 10, "tau_uInv" : (1000/30.29) * Hz, "tau_xInv" : (1000/744.66) * Hz, "tsInv" : (1000/3.58) * Hz},
    (num_group[8], num_group[3]) : {"wl" : 0.0 , "wm" :  (1/0.19) , "wu" :  (1 + (0.19)), "g" :  1.57, "ts" : 3.46 * ms, "tau_x" : 658.70 * ms, "tau_u" : 25.92 * ms, "U" : 0.21, "p" : 0.001, "delayer" : 10, "tau_uInv" : (1000/25.92) * Hz, "tau_xInv" : (1000/658.70) * Hz, "tsInv" : (1000/3.46) * Hz},
    (num_group[8], num_group[4]) : {"wl" : 0.0 , "wm" :  (1/0.21) , "wu" :  (1 + (0.21)), "g" :  1.91, "ts" : 4.39 * ms, "tau_x" : 594.72 * ms, "tau_u" : 38.34 * ms, "U" : 0.19, "p" : 0.001, "delayer" : 10, "tau_uInv" : (1000/38.34) * Hz, "tau_xInv" : (1000/594.72) * Hz, "tsInv" : (1000/4.39) * Hz},
    (num_group[8], num_group[5]) : {"wl" : 0.0 , "wm" :  (1/0.19) , "wu" :  (1 + (0.19)), "g" :  2.27, "ts" : 3.75 * ms, "tau_x" : 764.65 * ms, "tau_u" : 21.48 * ms, "U" : 0.18, "p" : 0.001, "delayer" : 10, "tau_uInv" : (1000/21.48) * Hz, "tau_xInv" : (1000/764.55) * Hz, "tsInv" : (1000/3.75) * Hz},
    (num_group[8], num_group[6]) : {"wl" : 0.0 , "wm" :  (1/0.18) , "wu" :  (1 + (0.18)), "g" :  1.97, "ts" : 4.87 * ms, "tau_x" : 435.65 * ms, "tau_u" : 48.65 * ms, "U" : 0.19, "p" : 0.001, "delayer" : 10, "tau_uInv" : (1000/48.65) * Hz, "tau_xInv" : (1000/435.65) * Hz, "tsInv" : (1000/4.87) * Hz},
    (num_group[8], num_group[7]) : {"wl" : 0.0 , "wm" :  (1/0.19) , "wu" :  (1 + (0.19)), "g" :  1.88, "ts" : 4.96 * ms, "tau_x" : 464.59 * ms, "tau_u" : 43.06 * ms, "U" : 0.18, "p" : 0.001, "delayer" : 10, "tau_uInv" : (1000/43.06) * Hz, "tau_xInv" : (1000/464.59) * Hz, "tsInv" : (1000/4.96) * Hz}
}

def _get_Synaptic_Params():
  return synaptic_params
uxhwModel = '''
  du_syn/dt = -u_syn/tau_u: 1 (clock-driven)
  dx/dt = (1-x)/tau_x : 1 (clock-driven)
  w : 1 (constant)
  u_plus : 1 (constant)
  dIsyn/dt =  -Isyn/ts : 1 (clock-driven)
  '''
def group_synapse():
  num_synapse = []
  for (ng_pre, ng_post), ng_params in synaptic_params.items():
    #Isyn_Pyramidal
    # if ng_pre.name == 'Basket' or ng_pre.name == 'BasketCCK' or ng_pre.name == 'Bistratified' or ng_pre.name == 'Ivy' or ng_pre.name == 'QuadDLM':
    updater = f'Isyn_{ng_pre.name}_post = Isyn * pA : amp (summed)'
    model = uxhwModel + updater
    y = (np.random.rand() * (ng_params.get('delayer') - 1) + 1)
    s = Synapses(ng_pre, ng_post, model = model, method = 'euler',
                 on_pre = {'pre_1' : 'u_syn += U * (1 - u_syn)',
                           'pre_2' : 'u_plus = u_syn',
                           'pre_3' : 'Isyn += (1/U) * u_plus * x',
                           'pre_4' : 'x -= u_plus * x'},
                 on_event = {'pre_1' : 'spike', 'pre_2' : 'spike',
                             'pre_3' : 'spike', 'pre_4' : 'spike'},
                 delay = {'pre_1' : y * ms, 'pre_2' : y * ms,
                          'pre_3' : y * ms, 'pre_4' : y * ms},
                 name = f"{ng_pre.name}_{ng_post.name}_synapse", namespace = ng_params)
    s.connect(condition = 'i!=j', p = ng_params.get('p'))
    s.w = ng_params.get('p')
    s.x = 1
    s.u_syn = 0
    num_synapse.append(s)
  return num_synapse, num_group

def create_synapse():
  num_group = _create_Neurons()
  num_Synapse, num_group = group_synapse()
  return num_Synapse, num_group
Here is the neuron code, in case it is necessary:
from brian2 import *
from SynapticParamsDict import _get_Synaptic_Params

mOhm = 1 * 10 ** -3 * ohm
pOhm = 1e-12 * ohm
neuron_params = {
 ("Pyramidal") : {"k" : 0.792  * pA/mV, "a" : 0.008  * 1/ms, "b" : -42.552  * pA/mV, "d" : 588 * pA, "c" : 366 * pF, "vr" : -63.204 * mV, "vt" : -33.604 * mV, "vmin" : -38.868 * mV, "vpeak" : 35.861 * mV, "Iext" : 10 * pA},
 ("AxoAxonic") : {"k" : 3.961  * pA/mV, "a" : 0.005  * 1/ms, "b" : 8.684  * pA/mV, "d" : 15 * pA, "c" : 165 * pF, "vr" : -57.100 * mV, "vt" : -51.719 * mV, "vmin" : -73.969 * mV, "vpeak" : 27.799 * mV, "Iext" : 10 * pA},
 ("Basket") : {"k" : 0.995  * pA/mV, "a" : 0.004  * 1/ms, "b" : 9.264  * pA/mV, "d" : -6 * pA, "c" : 45 * pF, "vr" : -57.506 * mV, "vt" : -23.379 * mV, "vmin" : -47.556 * mV, "vpeak" : 18.455 * mV, "Iext" : 10 * pA},
 ("BasketCCK") : {"k" : 0.583  * pA/mV, "a" : 0.006  * 1/ms, "b" : -1.245  * pA/mV, "d" : 54 * pA, "c" : 135 * pF, "vr" : -58.997 * mV, "vt" : -39.398 * mV, "vmin" : -42.771 * mV, "vpeak" : 18.275 * mV, "Iext" : 10 * pA},
 ("Bistratified") : {"k" : 3.935  * pA/mV, "a" : 0.002  * 1/ms, "b" : 16.580  * pA/mV, "d" : 19 * pA, "c" : 107 * pF, "vr" : -64.673 * mV, "vt" : -58.744 * mV, "vmin" : -59.703 * mV, "vpeak" : -9.929 * mV, "Iext" : 10 * pA},
 ("Ivy") : {"k" : 1.916  * pA/mV, "a" : 0.009  * 1/ms, "b" : 1.908  * pA/mV, "d" : 45 * pA, "c" : 364 * pF, "vr" : -70.435 * mV, "vt" : -40.859 * mV, "vmin" : -53.400 * mV, "vpeak" : -6.920 * mV, "Iext" : 10 * pA},
 ("MFAOrden") : {"k" : 1.380 * pA/mV, "a" : 0.008  * 1/ms, "b" : 12.933  * pA/mV, "d" : 0 * pA, "c" : 209 * pF, "vr" : -57.076 * mV, "vt" : -39.102 * mV, "vmin" : -40.681 * mV, "vpeak" : 16.313 * mV, "Iext" : 10 * pA},
 ("QuadDLM") : {"k" : 1.776 * pA/mV, "a" : 0.006  * 1/ms, "b" : -3.449  * pA/mV, "d" : 52 * pA, "c" : 186 * pF, "vr" : -73.482 * mV, "vt" : -54.937 * mV, "vmin" : -64.404 * mV, "vpeak" : 7.066 * mV, "Iext" : 10 * pA},
 ("Granule") : {"k" : 0.45 * pA/mV, "a" : 0.003  * 1/ms, "b" : 24.48  * pA/mV, "d" : 50 * pA, "c" : 38 * pF, "vr" : -77.4 * mV, "vt" : -44.9 * mV, "vmin" : -66.47 * mV, "vpeak" : 15.49 * mV, "Iext" : 10 * pA},
}

eqs = '''
  dv/dt = ((k * (v-vr) * (v-vt) * (1/mV) - u + I)/c) : volt
  du/dt = a * (b * (v-vr) - u) : amp
'''
pop_names = ["Pyramidal","AxoAxonic","Basket","BasketCCK","Bistratified","Ivy","MFAOrden","QuadDLM","Granule"]
pop_sizes = [74366, 1909, 515, 665, 4631, 2334, 1526, 3280, 394502]
pop_sizes = np.array(pop_sizes)
#pop_sizes = (np.array(pop_sizes)/515 * 20).astype(int)
IsynVals = []
def _fill_IsynVals():
  dictOfNeuronNames = _get_Synaptic_Params()
  Icurrstring = '''I = (Isyn_Pyramidal + Isyn_AxoAxonic + Isyn_Basket + Isyn_BasketCCK + Isyn_Bistratified + Isyn_Ivy + Isyn_MFAOrden
    + Isyn_QuadDLM + Isyn_Granule + Iext) : amp'''
  IcurrInit = '''Isyn_Pyramidal : amp \n Isyn_AxoAxonic : amp \n Isyn_Basket : amp \n Isyn_BasketCCK : amp \n Isyn_Bistratified : amp \n Isyn_Ivy : amp \n Isyn_MFAOrden : amp \n Isyn_QuadDLM : amp \n Isyn_Granule : amp'''
num_group = []
def _create_Neurons():
  for i, val in enumerate(neuron_params.items()):
    eqs_list =  eqs + "\n" + _fill_IsynVals()
    neu = NeuronGroup(pop_sizes[i], eqs_list, threshold = 'v >= vpeak',
        reset = "v = vmin; u += d", method = 'rk4', namespace = val[1], name = val[0],
         refractory = '2 * ms')
    x = val[1]['vr']
    neu.v = x
    neu.u = 0 * pA
    num_group.append(neu)
  return num_group
def getNeuronParams():
  return neuron_params

Let me know if there is anything else that I can post to make it easier.

Many thanks @spoma33, this is very helpful. I was able to successfully run your code (after removing a few unused imports and correcting a missing return value of the _fill_IsynVals function), and I can reproduce your error. I will have a closer look later today and get back to you then.

Hi @spoma33. So, it wasn’t obvious at first, but I found the reason – it actually turned out that having your code was crucial. The problem is not in brian2cuda (you’d get the same error with cpp_standalone mode), but in your code flow. In your main script, you import

import create_Neurons
import CreateSynapse

before importing the rest and setting set_device('cuda_standalone'). The code of CreateSynapse creates the neurons in the main module code:

num_group = _create_Neurons()

This means that at the beginning of your script, you create all the neurons (the synapses are created later via an explicit function call) before you switch the device to cuda_standalone. The internal bookkeeping of the cuda_standalone device is therefore not aware that these NeuronGroups exist, and things fail when you try to create the Synapses.
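As an analogy (this toy registry is not Brian's actual internals, just an illustration of the bookkeeping problem): objects register themselves with whichever device is active at the moment they are created, so anything created before the switch is invisible to the standalone device.

```python
# Toy device registry -- an analogy, not Brian's real internals.
registry = {"runtime": [], "cuda_standalone": []}
active = "runtime"  # the default device before set_device is called

def set_device(name):
    global active
    active = name

def make_group(name):
    # Each new object registers with whichever device is active *now*.
    registry[active].append(name)

# What happens in the failing script: the group is created at import
# time, while the default device is still active ...
make_group("Pyramidal")
# ... and only afterwards is the device switched:
set_device("cuda_standalone")

print(registry["cuda_standalone"])  # [] -- the standalone device never saw the group
print(registry["runtime"])          # ['Pyramidal']
```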

A quick-and-dirty solution would be to move

from brian2 import *
import brian2cuda
set_device('cuda_standalone')

to the very start of your script. But in general, I'd suggest not having the neuron creation as a "side effect" of your module import, but instead doing it in a function call. To keep the general structure of your code, you could remove the num_group = _create_Neurons() call from the beginning of your file, and then use

synaptic_params = {
    ("Pyramidal", "Pyramidal") : {"wl" : 0.0, "wm" :  0.55, "wu" :  1.55, "g": 0.3, "ts": 10.22 * ms, "tau_x" : 318.51 * ms, "tau_u": 21.45 * ms, "U": 0.28, "p" : 0.025, "delayer" : 2, "tau_uInv" : (1000/21.45) * Hz, "tau_xInv" : (1000/318.51) * Hz, "tsInv" : (1000/10.22) * Hz},
...

i.e., the group names instead of the objects. In _create_neurons, you’d return a dictionary instead of a list:

def _create_Neurons():
  groups = {}
  for i, val in enumerate(neuron_params.items()):
    eqs_list = eqs + "\n" + _fill_IsynVals()
    neu = NeuronGroup(pop_sizes[i], eqs_list, threshold = 'v >= vpeak',
                      reset = "v = vmin; u += d", method = 'rk4',
                      namespace = val[1], name = val[0], refractory = '2 * ms')
    neu.v = val[1]['vr']
    neu.u = 0 * pA
    groups[val[0]] = neu
  return groups

change your create_synapse function to use this dictionary:

def create_synapse():
  groups = _create_Neurons()
  num_Synapse  = group_synapse(groups)
  return num_Synapse, groups

with the adapted group_synapse function:

def group_synapse(groups):
  num_synapse = []
  for (ng_pre_name, ng_post_name), ng_params in synaptic_params.items():
    ng_pre = groups[ng_pre_name]
    ng_post = groups[ng_post_name]
    ...
    s = Synapses(ng_pre, ng_post, model = model, method = 'euler',
                     ...
                     name = f"{ng_pre_name}_{ng_post_name}_synapse", namespace = ng_params)
    ...
    num_synapse.append(s)
  return num_synapse

And in your main file you’d use something like

num_synapse, num_group = CreateSynapse.create_synapse()

net.add(list(num_group.values()))
...

Hope that makes sense, please let me know if anything is still unclear.
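For completeness, the same kind of toy registry (again just an analogy, not Brian's actual internals) shows why switching the device before any object creation makes the bookkeeping come out right:

```python
# Toy device registry -- an analogy, not Brian's real internals.
registry = {"runtime": [], "cuda_standalone": []}
active = "runtime"

def set_device(name):
    global active
    active = name

def make_group(name):
    registry[active].append(name)  # registers with the device active *now*

set_device("cuda_standalone")  # device switched first ...
make_group("Pyramidal")        # ... so creation is recorded where it belongs

print(registry["cuda_standalone"])  # ['Pyramidal']
```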

PS: This is a big network, I hope your GPU has plenty of memory :hushed: