Library of functions for Brian equations

Hi everyone,
This isn’t a classic support request, sorry, but I couldn’t find a good category for my question.

Over years of Brian/Brian2 usage, I have accumulated a bunch of functions that I would really like to have available inside equations. Moreover, I want them to work on all devices: cpp, numpy, cython, etc. So is there a template for such a library?

What would be very handy is to have something like this:

from brian2 import *
from brian2pls import fe

n = NeuronGroup(..., model ="dr/dt = fe(r)/tau : 1" ....)

It seems the @implementation decorator is the best way to go. So an fe function may look like this:

@implementation('cython', f'''
     cdef double fe(double x):
         return {ge}*(x-{thetae}) if x-{thetae} > 0. else 0.
     ''')
@implementation('cpp', f'''
     double fe(double x)
     {{
         return ((x-{thetae})<0.)?0.:{ge}*(x-{thetae});
     }}''')
@check_units(x=1, result=1)
def fe(x):
     return  ge*(x-thetae) if x-thetae > 0. else 0.

This seems to work (I’m really not sure) for the cpp standalone device (with or without OpenMP) and for the cython device. It isn’t completely clear to me how to implement it for the numpy device, and do we have a cuda/gpu device or not?

Any thoughts, suggestions, and examples are highly appreciated.

P.S. As we discussed with @mstimberg before, many of my functions can be written as combinations of brian2’s built-in equation functions, but that gets too messy. On the other hand, I want to have full control over the cpp/cython implementations, to be sure they are as efficient as possible.

I personally don’t mind having it here in this category, otherwise #feature-requests might be a good fit?

With your specific example, I’d probably rather write it in the equations as clip(ge*(x - thetae), 0, inf), and then ideally we’d have “macros” in the equations (Macro definitions in equations · Issue #378 · brian-team/brian2 · GitHub) so you could define it once and then refer to it under some name in multiple places.

But I get the general idea, and we made the function system extensible to support use cases where you want detailed control over the implementation. If you want to generate functions in a generic way, it would probably be more straightforward not to use the decorators but to create a Function object yourself. E.g. your example above would look like this:

import numpy as np  # needed for the numpy-based Python implementation below

def make_fe(ge, thetae):
    # Python implementation
    def fe(x):
        # operates on an array of values
        return np.where(x - thetae > 0, ge * (x - thetae), 0.)

    # Cython implementation
    cython_code = f'''
     cdef double fe(double x):
         return {ge}*(x-{thetae}) if x-{thetae} > 0. else 0.
     '''

    # C++ implementation
    cpp_code = f'''
     double fe(double x)
     {{
         return ((x-{thetae})<0.)?0.:{ge}*(x-{thetae});
     }}'''
    func = Function(fe, arg_units=[1], arg_names=['x'], return_unit=1)
    func.implementations.add_implementation('cython', cython_code)
    func.implementations.add_implementation('cpp', cpp_code)
    return func

fe = make_fe(5., 0.1)
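
A minimal usage sketch for completeness (the time constant and group size here are made up): the Function object defined above is picked up from the surrounding namespace, just like an external constant, and can then be called in equation strings.

# usage sketch with assumed values
tau = 10*ms
G = NeuronGroup(1, 'dr/dt = fe(r)/tau : 1', method='euler')
run(10*ms)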

Yes, your example will work for C++ standalone with or without OpenMP and for Cython. For numpy you have to be aware that functions operate on arrays and not on single values as in the other targets (see Functions — Brian 2 2.5.4 documentation).

The brian2cuda project is still in alpha, but in principle it could be used here, I guess. You can also run code on the GPU via the GeNN simulator with brian2genn, and you can define functions for it. You only have to be aware that you have to prefix your function header with SUPPORT_CODE_FUNC (GeNN: Neuron models), so that GeNN can generate code for both CPU and GPU. See https://github.com/brian-team/brian2genn/blob/master/brian2genn/genn_generator.py for examples of function definitions in Brian2GeNN.
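
For illustration only, the fe kernel from above might be written like this for GeNN (a sketch of just the support-code string, with ge and thetae filled in as in make_fe above; how it gets registered with brian2genn is not shown here):

# sketch: the same C++ kernel, but with the SUPPORT_CODE_FUNC prefix on the
# function header so that GeNN can generate both a CPU and a GPU version
genn_cpp_code = f'''
 SUPPORT_CODE_FUNC double fe(double x)
 {{
     return ((x-{thetae})<0.)?0.:{ge}*(x-{thetae});
 }}'''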


@mstimberg thank you for the explanations, very useful.

A side question: I have 99% of my functions implemented as C/C++ macros in a header file, which is very useful when you code in C/C++/Cython. How can I add an #include statement to a cpp implementation? Well, now I realize this isn’t a great idea at all. I would have to ship the .h file with the module :roll_eyes:

You can ask Brian to include additional header files, see Functions — Brian 2 2.5.4 documentation. If this header file implements a function as a macro, you might not have to provide an implementation at all, you can simply state that the implementation exists (potentially under a different name).
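
A minimal sketch of what this could look like (my_macros.h, its location, and the argument names are all made up here; headers/include_dirs are the additional compiler arguments described in the linked documentation):

# sketch: my_macros.h is assumed to already define a P1(X, X0) macro, so the
# C++ "implementation" only pulls in that header instead of defining the function
@implementation('cpp', '// P1 is provided by the macro in my_macros.h',
                headers=['"my_macros.h"'],
                include_dirs=['/path/to/my/headers'])
@check_units(X=1, X0=1, result=1)
def P1(X, X0):
    return X0 - X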

Not quite sure that I understand. You mean that you’d have to package it with the Python module?

> Not quite sure that I understand. You mean that you’d have to package it with the Python module?

Yes, but I wanted to have a simple, single-file module, not a whole Python package.

BTW, can one Function() object call another Function() object internally? Will it work on all devices?

fe = Function(....)
fi  = Function(which_uses_fe)

It seems it should work, but I haven’t tested it yet.

Ah! @mstimberg I found everything in the documentation. Thank you!

You will have to declare it in the implementation’s dependencies, see e.g. how the poisson function depends on the rand function: https://github.com/brian-team/brian2/blob/master/brian2/codegen/generators/cpp_generator.py

Ah, saw this too late :grinning:

Well now I hit a wall :confused:

Here is a simple example, and neither the Cython nor the C++ target works.

# brian2pls.py
from brian2 import *

def _P1(X,X0):
    return X0-X
P1 = Function(_P1,arg_units=[1,1], return_unit=1)
P1.implementations.add_implementation('cpp','#define P1(X,X0)       ((X0)-(X))')
P1.implementations.add_implementation('cython','''
cdef double P1(double X, double X0):
    return (X0-X)
''')
def _P2(X,X0,X1):
    return P1(X,X0)*P1(X,X1)
P2 = Function(_P2,arg_units=[1,1,1], return_unit=1)
P2.implementations.add_implementation('cpp','#define P2(X,X0,X1)    (P1((X),(X0))*P1((X),(X1)))',dependencies={'P1':P1})
P2.implementations.add_implementation('cython','''
cdef double P2(double X, double X0, double X1):
    return P1(X,X0)*P1(X,X1)
''',dependencies={'P1':P1})

and usage"

from brian2 import *
from brian2pls import *

set_device('cpp_standalone', directory='CODE')
prefs.devices.cpp_standalone.openmp_threads = 1

equ="""
dv/dt = (P2(v,-50,-30) + I )/ms : 1
I : 1
"""

n = NeuronGroup(1,equ,threshold="v>0",reset="v=-80",refractory=1*ms)
r = StateMonitor(n,'v',record=True)
n.v=-70.
n.I=0

run(200*ms)

plot(r.t/ms,r.v[0])
show()

The Cython generator (the default) returns an error:

INFO       No numerical integration method specified for group 'neurongroup', using method 'euler' (took 0.01s, trying other methods took 0.01s). [brian2.stateupdaters.base.method_choice]

Error compiling Cython file:
------------------------------------------------------------
...


# support code

cdef double P2(double X, double X0, double X1):
    return _P1(X,X0)*_P1(X,X1)
          ^
------------------------------------------------------------

The C++ generator returns:

code_objects/neurongroup_stateupdater_codeobject.cpp: In function ‘void _run_neurongroup_stateupdater_codeobject()’:
code_objects/neurongroup_stateupdater_codeobject.cpp:15:29: error: ‘_P1’ was not declared in this scope; did you mean ‘P1’?
   15 |     #define P2(X,X0,X1)    (_P1((X),(X0))*_P1((X),(X1)))
      |                             ^~~
code_objects/neurongroup_stateupdater_codeobject.cpp:125:42: note: in expansion of macro ‘P2’
  125 |         const double _v = (_lio_2 * (I + P2(v, - 50, - 30))) + v;
      |                                          ^~
make: *** [makefile:31: code_objects/neurongroup_stateupdater_codeobject.o] Error 1

What did I mess up?

But the decorators work pretty well!

@implementation('cpp', '''
#define P1(X,X0)       ((X0)-(X))
''')
@implementation('cython', '''
cdef double P1(double X, double X0):
    return (X0-X)
''')
@check_units(X=1, X0=1, result=1)
def P1(X,X0):
    return X0-X


@implementation('cpp', '''
#define P2(X,X0,X1)    (P1((X),(X0))*P1((X),(X1)))
''',dependencies={'P1':P1})
@implementation('cython', '''
cdef double P2(double X, double X0, double X1):
    return P1(X,X0)*P1(X,X1)
''',dependencies={'P1':P1})
@check_units(X=1, X0=1, X1=1, result=1)
def P2(X,X0,X1):
    return P1(X,X0)*P1(X,X1)

and usage:

from brian2 import *
from brian2pls import *

set_device('cpp_standalone', directory='CODE')
prefs.devices.cpp_standalone.openmp_threads = 1

equ="""
dv/dt = (P2(v,-50,-30)*1e-5 + I )/ms : 1
I : 1
"""

n = NeuronGroup(1,equ,threshold="v>0",reset="v=-80",refractory=1*ms)
r = StateMonitor(n,'v',record=True)
n.v=-70.
n.I=0

run(200*ms)

plot(r.t/ms,r.v[0])
show()

In your first code, you named the Python function _P1 with a leading underscore, and Brian takes this as the name of the function implementations (it does not parse the Cython/C++ code to figure out the actual names there). Then, when you use {'P1': P1} as a dependency, it understands that P1 refers to the function implementation _P1 and renames the function calls in your code accordingly (which is why the error messages complain about not finding _P1).

To fix this, you have two options. The first option is to rename your Python functions so that their names match the names of the functions in the Cython/C++ code. When you then create a Function object of the same name, it overwrites the Python function name, but that’s not a problem. This is actually what happens in your second example where you annotate the P1 function with decorators – P1 is now a Function object. The second option is to add name='P1' and name='P2' to the add_implementation calls for Cython/C++ to tell Brian the name of the functions in the code.
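
For reference, option 2 applied to your first example would look something like this (a sketch, keeping your underscored Python names):

def _P1(X, X0):
    return X0 - X

P1 = Function(_P1, arg_units=[1, 1], return_unit=1)
# name='P1' tells Brian that the C++/Cython code defines the function as P1,
# not under the Python name _P1
P1.implementations.add_implementation('cpp', '#define P1(X,X0)  ((X0)-(X))',
                                      name='P1')
P1.implementations.add_implementation('cython', '''
cdef double P1(double X, double X0):
    return (X0-X)
''', name='P1')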

But of course, if the decorator approach is sufficient for you and you prefer it, no need to go the Function.add_implementation route.


Thank you, @mstimberg! Yes, I realized later that the difference between the Python function name and Brian’s function name was creating the problem.

I went with the decorators, it is much simpler, and I still have a Python function with the same name. That is really handy for phase-plane analysis, because it makes it simple to reconstruct nullclines in Python code while still having a very fast simulation with the same functions in Brian. I’m not sure whether I can call Brian’s function as a regular Python function outside of an equations statement. Correct me if I’m wrong.

Thank you for your help!

If they have a Python/numpy implementation, you can, but not if there is only a Cython/C++ implementation. We could actually write a simple Python wrapper in such cases, which might be useful for testing etc.
In general, the decorator and the Function / add_implementation approaches are mostly the same under the hood (e.g. the implementation decorator simply calls add_implementation on the Function object).
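
A quick sketch of what this looks like in practice (made-up values; it relies on the function having a Python implementation, as P1 from the decorator example does):

import numpy as np
# P1 is the decorated Function object from above; calling it from plain Python
# uses its Python/numpy implementation, e.g. to compute a nullcline
v = np.linspace(-80., 0., 200)
nullcline = P1(v, -50.)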
