Hi Brian community,
My name is Ayush, and I am a researcher working on Physics-Constrained Neural Networks and Spiking Dynamics. I have been following Brian2’s development and am interested in contributing to the C++/Cython backend optimizations for GSoC 2026.
I have set up the dev environment on Windows and started profiling Synapses.connect() behavior. I noticed some interesting caching artifacts where scaling the neuron count triggers unexpected re-compilation spikes.
As shown in the attached benchmark:
- 1k neurons: ~0.3 s (expected)
- 5k neurons: ~0.02 s (cached/fast)
- 10k neurons: spikes to ~11.8 s (unexpected re-compilation)
- 20k neurons: drops back to ~0.2 s
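For reference, the timings above came from a harness along these lines. This is a minimal sketch: the `build_network` helper here is a hypothetical stand-in for the actual Brian2 setup (creating a `NeuronGroup` and calling `Synapses.connect()`), so the script runs without Brian2 installed.

```python
import time

def build_network(n_neurons):
    """Hypothetical stand-in for the profiled Brian2 setup.

    In the real benchmark this would build a NeuronGroup of n_neurons
    and call Synapses.connect(); here it is a placeholder workload so
    the harness itself is runnable anywhere.
    """
    total = 0
    for i in range(n_neurons):
        total += i
    return total

def benchmark(sizes):
    """Time build_network() once per network size, mirroring the runs above."""
    timings = {}
    for n in sizes:
        start = time.perf_counter()
        build_network(n)
        timings[n] = time.perf_counter() - start
    return timings

if __name__ == "__main__":
    for n, elapsed in benchmark([1_000, 5_000, 10_000, 20_000]).items():
        print(f"{n} neurons: {elapsed:.3f}s")
```

A cold-versus-warm comparison (running each size twice and reporting both timings) would also help separate genuine compilation cost from the caching artifacts described above.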
Given my background in graph modeling and numerical methods, I am looking to work on issues related to backend performance or codegen optimization.
Could you point me toward any specific “pain points” in the current C++ standalone mode that you’d prioritize for the next cycle?
Best,
Ayush
