Self-Sustaining Memory in Brian2: 10k Neurons with Strong Recurrent Feedback & Active-Set Optimization

Hi everyone,

I wanted to share some results from a recent project where I’ve been pushing the limits of Brian2 for large-scale, continuous-time simulations.

My goal was to create a spiking neural network that could handle massive, chaotic input loads without seizing up (numerical or dynamical instability) while maintaining long-term memory traces, all running on consumer hardware.

I implemented a custom “Active-Set” simulation method using run_regularly to dynamically partition neurons into “Wake” (ODE-solved) and “Sleep” (linear approximation) states based on their energy topology.
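For anyone curious what the partition looks like in practice, here is a minimal NumPy-only sketch of the idea (not my actual Brian2 code; the cubic dynamics, threshold, and time constants are illustrative placeholders): on each step, "Wake" neurons above an activity threshold get a full Euler step on the nonlinear ODE, while "Sleep" neurons get a cheap closed-form linear decay.

```python
import numpy as np

def step_active_set(v, dt=1e-3, wake_thresh=0.1, tau=0.02):
    """One update of an active-set scheme (illustrative, not the real model).

    'Wake' neurons (|v| > wake_thresh) get the full cubic ODE via Euler;
    'Sleep' neurons get an exact linear relaxation toward rest.
    """
    wake = np.abs(v) > wake_thresh            # boolean partition mask
    v_new = v.copy()
    # Full nonlinear dynamics only for the wake set (cubic, as in the post)
    vw = v[wake]
    v_new[wake] = vw + dt * (vw - vw**3) / tau
    # Linear approximation for the sleep set: exponential decay to 0
    v_new[~wake] = v[~wake] * np.exp(-dt / tau)
    return v_new, wake

# Usage: only the wake subset pays for the nonlinear update
rng = np.random.default_rng(0)
v = rng.normal(0.0, 0.5, size=10_000)
v, wake = step_active_set(v)  # wake holds the current partition
```

In Brian2 the same repartitioning can be scheduled with run_regularly; the cost saving comes from the sleep set's update being a single vectorized multiply rather than an ODE solve.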

The Results:

  1. Performance: I achieved a ~17x speedup compared to standard full-network integration. I successfully ran a 10,000-neuron plastic network with complex cubic dynamics in near real-time (20 s of wall time for a 1500 ms simulation) on a standard Linux laptop.

  2. Stability (The “Nuclear” Test): I subjected the network to input loads 500% above saturation. Instead of crashing to NaN or entering a runaway, seizure-like regime, the network entered a stable “Saturated Latch” state, maintaining coherent activity blocks.

  3. Memory Persistence: By implementing a custom bifurcation-based plasticity rule (modulating the local energy landscape rather than just synaptic weights), the network demonstrates spontaneous meaning acquisition: it effectively locks onto significant stimuli and self-sustains the memory trace even after inputs are cut or actively suppressed.
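To make the "modulate the landscape, not the weights" point concrete, here is a single-unit toy version (again illustrative; the parameter names, rates, and the specific plasticity rule below are my simplified stand-ins, not the network's actual rule). The unit descends an energy E(v) = -a·v²/2 + v⁴/4 - I·v, and plasticity acts on the bifurcation parameter a: once sustained activity pushes a past zero, the single well at v = 0 splits into a double well whose nonzero minimum stores the memory after the input is cut.

```python
import numpy as np

def simulate(a0=-0.5, lr=2.0, dt=1e-3, tau=0.02,
             t_stim=0.5, t_total=1.5, drive=2.0):
    """Toy single-unit bifurcation-based plasticity (illustrative).

    Dynamics: dv/dt = (a*v - v**3 + I)/tau, i.e. gradient descent on
    E(v) = -a*v**2/2 + v**4/4 - I*v.  Plasticity modulates the landscape
    parameter 'a' (not a synaptic weight): while the unit is active, a is
    slowly pulled toward v**2, bifurcating the well at v=0 into two wells.
    """
    n = int(t_total / dt)
    v, a = 0.0, a0
    trace = np.empty(n)
    for i in range(n):
        I = drive if i * dt < t_stim else 0.0   # input removed at t_stim
        v += dt * (a * v - v**3 + I) / tau      # Euler step on the ODE
        a += dt * lr * (v**2 - a)               # slow landscape plasticity
        trace[i] = v
    return trace, a
```

With plasticity on, v settles into a nonzero well and stays there after the drive is removed; with lr set to 0 the landscape never bifurcates and the activity decays back to rest, which is the qualitative difference between this rule and a passive trace.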

The attached plots show the “Stress Test” (10k neurons holding state under load) and the “Reset Test” (where the internal recurrent feedback becomes strong enough to resist a -50.0 inhibitory override).

Big thanks to the Brian2 team for the flexibility of the code generation engine; being able to inject custom logic via run_regularly was the key to making this architecture work.

Happy to discuss the optimization strategies used!