I’m super excited at the moment about Friedemann Zenke’s work on “surrogate gradient descent”. It’s a neat trick that lets you train spiking neural networks (SNNs) with gradient descent, getting around the problem that the gradient of a spike is zero almost everywhere. It can give some pretty amazing results. Check out their paper and tutorial.
Unfortunately, there’s no support for this in Brian at the moment, but we’re working on it!