Description of problem
I’ve been asked to build and set up a new system to run Brian2 simulations, with the request to get “as many cores as possible within the budget”.
But one CPU isn’t like another, as this topic shows: Benchmarking Brian2 on M1
I ran the benchmark (GitHub - oleksii-leonov/brian2-benchmark: brian2 package benchmarking) on my own system (Docker on Windows) with an i9-10900X (CPU temp avg 50 °C, max 58 °C; CPU usage avg 58%, max 59%, so there may be room for improvement), but the results were worse than the M1 results from that topic.
The M1/M2 chips seem to do better than Intel, and since M1/M2 are ARM-based, I assume AMD chips will do (a lot) better than Intel as well?
Can anyone confirm whether there is a performance boost on AMD compared to Intel?
Furthermore, is there any recommendation regarding memory? Does this package need fast memory, a lot of it, both, or neither?
From what I understand, the current use case is that Brian2 runs exclusively on all CPU cores of multiple systems, each with different parameters. The current systems have GPUs in them, but they aren’t being used for this use case.
Any information is appreciated!
@JBtje, welcome to the Brian community.
The benchmarks you tried were written by me, and I’m pretty surprised that you got only 58–59% CPU usage. On my Linux machine, these benchmarks use about 100% CPU both inside and outside Docker. So I guess there is some limit in your Docker settings; in theory, you should see a full load on all CPUs.
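One quick way to see whether Docker (or the host OS) is restricting the container to a subset of cores is to compare the machine’s logical CPU count with the CPUs this process is actually allowed to run on. A minimal sketch (the affinity call is Linux-only, hence the fallback):

```python
import os

# Compare CPUs visible to the OS with CPUs this process may actually use.
total = os.cpu_count()                      # logical CPUs visible to the OS
try:
    usable = len(os.sched_getaffinity(0))   # CPUs this process may run on (Linux only)
except AttributeError:                      # e.g. on Windows/macOS hosts
    usable = total
print(f"visible CPUs: {total}, usable by this process: {usable}")
```

If `usable` is smaller than `total` inside the container, the Docker CPU settings are the likely culprit.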
I haven’t had a chance to run these tests on AMD CPUs. I guess they may be a bit better, but it is hard to predict.
M1/M2 indeed show the best performance, but my feeling is that performance should also depend on the complexity of your model. Oleksii tested this on Mac laptops and a Mac mini. In all these cases, the memory is built directly into the system (on the CPU package or soldered onto the main board), which can provide much faster access.
Memory usage varies from model to model, so make sure you order plenty of memory (the more, the better) at the highest speed possible.
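As a rough guide for sizing: in spiking-network simulations, memory is usually dominated by the number of synapses. A back-of-envelope sketch, where every number (connection probability, bytes per synapse) is an illustrative assumption, not a Brian2-specific figure:

```python
# Back-of-envelope estimate (all numbers are illustrative assumptions):
# memory in a spiking-network simulation scales mainly with the synapse count.
n_neurons = 100_000
p_connect = 0.02          # assumed connection probability
bytes_per_synapse = 40    # assumed: weight, delay, indices; implementation-dependent
n_synapses = int(n_neurons * n_neurons * p_connect)
est_gb = n_synapses * bytes_per_synapse / 1e9
print(f"{n_synapses:,} synapses -> roughly {est_gb:.0f} GB")
```

For this hypothetical network, that works out to about 8 GB for the synapses alone, before state monitors and recorded data are added on top.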
For GPU acceleration, there is a dedicated project, Brian2CUDA, which I have never tried (but probably should 🙂), so I can’t help you here.
I am not quite sure that I understand this reasoning – AMD chips are not based on ARM, and they should give very similar performance to Intel CPUs. For simulations that make good use of multithreading, AMD might give a bit better performance for the same price, but I haven’t looked into such comparisons lately.

On the other hand, as @rth mentioned, there is Brian2CUDA to run simulations on the GPU, which can give you an order of magnitude faster speed for certain classes of networks (to a first approximation: big networks that aren’t too big to fit on the GPU). See the Brian2CUDA paper for benchmarks. So instead of getting “as many cores as possible”, you might instead use the money to buy a powerful GPU. But note that many of the benchmarks in the paper were run with an NVIDIA A100 GPU, which costs more than $10,000.
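For reference, switching an existing Brian2 script to the GPU via Brian2CUDA is mostly a device-selection change at the top of the script. A minimal sketch, assuming the `brian2cuda` package and a CUDA-capable NVIDIA GPU are available (shown as a configuration fragment, not tested here):

```python
# Sketch: run a Brian2 script on the GPU via Brian2CUDA.
# Assumes brian2cuda is installed and an NVIDIA GPU with CUDA is present.
import brian2cuda                 # registers the 'cuda_standalone' device
from brian2 import set_device

set_device("cuda_standalone")     # subsequent model code is built for the GPU
# ... define NeuronGroup, Synapses, monitors, and run(...) as usual ...
```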
But if the main use case is parameter explorations that run several simulations in parallel, then indeed the GPU wouldn’t help (although it might at some point, when GitHub - denisalevi/brian2-network-multiplier: Run multiple versions of a `brian2.network` in a single simulation. becomes mature).
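For that kind of parameter exploration, the usual pattern is embarrassingly parallel: one independent process per parameter set, saturating all cores without any GPU. A minimal sketch, where `run_simulation` is a hypothetical stand-in for one full Brian2 run:

```python
from multiprocessing import Pool

def run_simulation(rate_hz):
    """Hypothetical stand-in for one complete Brian2 run with a given parameter."""
    return rate_hz * 2  # placeholder "result" instead of a real simulation

if __name__ == "__main__":
    params = [10, 20, 30, 40]            # parameter values to explore
    with Pool() as pool:                 # one worker process per core by default
        results = pool.map(run_simulation, params)
    print(results)
```

Because each worker is a separate process, every simulation gets its own core, which matches the “different parameters on all CPU cores” setup described above.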
Please also note that the compiler version used for compilation can make quite a difference as well. For Intel chips, you also have the option of using the Intel compiler, which can sometimes optimize things a bit further (but it is somewhat unpredictable whether it gives a performance boost or not).
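Related to compiler choice: Brian2 exposes preferences for passing extra flags to the C++ code it generates. A configuration sketch (whether these particular flags help is hardware- and model-dependent, so benchmark before committing to them):

```python
# Sketch: pass extra optimization flags to Brian2's generated C++ code.
# Gains from these flags vary by CPU and model; measure before relying on them.
from brian2 import prefs

prefs.codegen.cpp.extra_compile_args = ["-O3", "-march=native"]
```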
Well, as so often with performance questions, the answer to your question seems to be: it depends.
Thank you very much for the replies!
I’ll do some more testing on my current system to see if I can get to 100% CPU usage, and benchmark some other systems as well.
I am not quite sure that I understand this reasoning
Same, not sure where the idea came from that AMD is ARM-based… Must have been late, I guess.