Hi. I don’t think there are any clear guidelines for choosing these parameters. For the maximum weight, you can do a rough estimation of the weights you need. The details depend on your synapse model (delta synapses, current-based, conductance-based, …), but you should be able to calculate what mean weight you’d need to e.g. reproduce the input firing rate in the next layer (if that is too complicated to calculate, you can also run some test simulations where you switch off plasticity and fix all synaptic weights to the same value). You could then set the maximum weight to some multiple of this mean weight. In general, you should also have a look at multiplicative instead of additive STDP (if you are not using it already): it tends to converge better, and it avoids the bimodal weight distribution you get with additive STDP, where synapses end up at either 0 or the maximum value. Assuming that’s not the result you want, of course.
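To make the additive/multiplicative difference concrete, here’s a minimal sketch of the two weight-update rules in plain Python (not tied to any particular simulator; the amplitude values are made up for illustration):

```python
W_MAX = 1.0                      # maximum weight (e.g. from your rough estimate)
A_PLUS, A_MINUS = 0.01, 0.012    # hypothetical potentiation/depression amplitudes

def dw_additive(w, potentiation):
    # Additive STDP: the step size does not depend on the current weight,
    # so weights drift until they pile up at the hard bounds 0 or W_MAX
    # (the bimodal outcome mentioned above).
    return A_PLUS * W_MAX if potentiation else -A_MINUS * W_MAX

def dw_multiplicative(w, potentiation):
    # Multiplicative STDP: the step shrinks as w approaches its bound
    # ("soft bounds"), pushing weights towards intermediate, unimodal values.
    return A_PLUS * (W_MAX - w) if potentiation else -A_MINUS * w

# Near the upper bound, the multiplicative potentiation step almost vanishes:
print(dw_additive(0.99, True))        # constant-size step
print(dw_multiplicative(0.99, True))  # much smaller step
```

The soft-bound factor `(W_MAX - w)` (resp. `w`) is what prevents the all-or-nothing weight distribution.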
For the learning rates, I’d use an empirical approach: start with a rather low learning rate and check whether the weights converge. If they do, try e.g. doubling it until convergence breaks down (e.g. weight values start oscillating), and then lower it again. For efficiency reasons, I’d run all these tests on the connections between two layers only – it is a feedforward network, so if it does not work for the first layer, nothing that happens later can change it.
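That doubling procedure could be sketched like this – here with a toy stand-in for the actual simulation (in your case, `weights_converged` would run the first two layers of the network with the given learning rate and check whether the weights settle; all the numbers below are placeholders):

```python
import numpy as np

def weights_converged(eta, rng, steps=10000, n_syn=100, tol=0.02):
    """Toy stand-in for 'simulate the first two layers with learning rate eta
    and check convergence': weights relax towards a target value, but a
    too-large eta makes them overshoot and oscillate instead of settling."""
    w = rng.uniform(0.0, 1.0, n_syn)
    target = 0.5
    for _ in range(steps):
        noise = 0.01 * rng.standard_normal(n_syn)
        # clip only to keep the toy dynamics finite when eta is too large
        w = np.clip(w + eta * (target - w + noise), -10.0, 10.0)
    return np.abs(w - target).mean() < tol

rng = np.random.default_rng(0)
eta = 4e-3                           # start with a rather low learning rate
while weights_converged(2 * eta, rng):
    eta *= 2                         # double until convergence breaks down...
# ...and keep the last value that still converged
print("largest learning rate that still converged:", eta)
```

In practice you’d of course look at the recorded weight trajectories rather than a single scalar criterion, but the search strategy is the same.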
In case you don’t know it already, the following paper is a useful review of various issues around STDP:
Morrison, A., Diesmann, M. & Gerstner, W. Phenomenological models of synaptic plasticity based on spike timing. Biol Cybern 98, 459–478 (2008). https://doi.org/10.1007/s00422-008-0233-1
All that said, be aware that STDP is good at picking up repeated temporal patterns and similar temporal features, but I am not entirely sure what it will do over several layers. Without some kind of teaching/target signal, all of this is unsupervised learning – nothing like e.g. a multi-layer perceptron with back-propagation.