MMA algorithm tuning parameter
Hi!
I'm using the (wonderful) nlopt library and its MMA algorithm to solve topology optimization problems using the SIMP approach. Now that I am starting to master simple problems, I am trying to implement filtering and projection techniques. However, projection techniques use a smoothed Heaviside function, preferably with a high steepness parameter, a specificity which the MMA algorithm in nlopt does not seem to like: the optimization oscillates, converges very slowly, or even fails.
One solution, proposed by Guest et al. (2011) (https://link.springer.com/article/10.1007/s00158-011-0676-1), is to tighten the initial lower and upper asymptotes in the first two iterations by setting the s parameter (see also equations 9 and 11 of the original Svanberg paper of 1987 - https://doi.org/10.1002/nme.1620240207) according to the heuristic rule s = 0.5/(steepness + 1).
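For context, here is a minimal sketch of the projection and the heuristic (assuming the exponential smoothed Heaviside of Guest et al. 2004; other smoothed variants exist):

```python
import numpy as np

def heaviside_projection(rho_tilde, beta):
    """Smoothed Heaviside projection (exponential form of Guest et al. 2004,
    assumed here); approaches a sharp 0/1 step as the steepness beta grows."""
    return 1.0 - np.exp(-beta * rho_tilde) + rho_tilde * np.exp(-beta)

def initial_asymptote_s(beta):
    """Heuristic of Guest et al. (2011): tighten the initial asymptotes
    with s = 0.5 / (steepness + 1)."""
    return 0.5 / (beta + 1.0)

print(initial_asymptote_s(8.0))  # e.g. beta = 8 gives s = 0.0555...
```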
Therefore, my questions are the following:
- Does nlopt use this parameter s in the code? Unfortunately, I was unable to figure out where it is used in the file mma.c.
- Is it possible for an nlopt user to provide this s parameter through the Python interface?
- If not, can we hard-code it into the C code (src/algs/mma/mma.c) and then recompile?
Regards, Moise
Hi!
In a later version of the paper (actually a note on the 2007 updated paper), all the parameters of the optimizer are described (https://people.kth.se/~krille/mmagcmma.pdf, end of page 5). I think they only appear in the original MATLAB implementation, and I don't know if they are explicitly present in the nlopt implementation. As I'm working with nlopt on a stress-constrained topology optimization formulation, which would benefit a lot from this fine tuning, I would love to be able to modify these parameters (raa0, albefa, move, asyinit, asydecr, asyincr).
I will try to hardcode them directly in mma.c if I manage to find them.
Thanks Steven for the incredible work by the way. Best regards, Enrico
Note that we don't implement the original MMA algorithm, only the CCSA variant from 2001, so the parameters are a bit different.
We could certainly add new algorithm-specific parameters (via the `nlopt_set_param` API added in NLopt 2.7).
I do topology optimization in my group as well, and in cases where convergence is problematic (e.g. because the gradients are not accurate enough, or the second derivatives are becoming too large), it is often sufficient to set the `inner_maxeval` parameter (typically to 5 or 10).
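In the Python interface that looks something like this (a minimal sketch with a placeholder quadratic objective):

```python
import nlopt
import numpy as np

def objective(x, grad):          # placeholder f(x, grad) callback
    if grad.size > 0:
        grad[:] = 2.0 * x
    return float(np.dot(x, x))

n = 100                          # number of design variables (placeholder)
opt = nlopt.opt(nlopt.LD_MMA, n)
opt.set_min_objective(objective)
opt.set_param("inner_maxeval", 5)  # cap inner CCSA iterations per outer step
opt.set_ftol_rel(1e-6)
x = opt.optimize(np.full(n, 0.5))
```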
Hi Steven,
Thanks for your reply. I'm aware that unfortunately the implementations are different.
Actually, in my case (stress-constrained topology optimization) using MMA, I'm experiencing convergence problems caused by the steps that the optimizer takes near the activation of a constraint. This is a well-known problem in the literature, and a commonly proposed solution is to use tight move limits on the design variables in MMA (see for example this paper ). In https://people.kth.se/~krille/mmagcmma.pdf we discover that there are three different parameters in MMA that create a "barrier" limiting a legal design-variable step (see the sketch after this list):
- The lower or upper bound of the design variable, set at the beginning of the optimization
- A `move` parameter that bounds the variable as a function of its distance from the lower and upper bounds
- An `albefa` parameter that bounds the variable as a function of its distance to the asymptotes

There are additionally three other parameters that can help to create a tight design-variable step: `asyinit`, `asyincr` and `asydecr`. They are all linked to the movement of the asymptotes used to create the convex subproblem.
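For concreteness, this is how the first three barriers combine in Svanberg's reference MATLAB code, paraphrased in Python (a sketch from my reading of the note; the default values are illustrative, and nlopt's CCSA code is organized differently):

```python
import numpy as np

def step_bounds(xval, xmin, xmax, low, upp, albefa=0.1, move=0.5):
    """Per-iteration bounds alfa <= x <= beta on the design variables,
    paraphrasing Svanberg's MATLAB mmasub (defaults illustrative).
    Each variable must stay inside its box bounds, at least a fraction
    albefa of the way from its asymptote, and within a move-fraction
    of the total variable range from its current value."""
    alfa = np.maximum.reduce([xmin,
                              low + albefa * (xval - low),
                              xval - move * (xmax - xmin)])
    beta = np.minimum.reduce([xmax,
                              upp - albefa * (upp - xval),
                              xval + move * (xmax - xmin)])
    return alfa, beta
```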
To your knowledge, is there an easy way to bound the maximum move of a design variable over a step? I would be happy even with a simple numeric bound on the move, without having to implement the other parameters.
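To show what I mean, this is roughly the workaround I am considering on top of the existing Python API (a sketch only, not a built-in nlopt feature; the objective and settings are placeholders):

```python
import numpy as np
import nlopt

def mma_with_move_limit(objective, x0, lb, ub, move=0.05, outer_iters=50):
    """Crude external move limit: shrink the box bounds to a +/- move
    window around the current design, let MMA take a few steps inside
    it, then recentre. A workaround sketch, not an nlopt feature."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        opt = nlopt.opt(nlopt.LD_MMA, x.size)
        opt.set_min_objective(objective)
        opt.set_lower_bounds(np.maximum(lb, x - move))
        opt.set_upper_bounds(np.minimum(ub, x + move))
        opt.set_maxeval(5)  # a handful of evaluations per outer step
        x = opt.optimize(x)
    return x
```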
PS: I've tried to tweak MMA using `inner_maxeval`, but unfortunately with no luck.
Thank you very much for your help.
@stevengj Thanks for the tip with `inner_maxeval`. From the limited number of tests I carried out, it seems to work better.
Just a comment about getting algorithm parameters in Python. The method `opt.get_param("name", defaultval)` does not change the value of `defaultval`, but rather returns the value of the parameter. However, we still need to pass `defaultval`, which acts as a dummy variable. I think it would be more intuitive to change this method to just take the `"name"` parameter and return its value.
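For example (a minimal sketch):

```python
import nlopt

opt = nlopt.opt(nlopt.LD_MMA, 10)
opt.set_param("inner_maxeval", 5)

# defaultval is returned only when the parameter was never set;
# it is never modified in place.
print(opt.get_param("inner_maxeval", 0))  # -> 5
print(opt.get_param("never_set", 0))      # -> 0 (the dummy default)
```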