Parallel Optimization

Open jbehura opened this issue 7 years ago • 11 comments

Hi, I could not locate any reference to parallel computation of objective functions in the documentation. Are all the optimization codes serial? Thanks, Jyoti

jbehura avatar Feb 21 '18 18:02 jbehura

Funny, I had this discussion recently. The callback that interfaces the objective function operates on one point at a time for now. A new callback that operates on several points could perhaps take advantage of some algorithms, maybe the evolutionary ones, but I haven't looked into it. Can you think of one?

jschueller avatar Feb 21 '18 18:02 jschueller
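For reference, here is a minimal sketch of the current one-point-at-a-time interface through the Python bindings; the quadratic objective and the problem dimension are just illustrative stand-ins.

```python
import numpy as np
import nlopt

# NLopt calls this once per candidate point: one x in, one scalar out.
def objective(x, grad):
    if grad.size > 0:
        grad[:] = 2.0 * x          # gradient of the illustrative quadratic
    return float(np.dot(x, x))     # f(x) = ||x||^2, a stand-in objective

opt = nlopt.opt(nlopt.LD_LBFGS, 3)  # 3-dimensional problem
opt.set_min_objective(objective)
opt.set_xtol_rel(1e-8)
x_opt = opt.optimize(np.array([1.0, -2.0, 0.5]))
print(x_opt, opt.last_optimum_value())
```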

Most of the global optimization algorithms should be good candidates, and yes, evolutionary algorithms especially. I believe there are two ways of doing it (sketches of both options appear below):

  1. Parallelize the computation of the objective function for one point; that way it can be made algorithm-independent.
  2. Parallelize the computation of objective functions for multiple points; this will be algorithm-dependent.

jbehura avatar Feb 21 '18 18:02 jbehura
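A rough sketch of the first option, assuming the objective is an expensive sum over independent data chunks (the `residual_block` helper and the data are hypothetical). NLopt itself is unchanged and still sees one point at a time; only the inside of a single evaluation is parallelized on the user side.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np
import nlopt

data_chunks = [np.random.rand(1000, 3) for _ in range(8)]  # illustrative data

def residual_block(x, chunk):
    # hypothetical per-chunk cost; in practice this is the expensive part
    return float(np.sum((chunk @ x) ** 2))

pool = ThreadPoolExecutor(max_workers=8)

def objective(x, grad):
    # split one evaluation across workers, then combine the pieces
    parts = pool.map(lambda c: residual_block(x, c), data_chunks)
    return sum(parts)

opt = nlopt.opt(nlopt.LN_COBYLA, 3)   # derivative-free, so grad is unused
opt.set_min_objective(objective)
opt.set_xtol_rel(1e-6)
x_opt = opt.optimize(np.array([0.1, 0.1, 0.1]))
```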

  1. This would not be the responsibility of NLopt; it can already be done on the user side too.
  2. The way I see it, NLopt cannot be responsible for the parallelization itself, but it could provide input points in bulk; it would then be up to the user's function to parallelize the evaluation of the multiple points and return the results to NLopt.

jschueller avatar Feb 21 '18 18:02 jschueller
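NLopt has no such bulk interface today, but the division of labour described above could look roughly like this on the user side: the library would hand over a batch of points and a user-supplied function would evaluate them in parallel. Everything below is a hypothetical sketch; using processes rather than threads also means the library would not need to assume the objective is thread-safe.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def expensive_objective(x):
    # hypothetical per-point cost function
    return float(np.sum(np.asarray(x) ** 2))

def evaluate_batch(points):
    """User-side bulk evaluator: a list of points in, a list of values out."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(expensive_objective, points))

# What a batch-capable optimizer would do each iteration (hypothetical API):
#   candidates = optimizer.propose_points(k)
#   values = evaluate_batch(candidates)
#   optimizer.report_values(candidates, values)

if __name__ == "__main__":
    pts = [np.random.rand(3) for _ in range(20)]
    print(evaluate_batch(pts))
```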

  1. Agreed. In the first case, the user should parallelize each function evaluation.
  2. In the second case, however, wherever multiple independent function calls are possible, modifications to NLopt would significantly improve the efficiency of the algorithms.

jbehura avatar Feb 21 '18 18:02 jbehura

  1. I'm not saying that parallelization is not possible. I'm just saying NLopt cannot assume the objective function is thread-safe; that's why I think this should be delegated to a user callback on multiple points.

jschueller avatar Feb 21 '18 18:02 jschueller

Hi! I was wondering if there is any update on this topic. I can compute the objective function for my application in batches of 20 (using CUDA), but unfortunately I have not figured out how to leverage this parallelism for optimization. I'm currently using an "LN" method, but I would be fine using a "GN" method if it exploited the 20x speedup. The ideal interface would hand me 20 points to evaluate, then the next 20 points based on the results from the first 20, and so on. Any help would be greatly appreciated, or a pointer to another library to try!

scottfharvey-cm avatar Jul 23 '19 22:07 scottfharvey-cm
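The interaction pattern described here is essentially a batched ask/tell loop. NLopt does not expose anything like it, but a toy random-search driver (purely illustrative, with a stand-in for the CUDA batch evaluator) shows the shape: propose 20 points, score them together, and let the results steer the next 20.

```python
import numpy as np

BATCH, DIM = 20, 5
rng = np.random.default_rng(0)

def gpu_evaluate(points):
    # stand-in for a CUDA kernel that scores a whole batch at once
    return np.sum(points ** 2, axis=1)

# Toy batch optimizer (shrinking random search), only to show the loop shape;
# a real algorithm would choose the next batch far more intelligently.
best_x, best_f = None, np.inf
center, radius = np.zeros(DIM), 2.0
for _ in range(50):
    candidates = center + radius * rng.standard_normal((BATCH, DIM))  # "ask"
    values = gpu_evaluate(candidates)                                 # batched eval
    i = int(np.argmin(values))
    if values[i] < best_f:                                            # "tell"
        best_x, best_f = candidates[i], values[i]
        center = best_x
    radius *= 0.95
print(best_x, best_f)
```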

@scottfharvey-cm I don't think anyone is working on it. Otherwise, pagmo does parallel optimization.

jschueller avatar Jul 24 '19 06:07 jschueller

There is still no update, right? @jschueller

ismetdagli avatar Apr 08 '20 15:04 ismetdagli

nope

jschueller avatar Apr 08 '20 15:04 jschueller

try pagmo

jschueller avatar Apr 08 '20 15:04 jschueller

  1. This would not be the responsibility of NLopt; it can already be done on the user side too.
  2. The way I see it, NLopt cannot be responsible for the parallelization itself, but it could provide input points in bulk; it would then be up to the user's function to parallelize the evaluation of the multiple points and return the results to NLopt.

At least, NLopt should provide an API with multiple x inputs and multiple fval outputs. This case is algorithm-dependent and can also be parallelized by users. Without such an API, I cannot see how users could parallelize the computation of the objective function, since it is wrapped inside NLopt itself.

gitouni avatar Jun 08 '23 01:06 gitouni
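To make the request concrete, a vectorized variant of the objective callback, with multiple x inputs and multiple fval outputs, might look like the following. Neither `set_min_objective_batch` nor the batch shape is part of NLopt's actual API; this is purely an illustration of the requested interface.

```python
import numpy as np

def objective_batch(X, grad):
    """Hypothetical batch callback: X has shape (k, n), one row per point,
    and the return value has shape (k,), one objective value per point.
    The user is free to evaluate the rows in parallel (threads, processes,
    MPI, GPU, ...) before returning."""
    return np.sum(np.asarray(X) ** 2, axis=1)

# Hypothetical registration, mirroring the existing set_min_objective:
#   opt = nlopt.opt(nlopt.GN_CRS2_LM, n)
#   opt.set_min_objective_batch(objective_batch, batch_size=20)  # does not exist
```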