Nonconvex.jl

Toolbox for gradient-based and derivative-free non-convex constrained optimization with continuous and/or discrete variables.

40 Nonconvex.jl issues

It would be great to have a random coordinate descent version of any solver, such that it only optimises a subset of the parameters at any one time...
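A minimal sketch of the idea, independent of any particular solver: each outer iteration freezes all but a random subset of coordinates and hands the reduced problem to an inner solver. The `inner_solve` callback below is a hypothetical stand-in for whatever wrapped algorithm (e.g. a call to `Nonconvex.optimize` on a reduced `Model`) would actually be used.

```julia
using Random

# Sketch: random (block) coordinate descent around an arbitrary inner solver.
# `inner_solve(f_reduced, z0)` is a hypothetical callback that minimises the
# reduced objective starting from z0 and returns the optimised reduced vector.
function random_coordinate_descent(f, x0; k = 2, iters = 10, inner_solve, rng = Random.default_rng())
    x = copy(x0)
    for _ in 1:iters
        idx = randperm(rng, length(x))[1:k]      # coordinates optimised this round
        f_reduced = z -> begin                   # objective with the other coordinates frozen
            y = copy(x)
            y[idx] .= z
            f(y)
        end
        x[idx] .= inner_solve(f_reduced, x[idx]) # delegate the sub-problem to the inner solver
    end
    return x
end
```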

If we can do iterative separable approximation of nonlinear functions, e.g. like in MMA, we can use https://github.com/JuliaFirstOrder/SeparableOptimization.jl to solve the sub-problem in a sequential optimization algorithm.
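For illustration only: a diagonal-quadratic (hence separable) model of the objective around the current iterate, with placeholder curvature estimates `c` standing in for whatever conservative scheme (e.g. MMA-style asymptote updates) is actually used. Each term depends on a single coordinate, so the sub-problem decouples in the way a separable solver expects.

```julia
using ForwardDiff

# Sketch: f̃(x) = f(x0) + Σ_j [ g_j (x_j - x0_j) + c_j/2 (x_j - x0_j)^2 ],
# a separable approximation of f around x0 built from its gradient g.
function separable_quadratic_model(f, x0, c)
    g = ForwardDiff.gradient(f, x0)
    return x -> f(x0) + sum(g[j] * (x[j] - x0[j]) + c[j] / 2 * (x[j] - x0[j])^2
                            for j in eachindex(x0))
end
```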

`optimize` in Nonconvex will conflict with `optimize` in NLopt. This should be resolved so that both packages can be imported together without ambiguity.
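In the meantime, a user can work around the clash with qualified calls or a renaming import (Julia ≥ 1.6), roughly like this:

```julia
import Nonconvex
import NLopt

# Qualify the call sites so the two `optimize` functions never clash, e.g.
#   Nonconvex.optimize(model, alg, x0)
#   NLopt.optimize(opt, x0)

# Or rename one of them at import time:
using Nonconvex: optimize as nc_optimize
```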

It would be better to warn early, in `optimize` or somewhere else, if no bounds have been specified for the `Model`; otherwise the user only finds out via an error.

FrankWolfe is a nice package that can handle structured constraints and unstructured objectives. We can start by supporting it when the constraints are all linear.
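For a sense of the target, here is a rough sketch of calling FrankWolfe.jl directly on a problem whose feasible set is the probability simplex (one of the linear minimisation oracles the package ships); mapping general linear constraints onto a suitable LMO would be the actual integration work.

```julia
using FrankWolfe

# Sketch: minimise a smooth objective over the probability simplex.
f(x) = sum(abs2, x .- 0.25)
grad!(storage, x) = (storage .= 2 .* (x .- 0.25))    # in-place gradient, as FrankWolfe expects

lmo = FrankWolfe.ProbabilitySimplexOracle(1.0)       # feasible set: {x ≥ 0, Σx = 1}
x0 = collect(FrankWolfe.compute_extreme_point(lmo, zeros(4)))  # any vertex as a starting point
result = FrankWolfe.frank_wolfe(f, grad!, lmo, x0; max_iteration = 1000)
```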

It would be nice to re-think the augmented Lagrangian algorithm by only relaxing constraints that have a `:relax` flag on them. Then users can choose the sub-algorithm that matches the...

Labels: documentation, enhancement

We should define a probability function over a measurement from Measurements.jl that calculates a differentiable probability assuming normality. Then uncertainty can be propagated from the data to the objective or constraints...
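A minimal sketch of the kind of function meant here (the name `prob_leq` is illustrative), interpreting a `Measurement`'s value and uncertainty as the mean and standard deviation of a normal distribution:

```julia
using Measurements, Distributions

# Sketch: probability that a normally distributed quantity m = μ ± σ falls
# below a threshold. The normal CDF is smooth in μ and σ, so this stays
# differentiable and can turn an uncertain constraint g(x) ≤ 0 into a
# chance constraint P(g(x) ≤ 0) ≥ p.
prob_leq(m::Measurement, threshold = 0.0) =
    cdf(Normal(Measurements.value(m), Measurements.uncertainty(m)), threshold)

prob_leq(1.0 ± 2.0)   # ≈ 0.309
```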

If we support constraints that return an interval from IntervalArithmetic.jl, robust optimization using interval uncertainty sets will be trivial to support.
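Sketch of the reduction: a constraint that returns an `Interval` is robustly feasible exactly when its upper bound is non-positive, so wrapping it with `sup` turns robust feasibility into an ordinary scalar inequality (names here are illustrative):

```julia
using IntervalArithmetic

# Sketch: if g(x) returns an Interval because some of its inputs are
# interval-valued uncertainties, then g is feasible for every realisation
# in the uncertainty set iff sup(g(x)) ≤ 0.
robustify(g) = x -> sup(g(x))

a = interval(0.9, 1.1)              # uncertain coefficient a ∈ [0.9, 1.1]
g(x) = a * x[1] + x[2] - 1          # interval-valued constraint
g_robust = robustify(g)             # worst-case scalar constraint: g_robust(x) ≤ 0
```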

According to Section 4.2.1 in https://www.csd.uwo.ca/~dlizotte/publications/lizotte_phd_thesis.pdf, we can define a joint GP over the function value and its gradient. This seems to improve convergence significantly by providing more accurate gradients.
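As a reminder of the standard construction (not necessarily exactly the one in the thesis), the gradient of a GP is itself a GP whose covariances are derivatives of the kernel; a 1-D sketch with a squared-exponential kernel:

```julia
# Joint covariance of (f(x), f'(x)) under a GP with squared-exponential kernel
# k(x, x') = σ² exp(-(x - x')² / (2ℓ²)). Covariances involving the gradient
# are derivatives of k, so values and gradients share one GP.
sqexp(x, xp, σ, ℓ)     = σ^2 * exp(-(x - xp)^2 / (2ℓ^2))
cov_f_df(x, xp, σ, ℓ)  = sqexp(x, xp, σ, ℓ) * (x - xp) / ℓ^2                # ∂k/∂x'
cov_df_f(x, xp, σ, ℓ)  = -cov_f_df(x, xp, σ, ℓ)                             # ∂k/∂x
cov_df_df(x, xp, σ, ℓ) = sqexp(x, xp, σ, ℓ) * (1 / ℓ^2 - (x - xp)^2 / ℓ^4)  # ∂²k/∂x∂x'

# 2×2 covariance block between (f(x), f'(x)) and (f(x'), f'(x')):
joint_block(x, xp, σ, ℓ) = [sqexp(x, xp, σ, ℓ)     cov_f_df(x, xp, σ, ℓ);
                            cov_df_f(x, xp, σ, ℓ)  cov_df_df(x, xp, σ, ℓ)]
```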