=head1 NAME

libdogleg - A general purpose optimizer to solve data fitting problems
=head1 NOTICE

If you're considering using this library for new projects, please look at the
available alternatives first.
=head1 DESCRIPTION

This is a library for solving large-scale nonlinear optimization problems. By
employing sparse linear algebra, it is tailored for problems that have weak
coupling between the optimization variables. For appropriately sparse problems
this results in I<massive> performance gains.

For smaller problems with dense Jacobians a dense mode is also available. This
utilizes the same optimization loop as the sparse code, but uses dense linear
algebra.
The main task of this library is to find the vector B<p> that minimizes

norm2( B<x> )

where B<x> = B<f>(B<p>) is a vector that has higher dimensionality than B<p>.
The user passes in a callback function (of type C<dogleg_callback_t>) that
takes in the vector B<p> and returns the vector B<x>. B<J> = dB<x>/dB<p> is a
matrix with a row for each element of B<x> and a column for each element of
B<p>. If B<J> is a sparse matrix, then this library can take advantage of
that, which results in substantial increases in computational efficiency if
most entries of B<J> are 0. B<J> is stored row-first in the callback routine.
libdogleg uses a column-first data representation so it references the
transpose of B<J> (called B<Jt>). B<J> stored row-first is identical to B<Jt>
stored column-first; this is purely a naming choice.
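To make this layout identity concrete, here is a small hypothetical
illustration (2 state variables, 3 measurements; not from the library itself):

  // J (3 measurements x 2 states), stored row-first:
  //   [ dx0/dp0  dx0/dp1 ]
  //   [ dx1/dp0  dx1/dp1 ]
  //   [ dx2/dp0  dx2/dp1 ]
  // memory order: dx0/dp0, dx0/dp1, dx1/dp0, dx1/dp1, dx2/dp0, dx2/dp1
  //
  // Jt (2 states x 3 measurements), stored column-first: column k holds the
  // derivatives of measurement k, so the memory order is
  //               dx0/dp0, dx0/dp1, dx1/dp0, dx1/dp1, dx2/dp0, dx2/dp1
  // which is the same buffer, byte for byte.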
This library implements Powell's dog-leg algorithm to solve the problem. Like
the more-widely-known Levenberg-Marquardt algorithm, Powell's dog-leg algorithm
solves a nonlinear optimization problem by interpolating between a Gauss-Newton
step and a gradient descent step. Improvements over LM are

=over

=item *

a more natural representation of the linearity of the operating point (trust
region size vs a vague lambda term)

=item *

significant efficiency gains, since a matrix inversion isn't needed to retry a
rejected step

=back

The algorithm is described in many places, originally in

M. Powell. A Hybrid Method for Nonlinear Equations. In P. Rabinowitz, editor,
Numerical Methods for Nonlinear Algebraic Equations, pages 87-144. Gordon and
Breach Science, London, 1970.

Various enhancements to Powell's original method are described in the
literature; at this time only the original algorithm is implemented here.

The sparse matrix algebra is handled by the CHOLMOD library, written by Tim
Davis. Parts of CHOLMOD
are licensed under the GPL and parts under the LGPL. Only the LGPL pieces are used here, allowing
libdogleg to be licensed under the LGPL as well. Due to this I lose some convenience (all simple
sparse matrix arithmetic in CHOLMOD is GPL-ed) and some performance (the fancier computational
methods, such as supernodal analysis, are GPL-ed). For my current applications the performance losses
are minor.

=head1 FUNCTIONS AND TYPES

=head2 Main API

=head3 dogleg_optimize2

This is the main call to the library for I<sparse> problems. It's declared as

  double dogleg_optimize2(double* p, unsigned int Nstate,
                          unsigned int Nmeas, unsigned int NJnnz,
                          dogleg_callback_t* f, void* cookie,
                          const dogleg_parameters2_t* parameters,
                          dogleg_solverContext_t** returnContext);

=over

=item *

B<p> is the initial estimate of the state vector (and holds the final result)

=item *

C<Nstate> specifies the number of optimization variables (elements of B<p>)

=item *

C<Nmeas> specifies the number of measurements (elements of B<x>)

=item *

C<NJnnz> specifies the number of non-zero elements of the jacobian matrix dB<x>/dB<p>. In a
dense matrix C<NJnnz = Nstate*Nmeas>. We are dealing with sparse jacobians, so
C<NJnnz> should be far less. If not, libdogleg is not an appropriate routine to
solve this problem.

=item *

C<f> specifies the callback function that evaluates the optimization problem
(see C<dogleg_callback_t> below)

=item *

C<cookie> is an arbitrary data pointer that is passed untouched to C<f>

=item *

C<parameters> specifies the optimization parameters to use for this solve (see
the Parameters section below)

=item *

If not C<NULL>, C<returnContext> can be used to retrieve the full
context structure from the solver. This can be useful since this structure
contains the latest operating point values. It also has an active
C<cholmod_common> structure, which can be reused if more CHOLMOD routines need
to be called externally. You usually want C<< returnContext->beforeStep >>. If
this context is requested, the user is required to free it with
C<dogleg_freeContext()> when done.

=back

C<dogleg_optimize2> returns norm2( B<x> ) at the minimum, or a negative value
if an error occurred.
=head3 dogleg_optimize

This is a flavor of C<dogleg_optimize2> that implicitly uses the global
parameters. It's declared as

  double dogleg_optimize(double* p, unsigned int Nstate,
                         unsigned int Nmeas, unsigned int NJnnz,
                         dogleg_callback_t* f, void* cookie,
                         dogleg_solverContext_t** returnContext);
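As a hedged sketch of a minimal invocation (the callback C<f_callback> and the
problem sizes are hypothetical, and the global parameters are left at their
defaults):

  // hypothetical sparse problem: 4 state variables, 100 measurements,
  // 2 non-zero jacobian entries per measurement
  double p[4] = {0.0, 0.0, 0.0, 0.0}; // initial estimate; overwritten with the result

  double norm2_x = dogleg_optimize(p,
                                   4,           // Nstate
                                   100,         // Nmeas
                                   100*2,       // NJnnz
                                   &f_callback, // a dogleg_callback_t; see below
                                   NULL,        // cookie: no user data in this sketch
                                   NULL);       // solver context not requested

C<dogleg_optimize2> is invoked identically, except that a filled-out
C<dogleg_parameters2_t> structure is passed in as well.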
=head3 dogleg_optimize_dense2

This is the main call to the library for I<dense> problems. It's declared as

  double dogleg_optimize_dense2(double* p, unsigned int Nstate,
                                unsigned int Nmeas,
                                dogleg_callback_dense_t* f, void* cookie,
                                const dogleg_parameters2_t* parameters,
                                dogleg_solverContext_t** returnContext);

The arguments are almost identical to those in the C<dogleg_optimize> call.

=over

=item *

B<p> is the initial estimate of the state vector (and holds the final result)

=item *

C<Nstate> specifies the number of optimization variables (elements of B<p>)

=item *

C<Nmeas> specifies the number of measurements (elements of B<x>)

=item *

C<f> specifies the callback function that evaluates the optimization problem
(see C<dogleg_callback_dense_t> below)

=item *

C<cookie> is an arbitrary data pointer that is passed untouched to C<f>

=item *

C<parameters> specifies the optimization parameters to use for this solve (see
the Parameters section below)

=item *

If not C<NULL>, C<returnContext> can be used to retrieve the full
context structure from the solver. This can be useful since this structure
contains the latest operating point values. You usually want
C<< returnContext->beforeStep >>. If this context is requested, the user is
required to free it with C<dogleg_freeContext()> when done.

=back

C<dogleg_optimize_dense2> returns norm2( B<x> ) at the minimum, or a negative
value if an error occurred.
=head3 dogleg_optimize_dense

This is a flavor of C<dogleg_optimize_dense2> that implicitly uses the global
parameters. It's declared as

  double dogleg_optimize_dense(double* p, unsigned int Nstate,
                               unsigned int Nmeas,
                               dogleg_callback_dense_t* f, void* cookie,
                               dogleg_solverContext_t** returnContext);

=head3 dogleg_freeContext

Used to deallocate memory used for an optimization cycle. Defined as:

  void dogleg_freeContext(dogleg_solverContext_t** ctx);

If a pointer to a context is not requested (by passing C<returnContext = NULL>
to C<dogleg_optimize>), libdogleg calls this routine automatically. If the user
I<did> request this context pointer, it is the user's responsibility to call
C<dogleg_freeContext()> once the context is no longer needed.
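A hedged sketch of this ownership rule (reusing the hypothetical names from the
earlier sketch):

  dogleg_solverContext_t* ctx;
  dogleg_optimize(p, 4, 100, 100*2,
                  &f_callback, NULL,
                  &ctx);              // context requested, so...

  // ... examine ctx here (for instance ctx->beforeStep) ...

  dogleg_freeContext(&ctx);           // ...we must free it ourselves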
=head3 dogleg_computeJtJfactorization

Computes the Cholesky decomposition of B<JtJ>. This function is only exposed if you
need to touch libdogleg internals via C<returnContext>. Sometimes after computing
an optimization you want to do stuff with the factorization of B<JtJ>, and this
call ensures that the factorization is available. Most people don't need this
function. If the comment wasn't clear, you don't need this function.

This is declared as

  void dogleg_computeJtJfactorization(dogleg_operatingPoint_t* point,
                                      dogleg_solverContext_t* ctx);

The arguments are

=over

=item *

C<point> is the operating point whose B<JtJ> is being factored

=item *

C<ctx> is the solver context, as retrieved via C<returnContext>

=back
=head3 dogleg_testGradient

libdogleg requires the user to compute the jacobian matrix B<J>. This is a
performance optimization, since B<J> could be computed by differences of B<x>.
This optimization creates a possibility of bugs, however: if the jacobian
returned by the callback has a mistake in it, B<J> = dB<x>/dB<p> would not be
true. To find these types of issues, the user can call

  void dogleg_testGradient(unsigned int var, const double* p0,
                           unsigned int Nstate, unsigned int Nmeas, unsigned int NJnnz,
                           dogleg_callback_t* f, void* cookie);

This function computes the jacobian with center differences and compares the result with the
jacobian computed by the callback function. It is recommended to do this for every variable while
developing the program that uses libdogleg.

=over

=item *

C<var> is the index of the variable being tested

=item *

C<p0> is the state vector B<p> where we're evaluating the jacobian

=item *

C<Nstate>, C<Nmeas>, C<NJnnz> are the number of state variables, measurements
and non-zero jacobian elements, as before

=item *

C<f> is the callback function, as before

=item *

C<cookie> is the user data pointer, as before

=back

This function returns nothing, but prints out the test results.
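For example, to check every variable at a hypothetical starting point C<p0>
(sizes as in the earlier sketch):

  for(unsigned int var = 0; var < 4; var++)
      dogleg_testGradient(var, p0,
                          4, 100, 100*2,
                          &f_callback, NULL);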
=head3 dogleg_testGradient_dense

Very similar to C<dogleg_testGradient>, but for dense jacobians:

  void dogleg_testGradient_dense(unsigned int var, const double* p0,
                                 unsigned int Nstate, unsigned int Nmeas,
                                 dogleg_callback_dense_t* f, void* cookie);

This function computes the jacobian with center differences and compares the result with the
jacobian computed by the callback function. It is recommended to do this for every variable while
developing the program that uses libdogleg.

=over

=item *

C<var> is the index of the variable being tested

=item *

C<p0> is the state vector B<p> where we're evaluating the jacobian

=item *

C<Nstate>, C<Nmeas> are the number of state variables and measurements, as before

=item *

C<f> is the callback function, as before

=item *

C<cookie> is the user data pointer, as before

=back

This function returns nothing, but prints out the test results.
=head3 dogleg_callback_t

The main user callback that specifies the sparse optimization problem has type

  typedef void (dogleg_callback_t)(const double*   p,
                                   double*         x,
                                   cholmod_sparse* Jt,
                                   void*           cookie);

=over

=item *

B<p> is the current state vector

=item *

B<x> is the resulting measurement vector B<x> = B<f>(B<p>)

=item *

B<Jt> is the transpose of dB<x>/dB<p> at B<p>. As mentioned previously, B<Jt> is stored
column-first by CHOLMOD, which can be interpreted as storing B<J> row-first by the user-defined
callback routine

=item *

The C<cookie> is the user-defined arbitrary data passed into C<dogleg_optimize>

=back
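To make the B<Jt> convention concrete, here is a hedged sketch of a sparse
callback for a toy problem: fitting a line C<y = p[0] + p[1]*t> to C<N> data
points, where B<x> holds the fit residuals and every row of B<J> has exactly 2
non-zero entries. The C<fit_data_t> type and all the names here are
hypothetical, not part of the library:

  #include <dogleg.h>

  typedef struct
  {
      int           N;
      const double* t;
      const double* y;
  } fit_data_t;

  static void f_callback(const double* p, double* x,
                         cholmod_sparse* Jt, void* cookie)
  {
      const fit_data_t* d = (const fit_data_t*)cookie;

      // CHOLMOD column-compressed storage of Jt. Since Jt stored column-first
      // is J stored row-first, these act as the row pointers, column indices
      // and values of J
      int*    Jrowptr = (int*)   Jt->p;
      int*    Jcolidx = (int*)   Jt->i;
      double* Jval    = (double*)Jt->x;

      int iJacobian = 0;
      for(int i = 0; i < d->N; i++)
      {
          // residual of measurement i
          x[i] = p[0] + p[1]*d->t[i] - d->y[i];

          // row i of J begins here; it has 2 non-zero entries
          Jrowptr[i] = iJacobian;

          Jcolidx[iJacobian] = 0;         // dx[i]/dp[0]
          Jval   [iJacobian] = 1.0;
          iJacobian++;

          Jcolidx[iJacobian] = 1;         // dx[i]/dp[1]
          Jval   [iJacobian] = d->t[i];
          iJacobian++;
      }
      Jrowptr[d->N] = iJacobian;          // total count; must equal NJnnz
  }

This callback would be passed to C<dogleg_optimize> with C<Nstate = 2>,
C<Nmeas = N> and C<NJnnz = 2*N>, with a C<fit_data_t*> as the cookie.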
=head3 dogleg_callback_dense_t

The main user callback that specifies the dense optimization problem has type

  typedef void (dogleg_callback_dense_t)(const double* p,
                                         double*       x,
                                         double*       J,
                                         void*         cookie);

=over

=item *

B<p> is the current state vector

=item *

B<x> is the resulting measurement vector B<x> = B<f>(B<p>)

=item *

B<J> is dB<x>/dB<p> at B<p>. B<J> is stored row-first, with all the derivatives for the
first measurement, then all the derivatives for the second measurement and so on.

=item *

The C<cookie> is the user-defined arbitrary data passed into C<dogleg_optimize_dense>

=back
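The same hypothetical line fit from the sparse sketch above looks like this
with a dense jacobian; each row of B<J> is simply filled in order:

  static void f_callback_dense(const double* p, double* x,
                               double* J, void* cookie)
  {
      const fit_data_t* d = (const fit_data_t*)cookie;

      for(int i = 0; i < d->N; i++)
      {
          x[i] = p[0] + p[1]*d->t[i] - d->y[i];

          // row i of J, stored contiguously: Nstate = 2 derivatives
          J[i*2 + 0] = 1.0;       // dx[i]/dp[0]
          J[i*2 + 1] = d->t[i];   // dx[i]/dp[1]
      }
  }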
=head3 dogleg_solverContext_t

This is the solver context that can be retrieved through the C<returnContext>
parameter of the C<dogleg_optimize> call. This structure contains I<all> the
internal state used by the solver. It is defined as:

  typedef struct
  {
    cholmod_common common;

    union
    {
      dogleg_callback_t*       f;
      dogleg_callback_dense_t* f_dense;
    };
    void* cookie;

    // between steps, beforeStep contains the operating point of the last step.
    // afterStep is ONLY used while making the step. Externally, use beforeStep
    // unless you really know what you're doing
    dogleg_operatingPoint_t* beforeStep;
    dogleg_operatingPoint_t* afterStep;

    // The result of the last JtJ factorization performed. Note that JtJ is not
    // necessarily factorized at every step, so this is NOT guaranteed to contain
    // the factorization of the most recent JtJ
    union
    {
      cholmod_factor* factorization;

      // This is a factorization of JtJ, stored as a packed symmetric matrix
      // returned by dpptrf('L', ...)
      double*         factorization_dense;
    };

    // Have I ever seen a singular JtJ? If so, I add this constant to the diagonal
    // from that point on. This is a simple and fast way to deal with
    // singularities. This constant starts at 0, and is increased every time a
    // singular JtJ is encountered. This is suboptimal but works for me for now.
    double lambda;

    // Are we using sparse math (cholmod)?
    int is_sparse;

    int Nstate, Nmeasurements;
  } dogleg_solverContext_t;

Some of the members are copies of the data passed into C<dogleg_optimize>; some
others are internal state. Of potential interest are

=over

=item *

C<common> is the CHOLMOD working structure used for all the internal CHOLMOD
calls; it can be reused for external CHOLMOD calls

=item *

C<beforeStep> contains the operating point of the optimum solution. The
user can analyze this data without the need to re-call the callback routine.

=back
=head3 dogleg_operatingPoint_t

An operating point of the solver. This is a part of C<dogleg_solverContext_t>.
Various variables describing the operating point, such as B<p>, B<x>,
norm2( B<x> ) and B<J>, are available here. It is defined as:

  // an operating point of the solver
  typedef struct
  {
    double* p;
    double* x;
    double  norm2_x;
    union
    {
      cholmod_sparse* Jt;
      double*         J_dense; // row-first: grad0, grad1, grad2, ...
    };
    double* Jt_x;

    // the cached update vectors. It's useful to cache these so that when a step
    // is rejected, we can reuse these when we retry
    double* updateCauchy;
    union
    {
      cholmod_dense* updateGN_cholmoddense;
      double*        updateGN_dense;
    };
    double updateCauchy_lensq, updateGN_lensq; // update vector lengths

    // whether the current update vectors are correct or not
    int updateCauchy_valid, updateGN_valid;

    int didStepToEdgeOfTrustRegion;
  } dogleg_operatingPoint_t;
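A hedged sketch of pulling the optimal operating point out of a requested
context (hypothetical names as before):

  dogleg_solverContext_t* ctx;
  dogleg_optimize(p, 4, 100, 100*2, &f_callback, NULL, &ctx);

  dogleg_operatingPoint_t* optimum = ctx->beforeStep;
  // optimum->p       : the solution state vector (the same data as p)
  // optimum->x       : the measurement vector at the solution
  // optimum->norm2_x : norm2( x ) at the solution

  dogleg_freeContext(&ctx);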
=head2 Parameters

The optimization is controlled by several parameters. These can be set globally
for I<all> subsequent optimizations with the C<dogleg_set...()> functions
described below, or passed to an individual optimization in the
C<dogleg_parameters2_t> structure accepted by C<dogleg_optimize2()> and
C<dogleg_optimize_dense2()>.

It is not required to set any of these, but it's highly recommended to set the
initial trust-region size and the termination thresholds to match the problem
being solved. Furthermore, it's highly recommended for the problem being solved
to be scaled so that every state variable affects the objective norm2( B<x> )
roughly equally, since a single trust-region size and a single set of
termination thresholds apply to all the variables.

=head3 dogleg_setMaxIterations

To set the maximum number of solver iterations, call

  void dogleg_setMaxIterations(int n);

=head3 dogleg_setDebug

To turn on diagnostic output, call

  void dogleg_setDebug(int debug);

with a non-zero value for C<debug>. By default, diagnostic output is disabled.
The C<debug> value is interpreted as a bit field:

  if(debug == 0                  ): no diagnostic output
  if(debug &  DOGLEG_DEBUG_VNLOG ): output vnlog diagnostics to stdout
  if(debug & ~DOGLEG_DEBUG_VNLOG ): output human-oriented diagnostics to stderr

C<DOGLEG_DEBUG_VNLOG> has a very high value, so if human diagnostics are
desired, the recommended call is:

  dogleg_setDebug(1);
=head3 dogleg_setInitialTrustregion

The optimization method keeps track of a trust region size. Here, the trust
region is a ball in R^Nstate. When the method takes a step B<p> -> B<p +
delta_p>, it makes sure that S<sqrt( norm2( B<delta_p> ) ) < trust region size>.

The initial value of the trust region size can be set with

  void dogleg_setInitialTrustregion(double t);

The dogleg algorithm is efficient when recomputing a rejected step for a
smaller trust region, so set the initial trust region size to a value larger
than a reasonable estimate; the method will quickly shrink the trust region to
the correct size.
=head3 dogleg_setThresholds

The routine exits when the maximum number of iterations is exceeded, or a
termination threshold is hit, whichever happens first. The termination
thresholds are all designed to trigger when very slow progress is being made.
If all went well, this slow progress is due to us finding the optimum. There
are 3 termination thresholds:

=over

=item *

The function being minimized is E = norm2( B<x> ). Its gradient is
dE/dB<p> = 2*B<Jt>*B<x>, which vanishes at an optimum, so

  if( for every i fabs(Jt_x[i]) < JT_X_THRESHOLD )
  { we are done; }

=item *

The method takes discrete steps B<p> -> B<p + delta_p>, so

  if( for every i fabs(delta_p[i]) < UPDATE_THRESHOLD )
  { we are done; }

=item *

The method dynamically controls the trust region, so

  if( trustregion < TRUSTREGION_THRESHOLD )
  { we are done; }

=back

To set these thresholds, call

  void dogleg_setThresholds(double Jt_x, double update, double trustregion);

To leave a particular threshold alone, specify a negative value.
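Putting the global parameter setters together, a hedged example of a typical
setup (the numbers are made up and problem-dependent):

  dogleg_setMaxIterations(100);
  dogleg_setInitialTrustregion(1e3); // larger than a reasonable estimate
  dogleg_setThresholds(1e-8,         // JT_X_THRESHOLD
                       1e-8,         // UPDATE_THRESHOLD
                       -1.0);        // leave TRUSTREGION_THRESHOLD alone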
=head3 dogleg_setTrustregionUpdateParameters

This function sets the parameters that control when and how the trust region is
updated. The default values should work well in most cases, and shouldn't need
to be tweaked. The declaration looks like

  void dogleg_setTrustregionUpdateParameters(double downFactor, double downThreshold,
                                             double upFactor, double upThreshold);

To see what the parameters do, look at C<evaluateStep_adjustTrustRegion> in the
source. Again, these should just work as is.
=head1 BUGS

The current implementation doesn't handle a singular B<JtJ> gracefully
(B<JtJ> = B<Jt> * B<J>). Analytically, B<JtJ> is at worst positive
semi-definite (has 0 eigenvalues). If a singular B<JtJ> is ever encountered,
from that point on, B<JtJ> + lambda*B<I> is inverted instead for some positive
constant lambda. This makes the matrix strictly positive definite, but is
sloppy. At least I should vary lambda. In my current applications, a singular
B<JtJ> only occurs if at a particular operating point the vector B<x> does not
depend on some of the state variables. In the general case other causes could
exist, though.

There's an inefficiency in that the callback always returns B<x> I<and> B<J>,
even in cases where only B<x> ends up being used.

=head1 AUTHOR

Dima Kogan, C<< [email protected] >>

=head1 LICENSE AND COPYRIGHT

Copyright 2011 Oblong Industries

2017 Dima Kogan [email protected]

This program is free software: you can redistribute it and/or modify it under
the terms of the GNU
Lesser General Public License as published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

The full text of the license is available at L<http://www.gnu.org/licenses>