Support calculation of only active constraint derivatives

Open · A-CGray opened this issue 8 months ago · 2 comments

Description of feature

NLPQLP supports computing the derivatives of only the constraints it currently considers active, which it communicates through the active array passed to pyOptSparse's nlgrad function. It would be great to support this capability in pyOptSparse, as it has the potential to speed up problems with many constraints.

[Two image attachments]

To demonstrate, NLPQLP converges just fine with the following:

def nlgrad(m, me, mmax, n, f, g, df, dg, x, active, wa):
    # Evaluate the objective gradient and the full constraint Jacobian
    gobj, gcon, fail = self._masterFunc(x, ["gobj", "gcon"])
    df[0:n] = gobj.copy()
    # Only copy back the rows NLPQLP flagged as active (the sign flip
    # matches pyOptSparse's existing conversion to NLPQLP's convention)
    activeConIndices = np.where(active[0:m] != 0)[0]
    dg[activeConIndices, 0:n] = -gcon[activeConIndices]
    return df, dg

Clearly, though, this implementation doesn't actually save any time, since it still computes the entire constraint Jacobian.
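
For reference, a time-saving version might look roughly like the sketch below. The activeCon argument to _masterFunc is hypothetical; nothing like it exists in pyOptSparse today, and adding it is essentially what this issue is asking for:

def nlgrad(m, me, mmax, n, f, g, df, dg, x, active, wa):
    activeConIndices = np.where(active[0:m] != 0)[0]
    # Hypothetical: _masterFunc would need a new argument restricting
    # the Jacobian evaluation to the requested rows
    gobj, gcon, fail = self._masterFunc(x, ["gobj", "gcon"], activeCon=activeConIndices)
    df[0:n] = gobj.copy()
    dg[activeConIndices, 0:n] = -gcon[activeConIndices]
    return df, dg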

Potential solution

The implementation would be quite involved, requiring at least the following:

  • Mapping the active constraints as seen by NLPQLP back to the form defined by the user (e.g., undoing the conversion from two-sided to one-sided constraints); a minimal mapping sketch follows this list
  • Determining a way to pass the active constraint information to the user's sens function (e.g., sens(self, xdict, funcs, activeCon)), while still allowing users to keep the current sens signature (sens(self, xdict, funcs))
  • Determining a standard for how the user is expected to pass back the active constraint values (this could get complicated for large sparse constraints if we want to support computing derivatives for only the active entries of those constraints)
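
As referenced in the first bullet, here is a minimal mapping sketch. It assumes the wrapper keeps an ordered {name: size} record of how constraint groups were flattened into the optimizer's constraint vector (conGroupSizes is hypothetical), and it deliberately ignores the two-sided-to-one-sided conversion, which is exactly the part that makes a real implementation involved:

import numpy as np

def mapActiveToUser(active, conGroupSizes):
    # Convert the optimizer-side active flag array into a dict keyed
    # the same way the user keys the funcs dictionary
    activeCon = {}
    offset = 0
    for name, size in conGroupSizes.items():
        flags = np.asarray(active[offset : offset + size], dtype=bool)
        # Scalar constraints map to a single bool, vectors to a bool array
        activeCon[name] = bool(flags[0]) if size == 1 else flags
        offset += size
    return activeCon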

A-CGray · Apr 18 '25 01:04

Just a few thoughts to chime in with:

  • Passing the extra argument back can be handled via an option in the Python layer: by default, the sens callback keeps its current signature, and once this "active_only" mode is specified, sens is called with an additional argument giving the active constraint indices or similar (see the sketch after this list)
  • I think the actual implementation will get pretty messy, because from the user's perspective all constraints are string-indexed, and each key can represent a vector of values. So the user would have to know, e.g., that it's the third index of a given constraint group that is active. And often the entire constraint vector needs to be computed anyway, which makes this point a little moot, e.g., the CL constraint being active while CM is not, or something like that.
  • NLPQLP does not support sparse gradients AFAIK, so at least that part is not needed.
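
A minimal sketch of that option-gated dispatch, assuming a hypothetical "active_only" option read in the Python layer (all names here are illustrative):

def _callUserSens(sens, xdict, funcs, activeCon, activeOnly):
    # Only the opt-in mode changes the callback signature, so existing
    # two-argument sens functions keep working unmodified
    if activeOnly:
        return sens(xdict, funcs, activeCon)
    return sens(xdict, funcs)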

ewu63 · Apr 18 '25 04:04

My best idea for passing the active/inactive data to the user would be to mirror the form of the funcs dictionary, but with the numerical values replaced by booleans, something like:

activeCon = {
    "scalarCon": True,
    "vectorCon": np.array([True, False, False, True, ... ]),
    ...
}

The vector constraints definitely pose an issue. We'd have to decide whether to allow the user to return gradient info for only the active entries of a vector constraint, or force them to return values that at least have the shape of the full Jacobian for that constraint, even if we then ignore the inactive parts. Alternatively, we could support active/inactive flags only at the key level, treating vectorCon as inactive only if all of its entries are inactive. A user-side sens exploiting such a dict might look like the sketch below.
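
For example (the helper functions here are hypothetical stand-ins for the user's expensive derivative code, and the extra activeCon argument is the proposed extension):

import numpy as np

nvars = 3  # illustrative problem size

def sens(xdict, funcs, activeCon):
    funcsSens = {"obj": {"xvars": computeObjGrad(xdict)}}
    if activeCon["scalarCon"]:
        # Skip this evaluation entirely when inactive; whether pyOptSparse
        # would tolerate the missing key for an inactive constraint is one
        # of the open questions above
        funcsSens["scalarCon"] = {"xvars": computeScalarConGrad(xdict)}
    mask = np.asarray(activeCon["vectorCon"])
    jac = np.zeros((mask.size, nvars))
    # Evaluate only the active rows; inactive rows stay zero and would
    # be ignored by the optimizer anyway
    jac[mask] = computeVectorConRows(xdict, np.where(mask)[0])
    funcsSens["vectorCon"] = {"xvars": jac}
    return funcsSens, False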

Even if NLPQLP doesn't support sparse gradients, the user-facing interface should be optimiser-agnostic, no? So it will need to support gradients returned in sparse format by the user. A rough illustration of one option follows.
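
As a rough illustration (using a generic COO triplet format, not claiming pyOptSparse's exact sparse convention), one option is to zero the data of inactive rows rather than drop entries, so a declared sparsity pattern stays fixed:

import numpy as np

def maskCooActive(rows, cols, data, activeMask):
    # Zero the data for inactive constraint rows instead of dropping
    # entries, preserving the sparsity structure the wrapper expects
    out = data.copy()
    out[~activeMask[rows]] = 0.0
    return rows, cols, out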

A-CGray · Apr 18 '25 12:04