
L1-constrained regression using Frank-Wolfe

Open · mblondel opened this issue on Dec 03 '15 · 11 comments

I implemented the FW method for L1-constrained regression. The method is greedy in nature: at most one non-zero coefficient is added at each iteration. I added an option to stop when a model-size limit is reached.
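For reference, a minimal sketch of the idea, assuming an L1-ball radius beta; this is an illustration, not the code in this branch, and the function name and defaults are placeholders:

```python
import numpy as np

def frank_wolfe_ls(X, y, beta=1.0, max_iter=100, tol=1e-6):
    """Minimize 0.5 * ||Xw - y||^2 subject to ||w||_1 <= beta (sketch)."""
    w = np.zeros(X.shape[1])
    residual = -y                           # residual = Xw - y, with w = 0
    for _ in range(max_iter):
        grad = X.T.dot(residual)
        i = np.argmax(np.abs(grad))         # linear minimization oracle:
        s_i = -beta * np.sign(grad[i])      # the best vertex is +/- beta * e_i
        gap = w.dot(grad) - s_i * grad[i]   # FW duality gap <grad, w - s>
        if gap <= tol:
            break
        Xd = s_i * X[:, i] - (residual + y)   # X(s - w), since Xw = residual + y
        step = min(gap / Xd.dot(Xd), 1.0)     # exact line search for the quadratic
        w *= 1 - step                         # w <- (1 - step) * w + step * s
        w[i] += step * s_i
        residual += step * Xd
    return w
```

Each iteration activates at most one new coordinate, which is what makes capping the number of iterations cap the number of non-zero coefficients.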

Regularization paths of the constrained (FW) and penalized (coordinate descent) formulations on the diabetes dataset: [figure: fw]

I still need to add docstrings and tests.

CC @fabianp @zermelozf @vene

mblondel avatar Dec 03 '15 14:12 mblondel

Cool! I'll be offline until Monday; then I'll definitely take a look at it.

fabianp avatar Dec 03 '15 14:12 fabianp

Since it is specific to least squares, I would call it _frank_wolfe_ls (and also because I would love an FW solver for arbitrary loss functions :-).

Just FYI, yesterday I saw this paper (http://arxiv.org/pdf/1511.05932.pdf), in which they prove that a small modification to FW yields an algorithm with a linear convergence rate, although I don't know whether this always has a major effect in practice.

fabianp avatar Dec 11 '15 11:12 fabianp

> Since it is specific to least squares, I would call it _frank_wolfe_ls (and also because I would love an FW solver for arbitrary loss functions :-)

This shouldn't be too hard. The only difficulty I see is computing the step size. For an arbitrary loss it's not possible to compute the exact step size in closed form, but a few iterations of a one-dimensional Newton method should work.
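For what it's worth, a hedged sketch of such a line search: along the FW direction d, the step is a one-dimensional problem over [0, 1], and a few clipped Newton updates are cheap once Xw and Xd are precomputed. The helper name, signature, and starting point here are all illustrative:

```python
import numpy as np

def newton_step_size(loss_grad, loss_hess, Xw, Xd, y, n_iter=5):
    """Approximate argmin of phi(gamma) = sum(loss(Xw + gamma * Xd, y))
    over gamma in [0, 1]; loss_grad / loss_hess return elementwise first /
    second derivatives of the loss with respect to the predictions."""
    gamma = 0.5                               # arbitrary feasible start
    for _ in range(n_iter):
        pred = Xw + gamma * Xd
        d1 = loss_grad(pred, y).dot(Xd)       # phi'(gamma)
        d2 = loss_hess(pred, y).dot(Xd ** 2)  # phi''(gamma)
        if d2 <= 0.0:
            break
        gamma = np.clip(gamma - d1 / d2, 0.0, 1.0)
    return gamma
```

For the squared loss (loss_grad = lambda p, y: p - y, loss_hess = lambda p, y: np.ones_like(p)), a single update already recovers the exact step.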

mblondel avatar Dec 11 '15 11:12 mblondel

What about using scipy's line_search for arbitrary loss functions and the exact step size for the quadratic one?

fabianp avatar Dec 11 '15 11:12 fabianp

Not sure; does it work for constrained optimization?

mblondel avatar Dec 11 '15 13:12 mblondel

You are right, probably not.
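For the record, scipy's line_search implements a Wolfe-condition search for unconstrained descent and does not bound the step; but since the FW step lives in [0, 1] anyway, a bounded scalar minimizer could stand in for arbitrary losses. A hypothetical sketch, where f, w, and d stand for the objective, the current iterate, and the FW direction s - w:

```python
from scipy.optimize import minimize_scalar

# f, w, d are placeholders for the objective, the iterate, and the
# Frank-Wolfe direction; the step is constrained to [0, 1].
res = minimize_scalar(lambda gamma: f(w + gamma * d),
                      bounds=(0.0, 1.0), method='bounded')
gamma = res.x
```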

fabianp avatar Dec 14 '15 09:12 fabianp

I was just reading about hybrid conditional gradient smoothing, and it seems like it could lead to an efficient way to extend this with an additional group lasso penalty (see eq. 5.3 and alg. 5).

vene avatar Mar 24 '16 17:03 vene

@vene @fabianp For Frank-Wolfe, the regularization path matches that of the penalized version trained by coordinate descent, as expected. However, for FISTA with penalty=l1-ball, it doesn't. Either FISTA is not supposed to handle constrained problems, or there is a problem with my implementation. I am observing issues on constrained problems with this implementation too.

mblondel avatar Jan 17 '17 13:01 mblondel

In theory FISTA should work on constrained problems, using the projection as the proximal operator. Have you tried simple ISTA = projected gradient descent to debug (trivial to implement)?
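A sketch of that baseline, assuming the standard sorting-based L1-ball projection (the names and the fixed step size are illustrative; the step is 1 over the gradient's Lipschitz constant):

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto {w : ||w||_1 <= radius}."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]        # magnitudes sorted descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista_l1_ball(X, y, radius, max_iter=1000):
    """Projected gradient descent for 0.5 * ||Xw - y||^2 on the L1 ball."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(max_iter):
        grad = X.T.dot(X.dot(w) - y)
        w = project_l1_ball(w - step * grad, radius)
    return w
```

If this matches the FW path but FISTA doesn't, the culprit is the acceleration or the projection, not the formulation.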

fabianp avatar Jan 17 '17 19:01 fabianp

Bach et al. (p. 12 here) say that "proximal methods apply" and don't seem to suggest the need for any special treatment.

vene avatar Jan 18 '17 01:01 vene

There was an issue in project_simplex; see dfb8586d3069beaaef0bcc3d1769fa8316d675f2.
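For context, a standard sorting-based version of that projection (the L1-ball projection reduces to it via signs); a sketch, not necessarily the repo's implementation:

```python
import numpy as np

def project_simplex(v, z=1.0):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = z}."""
    u = np.sort(v)[::-1]               # sort descending
    css = np.cumsum(u) - z
    ind = np.arange(1, len(v) + 1)
    rho = ind[u - css / ind > 0][-1]   # last index with a positive residual
    theta = css[rho - 1] / rho         # threshold to subtract
    return np.maximum(v - theta, 0.0)
```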

With a small tweak to the plot code, FISTA now looks fine.

mblondel avatar Jan 18 '17 04:01 mblondel