optimization doesn't appear to run currently
I've forked pygsp and started implementing the Tikhonov optimization on my branch, but it appears that:

- The current implementation of `optimization.prox_tv` does not work because `pyunlocbox` does not yet support the non-tight frame problem.

Regardless, I think I've fixed `prox_tv` up to this error with `pyunlocbox`. Once I get `prox_tik` done/documented, I'll switch over to `pyunlocbox` and see what I can do about the non-tight frame case.
Hi stanlejys, thanks for contributing to the pygsp. I have implemented the Tikhonov case in this branch: https://github.com/epfl-lts2/pygsp/tree/semi-supervised-learning For the Tikhonov case, it is not worth using proximal splitting optimization; it is better to compute the solution directly. For the TV case, the best would be to use a primal-dual scheme with the gradient operator, and not `prox_tv` directly. I have a MATLAB implementation using the unlocbox in case you want to implement it for the pygsp.
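To make the "compute the solution directly" point concrete, here is a minimal NumPy sketch (not the pygsp API; the function name `graph_tikhonov` and the toy graph are my own for illustration). For a binary sampling mask `M` and graph Laplacian `L`, the Tikhonov minimizer of `||Mx - y||^2 + tau * x'Lx` satisfies the linear system `(M + tau*L) x = M y`, so a single solve replaces any iterative proximal-splitting scheme:

```python
import numpy as np

def graph_tikhonov(L, y, mask, tau=0.1):
    """Solve min_x ||M x - y||^2 + tau * x^T L x directly.

    The gradient condition M(Mx - y) + tau*L x = 0 with a binary
    diagonal mask (M^2 = M) gives the linear system
    (M + tau*L) x = M y, solved in one step.
    """
    M = np.diag(mask.astype(float))
    return np.linalg.solve(M + tau * L, mask * y)

# Toy example: Laplacian of a 3-node path graph, nodes 0 and 2 labeled.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
y = np.array([1.0, 0.0, 0.0])   # observed values (0 where unlabeled)
mask = np.array([1, 0, 1])      # 1 = labeled node
x = graph_tikhonov(L, y, mask, tau=0.1)
```

The unlabeled middle node gets interpolated between its labeled neighbors, which is the expected semi-supervised behavior.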
Ah, ok. I was just going by the matlab/gspbox implementation of `prox_tik`, which uses the prox when a forward operator is supplied. Of course, when the forward operator is the identity, the Chebyshev approximation is often faster.
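For the identity-operator case, the Tikhonov solution is the spectral filter `h(lambda) = 1/(1 + tau*lambda)` applied to the signal, which a Chebyshev polynomial expansion can approximate with only matrix-vector products. A generic sketch in plain NumPy (not pygsp's filtering API; `cheby_filter` and the toy graph are illustrative assumptions):

```python
import numpy as np

def cheby_filter(L, y, h, lmax, order=30):
    """Approximate h(L) @ y with a Chebyshev expansion of h on [0, lmax].

    Uses only matrix-vector products with L (no eigendecomposition,
    no linear solve), which is why it can beat a direct solve.
    """
    n_pts = order + 1
    # Chebyshev nodes on [-1, 1], mapped to the spectrum [0, lmax].
    theta = np.pi * (np.arange(n_pts) + 0.5) / n_pts
    lam = (np.cos(theta) + 1.0) * lmax / 2.0
    # Chebyshev coefficients of h (discrete cosine formula).
    c = np.array([2.0 / n_pts * np.sum(h(lam) * np.cos(k * theta))
                  for k in range(n_pts)])
    # Rescaled operator L' = 2L/lmax - I, whose spectrum lies in [-1, 1].
    Lp = 2.0 * L / lmax - np.eye(L.shape[0])
    # Three-term recurrence: T_0 y = y, T_1 y = L' y.
    t_prev, t_curr = y, Lp @ y
    result = 0.5 * c[0] * t_prev + c[1] * t_curr
    for k in range(2, n_pts):
        t_next = 2.0 * (Lp @ t_curr) - t_prev
        result = result + c[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return result

# Tikhonov low-pass filter on a 3-node path graph (lmax = 3 exactly).
tau = 0.1
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
y = np.array([1.0, 0.0, 0.0])
x = cheby_filter(L, y, lambda lam: 1.0 / (1.0 + tau * lam), lmax=3.0)
```

Since `h` is smooth on the spectrum, the expansion converges geometrically, so a modest order already matches the exact solve `(I + tau*L)^{-1} y` to high precision.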
Where is your matlab prox_tv implementation?
It is there: https://github.com/epfl-lts2/gspbox/blob/master/graph_ml/gsp_regression_tv.m The MATLAB unlocbox does not have the most intuitive design and it is mostly my fault. Sorry for that.
I see three different ways to implement the TV version:

- Using `prox_tv` is probably the easiest way to implement it. However, it results in an algorithm with two levels of iterations.
- Using the solver `mlfbf` of the pyunlocbox with the `norm_l1` of the pyunlocbox. This results in an algorithm with only one level of iteration. However, `mlfbf` is not the best solver for that problem.
- Implementing the Chambolle-Pock algorithm in the pyunlocbox and using it with the `norm_l1`. Probably the best way, but it also requires more work... (https://hal.archives-ouvertes.fr/hal-00490826/document)
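To show what the Chambolle-Pock option looks like, here is a minimal self-contained sketch (plain NumPy, not pyunlocbox; the function name, step sizes, and toy graph are my assumptions, following Chambolle & Pock 2011). It solves `min_x ||Mx - y||^2 + tau*||Dx||_1` where `D` is the graph gradient (edge incidence) operator, so the `norm_l1` term is composed with `D` instead of going through a nested `prox_tv`:

```python
import numpy as np

def graph_tv_chambolle_pock(D, y, mask, tau=0.05, n_iter=3000):
    """Primal-dual (Chambolle-Pock) sketch for graph-TV regression:

        min_x ||M x - y||^2 + tau * ||D x||_1

    with D the signed edge-incidence (gradient) operator and M a
    binary diagonal sampling mask. Single level of iterations.
    """
    # Step sizes must satisfy sigma * t * ||D||^2 < 1.
    norm_D = np.linalg.norm(D, 2)
    sigma = t = 0.9 / norm_D
    x = np.zeros(D.shape[1])
    x_bar = x.copy()
    z = np.zeros(D.shape[0])
    for _ in range(n_iter):
        # Dual step: prox of the conjugate of tau*||.||_1
        # is the projection onto the l-inf ball of radius tau.
        z = np.clip(z + sigma * (D @ x_bar), -tau, tau)
        # Primal step: closed-form prox of the masked quadratic term.
        v = x - t * (D.T @ z)
        x_new = np.where(mask > 0, (v + 2 * t * y) / (1 + 2 * t), v)
        # Over-relaxation.
        x_bar = 2 * x_new - x
        x = x_new
    return x

# Toy path graph: 4 nodes, 3 edges; nodes 0 and 3 labeled.
D = np.array([[-1.,  1.,  0.,  0.],
              [ 0., -1.,  1.,  0.],
              [ 0.,  0., -1.,  1.]])
y = np.array([1.0, 0.0, 0.0, 0.0])
mask = np.array([1, 0, 0, 1])
x = graph_tv_chambolle_pock(D, y, mask, tau=0.05)
```

The dual prox is a cheap clip and the primal prox is componentwise, so each iteration costs only two sparse matrix-vector products, which is the appeal over the two-level `prox_tv` route.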
Maybe you should rebase the semi-supervised-learning branch and start from there. @mdeff what do you think?
Good luck
Yes, please do base your work on the semi-supervised-learning branch. It should be merged in #19 once some cleanup is done (I don't have time for it right now, though).
Maybe @rodrigo-pena has some thoughts on this as well.