chop
Rank 1 tensor decomposition
We'd like to have a tensor LMO for tensor nuclear norm constraints. Therefore, an implementation of https://hal.archives-ouvertes.fr/hal-01385538/file/SeROAP_final.pdf (or some other rank-1 tensor decomposition algorithm) would be nice.
Our NuclearNormBall.lmo should do this by default for tensor inputs of shape (batch_size, *shape).
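For concreteness, here is a minimal sketch of what such a rank-1 routine could look like, using plain higher-order power iteration rather than SeROAP; the name `rank1_approx` and its interface are made up for illustration and are not part of chop:

```python
import torch

def rank1_approx(t, n_iter=20):
    """Rank-1 approximation of a tensor via higher-order power iteration.

    Hypothetical helper (not the chop API): returns a scalar sigma and one
    unit vector per mode such that sigma * outer(factors) approximates t.
    Any rank-1 decomposition routine (e.g. SeROAP) could be substituted.
    """
    factors = [torch.randn(d) for d in t.shape]
    factors = [v / v.norm() for v in factors]
    for _ in range(n_iter):
        for mode in range(t.dim()):
            # Contract t with every factor except the one for `mode`;
            # contracting from the last mode down keeps indices stable.
            v = t
            for other in range(t.dim() - 1, -1, -1):
                if other != mode:
                    v = torch.tensordot(v, factors[other], dims=([other], [0]))
            factors[mode] = v / v.norm()
    # Value of the multilinear form at the factors ("largest singular value").
    sigma = t
    for mode in range(t.dim() - 1, -1, -1):
        sigma = torch.tensordot(sigma, factors[mode], dims=([mode], [0]))
    return sigma, factors
```

A tensor LMO for the nuclear norm ball could then return `-radius` times the rank-1 unit tensor built from the returned factors, looping over the leading batch dimension for inputs of shape (batch_size, *shape).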
Random thought: can we get away with an (unbiased estimator of) the largest eigenvalue? We could get this, for example, from the power method + Russian roulette.
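For illustration, a rough sketch of power method + Russian roulette on a symmetric matrix `A` (the function name, the geometric stopping rule with per-step probability `p`, and the cap `k_max` are all assumptions, not anything in chop). It debiases the telescoping series of power iterates; whether that yields a genuinely unbiased eigenvector/eigenvalue estimate after the nonlinear normalization is exactly the open question here:

```python
import torch

def rr_power_method(A, p=0.5, k_max=50):
    """Russian-roulette power method (sketch).

    Writes the power-method limit as v_0 + sum_k (v_{k+1} - v_k), stops at a
    random depth, and reweights each surviving term by the probability of
    having reached it, so the truncated series estimate is unbiased.
    """
    n = A.shape[0]
    v = torch.randn(n)
    v = v / v.norm()
    estimate = v.clone()
    survive = 1.0  # probability of having reached the current step
    for _ in range(k_max):
        if torch.rand(()).item() < p:  # roulette: stop with probability p
            break
        survive *= 1.0 - p
        v_next = A @ v
        v_next = v_next / v_next.norm()
        estimate = estimate + (v_next - v) / survive
        v = v_next
    # Rayleigh quotient as the corresponding eigenvalue estimate.
    lam = estimate @ (A @ estimate) / (estimate @ estimate)
    return lam, estimate
```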
Good question -- how many iterations would you perform with RR? I think I'm currently doing 10 and it works well enough for the matrix LMO.
It might be more interesting for higher order tensors.
I don't know. Now that I think about it, it's not even clear that for the LMO an unbiased estimator is better than an inexact one (i.e., deterministic but with a bounded error, as would be the case in the truncated power method).
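For comparison, a minimal sketch of that deterministic alternative for the matrix case: a truncated power method that approximates the top singular pair of the gradient with a fixed number of iterations (10, as mentioned above) and returns the corresponding nuclear-norm-ball vertex. The name, signature, and sign convention are illustrative, not the chop API:

```python
import torch

def truncated_power_lmo(grad, radius, n_iter=10):
    """Deterministic, inexact matrix LMO for a nuclear-norm ball (sketch).

    Approximates the top singular pair of `grad` with a fixed number of
    power iterations (bounded error, no unbiasedness) and returns the
    corresponding extreme point -radius * u v^T.
    """
    _, n = grad.shape
    v = torch.randn(n)
    v = v / v.norm()
    for _ in range(n_iter):
        u = grad @ v
        u = u / u.norm()
        v = grad.t() @ u
        v = v / v.norm()
    u = grad @ v
    u = u / u.norm()
    return -radius * torch.outer(u, v)
```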