Question on writing database vectors in standard basis
Dear developer,
Thank you for making this very fast implementation open source! I have a question (not a bug report) about the large database of short vectors. Because of the dimensions-for-free trick, a workout (say, for the 1.05 SVP challenge) usually finishes in a dimension lower than that of the input matrix. I was wondering whether it is possible to write the vectors that are in the database at that point in the standard basis (in Python). This is easy when the last sieving dimension equals the dimension of the input matrix B, since then we can simply compute v*B. Otherwise, according to your article, we have to use B[l:r]. I tried computing B[l:r] via GSO, but I was unsuccessful. Is there a way to obtain this matrix, or some other way to get the vectors in the standard basis? Thanks!
You need to multiply on the left by the basis, something like g6k.M.B.multiply_left(v); see e.g. here: https://github.com/malb/bdd-predicate/blob/9ff858b65ca42dffd46cca9c549e1cdcc97e616f/usvp.py#L517-L522
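For reference, multiply_left computes the integer combination v*B of the basis rows, i.e. it maps a coefficient vector from the database back to the standard basis. A minimal pure-Python stand-in for what the fpylll call does (the values here are illustrative, not from a real sieve):

```python
def multiply_left(v, B):
    """Return v*B: the integer combination of the rows of B with coefficients v."""
    n = len(B[0])
    return tuple(sum(v[i] * B[i][j] for i in range(len(v))) for j in range(n))

# v: coefficient vector from the database; B: basis rows
v = (1, -2)
B = [[3, 0, 1],
     [1, 2, 0]]
print(multiply_left(v, B))  # -> (1, -4, 1)
```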
Thank you for the quick reply! I tried that as well, but after the workout function completes, the matrix g6k.M.B does not seem compatible with the vectors v in the database g6k.itervalues(): g6k.M.B.multiply_left(v) yields nonsensically long vectors. For example, pasting the code you linked just after line 460 fails, I believe for this reason. I think it does work in its current place because, just before it, a new sieving session sieves in the full dimension of the matrix: https://github.com/malb/bdd-predicate/blob/9ff858b65ca42dffd46cca9c549e1cdcc97e616f/usvp.py#L494-L500 But that extra sieving session does not use the dimensions-for-free trick of the workout function, so it is slower than a workout, and I would prefer to avoid it.
An easy workaround might be to call g6k.extend_left(g6k.l) before iterating over the vectors. This moves the sieve to the full context [0, r) and Babai-lifts the database of short vectors in the projection to somewhat short vectors in the full lattice.
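The Babai lift behind extend_left can be illustrated without g6k: given coefficients v of a vector on B[l:], choose the missing coefficients on B[0], ..., B[l-1] with Babai's nearest-plane algorithm so the lifted vector stays short. A self-contained sketch over exact rationals (a toy illustration of the idea, not the library's implementation):

```python
from fractions import Fraction

def gram_schmidt(B):
    """Gram-Schmidt orthogonalisation of the rows of B (exact, over Q)."""
    Bstar = []
    for b in B:
        b = [Fraction(x) for x in b]
        for bs in Bstar:
            mu = sum(x * y for x, y in zip(b, bs)) / sum(y * y for y in bs)
            b = [x - mu * y for x, y in zip(b, bs)]
        Bstar.append(b)
    return Bstar

def babai_lift(B, l, v):
    """Lift coefficients v on B[l:] to a vector in the full lattice:
    start from sum(v[i] * B[l+i]) and reduce it with Babai's nearest
    plane against B[l-1], ..., B[0] to keep the result short."""
    n = len(B[0])
    t = [Fraction(sum(v[i] * B[l + i][j] for i in range(len(v)))) for j in range(n)]
    Bstar = gram_schmidt(B)
    for i in range(l - 1, -1, -1):
        bs = Bstar[i]
        c = round(sum(x * y for x, y in zip(t, bs)) / sum(y * y for y in bs))
        t = [x - c * b for x, b in zip(t, B[i])]
    return [int(x) for x in t]

# the lift of (1,) on B[1:] subtracts B[0] once, giving the shorter (-3, 1)
print(babai_lift([[10, 0], [7, 1]], 1, (1,)))  # -> [-3, 1]
```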
Thanks, that worked perfectly!
A note in case someone else wants to do this as well: the resulting database is not sorted by the norms of the vectors. I could not find a Python function in this module that sorts the database directly via a fast C++ routine, but as a workaround I call g6k.shrink_db(len(g6k)). With the len(g6k) argument this does not actually shrink the database, but it does sort it very quickly, because shrink_db also calls a fast C++ function to sort the database. (Not sure whether this is the best way to do it, but it works for me.)
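Alternatively, once the vectors have been mapped to the standard basis on the Python side, they can simply be sorted there by squared norm (slower than the C++ sort for huge databases, but straightforward; the vectors below are illustrative):

```python
def sq_norm(w):
    """Squared Euclidean norm of an integer vector."""
    return sum(x * x for x in w)

# hypothetical standard-basis vectors recovered from the database
vectors = [(1, -4, 1), (0, 2, 0), (3, 0, 1)]
vectors.sort(key=sq_norm)  # shortest first
print(vectors)  # -> [(0, 2, 0), (3, 0, 1), (1, -4, 1)]
```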