shrink_db: killsize != savesize bug
Dear developer,
I ran into this problem while running a 148-dimensional pump, but it never happened in lower dimensions. How do you think I could fix it?
Thanks!
It seems the dbsize has passed the 2^32 boundary; it could be that an internal integer type is too small.
So if I change the float type from "dd" to "qd", will the problem be solved?
No, I was talking about an integer type, not floating point.
It is well known that an unsigned integer type wraps around to its lowest possible value after exceeding its highest possible value.
Likely we assumed a database size would always be less than 2^32 and used a 32-bit unsigned integer somewhere (like unsigned int) instead of a 64-bit unsigned integer (size_t).
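For illustration, here is a minimal standalone C++ sketch of that wraparound (not code from G6K itself):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // A 32-bit unsigned counter silently wraps to 0 past 2^32 - 1,
    // so a database size of 2^32 would be recorded as 0.
    uint32_t db_size32 = UINT32_MAX;         // 4294967295 == 2^32 - 1
    db_size32 += 1;                          // wraps around to 0
    std::printf("32-bit: %u\n", db_size32);  // prints 0

    // The same increment with a 64-bit type behaves as expected.
    uint64_t db_size64 = UINT32_MAX;
    db_size64 += 1;
    std::printf("64-bit: %llu\n", (unsigned long long)db_size64); // prints 4294967296
    return 0;
}
```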
Assuming that's the reason, can I solve it by simply changing the integer type of db_size and then recompiling the kernel?
That would be the general idea.
I did notice that shrink_db uses unsigned long, but on the other hand, grow_db does as well...
Could you tell me which compiler and version, which OS, and which CPU you are using?
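The OS and compiler matter here partly because the width of unsigned long is itself platform-dependent, so seeing it in shrink_db and grow_db does not settle the question on its own. A quick standalone check:

```cpp
#include <cstdio>

int main() {
    // unsigned long is 64-bit on LP64 platforms (Linux, macOS on x86-64)
    // but only 32-bit on LLP64 platforms (64-bit Windows); fixed-width
    // types such as uint64_t avoid this ambiguity for sizes and indices.
    std::printf("sizeof(unsigned long) = %zu bytes\n", sizeof(unsigned long));
    return 0;
}
```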
At the very least the following typedef of IT should be changed to uint64_t (and then recompiled):
https://github.com/fplll/g6k/blob/88fdac1775585ef2a0269ef29cebf158cd5719d0/kernel/siever.h#L120C8-L120C132
I don't know how well all the internal procedures really use this typedef, so there might still be some unexpected bugs.
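For concreteness, the change would be a one-liner along these lines; this is a sketch only, and the exact declaration at that line of siever.h may differ:

```cpp
#include <cstdint>

// Sketch of the change in kernel/siever.h (not the verbatim source):
// when IT is 32-bit, database indices wrap past 2^32 - 1 entries.
// typedef uint32_t IT;   // before
typedef uint64_t IT;      // after: indices up to 2^64 - 1

// Optional compile-time guard against reintroducing a narrow type:
static_assert(sizeof(IT) >= 8, "IT must be 64 bits for databases beyond 2^32 entries");
```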
Sure: Ubuntu 22.04.3, Intel(R) Xeon(R) Platinum 8462Y+.
Thanks, I'll try
I have another question: since db_size is limited to 2^32 - 1, which corresponds to a pump sieve of dimension about 146, how do you manage to solve the 180-dimensional SVP challenge by running a 150-dimensional pump sieve under this constraint?
By limiting the database size, for example with the parameter db_size_factor. Typically this slows the sieving down, and going too low can even make saturation impossible. Some of the sieve implementations are somewhat of a hybrid between a pair sieve and a triple sieve, and will more or less automatically start to behave as a (much slower) but more memory-efficient triple sieve when memory is restricted.
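As a rough sanity check of where the 2^32 boundary sits: assuming the default scaling db_size_factor * db_size_base^n, with db_size_base = sqrt(4/3) and db_size_factor = 3.2 (both values taken as assumptions from the G6K paper, not from this thread), the default database size passes 2^32 around dimension 146:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // G6K's default database size scales roughly as
    //   db_size_factor * db_size_base^n,
    // assuming db_size_base = sqrt(4/3) and db_size_factor = 3.2.
    const double base   = std::sqrt(4.0 / 3.0);
    const double factor = 3.2;
    for (int n = 142; n <= 150; ++n) {
        const double db = factor * std::pow(base, n);
        std::printf("n = %3d: db_size ~= 2^%.2f\n", n, std::log2(db));
    }
    // The output crosses 2^32 around n = 146-147, matching the
    // dimension quoted above; lowering db_size_factor shifts the
    // crossover to higher dimensions.
    return 0;
}
```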
Thank you for your response. I have two additional questions:
- I set the parameters for solving the 180-dimensional SVP challenge based on Section 7.2 of the GPU-G6K paper, but why did the actual run reach 1700 GB, exceeding 1.5 TB? Did you adjust any other parameters?
- The GPU-G6K paper mentions, "However, for unlucky cases, we allow G6K workouts of up to 4 dimensions larger without further increasing the database size." This matches the larger-than-planned pump dimensions when solving the 180-dimensional SVP challenge. Does this mean that db_size_factor was less than 2.77 when solving the 180-dimensional SVP challenge? Or does this statement indicate that other measures were taken?
For those runs we indeed implemented a hard limit on the database size so we would never go over the 1.5 TB. You should be able to do something similar with a few changes in the Python layer.
Adapting the GPU version to allow for more than 2^32 vectors might be more work, given that we might have hardcoded uint32 types in certain parts of the implementation.
May I ask whether the db_size_limit parameter is set in the siever.h file? I could not find the relevant parameter settings for db_size_limit.
The db_size_limit parameter is not implemented in the main G6K implementation; we did implement it in the GPU version. Overall it is simply a matter of adding the parameter and adding something like this to the (C)Python code: https://github.com/WvanWoerden/G6K-GPU-Tensor/blob/3db00e8c5740ec6ae2458b0165cf82f580046d40/g6k/siever.pyx#L1040
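Conceptually, the linked line just clamps the target database size. A C++ sketch of the idea (the actual change belongs in the Cython layer; this is not the real G6K code):

```cpp
#include <algorithm>
#include <cstdint>

// Conceptual sketch of db_size_limit (not the actual G6K code):
// before growing the database, cap the requested size at a hard limit.
uint64_t target_db_size(uint64_t requested, uint64_t db_size_limit)
{
    return std::min(requested, db_size_limit);
}
```

Choosing db_size_limit so that the limit times the per-entry memory footprint stays under the machine's budget (1.5 TB for the runs in the paper) keeps the run within RAM, at the cost of the slower, more memory-efficient sieving behaviour described above.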
Indeed, this parameter can keep the database from growing further; however, the earlier question remains. The GPU-G6K paper mentions, "However, for unlucky cases, we allow G6K workouts of up to 4 dimensions larger without further increasing the database size." I would like to ask how the number 4 was determined, and whether sieving in the larger dimensions becomes ineffective when the database is restricted?