
About the Compression

Open SecretGuardian opened this issue 5 years ago • 0 comments

Hi, thanks for sharing the great work. I ran the LM PTB demo and finished both the full embed and kdq embed training. According to the output, the final checkpoint files for the full embed model are:

total 110508
 5094892 Oct 27 23:21 model-46445.meta
      79 Oct 27 23:21 checkpoint
79100812 Oct 27 23:21 model-46445.data-00000-of-00001
     465 Oct 27 23:21 model-46445.index
18189654 Oct 27 19:14 graph.pbtxt

while the kdq embed results are:

total 112252
 5324218 Oct 27 23:42 model-46445.meta
      79 Oct 27 23:42 checkpoint
79767436 Oct 27 23:42 model-46445.data-00000-of-00001
     634 Oct 27 23:42 model-46445.index
18567704 Oct 27 19:13 graph.pbtxt

It seems the kdq model checkpoint is even larger. How should I estimate the real compression performance? Thanks. (I used K=128, D=50, type=smx, share subspace=False, additive quantization=False)
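For reference, the deployed size of a KD/DPQ-coded embedding is usually estimated analytically rather than read off the TensorFlow checkpoint, since the checkpoint also stores training-time variables. Below is a minimal sketch of that arithmetic; the vocabulary size and embedding dimension are illustrative assumptions, not values taken from the repo, and the helper names are hypothetical.

import math

def full_embedding_bytes(vocab_size, emb_dim, bytes_per_float=4):
    # Dense lookup table: one float per (token, dimension) entry.
    return vocab_size * emb_dim * bytes_per_float

def kd_embedding_bytes(vocab_size, emb_dim, K, D, bytes_per_float=4):
    # Discrete codes: each token stores D codes, each needing ceil(log2 K) bits.
    code_bits = vocab_size * D * math.ceil(math.log2(K))
    # Codebooks: K vectors per subspace, each of size emb_dim / D floats
    # (assuming subspaces are NOT shared, matching the setting above).
    codebook_bytes = K * D * (emb_dim // D) * bytes_per_float
    return code_bits / 8 + codebook_bytes

# Example with PTB-like numbers (vocab ~10k, 650-dim embeddings -- assumptions):
V, d, K, D = 10000, 650, 128, 50
full_b = full_embedding_bytes(V, d)
kd_b = kd_embedding_bytes(V, d, K, D)
print(f"full embedding  : {full_b / 1e6:.2f} MB")
print(f"KD codes + books: {kd_b / 1e6:.2f} MB")
print(f"compression ratio ~ {full_b / kd_b:.1f}x")

Under these assumed numbers, the table compresses by roughly 30x even though the training checkpoint on disk stays about the same size.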

SecretGuardian · Oct 29 '20