vqtorch
Request for new features: Diverse codebook size in RVQ
Thank you for your contribution in proposing this inspiring work.
I would like to request support for different codebook sizes across levels in residual quantization, as well as a weighted loss function.
This might conflict with the share attribute, but I think it would be a reasonable extension. Also, since the generated codes follow a coarse-to-fine structure, the first (primary) codes should receive a larger loss weight during training.
I hope you and your team can consider these two features; I think other quantization variants (e.g., product quantization) would also be compatible with them.
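For illustration, here is a minimal sketch in plain PyTorch of what I have in mind (this is not vqtorch's actual API; names such as `codebook_sizes` and `level_weights` are hypothetical):

```python
# Sketch only: residual quantization with a different codebook size per
# level and per-level loss weights (coarser levels weighted more heavily).
# Not based on vqtorch's implementation; parameter names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightedRVQ(nn.Module):
    def __init__(self, dim, codebook_sizes=(1024, 512, 256),
                 level_weights=(1.0, 0.5, 0.25), beta=0.25):
        super().__init__()
        # one codebook per residual level, each with its own size
        self.codebooks = nn.ModuleList(
            nn.Embedding(k, dim) for k in codebook_sizes
        )
        self.level_weights = level_weights
        self.beta = beta

    def forward(self, z):
        # z: (batch, dim)
        residual = z
        quantized = torch.zeros_like(z)
        loss = 0.0
        for codebook, w in zip(self.codebooks, self.level_weights):
            # nearest-neighbour lookup for the current residual
            dist = torch.cdist(residual, codebook.weight)   # (batch, K_l)
            idx = dist.argmin(dim=-1)
            q = codebook(idx)
            # coarse-to-fine emphasis: earlier (primary) levels get larger w
            loss = loss + w * (
                F.mse_loss(q, residual.detach())
                + self.beta * F.mse_loss(residual, q.detach())
            )
            quantized = quantized + q
            residual = residual - q.detach()
        # straight-through estimator on the summed quantized output
        quantized = z + (quantized - z).detach()
        return quantized, loss


# usage: z_q, vq_loss = WeightedRVQ(dim=64)(torch.randn(8, 64))
```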