vector-quantize-pytorch

Vector (and Scalar) Quantization, in Pytorch

61 issues, sorted by recently updated

Hi, I can see many interesting techniques designed to prevent codebook collapse. I'm curious how reproducing the results went in practice; could everyone share their insights on which technique works best?

Hi, I tried SimVQ recently, but it failed. I used the default configuration, where the W matrix is a single Linear layer, but I found the activation of the codebook is...
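
A minimal sketch of the setup being described, assuming the package's SimVQ class with its default codebook transform (the sizes below are illustrative, not the reporter's actual configuration):

```python
import torch
from vector_quantize_pytorch import SimVQ

# illustrative sizes; per the issue, the default codebook transform W
# is a single Linear layer
sim_vq = SimVQ(
    dim = 512,
    codebook_size = 1024
)

x = torch.randn(1, 1024, 512)

# returns the quantized output, codebook indices, and commitment loss
quantized, indices, commit_loss = sim_vq(x)

assert quantized.shape == x.shape
```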

Hi, I'm trying to use BSQ to train my model, and the BSQ loss curve is as follows. Is this normal? I wonder why the loss is negative at the...
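
For context, a minimal lookup-free quantization sketch (LFQ, the family BSQ builds on), assuming the package's LFQ class and the kwargs shown; the returned entropy auxiliary loss contains a subtracted codebook-diversity term, which is one way such a loss can legitimately go negative:

```python
import torch
from vector_quantize_pytorch import LFQ

# illustrative settings; entropy_loss_weight and diversity_gamma control
# the entropy auxiliary loss, whose diversity term enters with a negative sign
quantizer = LFQ(
    codebook_size = 65536,       # must be a power of two for LFQ
    dim = 16,
    entropy_loss_weight = 0.1,
    diversity_gamma = 1.0
)

image_feats = torch.randn(1, 16, 32, 32)

quantized, indices, entropy_aux_loss = quantizer(image_feats)
```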

vector-quantize-pytorch's FSQ with symmetry enabled and noise dropout set to 0.5 seems to perform significantly better than the [reference implementation](https://github.com/Stability-AI/stable-codec/blob/main/stable_codec/fsq.py) in reconstruction loss under the same settings, so I set...
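
For reference, a minimal FSQ sketch using the package's FSQ class (the levels are illustrative; the symmetry and noise-dropout options discussed above are additional constructor settings whose exact names are omitted here rather than guessed):

```python
import torch
from vector_quantize_pytorch import FSQ

# quantization levels per latent dimension; their product is the
# implicit codebook size (8 * 5 * 5 * 5 = 1000 here)
levels = [8, 5, 5, 5]
quantizer = FSQ(levels)

x = torch.randn(1, 1024, 4)   # last dim matches len(levels)

# FSQ has no auxiliary losses; it returns quantized values and indices
xhat, indices = quantizer(x)

assert xhat.shape == x.shape
```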

As the title says, why can't we have projections in LFQ? https://github.com/lucidrains/vector-quantize-pytorch/blob/5ae9c792e77b45751ac636eb72b310f09648aac3/vector_quantize_pytorch/residual_lfq.py#L86 They might be useful for better learning, residual by residual. Thanks!

Hi! I'd like to build a bit more intuition about the latent quantizer in order to do a hyperparameter sweep. I'm trying to develop a codebook for a 1D signal,...
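
A minimal sketch for that use case, assuming the package's LatentQuantize class; the levels, dimension, and loss weights below are illustrative values of exactly the kind one would sweep:

```python
import torch
from vector_quantize_pytorch import LatentQuantize

# illustrative hyperparameters for a 1D signal
quantizer = LatentQuantize(
    levels = [5, 5, 8],              # quantization levels per latent dimension
    dim = 16,                        # feature dimension of the signal
    commitment_loss_weight = 0.1,
    quantization_loss_weight = 0.1,
)

x = torch.randn(1, 1024, 16)         # (batch, sequence length, dim)

quantized, indices, loss = quantizer(x)
assert quantized.shape == x.shape
```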

For the residual SimVQ or ResidualVQ, the commit_loss shape is (1, num_quantizers). What should I do to aggregate the loss? Should I just sum?
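
A sketch of one common way to fold the per-quantizer losses into a scalar, assuming a standard ResidualVQ setup with illustrative sizes (a plain sum, or equivalently a mean up to a constant factor, is the straightforward reduction):

```python
import torch
from vector_quantize_pytorch import ResidualVQ

residual_vq = ResidualVQ(
    dim = 256,
    num_quantizers = 8,
    codebook_size = 1024
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = residual_vq(x)

# commit_loss carries one value per quantizer; summing produces the
# scalar that gets added to the training objective
total_commit_loss = commit_loss.sum()
```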

Could you please tell me which paper/article this multi-head VQ implementation is based on?

Hi @lucidrains, amazing work on this package; it is really helpful! I am using QINCo (ResidualVQ + implicit_neural_codebook = True) for my work. I have trained my model...
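
A minimal sketch of the configuration named in the issue, with illustrative sizes; the implicit_neural_codebook kwarg is taken from the issue text itself:

```python
import torch
from vector_quantize_pytorch import ResidualVQ

# QINCo-style residual quantization: later stages' codebooks are
# predicted implicitly rather than stored as fixed embedding tables
residual_vq = ResidualVQ(
    dim = 256,
    num_quantizers = 8,
    codebook_size = 1024,
    implicit_neural_codebook = True   # per the issue: ResidualVQ + implicit_neural_codebook = True
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = residual_vq(x)
```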

Hi @lucidrains, I found the same error as #142. This is my code: `self.vq = VectorQuantize(dim=in_channels, codebook_size=codebook_size, codebook_dim=codebook_dim, threshold_ema_dead_code=threshold_ema_dead_code, kmeans_init=False)`