Mateen Ulhaq

Results: 162 comments by Mateen Ulhaq

I think that looks fine for the untrained model. For the trained model, check:

```python
print(net.entropy_bottleneck.eb_l.quantiles)
print(net.entropy_bottleneck.eb_h.quantiles)
```

The values should all be different from `[-10, 0, 10]`, and look...

Load the trained model using:

```python
from compressai.zoo.image import model_architectures as architectures

architectures["your-model"] = YourModel

def load_checkpoint(arch: str, no_update: bool, checkpoint_path: str) -> nn.Module:
    # update model if need be...
```

1. During training, both methods use noise-based quantization and ignore the means parameter.
2. During validation/inference, the first method is indeed different from the second.

Due to (2), it might...
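To make (1) and (2) concrete, here is a toy sketch of the two quantization behaviours (an illustration, not CompressAI's actual implementation):

```python
import torch

def quantize(y, mode, means=None):
    """Toy illustration of the two quantization behaviours.

    - "noise": additive uniform noise U(-0.5, 0.5); `means` is ignored,
      which is why both methods coincide during training.
    - "round": hard rounding, optionally centred on `means`, which is
      where the two methods diverge at validation/inference time.
    """
    if mode == "noise":
        return y + torch.empty_like(y).uniform_(-0.5, 0.5)
    if means is not None:
        return torch.round(y - means) + means
    return torch.round(y)
```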

The likelihood map $l = -\log_2 p_{\hat{y}}(\hat{y})$ of dimensions $M_y \times \frac{H}{2^4} \times \frac{W}{2^4}$ can be calculated exactly, so you can plot the bit cost of each latent element....
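As a sketch of the computation (the shapes, and the convention that the model's forward pass returns per-element likelihoods, are assumptions about a typical CompressAI model):

```python
import torch

# Stand-in for the likelihoods a forward pass would return,
# with shape (N, M_y, H/16, W/16).
likelihoods = torch.rand(1, 192, 16, 16).clamp_min(1e-9)

# Per-element bit cost: l = -log2 p_y(y).
bits = -torch.log2(likelihoods)

# Summing over channels gives a spatial bit-allocation map,
# which could then be shown with e.g. plt.imshow(bit_map[0]).
bit_map = bits.sum(dim=1)
print(bit_map.shape)  # torch.Size([1, 16, 16])
```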

#### bmshj2018-factorized NLLs (negative log likelihoods):

```python
import matplotlib.pyplot as plt
import torch.nn.functional as F

from compressai.zoo import bmshj2018_factorized
from PIL import Image
from torchvision import transforms

device = "cuda"...
```

The simplest and most effective rule is to add line breaks after each string literal, *regardless of the content* (e.g. `"\n"`), if the number of concatenated string literals is ≥...
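For example, a formatter following this rule would rewrite implicit string concatenation like so (a threshold of 3 literals is assumed here, since the actual value is truncated above):

```python
# Before: implicitly concatenated literals on one line.
msg = "Hello, " "world! " "This is a long message."

# After: one literal per line once the count reaches the threshold,
# regardless of whether any literal contains "\n".
msg = (
    "Hello, "
    "world! "
    "This is a long message."
)
```

Both forms produce the same string; only the layout differs.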

Proposed fix: ```diff diff --git a/src/betterproto/casing.py b/src/betterproto/casing.py index f7d0832..d09c708 100644 --- a/src/betterproto/casing.py +++ b/src/betterproto/casing.py @@ -8,11 +8,11 @@ SYMBOLS = "[^a-zA-Z0-9]*" # Optionally capitalized word. # language=PythonRegExp -WORD = "[A-Z]*[a-z]*[0-9]*"...

First train using MSE, then fine-tune with MS-SSIM. https://interdigitalinc.github.io/CompressAI/zoo.html

> MS-SSIM optimized networks were fine-tuned from pre-trained MSE networks (with a learning rate of 1e-5 for both optimizers).
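A minimal fine-tuning sketch (the model and the loss are placeholders so the snippet runs self-contained; in practice you would load the pretrained MSE-optimized model and an MS-SSIM rate-distortion loss, e.g. via pytorch-msssim):

```python
import torch
import torch.nn as nn
import torch.optim as optim

net = nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the pretrained model
ms_ssim_loss = nn.MSELoss()          # stand-in for an MS-SSIM R-D loss

# Fine-tuning: same training loop, but restart the optimizer at lr=1e-5.
# (In CompressAI there is also an aux optimizer for the entropy
# bottleneck's quantile parameters; it is likewise restarted at 1e-5.)
optimizer = optim.Adam(net.parameters(), lr=1e-5)

x = torch.rand(2, 3, 8, 8)
optimizer.zero_grad()
loss = ms_ssim_loss(net(x), x)
loss.backward()
optimizer.step()
```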

Can you provide the commands used for both tests? Also, `16550.468/2760.671 = 5.995 ≈ 6`, so perhaps there is a missing multiplication factor in the measurement somewhere?
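Checking the arithmetic in plain Python, just to make the suspicion concrete:

```python
ratio = 16550.468 / 2760.671
print(f"{ratio:.3f}")  # 5.995 — suspiciously close to an integer factor of 6
```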

> the bitrate computed using the saved file size is much larger than the bitrate computed using the likelihoods from the entropy model (about 2.7 bpp in comparison to 1.1...