
Question about _compress_ar in JointAutoregressiveHierarchicalPriors

Open chunbaobao opened this issue 10 months ago • 0 comments

Sorry to bother you, but I have a possibly naive question. I noticed that the _compress_ar function in the JointAutoregressiveHierarchicalPriors model uses two nested for-loops during encoding to predict the distribution parameters of y_hat one spatial position at a time:

# Apply the causal mask to the context-model weights once, up front.
masked_weight = self.context_prediction.weight * self.context_prediction.mask
for h in range(height):
    for w in range(width):
        # Extract the kernel_size x kernel_size causal neighborhood
        # around position (h, w) from the (padded) y_hat.
        y_crop = y_hat[:, :, h : h + kernel_size, w : w + kernel_size]
        # One masked convolution per position -> a 1x1 context feature.
        ctx_p = F.conv2d(
            y_crop,
            masked_weight,
            bias=self.context_prediction.bias,
        )

I was wondering whether this could be implemented in parallel, similar to how the training forward pass obtains the scales and means for all positions simultaneously. (I understand that decoding must remain serial.)
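For what it's worth, here is a minimal, self-contained sketch (with made-up shapes and a hand-built causal mask, not CompressAI's actual code) showing that *if* the input tensor were fully known up front, the per-pixel masked convolutions in the loop would be numerically equivalent to a single masked `conv2d` over the whole (padded) tensor:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
B, C, H, W = 1, 4, 8, 8          # assumed toy sizes
kernel_size, padding = 5, 2

# Stand-in for the already-padded y_hat used inside _compress_ar.
y_hat = torch.randn(B, C, H + 2 * padding, W + 2 * padding)
weight = torch.randn(C, C, kernel_size, kernel_size)
bias = torch.randn(C)

# Causal (raster-order) mask: only positions strictly before the
# center pixel contribute, as in a masked PixelCNN-style convolution.
mask = torch.zeros_like(weight)
mask[:, :, : kernel_size // 2, :] = 1
mask[:, :, kernel_size // 2, : kernel_size // 2] = 1
masked_weight = weight * mask

# Serial version: one 1x1 masked convolution per spatial position.
serial = torch.zeros(B, C, H, W)
for h in range(H):
    for w in range(W):
        y_crop = y_hat[:, :, h : h + kernel_size, w : w + kernel_size]
        serial[:, :, h, w] = F.conv2d(y_crop, masked_weight, bias=bias)[:, :, 0, 0]

# Parallel version: a single convolution over the whole padded tensor.
parallel = F.conv2d(y_hat, masked_weight, bias=bias)

print(torch.allclose(serial, parallel, atol=1e-5))
```

The catch is the "fully known up front" assumption: in `_compress_ar` each position of y_hat is quantized using the means predicted from its already-quantized causal neighbors, so later crops depend on earlier outputs and the crops are not all available in advance. The equivalence above only holds when the input tensor is fixed.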

y = self.g_a(x)                                    # analysis transform
z = self.h_a(y)                                    # hyper-analysis
z_hat, z_likelihoods = self.entropy_bottleneck(z)  # hyperprior coding
gaussian_params = self.h_s(z_hat)                  # hyper-synthesis
scales_hat, means_hat = gaussian_params.chunk(2, 1)

Would implementing this in parallel cause any issues, such as an inconsistency between training and inference?

chunbaobao avatar Feb 16 '25 13:02 chunbaobao