Different outputs for compress function on different devices (Local vs Jetson Nano)
```python
y_strings = context_model.entropy_bottleneck.compress(q_latent)
```
Thank you for the great work on this project. I’ve encountered an issue where running the compress function on my local machine produces different results (y_strings) compared to running the same code on a Jetson Nano, using the same input. Could the differences in output be due to hardware-specific optimizations (e.g., mixed precision on the Jetson Nano) or the framework handling operations differently on different architectures? Do you have any recommendations on how I can ensure consistent outputs between the two devices?
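To narrow down where the mismatch comes from, it may help to compare the quantized symbols on both devices before they ever reach the entropy coder. The sketch below is only a diagnostic idea, not a CompressAI API: it reuses `q_latent` from the snippet above, and plain rounding is an approximation of the bottleneck's internal quantization.

```python
import torch

# Hypothetical diagnostic, not part of CompressAI. `q_latent` is the tensor
# from the snippet above; run this on both machines with the same input.
device_tag = "local"  # change to e.g. "nano" on the Jetson

with torch.no_grad():
    # Plain rounding is used here only as an approximation of the symbols the
    # entropy bottleneck encodes (internally it rounds around learned medians).
    symbols = torch.round(q_latent).to(torch.int32).cpu()

torch.save(symbols, f"symbols_{device_tag}.pt")

# Offline, after copying both files onto one machine:
#   a = torch.load("symbols_local.pt"); b = torch.load("symbols_nano.pt")
#   print((a != b).sum().item(), "differing symbols")
# If the symbols already differ, the divergence comes from the floating-point
# transforms producing slightly different latents on each device, not from the
# entropy coder itself.
```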
Please see:
- https://github.com/InterDigitalInc/CompressAI/issues/235#issuecomment-1740940306
- https://github.com/InterDigitalInc/CompressAI/issues/279#issuecomment-2019388285
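Independently of the linked comments (whose details are not reproduced here), one practical sanity check is to make sure both devices entropy-code against byte-identical CDF tables, by exporting the full state dict (including the entropy-coder buffers) from one machine rather than calling `update(force=True)` separately on each. A minimal sketch, assuming a zoo model such as `bmshj2018_factorized` stands in for whatever `context_model` wraps:

```python
import torch
from compressai.zoo import bmshj2018_factorized  # stand-in for the actual model

# Reference machine: build the entropy-coder tables once and export everything.
ref = bmshj2018_factorized(quality=4, pretrained=True).eval()
ref.update(force=True)  # (re)build the quantized CDF / offset / length buffers
torch.save(ref.state_dict(), "model_full_state.pt")

# On each device: load that exact state dict instead of calling update() again,
# so both machines use the same entropy-coder tables.
model = bmshj2018_factorized(quality=4, pretrained=True).eval()
model.load_state_dict(torch.load("model_full_state.pt", map_location="cpu"))
```

Even then, `y_strings` can still differ if the analysis transform produces slightly different latents on the Jetson (e.g. reduced-precision or TF32 kernel paths), so comparing the rounded symbols as sketched above is the first thing to check.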