Model Device Allocation Issue Affecting Parallel Computation
Hello, and thank you for the work on the Consistency Decoder. I've run into an issue with the model from the repository: it is hard-coded to use torch.device("cuda:0"), which is problematic for parallel computation:
input = torch.to(features, torch.device("cuda:0"), 6)
(This appears to be the serialized TorchScript form of features.to(torch.device("cuda:0"), torch.float32); the literal 6 is the ScalarType enum for float32.) Because the device is baked in, the model can only ever run on cuda:0, which blocks multi-GPU use. Could you suggest a way to make the device selection dynamic, so the model can participate in parallel GPU processing?
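For what it's worth, one workaround that seems viable until the model itself is fixed is to mask the visible GPUs per worker process, so that the hard-coded "cuda:0" resolves to a different physical card in each process. Below is a minimal sketch; the path consistency_decoder.pt and the decode step are hypothetical placeholders, and it assumes the decoder loads via torch.jit.load:

import os
import torch
import torch.multiprocessing as mp

def worker(gpu_id: int, model_path: str) -> None:
    # Set before any CUDA call in this process: afterwards the only GPU
    # this process can see is `gpu_id`, and it appears as "cuda:0".
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    model = torch.jit.load(model_path, map_location="cuda:0")
    # ... run the decoder here; its internal torch.device("cuda:0") now
    # points at physical GPU `gpu_id` (decode call omitted / hypothetical) ...

if __name__ == "__main__":
    # "spawn" gives each worker a fresh interpreter, so CUDA has not been
    # initialized yet when CUDA_VISIBLE_DEVICES is set inside worker().
    mp.set_start_method("spawn")
    n_gpus = torch.cuda.device_count()
    procs = [mp.Process(target=worker, args=(i, "consistency_decoder.pt"))
             for i in range(n_gpus)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

This avoids touching the serialized graph at all, but a proper fix in the repository would presumably replace the hard-coded device with the input tensor's device (e.g. features.device) before scripting the model.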
Thank you for your assistance.