
Model Device Allocation Issue Affecting Parallel Computation

Open Vanint opened this issue 1 year ago • 0 comments

Hello, I appreciate the work on the Consistency Decoder. I've run into an issue with the model from the repository. It's hard-coded to use torch.device("cuda:0"), which is problematic for parallel computation:

```python
input = torch.to(features, torch.device("cuda:0"), 6)
```

This prevents the model from running on any GPU other than `cuda:0`, and therefore blocks multi-GPU parallel processing. Could you suggest a way to modify the model so that it selects the device dynamically, allowing parallel GPU processing?
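As a possible workaround, a minimal sketch of device-agnostic input handling: instead of hard-coding `torch.device("cuda:0")`, move the input to whatever device the model's own weights live on. The helper name `to_model_device` and the usage below are hypothetical; the actual Consistency Decoder class and call sites may differ.

```python
import torch


def to_model_device(features: torch.Tensor, model: torch.nn.Module) -> torch.Tensor:
    """Move `features` onto the device the model's parameters live on.

    This replaces a hard-coded torch.device("cuda:0") with a dynamic
    lookup, so the same code works whether the model was placed on
    cuda:0, cuda:1, or the CPU.
    """
    device = next(model.parameters()).device
    return features.to(device)


# Hypothetical usage sketch (the real decoder class name may differ):
# decoder = ConsistencyDecoder().to("cuda:1")
# features = to_model_device(features, decoder)  # lands on cuda:1
```

With this pattern, placing the decoder on a different GPU (or wrapping it for data parallelism) no longer requires editing the device string inside the model code.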

Thank you for your assistance.

Vanint · Nov 08 '23 05:11