DROID-SLAM
how to process backend on multi-gpus?
Can the frontend and backend be processed on two separate GPUs?
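The thread doesn't show how DROID-SLAM would be wired for this, but as a generic starting point, one pattern is to pick two distinct CUDA devices when they are available and fall back to a single device otherwise. This is a minimal sketch with a hypothetical helper name (`pick_devices`), not DROID-SLAM's actual API:

```python
import torch

def pick_devices():
    # Hypothetical helper: choose separate devices for the frontend and
    # backend when at least two GPUs are visible; otherwise fall back to
    # a single shared device (GPU if present, else CPU).
    n = torch.cuda.device_count()
    if n >= 2:
        return torch.device("cuda:0"), torch.device("cuda:1")
    dev = torch.device("cuda:0" if n == 1 else "cpu")
    return dev, dev
```

Each component's tensors and modules would then be moved to its assigned device with `.to(dev)`; note that the two components still need to exchange poses/depths, so cross-device copies are required at the boundary.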
I run the frontend on 5 GPUs, and it reports an error:

```
ii, jj = torch.as_tensor(es, device=self.device).unbind(dim=-1)
ValueError: not enough values to unpack (expected 2, got 0)
```

How can the backend be processed on multiple GPUs?
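The traceback above is what happens when the edge list `es` is empty: `torch.as_tensor([])` has shape `(0,)`, so `unbind(dim=-1)` returns an empty tuple and the two-target unpacking fails. A minimal sketch reproducing and guarding against this (the `unpack_edges` helper is hypothetical, mirroring the line in the traceback, not code from the repository):

```python
import torch

def unpack_edges(es, device="cpu"):
    # 'es' is a list of (i, j) frame-index pairs, as in the factor graph.
    # When 'es' is empty, torch.as_tensor(es) has shape (0,), so
    # unbind(dim=-1) yields an empty tuple and `ii, jj = ...` raises
    # "ValueError: not enough values to unpack (expected 2, got 0)".
    if len(es) == 0:
        empty = torch.empty(0, dtype=torch.long, device=device)
        return empty, empty
    ii, jj = torch.as_tensor(es, device=device).unbind(dim=-1)
    return ii, jj
```

This doesn't by itself distribute the backend, but it suggests the multi-GPU run is producing a worker with no edges in its factor graph, so guarding (or skipping) the empty case is a prerequisite for any split.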
Hi, I also encountered this problem, did you solve it?
no😅😅
Having the same issue, can anyone help? Meanwhile, the current implementation actually runs global BA only once, just before system termination, which is not real-time performance.
I have the same question.