pytorch-deform-conv-v2
deform_conv_3d
Hi, I want to use a 3D deform_conv in my study. I read your 'deform_conv_v2.py' and tried to extend the code to 3D deform_conv, but I have no idea how to modify the code below:

q_lt = torch.cat([torch.clamp(q_lt[..., :N], 0, x.size(2) - 1),
                  torch.clamp(q_lt[..., N:], 0, x.size(3) - 1)], dim=-1).long()
q_rb = torch.cat([torch.clamp(q_rb[..., :N], 0, x.size(2) - 1),
                  torch.clamp(q_rb[..., N:], 0, x.size(3) - 1)], dim=-1).long()
q_lb = torch.cat([q_lt[..., :N], q_rb[..., N:]], dim=-1)
q_rt = torch.cat([q_rb[..., :N], q_lt[..., N:]], dim=-1)
For 3D data we should use trilinear interpolation, which needs eight sample points, while for 2D data only four sample points are needed for bilinear interpolation. I have no idea how to get the other six sample locations. Could you help me?
Best wishes, Meixiang Huang
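For background, here is a minimal self-contained sketch of what trilinear interpolation looks like at a single fractional location; the eight weighted terms correspond to the eight corner samples the question asks about. The function and variable names are illustrative, not from the repo.

import torch

def trilinear_sample(x, d, h, w):
    # x: (D, H, W) volume; d, h, w: one fractional sample coordinate per axis
    d0, h0, w0 = int(d), int(h), int(w)              # floor corner (shallow/top/left)
    d1 = min(d0 + 1, x.size(0) - 1)                  # ceil corner (deep/bottom/right),
    h1 = min(h0 + 1, x.size(1) - 1)                  # clamped to the volume bounds
    w1 = min(w0 + 1, x.size(2) - 1)
    fd, fh, fw = d - d0, h - h0, w - w0              # per-axis fractional parts
    # Each of the eight corners gets a product of three 1D linear weights.
    return (x[d0, h0, w0] * (1 - fd) * (1 - fh) * (1 - fw) +
            x[d0, h0, w1] * (1 - fd) * (1 - fh) * fw +
            x[d0, h1, w0] * (1 - fd) * fh * (1 - fw) +
            x[d0, h1, w1] * (1 - fd) * fh * fw +
            x[d1, h0, w0] * fd * (1 - fh) * (1 - fw) +
            x[d1, h0, w1] * fd * (1 - fh) * fw +
            x[d1, h1, w0] * fd * fh * (1 - fw) +
            x[d1, h1, w1] * fd * fh * fw)

x = torch.arange(27.).reshape(3, 3, 3)
print(trilinear_sample(x, 0.5, 0.5, 0.5))            # tensor(6.5000), mean of the 8 surrounding voxels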
I don't understand it either. I want to make a 1D version, but I can't figure out what some of the code is doing. My QQ is 452128995; could we discuss it?
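For the 1D case, the corner logic collapses to a single pair of neighbors, since linear interpolation needs only two sample points. A minimal self-contained sketch (my own names, not the repo's):

import torch

x = torch.arange(5.)                                         # (L,) 1D signal
p = torch.tensor([0.25, 2.75, 4.6])                          # fractional sample positions
q_l = torch.clamp(torch.floor(p), 0, x.size(0) - 1).long()   # left (floor) neighbor
q_r = torch.clamp(q_l + 1, 0, x.size(0) - 1)                 # right neighbor, clamped at the border
g_l = 1.0 - (p - q_l.float())                                # linear weight toward the left neighbor
g_r = 1.0 - g_l                                              # complementary weight toward the right
x_offset = g_l * x[q_l] + g_r * x[q_r]
print(x_offset)   # tensor([0.2500, 2.7500, 4.0000]); the last value is border-clamped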
I'm not totally confident, but I think this should be the idea.
q_slt = torch.cat([
torch.clamp(q_slt[..., :N], 0, x.size(2) - 1),
torch.clamp(q_slt[..., N:2 * N], 0, x.size(3) - 1),
torch.clamp(q_slt[..., 2 * N:], 0, x.size(4) - 1)
], dim=-1).long()
q_drb = torch.cat([
torch.clamp(q_drb[..., :N], 0, x.size(2) - 1),
torch.clamp(q_drb[..., N:2 * N], 0, x.size(3) - 1),
torch.clamp(q_drb[..., 2 * N:], 0, x.size(4) - 1)
], dim=-1).long()
# Surface
q_slb = torch.cat([q_slt[..., :N], q_slt[..., N:2 * N], q_drb[..., 2 * N:]], dim=-1)
q_srt = torch.cat([q_slt[..., :N], q_drb[..., N:2 * N], q_slt[..., 2 * N:]], dim=-1)
q_srb = torch.cat([q_slt[..., :N], q_drb[..., N:2 * N], q_drb[..., 2 * N:]], dim=-1)
# Deep
q_dlt = torch.cat([q_drb[..., :N], q_slt[..., N:2 * N], q_slt[..., 2 * N:]], dim=-1)
q_dlb = torch.cat([q_drb[..., :N], q_slt[..., N:2 * N], q_drb[..., 2 * N:]], dim=-1)
q_drt = torch.cat([q_drb[..., :N], q_drb[..., N:2 * N], q_slt[..., 2 * N:]], dim=-1)
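If those corner indices are right, the matching weights would presumably follow the same pattern as the bilinear g_lt/g_rb/g_lb/g_rt weights in deform_conv_v2.py, with a third factor for the depth axis. A small self-contained check, untested against the repo: floor axes contribute (1 - f) and ceil axes contribute f, with the corner names matching the snippet above.

import torch

p = torch.tensor([1.3, 0.7, 2.2])        # one fractional sample point (depth, height, width)
fd, fh, fw = p - torch.floor(p)          # per-axis fractional parts

g_slt = (1 - fd) * (1 - fh) * (1 - fw)   # all-floor corner
g_slb = (1 - fd) * (1 - fh) * fw
g_srt = (1 - fd) * fh * (1 - fw)
g_srb = (1 - fd) * fh * fw
g_dlt = fd * (1 - fh) * (1 - fw)
g_dlb = fd * (1 - fh) * fw
g_drt = fd * fh * (1 - fw)
g_drb = fd * fh * fw                     # all-ceil corner
# The eight weights always sum to 1:
print(g_slt + g_slb + g_srt + g_srb + g_dlt + g_dlb + g_drt + g_drb)   # tensor(1.)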
Yes, your idea is the same as mine. However, using deform_conv_3d in a 3D UNet easily runs into the memory limit.
Hi guys, I also suffer from memory explosion when I implement the 3D deformable conv in a 3D UNet. Have you found any solution to avoid this problem?
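One general-purpose workaround, not specific to this repo, is gradient checkpointing: the interpolation intermediates are recomputed during the backward pass instead of being kept alive, trading compute for peak memory. A sketch; the deform_conv argument below is a placeholder for whatever 3D deformable conv module you wrote:

import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedDeformBlock(torch.nn.Module):
    # Wraps any module; deform_conv is assumed to be your own 3D implementation.
    def __init__(self, deform_conv):
        super().__init__()
        self.deform_conv = deform_conv

    def forward(self, x):
        if self.training:
            # Recompute activations in backward instead of storing them;
            # use_reentrant=False is the recommended mode in recent PyTorch.
            return checkpoint(self.deform_conv, x, use_reentrant=False)
        return self.deform_conv(x)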
How do you implement that? I have wanted to do this work for some time.