
How to wrap a sequence of operations as one quantization layer?

cvsod opened this issue 3 years ago • 0 comments

Code of the sequence of operations:

```python
import torch
import torch.nn.functional as F

# Inputs from the surrounding model: pose, invK, K are [B, 4, 4] camera
# matrices, depth_values holds 32 depth hypotheses per pixel, and src_fea
# is the [B, 8, 128, 256] source feature map to be warped.
with torch.no_grad():
    # Compose the reference-to-source projection: K @ pose @ invK
    proj = torch.matmul(pose, invK)
    proj = torch.matmul(K, proj)

    rot = proj[:, :3, :3]     # [B, 3, 3]
    trans = proj[:, :3, 3:4]  # [B, 3, 1]

    # Homogeneous pixel coordinates of the 128x256 reference grid
    y, x = torch.meshgrid([torch.arange(0, 128, dtype=torch.float32,
                                        device=torch.device('cuda:0')),
                           torch.arange(0, 256, dtype=torch.float32,
                                        device=torch.device('cuda:0'))])
    y, x = y.contiguous().view(128 * 256), x.contiguous().view(128 * 256)
    xyz = torch.stack((x, y, torch.ones_like(x)))  # [3, H*W]

    # Rotate, scale by each depth hypothesis, then translate
    rot_xyz = torch.matmul(rot, xyz)  # [B, 3, H*W]
    rot_depth_xyz = rot_xyz.unsqueeze(2).repeat(1, 1, 32, 1) \
        * depth_values.view(-1, 1, 32, 128 * 256)       # [B, 3, Ndepth, H*W]
    proj_xyz = rot_depth_xyz + trans.view(-1, 3, 1, 1)  # [B, 3, Ndepth, H*W]

    # Perspective divide, then normalize to [-1, 1] for grid_sample
    proj_xy = proj_xyz[:, :2, :, :] / proj_xyz[:, 2:3, :, :]  # [B, 2, Ndepth, H*W]
    proj_x_normalized = proj_xy[:, 0, :, :] / ((256 - 1) / 2) - 1
    proj_y_normalized = proj_xy[:, 1, :, :] / ((128 - 1) / 2) - 1
    grid = torch.stack((proj_x_normalized, proj_y_normalized), dim=3)  # [B, Ndepth, H*W, 2]

# Bilinearly sample the source features at the projected coordinates
warped_src_fea_ = F.grid_sample(src_fea, grid.view(-1, 32 * 128, 256, 2),
                                mode='bilinear', padding_mode='zeros',
                                align_corners=True).type(torch.float32)
warped_src_fea = warped_src_fea_.view(-1, 8, 32, 128 * 256)
```

I want to wrap the above sequence of operations as one quantization layer. How can I do it?
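
One direction I have been considering: as far as I understand, AIMET's `QuantizationSimModel` inserts quantizers at `nn.Module` boundaries and skips functional calls such as `torch.matmul` and `F.grid_sample`, so would grouping the whole block into a single `nn.Module` make it show up as one quantizable layer? A minimal sketch of that idea (`HomographyWarp` is a placeholder name I made up, not an AIMET API):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HomographyWarp(nn.Module):
    """Placeholder module grouping the functional warping ops above, so
    that quantization would be applied at this module's boundary."""

    def __init__(self, height=128, width=256, num_depth=32):
        super().__init__()
        self.h, self.w, self.d = height, width, num_depth
        # Precompute the homogeneous pixel grid once; a buffer follows the
        # module across .to(device) calls.
        y, x = torch.meshgrid([torch.arange(height, dtype=torch.float32),
                               torch.arange(width, dtype=torch.float32)])
        xyz = torch.stack((x.reshape(-1), y.reshape(-1),
                           torch.ones(height * width)))  # [3, H*W]
        self.register_buffer('xyz', xyz)

    def forward(self, src_fea, pose, invK, K, depth_values):
        h, w, d = self.h, self.w, self.d
        with torch.no_grad():
            # Same math as the snippet above, just inside one module
            proj = torch.matmul(K, torch.matmul(pose, invK))
            rot, trans = proj[:, :3, :3], proj[:, :3, 3:4]
            rot_xyz = torch.matmul(rot, self.xyz)  # [B, 3, H*W]
            rot_depth_xyz = rot_xyz.unsqueeze(2).repeat(1, 1, d, 1) \
                * depth_values.view(-1, 1, d, h * w)
            proj_xyz = rot_depth_xyz + trans.view(-1, 3, 1, 1)
            proj_xy = proj_xyz[:, :2] / proj_xyz[:, 2:3]
            gx = proj_xy[:, 0] / ((w - 1) / 2) - 1
            gy = proj_xy[:, 1] / ((h - 1) / 2) - 1
            grid = torch.stack((gx, gy), dim=3)  # [B, Ndepth, H*W, 2]
        warped = F.grid_sample(src_fea, grid.view(-1, d * h, w, 2),
                               mode='bilinear', padding_mode='zeros',
                               align_corners=True)
        return warped.view(-1, src_fea.shape[1], d, h * w)
```

If this is on the right track, I would then instantiate it in the parent model (`self.warp = HomographyWarp()`) and expect `QuantizationSimModel(model, dummy_input=...)` from `aimet_torch.quantsim` to wrap it with a single quantize wrapper. Please correct me if the custom module additionally needs to be registered with AIMET for this to work, or if the `aimet_torch.elementwise_ops` modules are the intended route instead. Thanks!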

cvsod • Sep 20 '22 03:09