Rasterize renders far-away depth before near depth
I have a similar issue with depth rasterization.
![depth render showing the issue]()
```python
face_vertices_camera, face_vertices_image, face_normals = self.prepare_vertices(
    verts.to(self.device), faces.to(self.device),
    intrinsics=self.intrinsics, camera_transform=camera_transform)
face_vertices_z = face_vertices_camera[:, :, :, -1].contiguous()
face_vertices_image = face_vertices_image.contiguous()
uv_face_attr = uv_face_attr.contiguous()
depth, face_idx = kal.render.mesh.rasterize(
    texture_h, texture_w, face_vertices_z, face_vertices_image,
    face_features=face_vertices_z.unsqueeze(3), backend=self.backend)

def prepare_vertices(self, vertices, faces, intrinsics, camera_transform):
    padded_vertices = torch.nn.functional.pad(vertices, (0, 1), mode='constant', value=1.)
    if len(camera_transform.shape) == 2:
        camera_transform = camera_transform.unsqueeze(0)
    if camera_transform.shape[1] == 4:  # want 3x4
        camera_transform = camera_transform[:, :3, :].transpose(1, 2)
    vertices_camera = (padded_vertices @ camera_transform)
    vertices_image = intrinsics.transform(vertices_camera)[:, :, :2]
    face_vertices_camera = kal.ops.mesh.index_vertices_by_faces(vertices_camera, faces)
    face_vertices_image = kal.ops.mesh.index_vertices_by_faces(vertices_image, faces)
    face_normals = kal.ops.mesh.face_normals(face_vertices_camera, unit=True)
    return face_vertices_camera, face_vertices_image, face_normals
```

I wrote my own rasterizer function, which is, however, a lot slower. It should look like this (note the different aspect ratio in this image):
![expected depth render]()
Similar issue to #736
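As a side note on the `prepare_vertices` code above, the padding step is the standard homogeneous-coordinate trick: appending a 1 to each vertex lets one (4, 3) matrix apply rotation and translation in a single matmul. A minimal sketch with made-up values (identity rotation, translation 5 units down `-z`) — note the resulting camera-space z is negative, which matters later when interpreting the depth output:

```python
import torch

# Toy vertex; all values here are made up for illustration.
vertices = torch.tensor([[1.0, 2.0, 3.0]])                     # (1, 3)
padded = torch.nn.functional.pad(vertices, (0, 1), value=1.0)  # (1, 4)

R = torch.eye(3)                             # identity rotation
t = torch.tensor([[0.0, 0.0, -5.0]])         # camera looks down -z
camera_transform = torch.cat([R, t], dim=0)  # (4, 3): rotation rows + translation row

# One matmul applies rotation and translation together.
vertices_camera = padded @ camera_transform  # -> tensor([[1., 2., -2.]])
```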
Hi @ChlaegerIO, what backend are you using for the rasterization?
I have tried `nvdiffrast` and `cuda` with the same result.
Hi @ChlaegerIO, would you be able to share your scene so I can have a look?
I could not upload the .obj file, so I stored it here: https://polybox.ethz.ch/index.php/s/QKGZqRcDzSp4oWw.
How are you converting the depth to an image? The depth values are negative. I just tried the following code:
```python
def render(camera):
    vertices_camera = camera.extrinsics.transform(mesh.vertices)
    vertices_clip = camera.intrinsics.project(vertices_camera)
    faces_int = mesh.faces.int()
    rast = nvdiffrast.torch.rasterize(nvdiffrast_context, vertices_clip, faces_int,
                                      (camera.height, camera.width), grad_db=False)
    # Nvdiffrast rasterization contains u, v, z/w, triangle_id of shape 1 x H x W x 4
    rast0 = torch.flip(rast[0], dims=(1,))  # flip vertically: nvdiffrast follows OpenGL's bottom-up image convention
    face_idx = (rast0[..., -1].long() - 1).contiguous()
    im_depth = (nvdiffrast.torch.interpolate(
        vertices_camera[..., -1:].contiguous(), rast0, faces_int
    )[0][0].repeat(1, 1, 3) * -0.2).clamp(0., 1.) * 255
    return im_depth.to(torch.uint8)
```
You can see here that I'm scaling the values from [-5, 0] to [0, 1] (multiplying by -0.2) before converting to 255.
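To make that mapping concrete, here is a tiny sketch with hypothetical depth values (near surface at z = -1, far at z = -5, and background pixels assumed to stay at 0 where no face is hit):

```python
import torch

# Hypothetical camera-space depth map: z is negative in front of the camera,
# background pixels (no face hit) are assumed to remain 0.
depth = torch.tensor([[-1.0, -3.0],
                      [-5.0,  0.0]])

# Multiplying by -0.2 maps [-5, 0] onto [0, 1]: far -> bright, near -> dark.
scaled = (depth * -0.2).clamp(0.0, 1.0)
# scaled == [[0.2, 0.6], [1.0, 0.0]]
im_depth = (scaled * 255).to(torch.uint8)
```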
We should add depth to our `easy_render` API.
After the rasterizer call `depth, face_idx = kal.render.mesh.rasterize(texture_h, texture_w, face_vertices_z, face_vertices_image, face_features=face_vertices_z.unsqueeze(3), backend=self.backend)`, I normalize the depth to [0, 1] with `depth = (depth - depth.min()) / (depth.max() - depth.min())` before converting to 255.
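One thing worth checking with that min-max normalization: since camera-space depths are negative, background pixels (assumed here to be left at 0 by the rasterizer) are the *maximum* value, so they normalize to 1 and the brightness ordering comes out opposite to a `depth * -0.2` style mapping. A sketch with hypothetical values:

```python
import torch

# Hypothetical depth map: camera-space z is negative, and background pixels
# (no face hit) are assumed to be left at 0 by the rasterizer.
depth = torch.tensor([[-1.0, -3.0],
                      [-5.0,  0.0]])

norm = (depth - depth.min()) / (depth.max() - depth.min())
# norm == [[0.8, 0.4], [0.0, 1.0]]:
# the background (0) is the maximum and becomes the brightest pixel,
# while the nearest surface (-1) is darker than it, so the rendered
# image can look as if near and far were swapped.
```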
Stale issue, please reopen if still relevant