
About PyTorch3D single-card parallel rendering acceleration

Open wang1528186571 opened this issue 4 months ago • 0 comments

My environment: PyTorch3D 0.7.7

My Camera:

def update_camera(self, batch_size=1):
    import pytorch3d.renderer as p3dr
    bs       = batch_size
    azim_deg = torch.rand(bs, device=self.device) * 360. - 180.
    azim_rad = torch.deg2rad(azim_deg)
    dist     = 2.0

    eye_x = dist * torch.sin(azim_rad)
    eye_z = dist * torch.cos(azim_rad)
    eye_y = torch.full((bs,), 2.5, device=self.device)

    eye = torch.stack([eye_x, eye_y, eye_z], dim=1)

    R, T = p3dr.look_at_view_transform(
        eye=eye,
        at=torch.tensor([[0.0, 1.85/2, 0.0]], dtype=torch.float32,
                        device=self.device).expand(bs, -1),
        up=torch.tensor([[0.0, 1.0, 0.0]], dtype=torch.float32,
                        device=self.device).expand(bs, -1),
    )

    # Batched cameras: R has shape [bs, 3, 3] and T has shape [bs, 3].
    self.cameras = p3dr.FoVPerspectiveCameras(device=self.device, R=R, T=T, fov=45)

    rs = p3dr.RasterizationSettings(image_size=800, blur_radius=1e-6,
                                    faces_per_pixel=3, bin_size=0)   # bin_size=0 forces the naive rasterizer
    # MyHardPhongShader is my own shader class; AmbientLights comes from pytorch3d.renderer.
    self.renderer = p3dr.MeshRenderer(
        rasterizer=p3dr.MeshRasterizer(cameras=self.cameras, raster_settings=rs),
        shader=MyHardPhongShader(
            device=self.device, cameras=self.cameras,
            lights=getattr(self, "lights", AmbientLights(device=self.device)),
            blend_params=p3dr.BlendParams(sigma=1e-4, gamma=1e-4)
        )
    )
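
For reference, a quick sanity check I run after calling update_camera (a minimal sketch that only inspects shapes, assuming the class above):

# Sanity check: after update_camera the cameras object should already be batched.
self.update_camera(batch_size=10)
print(self.cameras.R.shape)   # expected torch.Size([10, 3, 3]) -- one rotation per view
print(self.cameras.T.shape)   # expected torch.Size([10, 3])    -- one translation per view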

My Light:

def sample_lights(self):
    """A single fixed ambient light; the shader broadcasts it across the batch."""
    self.lights = AmbientLights(device=self.device)

My Render:

def synthesis_image_person_batch(
        self,
        mesh_dict_list: List[dict],           # [{'man_path': Meshes, 'mesh_root': Path}, ...]
        adv_patch_input: Optional[torch.Tensor] = None
) -> List[Optional[torch.Tensor]]:

    if not mesh_dict_list:
        return []

    bs = len(mesh_dict_list)                       # current batch size
    self.update_camera(batch_size=bs)              # batched cameras, one viewpoint per element
    self.sample_lights()                           # a single light, broadcast by the shader

    # ---------- 1) Randomly pick one person to use as the "template" ----------
    md_base = random.choice(mesh_dict_list)        # random action / pose
    rand    = sample_fixed_meshes(md_base["mesh_root"], self.device)
    parts   = {
        "man"    : md_base["man_path"],            # pose mesh
        "clothes": rand.get("clothes_path"),
        "pants"  : rand.get("pants_path"),
    }

    if adv_patch_input is not None:
        self.set_adv_patch_texture(adv_patch_input, parts)

    base_mesh = MU.join_meshes([m for m in parts.values() if m is not None])

    # ---------- 2) Replicate the same mesh across the batch ----------
    batch_mesh = base_mesh.extend(bs)              # <- the key step

    # ---------- 3) Render all views in one call ----------
    batch_rgba = self.renderer(batch_mesh,
                               cameras=self.cameras,
                               lights=self.lights)            # [B, H, W, 4]
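
The snippet above ends at the render call; for reference, a typical conversion of the [B, H, W, 4] output to channel-first RGB (a minimal sketch, not my exact downstream code) is:

# batch_rgba is [B, H, W, 4]; drop the alpha channel and move channels first.
batch_rgb = batch_rgba[..., :3].permute(0, 3, 1, 2).contiguous()   # [B, 3, H, W]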

I currently have a single RTX 4080 Super, and I would like it to effectively run two renderers at the same time. I have referred to this tutorial, and it seems that it also queues multiple cameras for rendering, but the speed is still very slow: https://pytorch3d.org/tutorials/render_textured_meshes
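
For reference, this is roughly how I time one batched render call (a rough sketch; torch.cuda.synchronize is there so the measurement includes the GPU work):

import time
import torch

def time_batched_render(renderer, mesh, cameras, lights, iters=5):
    # Warm up once so one-off CUDA setup cost does not skew the average.
    renderer(mesh, cameras=cameras, lights=lights)
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(iters):
        renderer(mesh, cameras=cameras, lights=lights)
    torch.cuda.synchronize()
    return (time.time() - t0) / iters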

My current situation: if I set the batch size to 10, i.e. one mesh with 10 cameras, I expect the 10 cameras to render the 10 mesh copies together in one parallel pass rather than queuing them one after another (see the sketch below). Please help me, thank you!
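
To make the expectation concrete, this is the pattern I understand should render all views in a single batched forward pass (a self-contained sketch using a dummy sphere instead of my person mesh; parameters mirror my settings above):

import torch
from pytorch3d.utils import ico_sphere
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, RasterizationSettings, MeshRenderer, MeshRasterizer,
    HardPhongShader, AmbientLights, TexturesVertex, look_at_view_transform,
)

device = torch.device("cuda")
bs = 10

# One white sphere, replicated bs times -> a batch of 10 identical meshes.
sphere = ico_sphere(3, device)
verts, faces = sphere.verts_padded(), sphere.faces_padded()
mesh = Meshes(
    verts=verts, faces=faces,
    textures=TexturesVertex(verts_features=torch.ones_like(verts)),
).extend(bs)

# One camera per mesh copy: 10 azimuths, shared distance and elevation.
R, T = look_at_view_transform(dist=2.0, elev=0.0, azim=torch.linspace(-180, 180, bs))
cameras = FoVPerspectiveCameras(device=device, R=R, T=T, fov=45)

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras,
        raster_settings=RasterizationSettings(image_size=800),
    ),
    shader=HardPhongShader(device=device, cameras=cameras,
                           lights=AmbientLights(device=device)),
)

images = renderer(mesh)   # a single call; output shape [10, 800, 800, 4]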

wang1528186571 · Jun 30 '25, 13:06