
Extend projective integrators to support arbitrary sensors

Open dvicini opened this issue 1 year ago • 2 comments

Currently, the projective integrators explicitly disallow the use of any sensor that is not the standard perspective sensor: https://github.com/mitsuba-renderer/mitsuba3/blob/fda612b6cf3107fabb84dead1ca9c171a48e1314/src/python/python/ad/projective.py#L108

I wanted to use the batch sensor to cut down some Python overhead, but currently can't, since the sensor Jacobian implementation is heavily specialized to perspective sensors.

This isn't urgent, but it would be good to eventually support the batch sensor in the projective integrators as well.
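A possible direction (hypothetical names, not the actual Mitsuba API): replace the hard "perspective only" check with a per-sensor-type dispatch table, so orthographic or batch sensors could plug in their own Jacobian implementations. A minimal self-contained sketch:

```python
# Hypothetical sketch, not Mitsuba's API: dispatch the sensor-space Jacobian
# by sensor type instead of rejecting everything non-perspective.

def perspective_jacobian(sensor, ss):
    # The existing perspective_sensor_jacobian() logic would go here.
    return 1.0  # placeholder value for illustration

SENSOR_JACOBIANS = {
    'PerspectiveCamera': perspective_jacobian,
    # 'OrthographicCamera': orthographic_jacobian,  # see #1160
    # 'BatchSensor': batch_jacobian,  # would fan out over its sub-sensors
}

def sensor_jacobian(sensor, ss):
    # Look up a handler by the sensor's type name; unsupported sensors
    # still fail, but with a message naming the missing piece.
    handler = SENSOR_JACOBIANS.get(type(sensor).__name__)
    if handler is None:
        raise NotImplementedError(
            f"No sensor-space Jacobian for {type(sensor).__name__!r}")
    return handler(sensor, ss)
```

New sensor types would then only need to register one entry rather than touch the integrator's guard logic.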

dvicini, May 06 '24 13:05

Also discussed for orthographic sensors in #1160.

merlinND, May 14 '24 08:05

How is the function perspective_sensor_jacobian() derived? The calculation of J_num / J_den * multiplier is confusing.

def perspective_sensor_jacobian(self,
                                    sensor: mi.Sensor,
                                    ss: mi.SilhouetteSample3f):
        """
        The silhouette sample `ss` stores (1) the sampling density in the scene
        space, and (2) the motion of the silhouette point in the scene space.
        This Jacobian corrects both quantities to the camera sample space.
        """
        if not sensor.__repr__().startswith('PerspectiveCamera'):
            raise Exception("Only perspective cameras are supported")

        to_world = sensor.world_transform()
        near_clip = sensor.near_clip()
        sensor_center = to_world @ mi.Point3f(0)
        sensor_lookat_dir = to_world @ mi.Vector3f(0, 0, 1)
        x_fov = mi.traverse(sensor)["x_fov"][0]
        film = sensor.film()

        camera_to_sample = mi.perspective_projection(
            film.size(),
            film.crop_size(),
            film.crop_offset(),
            x_fov,
            near_clip,
            sensor.far_clip()
        )

        sample_to_camera = camera_to_sample.inverse()
        p_min = sample_to_camera @ mi.Point3f(0, 0, 0)
        multiplier = dr.sqr(near_clip) / dr.abs(p_min[0] * p_min[1] * 4.0)

        # Frame
        frame_t = dr.normalize(sensor_center - ss.p)
        frame_n = ss.n
        frame_s = dr.cross(frame_t, frame_n)

        J_num = dr.norm(dr.cross(frame_n, sensor_lookat_dir)) * \
                dr.norm(dr.cross(frame_s, sensor_lookat_dir)) * \
                dr.abs(dr.dot(frame_s, ss.silhouette_d))
        J_den = dr.sqr(dr.sqr(dr.dot(frame_t, sensor_lookat_dir))) * \
                dr.squared_norm(ss.p - sensor_center)

        return J_num / J_den * multiplier

ejtwe, May 21 '24 08:05
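One way to read the multiplier term in the code above: p_min = sample_to_camera @ (0, 0, 0) is a corner of the near plane in camera space, so abs(p_min[0] * p_min[1] * 4.0) is the area of the full near-plane rectangle, and the multiplier is near_clip squared over that area. A stand-alone sketch of this reading (plain Python, no Mitsuba; it assumes the standard pinhole relation half_width = near_clip * tan(x_fov / 2), which is how I understand mi.perspective_projection to behave):

```python
import math

# Sketch (no Mitsuba): reconstruct the `multiplier` term from pinhole
# geometry, assuming sample_to_camera maps the film corner (0, 0) to a
# near-plane corner at (+/-half_w, +/-half_h, near_clip).
def near_plane_multiplier(x_fov_deg, film_width, film_height, near_clip):
    half_w = near_clip * math.tan(math.radians(x_fov_deg) / 2.0)
    half_h = half_w * film_height / film_width  # aspect-scaled half-height
    area = abs(4.0 * half_w * half_h)           # |p_min[0] * p_min[1] * 4.0|
    return near_clip ** 2 / area                # dr.sqr(near_clip) / area

# Note that near_clip cancels: the result depends only on the field of
# view and the aspect ratio, which matches the intuition that the term is
# a pure solid-angle-to-film-area density conversion factor.
```

This only addresses the multiplier; the J_num / J_den ratio additionally accounts for the projection of the silhouette motion onto the image plane and the inverse-squared-distance falloff (the dr.squared_norm(ss.p - sensor_center) term in J_den).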