Angle of polarization of surfaces behaves unexpectedly like a texture when rotating the camera
Summary
When rotating the sensor, the AoLP map also rotates, as if the AoLP values were a texture on the object. In reality, the AoLP should change when the pose of the object or the camera changes, because the surface normals relative to the sensor change.
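For reference, the reproduction script below derives the AoLP from the Stokes components with the following convention (the sign on S2 is my reading of the script; other references define AoLP as 0.5 * arctan2(S2, S1)):

import numpy as np

def aolp(S1, S2):
    # AoLP convention used in the reproduction script below;
    # note the minus sign on S2.
    a = 0.5 * np.arctan2(-S2, S1)
    # AoLP is only defined modulo pi, so wrap into [-pi/2, pi/2]
    a = np.where(a > np.pi / 2, a - np.pi, a)
    a = np.where(a < -np.pi / 2, a + np.pi, a)
    return a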
System configuration
System information:
OS: Ubuntu 20.04.5 LTS
CPU: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
Python: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]
CUDA: 11.6.55
LLVM: 12.0.0
Dr.Jit: 0.2.2
Mitsuba: 3.0.2
Is custom build? True
Compiled with: Clang 10.0.0
Variants: scalar_rgb scalar_rgb_polarized llvm_ad_rgb llvm_rgb_polarized llvm_spectral_polarized scalar_spectral_polarized
Description
As the following GIF shows, the AoLP behaves like a texture, which is incorrect. I've checked the Stokes alignment code in stokes.cpp and it appeared to work correctly, but the renders still produced these results.
Steps to reproduce
- I used the scene provided in the Mitsuba 3 source code and simply rotated the camera around its forward direction.
import matplotlib.pyplot as plt
import numpy as np
import mitsuba as mi

mi.set_variant("scalar_spectral_polarized")
from mitsuba import ScalarTransform4f as T

path_scene = "mitsuba3/tutorials/scenes/cbox_pol.xml"
scene = mi.load_file(path_scene)

# Render the scene from cameras rolled around the forward axis
num_angles = 20
list_angles = [np.pi * 2 / num_angles * i for i in range(num_angles)]  # radians
for angle in list_angles:
    plt.figure(figsize=(7, 3))
    sensor_new = mi.load_dict({
        'type': 'perspective',
        'fov_axis': 'x',
        'principal_point_offset_x': 0.,
        'principal_point_offset_y': 0.,
        'fov': 39.3077,
        'to_world': T.translate([0.5, 0.5, 4]) @ T.rotate(axis=[0, 0, 1], angle=angle / np.pi * 180) \
            @ T.rotate(axis=[0, 1, 0], angle=180),
        'sampler': {
            'type': 'independent',
            'sample_count': 8
        },
        'film': {
            'type': 'hdrfilm',
            'width': 256,
            'height': 256,
            'rfilter': {
                'type': 'gaussian',
            },
            'pixel_format': 'rgb',
        },
        'near_clip': 0.01,
        'far_clip': 1000,
        'focus_distance': 1000,
    })
    img = mi.render(scene, sensor=sensor_new)

    # Split the rendered image into the RGB root image and the Stokes AOVs
    bitmap = mi.Bitmap(img, channel_names=['R', 'G', 'B'] + scene.integrator().aov_names())
    channels = dict(bitmap.split())
    S2 = np.array(mi.TensorXf(channels['S2']))[..., 0]
    S1 = np.array(mi.TensorXf(channels['S1']))[..., 0]
    S0 = np.array(mi.TensorXf(channels['S0']))[..., 0]

    # AoLP from the Stokes components, wrapped into [-pi/2, pi/2]
    aop_mi = np.arctan2(-S2, S1) * 0.5
    aop_mi[aop_mi > np.pi / 2] -= np.pi
    aop_mi[aop_mi < -np.pi / 2] += np.pi

    plt.subplot(121)
    plt.imshow(aop_mi / np.pi * 180, cmap="hsv", vmin=-90, vmax=90)
    img_vis = channels['<root>'].convert(component_format=mi.Struct.Type.UInt8, srgb_gamma=True)
    plt.subplot(122)
    plt.imshow(img_vis)
    plt.savefig('img_angle{:.2f}.png'.format(angle / np.pi * 180))
    plt.close()
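To rule out the visualization step, the raw Stokes components can also be dumped per frame by adding the following inside the render loop, right after S1/S2 are extracted (a minimal sketch; plt.imsave and the colormap choice are my additions):

    # Save the raw S1/S2 components so the rotation behaviour can be
    # checked on the Stokes data itself rather than on the derived AoLP.
    plt.imsave('S1_angle{:.2f}.png'.format(angle / np.pi * 180), S1, cmap='coolwarm')
    plt.imsave('S2_angle{:.2f}.png'.format(angle / np.pi * 180), S2, cmap='coolwarm')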
Hi, I'm not sure what is going on in your example here, but this "behaving like a texture" effect isn't happening when I look directly at the output Stokes components. So maybe something in your AoLP conversion is wrong?
Example images:
- camera angle = 0°, S0 (intensity)
- camera angle = 0°, S1 (horizontal vs. vertical polarization)
- camera angle = 30°, S0 (intensity)
- camera angle = 30°, S1 (horizontal vs. vertical polarization)
These were rendered with scalar_spectral_polarized directly from the command line, e.g.:
mitsuba -m scalar_spectral_polarized ../tutorials/scenes/cbox_pol.xml -Dspp=128
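The multichannel EXR written by the CLI can then be inspected from Python (a minimal sketch; "cbox_pol.exr" is an assumed output filename, use whatever the render actually produced):

import mitsuba as mi
mi.set_variant("scalar_spectral_polarized")

# Split the multichannel EXR into its named layers; the stokes
# integrator stores the Stokes components as 'S0'..'S3' AOVs.
bmp = mi.Bitmap("cbox_pol.exr")  # assumed output filename
for name, layer in bmp.split():
    print(name, layer.size())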
For the rotated image, this is the diff of the scene file:
diff --git a/scenes/cbox_pol.xml b/scenes/cbox_pol.xml
index 09fd007..38d89b0 100644
--- a/scenes/cbox_pol.xml
+++ b/scenes/cbox_pol.xml
@@ -13,6 +13,7 @@
         <float name="focus_distance" value="1000"/>
         <float name="fov" value="39.3077"/>
         <transform name="to_world">
+            <rotate z="1" angle="30"/>
             <lookat origin="0, 0, 4"
                     target="0, 0, 0"
                     up="0, 1, 0"/>
Hi @tizian,
Thanks for the comment. I just reran the code and found that if I rotate the camera as in the code below, it gives me a texture-like AoLP. But if I manually modify cbox_pol.xml as you did, it outputs the correct AoLP. I also compared the output Stokes components of the different sensors and found that they were all identical, except that the images were rotated.
It seems there is some inconsistency between the Python API and the C++ behavior. Maybe something is wrong in the Python API.
sensor_new = mi.load_dict({
    'type': 'perspective',
    'fov_axis': 'x',
    'principal_point_offset_x': 0.,
    'principal_point_offset_y': 0.,
    'fov': 39.3077,
    # Same roll as before, but composed with look_at instead of
    # translate/rotate
    'to_world': T.look_at(origin=[0, 0, 4.],
                          target=[0, 0, 0],
                          up=[0, 1, 0]) @ T.rotate(axis=[0, 0, 1], angle=angle / np.pi * 180),
    'sampler': {
        'type': 'independent',
        'sample_count': 128
    },
    'film': {
        'type': 'hdrfilm',
        'width': 256,
        'height': 256,
        'rfilter': {
            'type': 'gaussian',
        },
        'pixel_format': 'rgb',
    },
    'near_clip': 0.01,
    'far_clip': 1000,
    'focus_distance': 1000,
})
# print("[START] rendering")
img = mi.render(scene, sensor=sensor_new)
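To make the "identical up to an image rotation" observation concrete, one can un-rotate a rolled render and compare it against the unrotated reference (a minimal sketch; scipy.ndimage.rotate and the helper below are my additions, not part of the original script):

import numpy as np
from scipy.ndimage import rotate

def texture_like(img_ref, img_rot, deg):
    # If the AoLP behaved like a texture, un-rotating the rolled
    # render should reproduce the reference almost exactly.
    undone = rotate(img_rot, -deg, reshape=False, order=1)
    return np.mean(np.abs(undone - img_ref))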