Parameters to render SRN dataset
Hi! Could you share some key parameters (listed below) used to render the Chairs/Cars datasets presented in the SRN paper?
1. sphere radius (distance to object)
2. light properties (e.g., how many, direction, intensity)
3. other relevant parameters if necessary
This is also what I'm looking for, especially the scaling parameters/settings.
I've done some tedious research trying to align ShapeNet v2 models with the original SRN datasets, but there are some weird scaling issues that I cannot get right. I need to know the original scaling parameters of the SRN chairs_train dataset.
- I'm aware that when the models are imported into Blender's scene, their bounding boxes are centered at the origin.
- The alignment only works fine for renderings made with the scripts and parameters provided in the repo, not for the original SRN datasets' renderings.

Below are some example results:

- :heavy_check_mark: Render chairs using the parameters written in the scripts & align with the ShapeNet v2 model (pointclouds)
- :x: Align the ShapeNet v2 model (pointclouds) with SRN chairs_train
- :x: Align the ShapeNet v2 model (pointclouds) with SRN chairs_train_2.0
@psguo As for the sphere radius, it can be inferred from the poses.
For example, for chairs_train, `np.linalg.norm(c2w[..., :3, 3], axis=-1)` is constant at 1.3, so the sphere radius is 1.3.
Similarly, for chairs_train_2.0 the sphere radius is 2.0.
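If it helps, here is a minimal sketch of that check. It assumes the usual SRN layout where each `pose/*.txt` file holds a flattened 4x4 camera-to-world matrix; the glob path is a placeholder, so adjust it to wherever you extracted the dataset:

```python
import glob
import numpy as np

# Placeholder path into the extracted dataset; adjust to your setup
pose_files = sorted(glob.glob('chairs_train/*/pose/*.txt'))

# Stack the flattened 4x4 camera-to-world matrices
c2w = np.stack([np.loadtxt(f).reshape(4, 4) for f in pose_files])

# Distance from each camera center to the origin = sphere radius
radii = np.linalg.norm(c2w[:, :3, 3], axis=-1)
print(radii.min(), radii.max())  # ~1.3 for chairs_train, ~2.0 for chairs_train_2.0
```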
UPDATE: I've found out how the ShapeNet models are scaled to render the original SRN datasets!
Quick answer: after the models are imported, I believe Dr. Sitzmann scales them again so that the maximum dimension of the bounding box equals 1.

Now the ShapeNet models are perfectly aligned with the SRN renderings.
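In Blender terms, I'd guess that extra step looks something like the snippet below, run right after the OBJ import (this is a sketch of my assumption, not Dr. Sitzmann's actual script; Blender 2.7x API, matching the lighting code further down):

```python
import bpy

# Rescale the freshly imported object (assumed to be selected) so that
# its largest bounding-box dimension equals 1
obj = bpy.context.selected_objects[0]
factor = 1.0 / max(obj.dimensions)  # dimensions already include the current scale
obj.scale = tuple(s * factor for s in obj.scale)
bpy.ops.object.transform_apply(scale=True)  # bake the scale into the mesh data
```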
Detailed steps to align ShapeNet v2's normalized models with the SRN renderings dataset ('model' below stands for the vertices or pointcloud of the model; a combined code sketch of all three steps follows the list):
- First, offset the model so that the center of the bounding box is at the origin. This can be done using the `model_normalized.json` provided in the ShapeNet v2 dataset, with `model = model + (centroid - center) / norm(diag)`, where:
  - `centroid` stands for the mean point of the model's vertices, and can be read directly from the json
  - `diag` stands for the diagonal vector of the bounding box, and can be calculated as `'max' - 'min'` from the json
  - `center` stands for the center of the bounding box, and can be calculated as `('max' + 'min') / 2` from the json
- Secondly, rescale the model so that the maximum dimension of the bounding box equals 1: `model = model / (max(diag) / norm(diag))`
- Finally, rotate the model by left-multiplying with the rotation matrix `np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])`
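Putting the three steps together, here is a minimal numpy sketch. It assumes `model_normalized.json` exposes `centroid`, `min`, and `max` keys (that matches my copy of ShapeNet v2, but verify against yours):

```python
import json
import numpy as np

def align_to_srn(model, json_path):
    """model: (N, 3) vertices or pointcloud from model_normalized.obj."""
    with open(json_path) as f:
        meta = json.load(f)
    centroid = np.asarray(meta['centroid'])
    bb_min, bb_max = np.asarray(meta['min']), np.asarray(meta['max'])
    diag = bb_max - bb_min          # bounding-box diagonal
    center = (bb_max + bb_min) / 2  # bounding-box center
    # Step 1: move the bounding-box center to the origin
    model = model + (centroid - center) / np.linalg.norm(diag)
    # Step 2: make the largest bounding-box dimension equal 1
    model = model / (diag.max() / np.linalg.norm(diag))
    # Step 3: left-multiply by the rotation matrix (row-wise via transpose)
    R = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
    return model @ R.T
```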
Below are some example results:

I figured it out by trying to put myself in his shoes and thinking about how he would scale the models to make the renderings better for training.
For the lighting parameters, I tweaked them until I got something similar to SRN, at least for the cars category:
import math
import bpy

# Key light: turn the default lamp into a shadowless, diffuse-only sun
lamp1 = bpy.data.lamps['Lamp']
lamp1.type = 'SUN'
lamp1.shadow_method = 'NOSHADOW'
lamp1.use_specular = False
lamp1.energy = 1.33
lamp1_obj = bpy.data.objects['Lamp']
lamp1_obj.rotation_euler = (math.radians(90), 0, math.radians(180))  # pointing from the top

# Four dim fill suns, 90 degrees apart around the Y axis
bpy.ops.object.lamp_add(type='SUN')
lamp2 = bpy.data.lamps['Sun']
lamp2.shadow_method = 'NOSHADOW'
lamp2.use_specular = False
lamp2.energy = 0.1
bpy.data.objects['Sun'].rotation_euler = (0, 0, 0)

bpy.ops.object.lamp_add(type='SUN')
lamp3 = bpy.data.lamps['Sun.001']
lamp3.shadow_method = 'NOSHADOW'
lamp3.use_specular = False
lamp3.energy = 0.1
bpy.data.objects['Sun.001'].rotation_euler = (0, math.radians(90), 0)

bpy.ops.object.lamp_add(type='SUN')
lamp4 = bpy.data.lamps['Sun.002']
lamp4.shadow_method = 'NOSHADOW'
lamp4.use_specular = False
lamp4.energy = 0.1
bpy.data.objects['Sun.002'].rotation_euler = (0, math.radians(180), 0)

bpy.ops.object.lamp_add(type='SUN')
lamp5 = bpy.data.lamps['Sun.003']
lamp5.shadow_method = 'NOSHADOW'
lamp5.use_specular = False
lamp5.energy = 0.1
bpy.data.objects['Sun.003'].rotation_euler = (0, math.radians(270), 0)
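If anyone wants to tweak or extend the fill lights, the four repeated blocks can be collapsed into a loop. A minimal sketch, assuming the same Blender 2.7x API as above (in 2.8+, `bpy.data.lamps` became `bpy.data.lights`) and Blender's default auto-numbering for the datablock names:

```python
import math
import bpy

# Four shadowless fill suns, 90 degrees apart around the Y axis.
# The names 'Sun', 'Sun.001', ... assume an otherwise sun-free scene.
for i, name in enumerate(['Sun', 'Sun.001', 'Sun.002', 'Sun.003']):
    bpy.ops.object.lamp_add(type='SUN')
    lamp = bpy.data.lamps[name]
    lamp.shadow_method = 'NOSHADOW'
    lamp.use_specular = False
    lamp.energy = 0.1
    bpy.data.objects[name].rotation_euler = (0, math.radians(90 * i), 0)
```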