large-lightfields-dataset
Panoramic images
I would like to ask how to use the provided JSON file with the panoramic pictures to obtain the camera's intrinsic and extrinsic parameters.
Hi! I am not sure I fully understand what you are asking. The JSON format is described in the README of image-lens-reproject. However, since a full panoramic picture doesn't have a clearly defined optical axis (i.e., a center of the image), we did not include them.
I did, however, recently add support for panoramic pictures in the lens-reproject tool, but right now it is only accessible through the tool's command-line options, not through the JSON. Pay attention to these flags (you will need --no-configs, as there is no support for this in the JSON configs yet); an example invocation follows the list:
--no-configs width,height
Work without reading and writing config
files. Requires you to specify the input
lens through the input-optics flags
(starting with --i-...) and the expected
resolution of the input images here.
--i-equirectangular long_min,long_max,lat_min,lat_max (radians)
Input equirectangular images with given
longitude min,max and latitude min,max
value or 'full'.
--equirectangular longitude_min,longitude_max,latitude_min,latitude_max
Output equirectangular images with given
longitude min,max and latitude min,max
value or 'full'.
--rotation pan, pitch, roll (degrees)
Specify a rotation (default: 0.0)
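For illustration, here is a hedged example that combines these flags. The flag syntax follows the help text above, but the binary name, image resolution, angle values, and the fact that input/output paths are omitted are all assumptions on my part; consult the tool's full --help:

    # Hypothetical: reproject full 4096x2048 equirectangular input images
    # to a 180 x 90 degree crop, panned 90 degrees to the right. Per the
    # help text above, the lens flags take radians and --rotation degrees.
    image-lens-reproject \
        --no-configs 4096,2048 \
        --i-equirectangular full \
        --equirectangular -1.5708,1.5708,-0.7854,0.7854 \
        --rotation 90,0,0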
If you are referring to the JSON file generated when you use a "PANORAMIC" lens type in Blender, those parameters are written out here:
https://github.com/IDLabMedia/blender-lightfield-addon/blob/254fc6b8551255ed67b8be1ed051b0ea6984701b/config.py#L46-L60
I want to use these panoramic images with NeRF, so I need to get the viewing direction and coordinate origin (derived from the intrinsic and extrinsic parameters) based on the JSON file. Do you have any relevant code for this? I am not very clear on how panoramic image pixels are converted to the world coordinate system. Thank you so much!
You seem to be asking the same as #5: that was not possible with NeRF last time I checked (which was long ago, so maybe support for 360° images has been added by now). The pixel-to-ray mapping itself is standard, though; see the sketch below.
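For readers who land here: the equirectangular pixel-to-ray conversion is standard and independent of this repository. Below is a minimal Python sketch, assuming a full 360° x 180° panorama and a 4x4 camera-to-world matrix obtained from the extrinsics elsewhere; the axis convention (y-up, z-forward) is an assumption and may not match the one used by image-lens-reproject or the Blender addon:

    import numpy as np

    def equirect_pixel_to_world_ray(u, v, width, height, cam_to_world):
        # Longitude spans [-pi, pi] across the width, latitude spans
        # [-pi/2, pi/2] across the height (full panorama assumed).
        lon = (u / width - 0.5) * 2.0 * np.pi
        lat = (0.5 - v / height) * np.pi
        # Unit direction in camera space (y-up, z-forward assumed).
        d_cam = np.array([np.cos(lat) * np.sin(lon),
                          np.sin(lat),
                          np.cos(lat) * np.cos(lon)])
        origin = cam_to_world[:3, 3]           # camera position in world space
        direction = cam_to_world[:3, :3] @ d_cam
        return origin, direction / np.linalg.norm(direction)

Note that even with rays computed this way, a stock NeRF implementation that assumes pinhole cameras still needs to be modified to consume per-pixel rays, which is the limitation referred to above.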