Fu-En Wang
@gateway Because our training images are all fixed at 1024x512, I cannot promise the performance will be good if you apply the model to other resolutions. In your case, I guess...
The result of the Google search is inverse depth. You can simply take the inverse of the depth map to get a similar result.
@gateway You can just read my output npy file and modify the depth by taking its inverse. You can get similar results.
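A minimal sketch of this conversion (the filename `output.npy` is hypothetical; substitute the actual .npy file produced by the model):

```python
import numpy as np

# In practice: depth = np.load("output.npy")   # predicted depth, shape (H, W)
# Here a tiny synthetic depth map stands in for the real prediction.
depth = np.array([[1.0, 2.0],
                  [4.0, 8.0]])

eps = 1e-6                                     # guard against division by zero
inv_depth = 1.0 / np.clip(depth, eps, None)

# Normalize to [0, 1] for visualization, like a typical inverse-depth image.
inv_vis = (inv_depth - inv_depth.min()) / (inv_depth.max() - inv_depth.min())
```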
@gateway The reason we need to crop the prediction is that the Matterport3D training data has no depth ground truth in the top and bottom areas. So the...
@gateway Because the vertical FoV of the depth sensors in the Matterport camera cannot reach 180 degrees, the upper/lower areas are invalid pixels. During training, we won't calculate the loss on...
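A rough sketch of what such a masked loss can look like; `crop_ratio` and the L1 form are assumptions for illustration, not the exact training code:

```python
import numpy as np

def masked_l1_loss(pred, gt, crop_ratio=0.15):
    """L1 loss that ignores the top/bottom bands and pixels without ground truth.

    crop_ratio is a hypothetical fraction; the real training code would derive
    the invalid band from the sensor's vertical FoV instead.
    """
    h = pred.shape[0]
    band = int(h * crop_ratio)
    valid = np.zeros_like(gt, dtype=bool)
    valid[band:h - band, :] = True   # keep only the middle rows
    valid &= gt > 0                  # missing depth is stored as 0
    return np.abs(pred - gt)[valid].mean()
```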
Maybe you can follow another repo of mine for cubemap conversion: https://github.com/fuenwang/PanoramaUtility/blob/master/Utils/Equirec2Cube.py#L29 In lines 16 - 29, I define the rotation for different cube faces, and this is related to E(?, ?,...
Yes, because if we directly derive the xyz of the cubemap from the equirectangular image, we cannot obtain a one-to-one pixel mapping on the cubemap. This means we will get a cubemap with black holes in...
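The usual fix is inverse mapping: for every cubemap pixel, compute its viewing ray, convert the ray to longitude/latitude, and sample the equirectangular image, so every destination pixel gets a value. A minimal nearest-neighbor sketch for one face (the linked Equirec2Cube code handles all six faces via their rotations; this is not the exact implementation):

```python
import numpy as np

def cube_face_from_equirect(equi, face_size, R=np.eye(3)):
    """Render one cubemap face by inverse mapping. R rotates the front
    face (+z) to the desired face; identity gives the front face."""
    H, W = equi.shape[:2]
    # Rays through the image plane of a 90-degree FoV pinhole camera.
    u = np.linspace(-1, 1, face_size)
    x, y = np.meshgrid(u, u)
    dirs = np.stack([x, y, np.ones_like(x)], axis=-1)
    dirs = dirs @ R.T
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])   # [-pi, pi]
    lat = np.arcsin(dirs[..., 1])                  # [-pi/2, pi/2]
    # Map angles to equirectangular pixel coordinates (nearest neighbor).
    px = ((lon / np.pi + 1) / 2 * (W - 1)).round().astype(int)
    py = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).round().astype(int)
    return equi[py, px]
```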
Hi: I have written a quick explanation of equirectangular (spherical) projection in my previous paper: https://arxiv.org/abs/1811.05304 You can read Chapter 3 first. If you have something not...
Hi, FoV is the field of view of the perspective image converted from the equirectangular image. THETA and PHI are the longitude and latitude. Here you can treat the two parameters...
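For intuition, (THETA, PHI) determine the viewing direction of the virtual perspective camera. A small sketch, assuming degrees and a +z forward axis at THETA = PHI = 0 (check the repo's code for the exact axis convention):

```python
import numpy as np

def view_direction(theta_deg, phi_deg):
    """Unit viewing ray for a virtual camera pointing at longitude THETA
    and latitude PHI (degrees); +z is assumed to be THETA = PHI = 0."""
    theta = np.radians(theta_deg)
    phi = np.radians(phi_deg)
    x = np.cos(phi) * np.sin(theta)
    y = np.sin(phi)
    z = np.cos(phi) * np.cos(theta)
    return np.array([x, y, z])
```

For example, THETA = 90 turns the camera to look along +x, and PHI = 90 points it straight up (or down, depending on the image's y convention).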
There is no unit for RADIUS because we don't have any depth information to know the scale.