Yao Yao
@KevinCain I think I might know where the problem is... The function "UNetDS2GN" is hard-coded so that the feature map is downsized by a factor of 4 compared with the original input. So...
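As an illustration (not part of the released code; the helper name and the exact factor are my own), one could crop the input so that both dimensions stay divisible by the downsizing factor before feeding it to the network:

```python
import numpy as np

def crop_to_multiple(img, factor=4):
    # Illustrative helper (not in the MVSNet repo): crop height and width
    # down to the nearest multiple of `factor` so the feature map size
    # matches what the hard-coded network expects.
    h, w = img.shape[:2]
    return img[:(h // factor) * factor, :(w // factor) * factor]
```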
Hi, the depth map fusion and refinement steps are quite important for T&T benchmarking. I would suggest you implement the depth map fusion step exactly as described in the MVSNet paper...
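For reference, a minimal NumPy sketch of the per-pixel geometric consistency test as I understand it from the paper (the thresholds, helper name, and calling convention here are illustrative assumptions, not the authors' code):

```python
import numpy as np

def check_geometric_consistency(p, d, K_ref, K_src, R, t, depth_src,
                                pix_thresh=1.0, depth_thresh=0.01):
    """Sketch of a per-pixel consistency check: reproject the reference
    pixel p=(u, v) with depth d through the source view and require a
    small round-trip pixel error and relative depth error.
    R, t are assumed to map reference-camera coordinates to source-camera
    coordinates; depth_src is the source view's estimated depth map."""
    # Back-project the reference pixel into 3D (reference camera frame).
    x_ref = d * np.linalg.inv(K_ref) @ np.array([p[0], p[1], 1.0])
    # Transform into the source camera frame and project to pixels.
    x_src = R @ x_ref + t
    uv_src = K_src @ x_src
    uv_src = uv_src[:2] / uv_src[2]
    u, v = int(round(uv_src[0])), int(round(uv_src[1]))
    if not (0 <= v < depth_src.shape[0] and 0 <= u < depth_src.shape[1]):
        return False
    d_src = depth_src[v, u]
    if d_src <= 0:
        return False
    # Back-project the source pixel and map it back to the reference frame.
    x_src_bp = d_src * np.linalg.inv(K_src) @ np.array([u, v, 1.0])
    x_ref_bp = R.T @ (x_src_bp - t)
    d_reproj = x_ref_bp[2]
    uv_ref = K_ref @ x_ref_bp
    uv_ref = uv_ref[:2] / uv_ref[2]
    # Accept the pixel only if the round-trip errors are small.
    pixel_err = np.linalg.norm(uv_ref - np.array(p, dtype=float))
    depth_err = abs(d_reproj - d) / d
    return pixel_err < pix_thresh and depth_err < depth_thresh
```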
It shouldn't be... I was using a V100 GPU (16GB) on the Google ML platform to do the experiments. But I also found that TensorFlow GPU usage is somewhat unpredictable when using...
Hi @pxu4114, are you able to compile the original fusibile (https://github.com/kysucix/fusibile)? Yao
Hi, I am not sure whether it is a bug now. Actually I did not test the code with batch size > 1... What I expect is that we should...
Hi Ignasi, You can download the original resolution depth maps at: https://drive.google.com/open?id=1LVy8tsWajG3uPTCYPSxDvVXFCdIYXaS-
I directly use test.py to generate the depth maps, and use visibility + average fusion to fuse them, since the Gipuma fusion does not work well on ETH3D scenes. I...
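For what it's worth, a sketch of the averaging step, assuming a consistency check like the one above has already selected the visible source views (the function and variable names are illustrative):

```python
def fuse_pixel_depth(d_ref, d_reprojected):
    # d_reprojected: depths from source views that passed the
    # visibility/consistency check, reprojected into the reference view.
    # The fused estimate averages the reference depth with all
    # consistent reprojected depths.
    return (d_ref + sum(d_reprojected)) / (1 + len(d_reprojected))
```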
There is no GT depth map unless you use the laser scan to acquire the ground truth point cloud.
Depth maps are rendered from ground truth meshes, which are generated from the DTU-provided ground truth point clouds using screened Poisson surface reconstruction ([SPSR](http://www.cs.jhu.edu/~misha/Code/PoissonRecon/Version8.0/)). SPSR parameters are also...
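The rendering pipeline itself isn't spelled out here; one possible way to render a depth map from such a mesh (a sketch using trimesh/pyrender, not the authors' actual tool, with placeholder intrinsics and an identity pose) would be:

```python
import numpy as np
import trimesh
import pyrender

# Load the SPSR mesh and set up an offscreen scene.
mesh = pyrender.Mesh.from_trimesh(trimesh.load('gt_mesh.ply'))
scene = pyrender.Scene()
scene.add(mesh)

# Placeholder intrinsics; in practice these come from the DTU calibration.
fx = fy = 2900.0
cx, cy = 800.0, 600.0
width, height = 1600, 1200
camera = pyrender.IntrinsicsCamera(fx=fx, fy=fy, cx=cx, cy=cy)

# pyrender expects a camera-to-world pose in OpenGL convention
# (x right, y up, z backward), so an OpenCV-style world-to-camera matrix
# has to be inverted and have its y/z axes flipped before use here.
cam_pose = np.eye(4)
scene.add(camera, pose=cam_pose)

# Render; the second return value is the per-pixel depth map.
renderer = pyrender.OffscreenRenderer(width, height)
color, depth = renderer.render(scene)
```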
@tatsy The depth map is generated by direct mesh rendering without external alignment.