dynibar
How to use virtual images?
Why do the monocular datasets use virtual images while the Nvidia datasets don't? What is the difference between these datasets? We also found that virtual images are crucial for the kid-running case, which does not seem to be mentioned in the paper. It would be of great help if you could answer these questions.
Hi, we describe virtual source views in section 4 of the paper. In short, virtual views provide stronger geometric support for moving objects, which prevents the model from getting stuck in bad local minima. We believe one reason is that the camera-object motion in real monocular videos offers more ambiguous cues for moving objects during volumetric feature aggregation than the camera-object motion present in the Nvidia dataset.
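For intuition, here is a minimal sketch of one way virtual source cameras could be placed around a real camera pose. The function name `make_virtual_poses`, the in-plane jitter strategy, and all parameters are illustrative assumptions, not the released DynIBaR implementation; a full pipeline would also need to synthesize RGB images for these virtual cameras (e.g. by depth-based warping of the real frame), which is what actually gives moving objects the extra multi-view constraints.

```python
import numpy as np

def make_virtual_poses(c2w, num_virtual=2, max_trans=0.1, seed=0):
    """Sample virtual camera poses near a real camera-to-world pose.

    c2w: 4x4 camera-to-world matrix of a real source view.
    Translates the camera center within the image plane; rotation is
    kept fixed for simplicity (a real pipeline might also re-orient
    each virtual camera toward the scene content).
    """
    rng = np.random.default_rng(seed)
    right, up = c2w[:3, 0], c2w[:3, 1]  # camera x/y axes in world space
    virtual_poses = []
    for _ in range(num_virtual):
        offset = rng.uniform(-max_trans, max_trans, size=2)
        v = c2w.copy()
        v[:3, 3] += offset[0] * right + offset[1] * up
        virtual_poses.append(v)
    return virtual_poses

# Example: jitter an identity pose and inspect a virtual camera center.
pose = np.eye(4)
print(make_virtual_poses(pose)[0][:3, 3])
```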
How can I produce the video using virtual views? Could you share the code with me?