vggt
Incorrect scale of 3D point clouds generated by VGGT
Hello: When generating 3D point clouds from several images with VGGT, the resulting point clouds (the colored part in the figure) are always smaller than the actual point cloud models captured by the depth camera. Could you explain why this happens? Is it caused by an incorrect parameter setting somewhere?
Since the training data was normalised, the output is also on a normalised scale, which makes the overall values smaller.
Hi, our predictions are in a normalized space.
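Because the predictions live in a normalized space, they need a global similarity (scale) alignment before being compared against metric sensor data. Below is a minimal sketch of one common approach: centering both clouds and matching their RMS extents. It assumes the two arrays already contain corresponding points in the same order (recovering correspondences, e.g. via nearest neighbors or a full Umeyama alignment, is outside the scope of this sketch); the function name `align_scale` is hypothetical, not part of the VGGT codebase.

```python
import numpy as np

def align_scale(pred_pts, gt_pts):
    """Estimate a global scale and translation mapping a normalized
    point cloud onto metric ground-truth points.

    pred_pts, gt_pts: (N, 3) arrays of corresponding points.
    Returns the scale factor and the rescaled prediction.
    """
    pred_mean = pred_pts.mean(axis=0)
    gt_mean = gt_pts.mean(axis=0)
    pred_c = pred_pts - pred_mean
    gt_c = gt_pts - gt_mean
    # Ratio of RMS extents about the centroids gives the global scale.
    scale = np.sqrt((gt_c ** 2).sum() / (pred_c ** 2).sum())
    aligned = pred_c * scale + gt_mean
    return scale, aligned

# Toy check: a cloud shrunk by a factor of 0.25 is recovered at 4x scale.
rng = np.random.default_rng(0)
gt = rng.normal(size=(100, 3))
pred = gt * 0.25
scale, aligned = align_scale(pred, gt)
print(round(scale, 3))  # → 4.0
```

If rotation also differs between the two clouds, the Umeyama algorithm (e.g. as implemented in evaluation toolkits such as evo) estimates the full similarity transform instead of scale alone.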