Depth-Anything
Are the depth scaling and translation parameters stable in a video?
Has anybody checked whether the scaling and translation parameters (related to the affine-invariant depth) are stable across a video, at least when there is no notable change in the range of depth visible in the video?
Based on the video samples on their project page, the depth output still flickers visibly on videos (so the scale/shift parameters would be changing frame to frame), though it looks more stable than the original MiDaS models.
However, I don't think the model is optimized for stability on videos. If you need that, you might be better off with something like Consistent Depth of Moving Objects in Video.
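If you want to test this yourself, a minimal sketch of one way to do it is below. It is not part of the Depth-Anything codebase; the helper names `fit_scale_shift` and `scale_shift_stability` are hypothetical, and it assumes you have some per-frame reference depth (ground truth, SfM points, or even a fixed anchor frame) to align against. The idea is to fit a per-frame scale s and shift t by least squares (reference ≈ s · prediction + t) and then look at how much s and t drift over the video.

```python
import numpy as np

def fit_scale_shift(pred: np.ndarray, ref: np.ndarray, mask: np.ndarray):
    """Least-squares fit of ref ≈ s * pred + t over valid pixels."""
    p = pred[mask].ravel()
    r = ref[mask].ravel()
    A = np.stack([p, np.ones_like(p)], axis=1)   # design matrix [pred, 1]
    (s, t), *_ = np.linalg.lstsq(A, r, rcond=None)
    return float(s), float(t)

def scale_shift_stability(preds, refs, masks):
    """Fit (s, t) per frame and report how much they vary across the video."""
    params = np.array([fit_scale_shift(p, r, m)
                       for p, r, m in zip(preds, refs, masks)])
    scales, shifts = params[:, 0], params[:, 1]
    return {
        "scale_mean": scales.mean(), "scale_std": scales.std(),
        "shift_mean": shifts.mean(), "shift_std": shifts.std(),
    }
```

Large standard deviations of the fitted scale/shift relative to their means would confirm that the parameters are not stable across frames, which is what the visible flicker suggests.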