Depth-Anything
'depth_anything.load_state_dict' is not recognized as an internal or external command, operable program or batch file.
Error occurred when executing DepthAnythingPreprocessor: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
So I downloaded these three: depth-anything-large, depth-anything-base, and depth-anything-small.
Where are these three files downloaded and stored? How do I upload the folder containing the checkpoints to a remote server? How do I load the model locally?
In the folder "DepthAnything/checkpoints" -> There are multiple ways to try to load the different model sets; I ran into problems with several of them. It's hit and miss, to be honest.
From previous code I used while testing, I believe the following works (I've since changed my code to try to get metric depth working, so don't quote me). You may have to go through their different examples and find the one that works for you. Also verify you have all the necessary imports.
from transformers import AutoImageProcessor, AutoModelForDepthEstimation

encoder = 'vits'  # can also be 'vitb' or 'vitl'
if encoder == 'vits':
    depth_anything = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-small-hf")
elif encoder == 'vitb':
    depth_anything = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-base-hf")
else:
    depth_anything = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-large-hf")
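The if/elif chain above can also be written as a small lookup table; this sketch just restates the same three Hugging Face checkpoints, with an explicit error for unknown encoder names:

```python
# Same encoder -> checkpoint mapping as the if/elif chain above,
# written as a lookup table (checkpoint names taken from the snippet above).
CHECKPOINTS = {
    "vits": "LiheYoung/depth-anything-small-hf",
    "vitb": "LiheYoung/depth-anything-base-hf",
    "vitl": "LiheYoung/depth-anything-large-hf",
}

def checkpoint_for(encoder: str) -> str:
    """Return the Hugging Face repo id for a given encoder size."""
    try:
        return CHECKPOINTS[encoder]
    except KeyError:
        raise ValueError(f"Unknown encoder '{encoder}', expected one of {sorted(CHECKPOINTS)}")

print(checkpoint_for("vits"))  # LiheYoung/depth-anything-small-hf
```

You'd then pass the returned repo id to AutoModelForDepthEstimation.from_pretrained as shown above.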
Where are these three files downloaded and stored?
If you're using the run.py script, it uses huggingface downloads by default, which are stored in a directory like:
(user folder)/.cache/huggingface/hub/models--LiheYoung--depth_anything_vits14
On Windows, (user folder) would be something like C:\Users\username; on Linux it's /home/username; and on macOS it's something like /Users/username.
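As a rough sketch, you can compute where the cache folder for a given repo should be. This assumes the default hub cache location and the models--{org}--{name} folder naming used by the huggingface_hub cache layout; the HF_HUB_CACHE environment variable (if set) can relocate the cache:

```python
import os
from pathlib import Path

def hf_hub_cache_dir() -> Path:
    """Default Hugging Face hub cache, unless overridden by HF_HUB_CACHE."""
    default = Path.home() / ".cache" / "huggingface" / "hub"
    return Path(os.environ.get("HF_HUB_CACHE", str(default)))

def model_cache_folder(repo_id: str) -> Path:
    """Cached repos are stored in folders named models--{org}--{name}."""
    return hf_hub_cache_dir() / ("models--" + repo_id.replace("/", "--"))

print(model_cache_folder("LiheYoung/depth-anything-small-hf"))
```

Checking whether that folder exists (and is non-empty) is a quick way to tell if a model has already been downloaded.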
These files are not quite the same as the individual model files you linked to (though they are related).
How do I upload the folder containing the checkpoints to a remote server?
That depends a lot on how you access the remote server. The huggingface folder contains a bunch of files, so the simplest approach is probably to zip/compress the folder into a single file, upload that to the server, then decompress it there.
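As a sketch of the compress step in Python: the cache folder below is a placeholder built in a temp directory just so the example runs; in practice you'd point it at your actual .cache/huggingface/hub/models--... folder:

```python
import shutil
import tempfile
from pathlib import Path

# Placeholder stand-in for the real cache folder (so this sketch is runnable);
# replace with your actual models--... folder from the huggingface cache.
cache_folder = Path(tempfile.mkdtemp()) / "models--LiheYoung--depth-anything-small-hf"
cache_folder.mkdir(parents=True)
(cache_folder / "dummy.bin").write_bytes(b"\x00" * 16)  # fake checkpoint file

# Compress the folder into a single .zip next to it
archive = shutil.make_archive(
    base_name=str(cache_folder),       # archive path prefix (".zip" is appended)
    format="zip",
    root_dir=cache_folder.parent,      # archive paths are relative to this
    base_dir=cache_folder.name,        # only include the cache folder itself
)
print(archive)  # single file you can upload to the server
```

On the server, shutil.unpack_archive(archive_path, extract_dir) (or a plain unzip) restores the folder.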
How do I load the model locally?
If you'd like to use the huggingface loader, you'd first need to run the run.py script with an active internet connection to download the .cache/huggingface/...
files. Once you have those, you should be able to run the models without an internet connection.
Alternatively, if you'd like to use the model files you linked directly, you'd have to adjust the way the model is loaded slightly. There's an explanation on the repo home page, but the basic idea is to replace line 34 of run.py with something like:
import torch
from depth_anything.dpt import DPT_DINOv2

# DEVICE is already defined in run.py; shown here for completeness:
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Example to load vit-small from a local file
model_file_path = "path/to/file/depth_anything_vits14.pth"
depth_anything = DPT_DINOv2("vits", features=64, out_channels=[48, 96, 192, 384])
# map_location="cpu" avoids errors when the checkpoint was saved on GPU
depth_anything.load_state_dict(torch.load(model_file_path, map_location="cpu"))
depth_anything.to(DEVICE).eval()

# For vitb or vitl, use:
# depth_anything = DPT_DINOv2("vitb", features=128, out_channels=[96, 192, 384, 768])
# depth_anything = DPT_DINOv2("vitl", features=256, out_channels=[256, 512, 1024, 1024])
Thanks @heyoeyo for the clear and detailed explanation! I think the issue has been well addressed, so I'll close it for now.