Depth-Anything
Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
Hi! When I try to run app.py or run_video.py, I get this error:
Loading weights from local directory
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\gradio\queueing.py", line 495, in call_prediction
output = await route_utils.call_process_api(
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\gradio\blocks.py", line 1561, in process_api
result = await self.call_function(
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\gradio\blocks.py", line 1179, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\anyio\_backends\_asyncio.py", line 2134, in run_sync_in_worker_thread
return await future
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
result = context.run(func, *args)
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\gradio\utils.py", line 678, in wrapper
response = f(*args, **kwargs)
File "app.py", line 75, in on_submit
depth = predict_depth(model, image)
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "app.py", line 52, in predict_depth
return model(image)
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Apps\AiApps\xdrive\DepthGen\Depth-Anything\depth_anything\dpt.py", line 158, in forward
features = self.pretrained.get_intermediate_layers(x, 4, return_class_token=True)
File "D:\Apps\AiApps\xdrive\DepthGen\Depth-Anything\torchhub/facebookresearch_dinov2_main\vision_transformer.py", line 308, in get_intermediate_layers
outputs = self._get_intermediate_layers_not_chunked(x, n)
File "D:\Apps\AiApps\xdrive\DepthGen\Depth-Anything\torchhub/facebookresearch_dinov2_main\vision_transformer.py", line 272, in _get_intermediate_layers_not_chunked
x = self.prepare_tokens_with_masks(x)
File "D:\Apps\AiApps\xdrive\DepthGen\Depth-Anything\torchhub/facebookresearch_dinov2_main\vision_transformer.py", line 214, in prepare_tokens_with_masks
x = self.patch_embed(x)
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Apps\AiApps\xdrive\DepthGen\Depth-Anything\torchhub/facebookresearch_dinov2_main\dinov2\layers\patch_embed.py", line 76, in forward
x = self.proj(x) # B C H W
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\Apps\AiApps\xdrive\DepthGen\miniconda3\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
Windows 10, CUDA 11
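For context: this RuntimeError means the input tensor is on the GPU while the layer weights are still on the CPU. A minimal sketch (independent of Depth-Anything) that reproduces the same error:

import torch

conv = torch.nn.Conv2d(3, 8, kernel_size=3)     # weights stay on the CPU
image = torch.randn(1, 3, 224, 224).to('cuda')  # input is moved to the GPU

conv(image)  # RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same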
Is your model successfully loaded to the GPU in this line? Does DEVICE output cuda?
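For example, in the repo's app.py DEVICE is defined roughly like this (a sketch; your copy may differ):

import torch

DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
print(DEVICE)  # should print "cuda" when the GPU is usable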
Yes, I downloaded the model and config files, and the model loads locally with this command:
model = DepthAnything.from_pretrained('checkpoints/depth_anything_vitb14', local_files_only=True)
and with the commands:
print(torch.cuda.is_available())
print(torch.cuda.current_device())
I got:
True
0
I have an RTX 3070 as CUDA device 0.
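Note that torch.cuda.is_available() only confirms the GPU is visible; it does not show where the model's weights are. A check like the following (with model being the instance loaded above) would have shown they were still on the CPU:

print(next(model.parameters()).device)  # prints "cpu" when the model was never moved to the GPU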
I found the problem. For a local model, this code works:
depth_anything = DepthAnything.from_pretrained('checkpoints/depth_anything_vitb14', local_files_only=True).to(DEVICE).eval()
Please update the manual: for local models, ".to(DEVICE).eval()" needs to be appended to the end of the loading command.
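For anyone hitting the same issue, here is a minimal end-to-end sketch of the fixed local loading path (the dummy 518x518 input is just a placeholder; app.py and run_video.py feed preprocessed frames resized to a multiple of 14):

import torch
from depth_anything.dpt import DepthAnything

DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'

# Load the local checkpoint, move the weights to the same device as the inputs,
# and switch to inference mode.
depth_anything = DepthAnything.from_pretrained('checkpoints/depth_anything_vitb14',
                                               local_files_only=True).to(DEVICE).eval()

with torch.no_grad():
    image = torch.randn(1, 3, 518, 518).to(DEVICE)  # placeholder input on the same device as the weights
    depth = depth_anything(image)
print(depth.shape)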