PCA visualization: got an error when using the DINOv2 PCA demo on Hugging Face, who can help?
I trained dinov2_vitl14 on custom data that adds classes to ImageNet-1k, used eval/teacher_checkpoint.pth, and ran the DINOv2 PCA demo on Hugging Face, but got the error below (a rough sketch of this workflow is included after the traceback):

Traceback (most recent call last):
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/gradio/routes.py", line 516, in predict
output = await route_utils.call_process_api(
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/gradio/route_utils.py", line 219, in call_process_api
output = await app.get_blocks().process_api(
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1437, in process_api
result = await self.call_function(
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/gradio/utils.py", line 650, in wrapper
response = f(*args, **kwargs)
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/app.py", line 59, in query_image
features_dict = model.forward_features(imgs_tensor)
File "/home/data-lv/luolu/PycharmProjects/dinov2/dinov2/models/vision_transformer.py", line 227, in forward_features
x = blk(x)
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/data-lv/luolu/PycharmProjects/dinov2/dinov2/layers/block.py", line 254, in forward
return super().forward(x_or_x_list)
File "/home/data-lv/luolu/PycharmProjects/dinov2/dinov2/layers/block.py", line 112, in forward
x = x + attn_residual_func(x)
File "/home/data-lv/luolu/PycharmProjects/dinov2/dinov2/layers/block.py", line 91, in attn_residual_func
return self.ls1(self.attn(self.norm1(x)))
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/data-lv/luolu/PycharmProjects/dinov2/dinov2/layers/attention.py", line 84, in forward
x = memory_efficient_attention(q, k, v, attn_bias=attn_bias)
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/xformers/ops/fmha/init.py", line 193, in memory_efficient_attention
return _memory_efficient_attention(
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/xformers/ops/fmha/init.py", line 291, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/xformers/ops/fmha/init.py", line 307, in _memory_efficient_attention_forward
op = _dispatch_fw(inp, False)
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 96, in _dispatch_fw
return _run_priority_list(
File "/home/data-lv/luolu/PycharmProjects/dinov2-pca-master/venv/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 63, in _run_priority_list
raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(7, 40001, 16, 64) (torch.float32)
     key         : shape=(7, 40001, 16, 64) (torch.float32)
     value       : shape=(7, 40001, 16, 64) (torch.float32)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    device=cpu (supported: {'cuda'})
    attn_bias type is <class 'NoneType'>
`flshattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
`tritonflashattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
`cutlassF` is not supported because:
    device=cpu (supported: {'cuda'})
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 64
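For reference, this is roughly the workflow I described above (a minimal sketch, not the demo's actual app.py; the torch.hub entry point is the one from the public dinov2 README, and the teacher-checkpoint key layout is my assumption about how my training run saved it):

```python
# Minimal sketch (assumptions noted): load dinov2_vitl14 from torch.hub, swap in
# the weights from eval/teacher_checkpoint.pth, and run forward_features on a
# CPU float32 tensor; the forward_features call is where the traceback originates.
import torch

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14")
model.eval()

# Assumption: the checkpoint stores a "teacher" state dict whose backbone weights
# are prefixed with "backbone.", so strip that prefix before loading.
ckpt = torch.load("eval/teacher_checkpoint.pth", map_location="cpu")
state_dict = {
    k.replace("backbone.", ""): v
    for k, v in ckpt["teacher"].items()
    if k.startswith("backbone.")
}
model.load_state_dict(state_dict, strict=False)

# Dummy input on CPU in float32; height/width are multiples of the patch size (14).
imgs_tensor = torch.zeros(1, 3, 448, 448)

with torch.no_grad():
    features_dict = model.forward_features(imgs_tensor)   # raises the error above
    patch_tokens = features_dict["x_norm_patchtokens"]    # tokens the PCA demo visualizes
print(patch_tokens.shape)
```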
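For what it's worth, the dispatch list above says every candidate operator wants device=cuda, and the flash/triton operators additionally want float16 or bfloat16. Below is a tiny sketch of q/k/v tensors that would satisfy those listed requirements (assuming xformers was built with CUDA support; shapes shrunk from the (7, 40001, 16, 64) in my traceback):

```python
# Sketch of inputs matching the requirements in the dispatch message: CUDA device,
# half precision, shape (batch, seq_len, num_heads, head_dim). This only illustrates
# what the listed operators accept; it is not the demo's code.
import torch
from xformers.ops import memory_efficient_attention

q = torch.randn(1, 1024, 16, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = memory_efficient_attention(q, k, v, attn_bias=None, p=0.0)
print(out.shape)  # torch.Size([1, 1024, 16, 64])
```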