
Error demo_match.py

IceIce1ce opened this issue 1 month ago · 3 comments

Hi author, thanks for your great code. When I ran your demo script, an error occurred. Could you check your code again?

2025-11-21 21:55:56.728 | INFO | romatch.models.model_zoo.roma_models:roma_model:61 - Using coarse resolution (560, 560), and upsample res (864, 1152)
Traceback (most recent call last):
  File "/home/vsw/Desktop/RoMa/demo/demo_match.py", line 41, in <module>
    im2_transfer_rgb = F.grid_sample(
  File "/home/vsw/miniconda3/envs/map/lib/python3.12/site-packages/torch/nn/functional.py", line 5108, in grid_sample
    return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)
RuntimeError: grid_sampler(): expected grid to have size 2 in last dimension, but got grid with sizes [1, 1, 864, 2302, 4]
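For context on the error: `F.grid_sample` expects a 4D grid whose last dimension is 2, while the batched warp returned by RoMa packs coordinates for both images into a 4-channel last dimension. Below is a minimal sketch with stand-in random tensors (the shapes and the slicing are illustrative assumptions, not RoMa's exact demo code) showing how such a warp can be sliced down to a grid that `grid_sample` accepts:

```python
import torch
import torch.nn.functional as F

# Stand-in shapes based on the traceback: H=864, W=1152.
B, H, W = 1, 864, 1152
# Hypothetical batched warp: left half = pixels of image 1,
# channels 2: = matching normalized coordinates in image 2.
warp = torch.rand(B, H, 2 * W, 4) * 2 - 1
im2 = torch.rand(B, 3, H, W)  # stand-in for image 2 as a tensor

# Keep the batch dim, take the image-1 half and the last two channels,
# yielding a (B, H, W, 2) grid as grid_sample requires:
grid = warp[:, :, :W, 2:]
im2_transfer_rgb = F.grid_sample(im2, grid, mode="bilinear", align_corners=False)
```

The resulting `im2_transfer_rgb` has shape `(1, 3, 864, 1152)`, i.e. image 2 resampled into image 1's pixel grid.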

IceIce1ce · Nov 21 '25 12:11

Ah, that's because the output is batched. Could you check whether roma_model.match(im1_path, im2_path, device=device, batched=False) fixes the error?

Parskatt · Nov 21 '25 13:11

Hi author, thanks for your reply. In the demo_3D_effect.py file, I found a similar error.

...roma_model:61 - Using coarse resolution (560, 560), and upsample res (864, 1152)
Traceback (most recent call last):
  File "/home/vsw/Desktop/RoMa/demo/demo_3D_effect.py", line 44, in <module>
    im2_transfer_rgb = F.grid_sample(
  File "/home/vsw/miniconda3/envs/roma/lib/python3.12/site-packages/torch/nn/functional.py", line 5023, in grid_sample
    return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)
RuntimeError: grid_sampler(): expected 4D input and grid with same number of dimensions, but got input with sizes [1, 3, 864, 1152] and grid with sizes [1, 1, 864, 1152, 2]
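This second traceback shows a 4D input paired with a 5D grid, which typically happens when `[None]` is applied to a warp that already carries a batch dimension. A stand-in sketch (illustrative tensors, not the actual demo code) showing that dropping the redundant dimension restores the `(N, H, W, 2)` layout `grid_sample` expects:

```python
import torch
import torch.nn.functional as F

# Shapes taken from the traceback: input [1, 3, 864, 1152],
# grid [1, 1, 864, 1152, 2] (one [None] too many).
im2 = torch.rand(1, 3, 864, 1152)
grid = torch.rand(1, 1, 864, 1152, 2) * 2 - 1

# Squeeze out the redundant dim so input and grid are both 4D:
out = F.grid_sample(im2, grid.squeeze(1), mode="bilinear", align_corners=False)
```

Alternatively, the extra `[None]` could simply be removed where the grid is built; either way the grid must end up 4D when the input is 4D.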

IceIce1ce · Nov 22 '25 08:11

For demo_match.py, after changing batched to False, another error occurred:

...roma_model:61 - Using coarse resolution (560, 560), and upsample res (864, 1152)
Traceback (most recent call last):
  File "/home/vsw/Desktop/RoMa/demo/demo_match.py", line 36, in <module>
    warp, certainty = roma_model.match(im1_path, im2_path, device=device, batched=False)
  File "/home/vsw/miniconda3/envs/roma/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/vsw/Desktop/RoMa/romatch/models/matcher.py", line 792, in match
    raise ValueError("batched must be True, non-batched inference is no longer supported.")
ValueError: batched must be True, non-batched inference is no longer supported.
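Given that non-batched inference now raises, one possible workaround (an assumption on my part, not a maintainer-confirmed fix) is to keep the default batched=True and strip the leading batch dimension from the returned tensors before the demo's downstream code runs. Sketched with stand-in tensors in place of the actual match() outputs:

```python
import torch

# Stand-ins for the batched outputs of roma_model.match(...):
# shapes are illustrative, based on H=864, W=1152 from the log.
warp = torch.rand(1, 864, 2 * 1152, 4)      # hypothetical batched warp
certainty = torch.rand(1, 864, 2 * 1152)    # hypothetical batched certainty

# Drop the batch dimension for single-pair downstream code:
warp, certainty = warp[0], certainty[0]
```

After this, the per-pair tensors can be indexed exactly as the old non-batched demo code assumed.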

IceIce1ce · Nov 22 '25 08:11