
Custom Dataset Issue

Open · imoinuddin opened this issue · 1 comment

Team,

Thanks for the great and exciting work!

I've been trying to get the code to work with my own custom dataset (images exported from a video) but have been running into issues.

My setup is as follows.

camera.txt:
Pinhole 928.36203652 929.19195421 943.47243438 544.41549704 0
1080 1920
crop
1080 1920

Dataset PNG image format details: 000001.png: PNG image data, 1080 x 1920, 8-bit grayscale, non-interlaced

When running tandem_dataset, I get the following error:

RUNNING --- results/tracking/dense/custom/custom_01/0
[W BinaryOps.cpp:467] Warning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (function operator())
terminate called after throwing an instance of 'std::runtime_error'
  what():  The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/models/cas_mvsnet/___torch_mangle_296.py", line 515, in forward
    input29 = torch._convolution(input28, CONSTANTS.c54, None, [2, 2, 2], [1, 1, 1], [1, 1, 1], True, [1, 1, 1], 1, False, False, True, True)
    input30 = torch.batch_norm(input29, CONSTANTS.c55, CONSTANTS.c56, CONSTANTS.c57, CONSTANTS.c58, False, 0.10000000000000001, 1.0000000000000001e-05, True)
    input31 = torch.add(_264, torch.relu_(input30))
              ~~~~~~~~~ <--- HERE
    input32 = torch._convolution(input31, CONSTANTS.c59, None, [2, 2, 2], [1, 1, 1], [1, 1, 1], True, [1, 1, 1], 1, False, False, True, True)
    input33 = torch.batch_norm(input32, CONSTANTS.c60, CONSTANTS.c61, CONSTANTS.c62, CONSTANTS.c63, False, 0.10000000000000001, 1.0000000000000001e-05, True)

Traceback of TorchScript, original code (most recent call last):
/home/lukas/dr_mvsnet_new/models/module.py(633): forward
/home/lukas/miniconda3/envs/t1.9/lib/python3.8/site-packages/torch/nn/modules/module.py(1039): _slow_forward
/home/lukas/miniconda3/envs/t1.9/lib/python3.8/site-packages/torch/nn/modules/module.py(1051): _call_impl
/home/lukas/dr_mvsnet_new/models/module.py(1291): depth_prediction
/home/lukas/dr_mvsnet_new/models/cas_mvsnet.py(376): forward
/home/lukas/miniconda3/envs/t1.9/lib/python3.8/site-packages/torch/nn/modules/module.py(1039): _slow_forward
/home/lukas/miniconda3/envs/t1.9/lib/python3.8/site-packages/torch/nn/modules/module.py(1051): _call_impl
/home/lukas/miniconda3/envs/t1.9/lib/python3.8/site-packages/torch/jit/_trace.py(952): trace_module
/home/lukas/miniconda3/envs/t1.9/lib/python3.8/site-packages/torch/jit/_trace.py(735): trace
export_model.py(204): main
export_model.py(230): <module>
RuntimeError: The size of tensor a (135) must match the size of tensor b (136) at non-singleton dimension 4

Is there something obvious that I might be missing?

Thanks!

imoinuddin · Jan 26, 2022

Dear @imoinuddin,

Sorry for taking a while to reply; I am currently quite busy.

Mhm, this looks like a size problem: maybe the number of channels or the image size isn't correct. You do have to export the Python model for the exact image size, and I just pushed the cva_mvsnet/export_model.py script to do this (see the README). Additionally, the image size needs to be divisible by 32 in each dimension.

Best,
Lukas

lkskstlr · Feb 11, 2022
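
To illustrate the divisible-by-32 requirement mentioned above: the 1080 x 1920 frames from the original post fail it in the height dimension (1080 / 32 = 33.75), which would explain a one-off size mismatch like 135 vs 136 inside the network. Below is a minimal Python sketch for checking this; largest_valid_crop is a hypothetical helper for illustration only and is not part of TANDEM's own preprocessing.

# Minimal sketch: check whether the frame size is divisible by 32 in each
# dimension and compute the largest crop size that is.
# The 1080 x 1920 size is taken from the issue above; largest_valid_crop is
# a hypothetical helper, not part of the TANDEM code base.

def largest_valid_crop(height, width, multiple=32):
    """Largest (h, w) not exceeding (height, width) with both divisible by `multiple`."""
    return (height // multiple) * multiple, (width // multiple) * multiple

if __name__ == "__main__":
    h, w = 1080, 1920
    print(h % 32 == 0, w % 32 == 0)   # False True -> 1080 is not divisible by 32
    print(largest_valid_crop(h, w))   # (1056, 1920)

Cropping (or resizing) the frames to such a size, updating camera.txt accordingly, and re-exporting the model for that exact size with cva_mvsnet/export_model.py is the workflow the reply above suggests.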