Padim inference on custom dataset throws Value Error
Describe the bug I trained PaDiM on a custom dataset for classification. The model trained well, but when I run tools/inference.py it throws the following error: ValueError: could not broadcast input array from shape (14,195,3) into shape (14,108,3)
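The error itself is a plain NumPy broadcasting failure: an overlay array (presumably the anomaly heat map) is wider than the image it is being written into. A minimal sketch reproducing it outside anomalib, using the shapes from the error message:

```python
import numpy as np

# Shapes taken from the error message: the overlay is 195 px wide,
# the target image only 108 px, so element-wise assignment cannot broadcast.
heat_map = np.zeros((14, 195, 3), dtype=np.uint8)
image = np.zeros((14, 108, 3), dtype=np.uint8)

try:
    image[:] = heat_map  # raises ValueError: could not broadcast ...
except ValueError as err:
    print(err)
```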
To Reproduce Steps to reproduce the behavior:
- Train a PaDiM classifier on a custom dataset.
- Run tools/inference.py.
- See error
Expected behavior Inference should work for classification the same way it does for segmentation.
Hardware and Software Configuration Google Colab
Additional context
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Torchmetrics v0.9 introduced a new argument class property called full_state_update that has
not been set for this class (AdaptiveThreshold). The property determines if update by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to False.
We provide an checking function
from torchmetrics.utilities import check_forward_no_full_state
that can be used to check if the full_state_update=True (old and potential slower behaviour,
default for now) or if full_state_update=False can be used safely.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric PrecisionRecallCurve will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
warnings.warn(*args, **kwargs)
(The same full_state_update warning is emitted again for the AnomalyScoreDistribution and MinMax metric classes.)
Traceback (most recent call last):
File "tools/inference.py", line 170, in
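A likely workaround while the bug stands is to resize the heat map to the input image size before blending. This is only a sketch under that assumption; resize_to is a hypothetical nearest-neighbour helper, not anomalib's actual code:

```python
import numpy as np

def resize_to(overlay: np.ndarray, target_hw: tuple) -> np.ndarray:
    """Nearest-neighbour resize so the overlay matches the target image size."""
    h, w = target_hw
    rows = np.arange(h) * overlay.shape[0] // h
    cols = np.arange(w) * overlay.shape[1] // w
    return overlay[rows][:, cols]

heat_map = np.zeros((14, 195, 3), dtype=np.uint8)
image = np.zeros((14, 108, 3), dtype=np.uint8)

resized = resize_to(heat_map, image.shape[:2])          # now (14, 108, 3)
blended = (0.5 * image + 0.5 * resized).astype(np.uint8)  # broadcast-compatible
```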
Hi @shrinand1996, could you share your config file so we can have a look at the details?
@shrinand1996, can you check this with the new inferencer we recently introduced?
Closing due to inactivity. Feel free to reopen if the problem persists.