
`Invertd` Confused by Invertible Transforms in Postprocessing

Open ericspod opened this issue 10 months ago • 2 comments

Describe the bug The Invertd class relies on the list of applied transforms stored in dictionaries or MetaTensor instances to know which transforms to invert and with which parameters. This bookkeeping gets confused if the postprocessing sequence contains an invertible transform that runs before Invertd.

To Reproduce Run the following with the current version of MONAI:

from monai.data import MetaTensor, create_test_image_2d
from monai.transforms import Compose, EnsureChannelFirstd, Invertd, Spacingd
from monai.transforms.utility.dictionary import Lambdad

img, _ = create_test_image_2d(60, 60, 2, 10, num_seg_classes=2)
img = MetaTensor(img, meta={"original_channel_dim": float("nan"), "pixdim": [1.0, 1.0]})
key = "image"
preprocessing = Compose([
    EnsureChannelFirstd(key),
    Spacingd(key, pixdim=[2.0, 2.0]),
])

postprocessing = Compose([
    Lambdad(key, func=lambda x: x),  # adds something to the applied operations of img
    Invertd(key, transform=preprocessing, orig_keys=key)  # gets confused by this item
])

item = {key: img}
pre = preprocessing(item)
post = postprocessing(pre)  # exception raised here

This raises the following exception:

Traceback (most recent call last):
  File "/home/localek10/workspace/monai/MONAI_mine/monai/transforms/transform.py", line 150, in apply_transform
    return _apply_transform(transform, data, unpack_items, lazy, overrides, log_stats)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/localek10/workspace/monai/MONAI_mine/monai/transforms/transform.py", line 98, in _apply_transform
    return transform(data, lazy=lazy) if isinstance(transform, LazyTrait) else transform(data)
                                                                               ^^^^^^^^^^^^^^^
  File "/home/localek10/workspace/monai/MONAI_mine/monai/transforms/spatial/dictionary.py", line 530, in inverse
    d[key] = self.spacing_transform.inverse(cast(torch.Tensor, d[key]))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/localek10/workspace/monai/MONAI_mine/monai/transforms/spatial/array.py", line 546, in inverse
    return self.sp_resample.inverse(data)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/localek10/workspace/monai/MONAI_mine/monai/transforms/spatial/array.py", line 239, in inverse
    transform = self.pop_transform(data)
                ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/localek10/workspace/monai/MONAI_mine/monai/transforms/inverse.py", line 348, in pop_transform
    return self.get_most_recent_transform(data, key, check, pop=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/localek10/workspace/monai/MONAI_mine/monai/transforms/inverse.py", line 330, in get_most_recent_transform
    self.check_transforms_match(all_transforms[idx])
  File "/home/localek10/workspace/monai/MONAI_mine/monai/transforms/inverse.py", line 287, in check_transforms_match
    raise RuntimeError(
RuntimeError: Error SpatialResample getting the most recently applied invertible transform Lambda 129323001074736 != 129322998584224.
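The failure mode can be mimicked without MONAI. The inverse operation pops the most recent entry from the applied-operations stack and checks that it belongs to the transform being inverted, but here the top of the stack is the Lambdad record. The following is a minimal pure-Python sketch (the class and field names are illustrative, not MONAI's actual API):

```python
# Stand-in for MONAI's applied_operations trace stack (illustrative only).
class InvertibleSketch:
    def __init__(self, name):
        self.name = name

    def __call__(self, data, trace):
        # Forward pass: push a trace entry recording which instance ran.
        trace.append({"class": self.name, "id": id(self)})
        return data

    def inverse(self, data, trace):
        entry = trace.pop()  # pops the MOST RECENT entry...
        if entry["id"] != id(self):
            # ...which may belong to a later transform (e.g. Lambdad),
            # mirroring the check_transforms_match failure above.
            raise RuntimeError(f"{self.name} != {entry['class']}: transform mismatch")
        return data

trace = []
spacing = InvertibleSketch("Spacingd")
lambd = InvertibleSketch("Lambdad")
data = spacing(None, trace)   # preprocessing pushes its entry
data = lambd(data, trace)     # postprocessing pushes another entry on top

err = None
try:
    spacing.inverse(data, trace)  # Invertd-style pop finds Lambdad's entry
except RuntimeError as e:
    err = e
    print(e)
```

The top-of-stack entry belongs to Lambdad, so the identity check fails exactly as in the traceback.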

Expected behavior Invertd should only attempt to invert the transforms applied by preprocessing and ignore the Lambdad.

Environment

Ensuring you use the relevant python executable, please paste the output of:

================================
Printing MONAI config...
================================
MONAI version: 1.4.1rc1+30.g7c26e5af.dirty
Numpy version: 1.26.4
Pytorch version: 2.6.0+cu124
MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: 7c26e5af385eb5f7a813fa405c6f3fc87b7511fa
MONAI __file__: /home/<username>/workspace/monai/MONAI_mine/monai/__init__.py

Optional dependencies:
Pytorch Ignite version: 0.4.11
ITK version: 5.4.2
Nibabel version: 5.3.2
scikit-image version: 0.25.2
scipy version: 1.15.2
Pillow version: 11.1.0
Tensorboard version: 2.19.0
gdown version: 5.2.0
TorchVision version: 0.21.0+cu124
tqdm version: 4.67.1
lmdb version: 1.6.2
psutil version: 7.0.0
pandas version: 2.2.3
einops version: 0.8.1
transformers version: NOT INSTALLED or UNKNOWN VERSION.
mlflow version: 2.20.2
pynrrd version: 1.1.3
clearml version: 1.17.2rc0

For details about installing the optional dependencies, please visit:
    https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies


================================
Printing system config...
================================
System: Linux
Linux version: Ubuntu 24.04.2 LTS
Platform: Linux-6.11.0-17-generic-x86_64-with-glibc2.39
Processor: x86_64
Machine: x86_64
Python version: 3.12.9
Process name: python
Command: ['python', '-c', 'import monai; monai.config.print_debug_info()']
Open files: [popenfile(path='/home/localek10/.vscode-server/data/logs/20250320T125602/remoteTelemetry.log', fd=19, position=0, mode='a', flags=33793), popenfile(path='/home/localek10/.vscode-server/data/logs/20250320T125602/ptyhost.log', fd=20, position=2102, mode='a', flags=33793), popenfile(path='/home/localek10/.vscode-server/data/logs/20250320T125602/remoteagent.log', fd=22, position=491, mode='a', flags=33793)]
Num physical CPUs: 8
Num logical CPUs: 16
Num usable CPUs: 16
CPU usage (%): [3.4, 3.0, 3.4, 3.9, 3.4, 3.4, 6.4, 31.6, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.8, 70.9]
CPU freq. (MHz): 1653
Load avg. in last 1, 5, 15 mins (%): [1.5, 1.2, 0.8]
Disk usage (%): 10.6
Avg. sensor temp. (Celsius): UNKNOWN for given OS
Total physical memory (GB): 62.7
Available memory (GB): 51.7
Used memory (GB): 10.0

================================
Printing GPU config...
================================
Num GPUs: 2
Has CUDA: True
CUDA version: 12.4
cuDNN enabled: True
NVIDIA_TF32_OVERRIDE: None
TORCH_ALLOW_TF32_CUBLAS_OVERRIDE: None
cuDNN version: 90100
Current device: 0
Library compiled for CUDA architectures: ['sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_90']
GPU 0 Name: NVIDIA TITAN X (Pascal)
GPU 0 Is integrated: False
GPU 0 Is multi GPU board: False
GPU 0 Multi processor count: 28
GPU 0 Total memory (GB): 11.9
GPU 0 CUDA capability (maj.min): 6.1
GPU 1 Name: NVIDIA GeForce GTX 980
GPU 1 Is integrated: False
GPU 1 Is multi GPU board: False
GPU 1 Multi processor count: 16
GPU 1 Total memory (GB): 3.9
GPU 1 CUDA capability (maj.min): 5.2

Additional context This is a similar issue to that raised in #8056 but is unrelated to threading.

ericspod avatar Mar 20 '25 23:03 ericspod

Hi @ericspod, currently Invertd can get confused when the same key passes through multiple stages (pre- and post-processing), since it relies on the order of entries in applied_operations.

Using the case you provided, the current trace looks like:

applied_operations=[
  {"class": "EnsureChannelFirstd"},
  {"class": "Spacingd"},
  {"class": "Lambdad"},
  {"class": "Invertd"},
]

Proposed idea (with group labels):

applied_operations = [
  {"class": "EnsureChannelFirstd", "group": "preprocessing"},
  {"class": "Spacingd", "group": "preprocessing"},
  {"class": "Lambdad", "group": "postprocessing"},
  {"class": "Invertd", "group": "postprocessing"},
]

With this structure, Invertd(transform=preprocessing) could simply look up the "preprocessing" group and invert only that part, ignoring any unrelated operations from the postprocessing stage. Would this approach make sense to you?
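A minimal pure-Python sketch of the proposed lookup (the "group" field is hypothetical and not part of MONAI's current trace format):

```python
# Hypothetical grouped trace, as proposed above.
applied_operations = [
    {"class": "EnsureChannelFirstd", "group": "preprocessing"},
    {"class": "Spacingd", "group": "preprocessing"},
    {"class": "Lambdad", "group": "postprocessing"},
]

def ops_to_invert(trace, group):
    """Select only the entries stamped with the requested group,
    in reverse (last-applied-first) order for inversion."""
    return [op for op in reversed(trace) if op.get("group") == group]

print([op["class"] for op in ops_to_invert(applied_operations, "preprocessing")])
# → ['Spacingd', 'EnsureChannelFirstd']
```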

IamTingTing avatar Aug 17 '25 14:08 IamTingTing

> With this structure, Invertd(transform=preprocessing) can simply look up the "preprocessing" group and only invert that part, ignoring any unrelated operations from the postprocessing stage. Would this approach make sense to you?

Hi @IamTingTing, this might work for the problem here, though we would have to manage the complexity. Do you mean that "group" would be something the user has to specify? That would break a lot of things, so it would have to be optional. I wonder if it's possible to automatically set a group value whenever a Compose is applied, which would identify which transforms to invert; this could be a hash based on something Invertd can figure out automatically as well. We can discuss this further, but feel free to propose a PR and we can look at what the actual code would be. Thanks!
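The automatic-stamping idea could be sketched as follows; ComposeSketch, the pipeline token, and all field names here are hypothetical, not MONAI API:

```python
import uuid

# Hypothetical: a Compose-like wrapper stamps every trace entry it records
# with a token unique to that pipeline instance, so an inverter can select
# only its own entries without the user specifying a group by hand.
class ComposeSketch:
    def __init__(self, names):
        self.names = names
        self.token = uuid.uuid4().hex  # could equally be a hash of the transforms

    def __call__(self, trace):
        for name in self.names:
            trace.append({"class": name, "pipeline": self.token})

pre = ComposeSketch(["EnsureChannelFirstd", "Spacingd"])
post = ComposeSketch(["Lambdad"])

trace = []
pre(trace)
post(trace)

# An Invertd given `pre` would match on pre.token and skip Lambdad's entry:
to_invert = [op for op in reversed(trace) if op["pipeline"] == pre.token]
print([op["class"] for op in to_invert])  # → ['Spacingd', 'EnsureChannelFirstd']
```

Since the token is derived from the Compose instance rather than user input, existing pipelines would keep working unchanged.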

ericspod avatar Sep 08 '25 23:09 ericspod