Prevent implicit conversion of metatensor to numpy array
Fixes #8619.
Description
As detailed in #8619, the `convert_to_channel_last` method of the `ImageWriter` class implicitly converts `data` from a `MetaTensor` to an `ndarray` here. This PR addresses the issue by calling the `squeeze` method on `data` rather than using `np.squeeze`. A unit test has been added to cover this change.
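To illustrate the underlying behavior, here is a minimal standalone sketch (not code from the PR; `some_key` is just an illustrative metadata entry):

```python
import numpy as np
import torch

from monai.data import MetaTensor

mt = MetaTensor(torch.zeros(4, 4, 1), meta={"some_key": "some_value"})

# np.squeeze routes through NumPy, so the MetaTensor comes back as a plain
# numpy.ndarray and its metadata is silently dropped.
print(type(np.squeeze(mt, -1)))   # <class 'numpy.ndarray'>

# Calling the tensor's own squeeze method preserves both the type and the metadata.
squeezed = mt.squeeze(-1)
print(type(squeezed))             # <class 'monai.data.meta_tensor.MetaTensor'>
print(squeezed.meta["some_key"])  # some_value
```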
Types of changes
- [ ] Non-breaking change (fix or new feature that would not break existing functionality).
- [x] Breaking change (fix or new feature that would cause existing functionality to change).
- [x] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u --net --coverage`.
- [ ] Quick tests passed locally by running `./runtests.sh --quick --unittests --disttests`.
- [ ] In-line docstrings updated.
- [ ] Documentation updated, tested `make html` command in the `docs/` folder.
Notes for reviewers/maintainers:
- When running `./runtests.sh -f -u --net --coverage`, I'm getting several flake8 linting errors in files not touched by this commit. These errors pre-exist the current PR and are not addressed by the `--autofix` flag. Since they seem to be unrelated to this PR, I have left them as is, but I also have not checked the box above since the tests did not all pass. Reproducing here for reference:
Click here to show errors.
flake8 7.3.0 (flake8-bugbear: 24.2.6, flake8-comprehensions: 3.17.0, mccabe: 0.7.0, pep8-naming: 0.15.1, pycodestyle: 2.14.0, pyflakes: 3.4.0) CPython 3.12.3 on Linux
/home/davis/Developer/MONAI/monai/apps/auto3dseg/bundle_gen.py:399:69: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/apps/detection/networks/retinanet_detector.py:790:102: E231 missing whitespace after ','
/home/davis/Developer/MONAI/monai/apps/detection/utils/anchor_utils.py:177:44: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/apps/detection/utils/anchor_utils.py:183:44: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/apps/detection/utils/detector_utils.py:94:88: E225 missing whitespace around operator
/home/davis/Developer/MONAI/monai/apps/detection/utils/detector_utils.py:98:75: E225 missing whitespace around operator
/home/davis/Developer/MONAI/monai/auto3dseg/analyzer.py:288:58: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/auto3dseg/analyzer.py:369:69: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/auto3dseg/analyzer.py:538:58: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/auto3dseg/analyzer.py:916:47: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/auto3dseg/analyzer.py:918:47: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/data/wsi_reader.py:213:80: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/losses/unified_focal_loss.py:220:91: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/metrics/meandice.py:163:44: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/networks/blocks/patchembedding.py:105:30: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/networks/layers/factories.py:98:51: E225 missing whitespace around operator
/home/davis/Developer/MONAI/monai/transforms/croppad/array.py:293:87: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/transforms/regularization/array.py:44:72: E251 unexpected spaces around keyword / parameter equals
/home/davis/Developer/MONAI/monai/transforms/regularization/array.py:44:74: E202 whitespace before '}'
/home/davis/Developer/MONAI/monai/transforms/regularization/array.py:44:74: E251 unexpected spaces around keyword / parameter equals
1 E202 whitespace before '}'
3 E225 missing whitespace around operator
13 E226 missing whitespace around arithmetic operator
1 E231 missing whitespace after ','
2 E251 unexpected spaces around keyword / parameter equals
20
Check failed!
Please run auto style fixes: ./runtests.sh --autofix
- When running `./runtests.sh --quick --unittests --disttests`, I'm getting an error relating to `tests/networks/test_bundle_onnx_export.py`. I get the same error when I run `python -m tests.networks.test_onnx_export` on the `dev` branch. Since this error seems to be unrelated to this PR, I have left it as is, but also did not check the box above since the tests did not all pass. Reproducing here for reference:
Click here to show errors.
======================================================================
ERROR: test_onnx_export_1_False (tests.networks.test_bundle_onnx_export.TestONNXExport.test_onnx_export_1_False)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/davis/Developer/MONAI/tests/test_utils.py", line 840, in command_line_tests
normal_out = subprocess.run(cmd, env=test_env, check=True, capture_output=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/subprocess.py", line 571, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['python', '-m', 'monai.bundle', 'onnx_export', 'network_def', '--filepath', '/tmp/tmperf9v3hy/model.onnx', '--meta_file', '/home/davis/Developer/MONAI/tests/testing_data/metadata.json', '--config_file', "['/home/davis/Developer/MONAI/tests/testing_data/inference.json','/tmp/tmperf9v3hy/def_args.yaml']", '--ckpt_file', '/tmp/tmperf9v3hy/model.pt', '--args_file', '/tmp/tmperf9v3hy/def_args.yaml', '--input_shape', '[1, 1, 96, 96, 96]', '--use_trace', 'False']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/parameterized/parameterized.py", line 620, in standalone_func
return func(*(a + p.args), **p.kwargs, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/davis/Developer/MONAI/tests/networks/test_bundle_onnx_export.py", line 65, in test_onnx_export
command_line_tests(cmd)
File "/home/davis/Developer/MONAI/tests/test_utils.py", line 846, in command_line_tests
raise RuntimeError(f"subprocess call error {e.returncode}: {errors}, {output}") from e
RuntimeError: subprocess call error 1: b'Traceback (most recent call last):
File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 118, in __call__
exported_program = self._capture(model, args, kwargs, dynamic_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 210, in _capture
return torch.export.export(
^^^^^^^^^^^^^^^^^^^^
File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/export/__init__.py", line 270, in export
raise ValueError(
ValueError: Exporting a ScriptModule is not supported. Maybe try converting your ScriptModule to an ExportedProgram using `TS2EPConverter(mod, args, kwargs).convert()` instead.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/davis/Developer/MONAI/monai/bundle/__main__.py", line 31, in <module>
fire.Fire()
File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/fire/core.py", line 135, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/fire/core.py", line 468, in _Fire
component, remaining_args = _CallAndUpdateTrace(
^^^^^^^^^^^^^^^^^^^^
File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/davis/Developer/MONAI/monai/bundle/scripts.py", line 1426, in onnx_export
_export(
File "/home/davis/Developer/MONAI/monai/bundle/scripts.py", line 1302, in _export
net = converter(model=net, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/davis/Developer/MONAI/monai/networks/utils.py", line 735, in convert_to_onnx
torch.onnx.export(
File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/onnx/__init__.py", line 296, in export
return _compat.export_compat(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_compat.py", line 143, in export_compat
onnx_program = _core.export(
^^^^^^^^^^^^^
File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_flags.py", line 23, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py", line 1385, in export
raise _errors.TorchExportError(
torch.onnx._internal.exporter._errors.TorchExportError: Failed to export the model with torch.export. \x1b[96mThis is step 1/3\x1b[0m of exporting the model to ONNX. Next steps:
- Modify the model code for `torch.export.export` to succeed. Refer to https://pytorch.org/docs/stable/generated/exportdb/index.html for more information.
- Debug `torch.export.export` and submit a PR to PyTorch.
- Create an issue in the PyTorch GitHub repository against the \x1b[96m*torch.export*\x1b[0m component and attach the full error stack as well as reproduction scripts.
## Exception summary
<class \'ValueError\'>: Exporting a ScriptModule is not supported. Maybe try converting your ScriptModule to an ExportedProgram using `TS2EPConverter(mod, args, kwargs).convert()` instead.
(Refer to the full stack trace above for more information.)
', b"2025-12-04 09:34:48,922 - INFO - --- input summary of monai.bundle.scripts.onnx_export ---
2025-12-04 09:34:48,922 - INFO - > meta_file: '/home/davis/Developer/MONAI/tests/testing_data/metadata.json'
2025-12-04 09:34:48,922 - INFO - > net_id: 'network_def'
2025-12-04 09:34:48,922 - INFO - > filepath: '/tmp/tmperf9v3hy/model.onnx'
2025-12-04 09:34:48,922 - INFO - > config_file: ['/home/davis/Developer/MONAI/tests/testing_data/inference.json',
'/tmp/tmperf9v3hy/def_args.yaml']
2025-12-04 09:34:48,922 - INFO - > ckpt_file: '/tmp/tmperf9v3hy/model.pt'
2025-12-04 09:34:48,922 - INFO - > use_trace: False
2025-12-04 09:34:48,923 - INFO - > input_shape: [1, 1, 96, 96, 96]
2025-12-04 09:34:48,923 - INFO - ---
torch_versioned_kwargs={}
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with `torch.export.export(..., strict=False)`... \xe2\x9d\x8c
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with `torch.export.export(..., strict=True)`...
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with `torch.export.export(..., strict=True)`... \xe2\x9d\x8c
- When running `make html`, I get an error related to indentation in `meta_tensor.py`. Since this error seems to be unrelated to this PR, I have left it as is, but also did not check the box above since the documentation did not build. Reproducing here for reference:
Click here to show errors.
/home/davis/Developer/MONAI/monai/data/meta_tensor.py:docstring of monai.data.MetaTensor.nonzero_static:24: ERROR: Unexpected indentation. [docutils]
/home/davis/Developer/MONAI/monai/data/meta_tensor.py:docstring of monai.data.MetaTensor.nonzero_static:33: ERROR: Unexpected indentation. [docutils]
Extension error (sphinx.ext.linkcode)!
Versions
========
* Platform: linux; (Linux-6.9.3-76060903-generic-x86_64-with-glibc2.39)
* Python version: 3.12.3 (CPython)
* Sphinx version: 9.0.4
* Docutils version: 0.22.3
* Jinja2 version: 3.1.6
* Pygments version: 2.19.2
Last Messages
=============
config_syntax
reading sources... [ 48%]
contrib
reading sources... [ 50%]
data
Loaded Extensions
=================
* sphinx.ext.mathjax (9.0.4)
* alabaster (1.0.0)
* sphinxcontrib.applehelp (2.0.0)
* sphinxcontrib.devhelp (2.0.0)
* sphinxcontrib.htmlhelp (2.1.0)
* sphinxcontrib.serializinghtml (2.0.0)
* sphinxcontrib.qthelp (2.0.0)
* recommonmark (0.6.0)
* sphinx.ext.intersphinx (9.0.4)
* sphinx.ext.autodoc (9.0.4)
* sphinx.ext.napoleon (9.0.4)
* sphinx.ext.linkcode (9.0.4)
* sphinx.ext.autosectionlabel (9.0.4)
* sphinx.ext.autosummary (9.0.4)
* sphinx_autodoc_typehints (unknown version)
* pydata_sphinx_theme (unknown version)
Traceback
=========
File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/sphinx/events.py", line 452, in emit
raise ExtensionError(
sphinx.errors.ExtensionError: Handler <function doctree_read at 0x7beb85b16980> for event 'doctree-read' threw an exception (exception: Empty module name)
The full traceback has been saved in:
/tmp/sphinx-err-s5bsuit5.log
To report this error to the developers, please open an issue at <https://github.com/sphinx-doc/sphinx/issues/>. Thanks!
Please also report this if it was a user error, so that a better error message can be provided next time.
make: *** [Makefile:24: html] Error 2
- Since this PR is a bugfix and brings actual behavior in line with the documented behavior, I have not updated the in-line docstrings. However, I am of course open to feedback as to the best practice here.
Thank you to the maintainers for their work on this invaluable project and thank you in advance for considering this PR!
Walkthrough
Two files changed. In `monai/data/image_writer.py`, two identical call sites in `convert_to_channel_last` replaced `np.squeeze(data, -1)` with the array method `data.squeeze(-1)`. In `tests/data/test_itk_writer.py`, tests were updated to use Torch tensors and import `MetaTensor`; a new test, `test_metatensor_preserved`, was added to assert that passing a `MetaTensor` with `channel_dim=-1` and `squeeze_end_dims=True` leaves the internal `data_obj` as a `MetaTensor` and preserves its metadata.
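Roughly, the new assertion looks like the following sketch (names, shapes, and metadata here are illustrative; the actual test lives in `tests/data/test_itk_writer.py`):

```python
import torch

from monai.data import ITKWriter, MetaTensor


def test_metatensor_preserved_sketch():
    # A MetaTensor with a trailing singleton channel dim and some metadata.
    img = MetaTensor(torch.zeros(8, 8, 8, 1), meta={"some_key": "some_value"})

    writer = ITKWriter()
    # channel_dim=-1 with squeeze_end_dims=True exercises the trailing-dim
    # squeeze inside convert_to_channel_last.
    writer.set_data_array(img, channel_dim=-1, squeeze_end_dims=True)

    # After the fix, the stored array is still a MetaTensor and keeps its metadata.
    assert isinstance(writer.data_obj, MetaTensor)
    assert writer.data_obj.meta["some_key"] == "some_value"
```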
Estimated code review effort
🎯 2 (Simple) | ⏱️ ~12 minutes
- Verify `data.squeeze(-1)` matches `np.squeeze(data, -1)` semantics for NumPy and Torch arrays at both call sites (see the sketch after this list).
- Check the new test imports `MetaTensor` correctly and that tensor creation uses torch as intended.
- Ensure `test_metatensor_preserved` correctly asserts type and metadata are preserved.
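On the first point, a throwaway snippet (not part of the PR) to sanity-check the squeeze semantics:

```python
import numpy as np
import torch

a = np.zeros((4, 4, 1))
t = torch.zeros(4, 4, 1)

# For plain numpy arrays the two spellings agree in value and shape.
assert np.array_equal(np.squeeze(a, -1), a.squeeze(-1))

# For torch tensors, Tensor.squeeze(-1) removes the trailing singleton dim while
# keeping the result a torch.Tensor; np.squeeze coerces it to a numpy.ndarray.
assert isinstance(t.squeeze(-1), torch.Tensor)
assert tuple(t.squeeze(-1).shape) == (4, 4)
assert isinstance(np.squeeze(t, -1), np.ndarray)
```

One nuance worth confirming at the call sites: when the targeted dim has size 1 the two spellings agree, but when it does not, NumPy raises a ValueError while torch's `Tensor.squeeze(dim)` silently returns the input unchanged.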
Pre-merge checks
❌ Failed checks (1 warning)
| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 25.00% which is insufficient. The required threshold is 80.00%. | You can run @coderabbitai generate docstrings to improve docstring coverage. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | Title accurately summarizes the main change: preventing implicit MetaTensor-to-NumPy conversion in ImageWriter.convert_to_channel_last by replacing np.squeeze with data.squeeze. |
| Description check | ✅ Passed | Description covers the issue (fixes #8619), explains the solution (replacing np.squeeze with data.squeeze), documents the test addition, and transparently reports unrelated pre-existing CI/test failures. |
FYI, the two checks with errors both failed due to running out of disk space.
premerge / build-docs:
Error: No space left on device : '/home/runner/actions-runner/cached/_diag/pages/32e19e11-3e44-443a-90c8-08ee0934371d_1af45fa4-32a3-4cdb-ada0-4b63109d0edd_1.log'
premerge / packaging:
Attempting uninstall: numpy
Found existing installation: numpy 2.0.2
Uninstalling numpy-2.0.2:
Successfully uninstalled numpy-2.0.2
Attempting uninstall: filelock
Found existing installation: filelock 3.19.1
Uninstalling filelock-3.19.1:
Successfully uninstalled filelock-3.19.1
ERROR: Could not install packages due to an OSError: [Errno 28] No space left on device
These seem to be unrelated to the PR. This PR is ready for review.
Hi @sudomakeinstall, thanks for the contribution. The errors with the CI/CD are related to running out of space on the Linux image GitHub provides; I'm working on revamping the CI/CD testing here, which will address this, and I hope to get that through soon if it's a show stopper. As for the other issues, I think those are related to the versions of libraries you're using in your environment; at least, that's what I've seen when I had mismatched versions of things like mypy. I wouldn't worry as long as our tests are getting through; the ones that aren't seem unrelated to code issues.
@KumoLiu we should keep an eye here and see if the space issue is coming up for everyone now. Thanks!