dcmjs
Normalization makes writing adapters fiddly for multi-volume series
(Related to #4)
Some scanners produce multi-volume series, in which the instance number increases linearly but the series contains multiple volumes, e.g. with different b-values or multiple phases.
Normalizers in dcmjs currently sort by `ImagePositionPatient`, such that if you have two sub-volumes in the same series and your images are single-frame, `Normalizer.normalizeToDataset` returns a multiframe dataset which contains two interleaved volumes.
This makes writing adapters difficult/awkward if the source library orders by instance number. Currently the Cornerstone segmentation adapter will produce a segmentation with incorrect `PerFrameFunctionalGroups` for such multi-volume images.
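To illustrate the interleaving (this is a simplified sketch, not the actual dcmjs implementation): sorting single-frame instances of a two-phase series purely by position along the slice axis merges the phases slice-by-slice instead of keeping them separate.

```javascript
// Four fake single-frame instances: two phases, two slice positions each.
const instances = [
  { InstanceNumber: 1, phase: "A", ImagePositionPatient: [0, 0, 0] },
  { InstanceNumber: 2, phase: "A", ImagePositionPatient: [0, 0, 5] },
  { InstanceNumber: 3, phase: "B", ImagePositionPatient: [0, 0, 0] },
  { InstanceNumber: 4, phase: "B", ImagePositionPatient: [0, 0, 5] },
];

// Sort by the z-component of ImagePositionPatient, which is what a
// position-based normalizer effectively does for an axial acquisition.
function sortByPosition(datasets) {
  return [...datasets].sort(
    (a, b) => a.ImagePositionPatient[2] - b.ImagePositionPatient[2]
  );
}

const order = sortByPosition(instances).map((ds) => ds.phase).join("");
console.log(order); // phases come out interleaved ("ABAB"), not contiguous ("AABB")
```

Instance-number order would keep the phases contiguous; position order splices them together, which is what the segmentation adapter then trips over.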
There are a few solutions I can think of off the top of my head, each with different drawbacks.
- `Normalizer.normalizeToDataset` returns both a dataset and a map of input dataset indices to frames within the normalized set.
  - One can then make sure that, for derived images and adapters, the reordering is accounted for. However, this feels like it somewhat defeats the purpose of normalization.
  - Using dcmjs just to convert single-frame to multiframe will still result in interleaved volumes.
- Optional argument on `Normalizer.normalizeToDataset` to not reorder slices.
  - Adapters that know this problem will cause issues on a particular platform can choose not to reorder slices.
  - Numerically, sub-volumes are very difficult to differentiate from badly indexed slices of a single volume, so there is no good programmatic way to know when not to sort the data. I acknowledge this is a common problem across the space. Do we want to/even have the resources to maintain private field filtering in dcmjs to try to work out what to do?
- Use a metadata provider in a Cornerstone adapter to match slices of the source data to the derived dataset via the `ReferencedSOPClassUID`.
  - This would be the easiest to "tack on", but it's a huge waste of resources to reorder the slices just to search through them and effectively re-re-order them.
  - Same issue about using dcmjs to convert from single-frame to multiframe.
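To make the first two options concrete, here is a rough sketch of what the API could look like. The names are assumptions for illustration: neither an `options.sort` flag nor a returned `frameMap` exists in dcmjs today.

```javascript
// Hypothetical normalizeToDataset that can skip sorting and always
// reports where each input dataset landed in the normalized output.
function normalizeToDataset(datasets, options = { sort: true }) {
  // Remember each input's original index before any reordering.
  const indexed = datasets.map((ds, inputIndex) => ({ ds, inputIndex }));

  if (options.sort) {
    // Position-based ordering, as the current normalizers do.
    indexed.sort(
      (a, b) => a.ds.ImagePositionPatient[2] - b.ds.ImagePositionPatient[2]
    );
  }

  // frameMap[inputIndex] -> frame number in the normalized multiframe.
  const frameMap = {};
  indexed.forEach(({ inputIndex }, frame) => {
    frameMap[inputIndex] = frame;
  });

  return { dataset: indexed.map(({ ds }) => ds), frameMap };
}

const sets = [
  { ImagePositionPatient: [0, 0, 5] },
  { ImagePositionPatient: [0, 0, 0] },
];

// With sorting, input 0 (z=5) lands at frame 1; without, input order is kept.
console.log(normalizeToDataset(sets).frameMap);
console.log(normalizeToDataset(sets, { sort: false }).frameMap);
```

An adapter that orders by instance number could then either pass `sort: false`, or consult `frameMap` to translate its indices, without re-deriving the ordering itself.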
As you can see, all approaches are rather unsatisfactory; thoughts?
Perhaps the best approach would be not to try to sort the given datasets at all, in general, but just to concatenate them?
What kind of scans do you have? The idea of the Normalizers is that you have acquisition-specific subclasses that are currently selected by SOPClassUID.
But those are broad categories, so if you want to sort certain types of acquisitions in different ways we could come up with a plug-in model whereby different Normalizers could examine the datasets and offer options about how to load them. This is how Slicer's DICOMPlugins work and it's turned out to be a pretty reasonable way to manage the complexity.
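A rough sketch of that plug-in idea, with entirely hypothetical names (nothing like `registerNormalizer` exists in dcmjs today): each plug-in examines the datasets and reports how confident it is that it can handle them, much as Slicer's DICOMPlugins do.

```javascript
const normalizerPlugins = [];

function registerNormalizer(plugin) {
  normalizerPlugins.push(plugin);
}

// Pick the plug-in that claims the highest confidence for this data,
// or null if no plug-in claims it at all.
function selectNormalizer(datasets) {
  let best = null;
  for (const plugin of normalizerPlugins) {
    const confidence = plugin.examine(datasets);
    if (confidence > 0 && (!best || confidence > best.confidence)) {
      best = { plugin, confidence };
    }
  }
  return best && best.plugin;
}

// Example plug-in for multi-phase acquisitions: claims the data when a
// single series carries more than one TemporalPositionIdentifier value.
registerNormalizer({
  name: "MultiPhaseCTNormalizer",
  examine(datasets) {
    const phases = new Set(datasets.map((ds) => ds.TemporalPositionIdentifier));
    return phases.size > 1 ? 0.9 : 0;
  },
  normalize(datasets) {
    /* sort within each phase, then concatenate the phases */
  },
});

const twoPhase = [
  { TemporalPositionIdentifier: 1 },
  { TemporalPositionIdentifier: 2 },
];
console.log(selectNormalizer(twoPhase).name); // "MultiPhaseCTNormalizer"
```

The `SOPClassUID`-based selection could stay as the default, with plug-ins refining the choice only for acquisitions they explicitly recognize.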
What kind of scans do you have?
I'll see if I can share this data; it's just CT Image Storage, with two phases in one series, stacked one after another in instance number.
Yes, that should really be normalized to a legacy converted CT image with correct dimension encoding.
To address Andrey's point, maybe we could come up with samples that represent the essential structure of the data but with any identifiers stripped (including the image, so it's a true safe harbor).
Do we want to/even have the resources to maintain private field filtering in dcmjs to try to work out what to do?
I can't see how you can get around parsing private attributes if you want to have happy imaging researchers. I have limited experience, but I think it might be OK to ignore those private fields to meet the needs of radiologists; however, I would think OHIF et al. want to address various quantitative analysis use cases, which often cannot be addressed unless private attributes are parsed or normalized into standard attributes. Where that normalization functionality fits, that I don't know...
The same problem will exist in any of our code intended to work with real-world DICOM data, where scanner vendors embed important information in non-standard locations. The only solution I know of is to hard-code workarounds for known cases and issue stern warnings when we see data that is not in a known format. Ideally we'll be able to build an open-source knowledge base with example data across dcmjs, highdicom, and other software so that we handle as many cases as possible consistently.
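A minimal sketch of that "hard-coded workarounds plus stern warnings" pattern. The handler table and field names here are illustrative assumptions; a real table would be built from the shared knowledge base of known vendor formats.

```javascript
// Hypothetical handlers keyed by a private creator string. Each handler
// knows how to pull the relevant values out of one vendor's private tags.
const knownPrivateHandlers = {
  "ACME MR HEADER": (ds) => ({ bValue: ds.privateBValue }),
};

function extractVendorInfo(dataset) {
  const handler = knownPrivateHandlers[dataset.PrivateCreator];
  if (!handler) {
    // The stern warning: we saw data we don't have a workaround for.
    console.warn(
      `Unrecognized private data from creator "${dataset.PrivateCreator}"; ` +
        "results may be incorrect for this acquisition."
    );
    return {};
  }
  return handler(dataset);
}

console.log(extractVendorInfo({ PrivateCreator: "ACME MR HEADER", privateBValue: 1000 }));
// { bValue: 1000 }
```

The point is less the extraction itself than making the unknown-format path loud, so bad data never fails silently.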
Where that normalization functionality fits, that I don't know...
Within dcmjs, that would be in specialized Normalizer subclasses that recognize particular format variants and map them to normalized form. This is not implemented yet, but it would be similar to what Slicer DICOMPlugins do, creating normalized DICOM instances instead of MRML.
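One way those subclasses could look. This is a sketch under assumed names: dcmjs does have a Normalizer base class, but a variant-recognizing `canHandle` hook and this particular subclass are hypothetical.

```javascript
class Normalizer {
  constructor(datasets) {
    this.datasets = datasets;
  }
  normalize() {
    throw new Error("subclass responsibility");
  }
}

// A variant-specific subclass: recognizes a multi-volume layout and would
// emit a normalized DICOM dataset with correct dimension encoding.
class TwoPhaseCTNormalizer extends Normalizer {
  static canHandle(datasets) {
    // Heuristic for this variant: repeated slice positions within one
    // series imply stacked sub-volumes rather than a single volume.
    const positions = datasets.map((ds) => ds.ImagePositionPatient.join(","));
    return new Set(positions).size < positions.length;
  }
  normalize() {
    /* group frames by phase, sort within each phase, encode dimensions */
  }
}

const twoPhase = [
  { ImagePositionPatient: [0, 0, 0] },
  { ImagePositionPatient: [0, 0, 0] },
];
console.log(TwoPhaseCTNormalizer.canHandle(twoPhase)); // true
```

Selection could then walk the registered subclasses and hand the datasets to the first (or most confident) one whose `canHandle` returns true.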