
Program exits when importing pictures

muskmelonxy opened this issue 2 years ago

I have tried different types of pictures, including tif and ndpi, and whenever I import a picture, the program exits immediately. Does it have to be run with an Nvidia GPU?

Environment: Windows 11 64-bit with OpenCL installed. CPU: Ryzen 3600, GPU: 6700 XT

muskmelonxy avatar Sep 05 '21 07:09 muskmelonxy

Hi. The technology used in FastPathology, OpenCL and OpenGL, should work on AMD processors as well. However, we have only tested it on Intel and Nvidia so far, so there could be a vendor-specific bug. Do you get some kind of error message if you run it from the terminal?

smistad avatar Sep 05 '21 13:09 smistad

@andreped @smistad

I have similar issues when using any TIFF that is not converted using vips.

I think this happens if it's a non-pyramidal TIFF or if it's exported from software other than vips.

For example, from the example dataset in NoCodeSeg, I used: ID-114_HE_inactive.ndpi

The ndpi opens fine in FastPathology and QuPath. I exported the ndpi from QuPath as a TIFF and tried opening it in FP. The software exited. Output from verbose mode:

INFO [25460] Loaded configuration file: C:/Program Files/FastPathology/bin/fast_configuration.txt
INFO [25460] Test data path: C:/ProgramData/FAST/data/
INFO [25460] Kernel source path: C:/Program Files/FastPathology/bin//..//kernels/
INFO [25460] Kernel binary path: C:/ProgramData/FAST/kernel_binaries/
INFO [25460] Documentation path: C:/Program Files/FastPathology/bin//..//doc/
INFO [25460] Pipeline path: C:/ProgramData/FAST/pipelines/
INFO [25460] Qt plugins path: C:/Program Files/FastPathology/bin//..//plugins/
INFO [25460] Library path: C:/Program Files/FastPathology/bin//..//bin/
INFO [25460] Creating new QApp
INFO [25460] Creating new GL context for computation thread
INFO [25460] Large screen detected with width: 3840
Temporary path (UPDATED!): C:/Users/Pradeep/AppData/Local/Temp/fastpathology-PWathe
Does folder exist:1
Current temporary project folder location:C:/Users/Pradeep/AppData/Local/Temp/fastpathology-PWathe/project_17409488
INFO [25460] QApp already exists..
INFO [25460] Found 1 OpenCL platforms.
INFO [25460] 1 platforms selected for inspection.
INFO [25460] Platform 0: NVIDIA Corporation
INFO [25460] This platform has 1 available devices in total
INFO [25460] Looking for GPU devices only.
INFO [25460] 1 selected.
INFO [25460] Inspecting device 0 with the name NVIDIA GeForce RTX 3080
INFO [25460] There are 1 devices that can be associated with the GL context
INFO [25460] 0000022C7CB22F10 - 0000022C7CB22F10
INFO [25460] Device has OpenGL interop capability
INFO [25460] The device was accepted.
INFO [25460] NVIDIA GeForce RTX 3080 has 68 compute units
INFO [25460] The device NVIDIA GeForce RTX 3080 got a score of 1000068
INFO [25460] The platform NVIDIA CUDA was selected as the best platform.
INFO [25460] A total of 1 devices were selected for the context from this platform:
INFO [25460] The best device was: NVIDIA GeForce RTX 3080
INFO [25460] The following device was selected as main device: NVIDIA GeForce RTX 3080
INFO [25460] Writing directly to 3D textures/images is NOT supported on main device
INFO [25460] Resizing window to 1024 1024
INFO [25460] Window not minimized; turning ON synchronized rendering
INFO [25460] Trying to start computation thread
INFO [25460] Computation thread started
INFO [25460] Window not minimized; turning ON synchronized rendering
INFO [25460] Drag event received in window widget
INFO [25460] Drop event received in window widget
INFO [25460] Dropped file:D:/Colon_sections/ID-114/ID-114_HE_inactive_qupath_tiff_downsampled.tif
Selected file: D:/Colon_sections/ID-114/ID-114_HE_inactive_qupath_tiff_downsampled.tif
INFO [25460] EXECUTING WholeSlideImageImporter because PO is modified.

I exported it as an OME-TIFF from QuPath and got the same issue.

I have included a link where you can download the converted files that gave errors.

https://cloudstor.aarnet.edu.au/plus/s/2o8DAnFYH408AAE

I've also included the vips-converted file that works.

Cheers Pradeep


pr4deepr avatar Jan 07 '22 00:01 pr4deepr

@pr4deepr I believe @muskmelonxy's issue is related to him using AMD processors, which we have not had time to properly test and develop against. In theory it should work, but there are likely some things we need to fix. He also used Windows 11, which we don't have a machine to test on (yet), but in general, software on Windows is quite forward (and backward) compatible.

And yes, FAST supports reading pyramidal images, but I think the main reason it fails is that FAST uses OpenSlide for reading WSIs, and OpenSlide does not support OME-TIFF (there is a pull request here to add this to OpenSlide).

The reason QuPath succeeds in reading this image is that QuPath can read images using either OpenSlide or BioFormats (and other readers as well, I believe), and BioFormats is likely able to read this image. Initially, we wanted to add BioFormats to FAST as well, but the issue is that BioFormats is written in Java. Someone has attempted to make a C++ wrapper for it (see here), but as you can see from the link, the project was archived a long time ago. So we are unable to add BioFormats support.

It is also outside the scope of our project to add support for other formats. I believe it is better that the development team at OpenSlide makes that effort. As you can see from the PRs, there have been many attempts to add support for various new formats, such as CellSens VSI, Philips' iSyntax, OME-TIFF, etc.

What vips really does is convert your WSI to a TIFF format that OpenSlide supports, and that's why you can read these images in FP.
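
For reference, that conversion can also be scripted. Here is a minimal pyvips sketch (assuming libvips/pyvips are installed; the input file name is taken from the verbose log above and the output name is made up):

import pyvips  # pip install pyvips (requires libvips)

# Load the flat TIFF exported from QuPath.
image = pyvips.Image.new_from_file(
    "ID-114_HE_inactive_qupath_tiff_downsampled.tif", access="sequential")

# Save it as a tiled, pyramidal TIFF that OpenSlide (and hence FP) can read.
image.tiffsave(
    "ID-114_HE_inactive_pyramidal.tif",
    tile=True, pyramid=True,
    tile_width=256, tile_height=256,
    compression="jpeg", Q=85)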

What I don't understand is why you are creating downsampled versions of the .ndpi and then trying to read these into FP (the .ndpi already contains downsampled versions of the full-resolution image; that is what makes it a pyramidal image). Are you creating downsampled versions of the WSI to be able to run inference and train models on downsampled patches? There are other ways around this.
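
For instance, FAST can extract patches from a lower pyramid level directly, instead of you creating a downsampled copy of the file first. A rough sketch using the FAST Python bindings (pyFAST); parameter names may differ slightly between versions, and the level value here is just an example:

import fast  # pip install pyfast
import numpy as np

# Read the WSI directly; FAST/OpenSlide handle the pyramid internally.
importer = fast.WholeSlideImageImporter.create("ID-114_HE_inactive.ndpi")

# Optionally restrict patches to tissue regions.
tissue = fast.TissueSegmentation.create().connect(importer)

# Extract 512x512 patches from pyramid level 2 instead of pre-downsampling
# the whole slide on disk (the level value is illustrative).
patches = fast.PatchGenerator.create(512, 512, level=2) \
    .connect(0, importer) \
    .connect(1, tissue)

# Stream the patches, e.g. into a training or inference loop.
for patch in fast.DataStream(patches):
    print(np.asarray(patch).shape)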

andreped avatar Jan 07 '22 05:01 andreped

Thanks for this, @andreped. That gives me more context!

I downsampled the images just so I don't have to deal with large .tif file exports and can upload small files for this use case.

Otherwise, our WSIs are in the MRXS file format (3D Histech) and I have to convert them to pyramidal TIFFs. For my current workflow, I:

  • open the MRXS files in QuPath
  • apply colour deconvolution to get the hematoxylin channel (ImageJ colour deconvolution to get the hematoxylin channel with a white background)
  • save as TIFF from QuPath
  • convert the TIFF to a pyramidal TIFF in vips
  • use the vips TIFF in FP for prediction

The reason I'm using the hematoxylin channel only is that ours is not H&E; it's H & DAB pink & DAB brown. As such, the DAB pink is quite a different stain from eosin and will label different areas based on the protein of interest. Because of this, I have to extract the H channel for FP prediction.
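
(For illustration, this kind of hematoxylin-channel extraction can also be sketched in Python with scikit-image's rgb2hed; note that it uses a standard H-E/DAB stain matrix, which is only an approximation for our H & DAB pink & DAB brown preparation, and the file names below are placeholders:)

import numpy as np
from skimage import io
from skimage.color import rgb2hed, hed2rgb

# Read an RGB patch/region exported from the slide (placeholder file name).
rgb = io.imread("region_from_wsi.png")

# Colour deconvolution into Hematoxylin / Eosin / DAB optical-density channels.
hed = rgb2hed(rgb)

# Keep only the hematoxylin channel and convert back to an RGB image;
# the zeroed channels give the white background.
null = np.zeros_like(hed[:, :, 0])
h_rgb = hed2rgb(np.stack((hed[:, :, 0], null, null), axis=-1))

io.imsave("region_hematoxylin_only.png", (h_rgb * 255).astype(np.uint8))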

I downsample the images so I don't have to deal with big images, as that will become an issue down the track with large numbers of files.

I haven't optimised my workflow; it's just an initial approach. Perhaps you can give me input on this? Does FP support all files read by OpenSlide? If so, it should read MRXS files, right?

Thanks

Pradeep

pr4deepr avatar Jan 07 '22 05:01 pr4deepr

AFAIK MRXS is supported by OpenSlide (see here).
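
(If you want to double-check locally whether OpenSlide can read a given file, here is a quick sketch with openslide-python; "slide.mrxs" is a placeholder path:)

import openslide  # pip install openslide-python (also needs the OpenSlide library itself)

path = "slide.mrxs"  # placeholder path

# detect_format() returns the vendor string (e.g. "mirax" for MRXS),
# or None if OpenSlide cannot read the file.
print(openslide.OpenSlide.detect_format(path))

slide = openslide.OpenSlide(path)
print(slide.dimensions, slide.level_count)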

But I realized that I had forgotten to add this as an option in the selectFile method. I therefore made a commit that adds support for MRXS just now: https://github.com/AICAN-Research/FAST-Pathology/commit/d810332400647e2560994b0c98dbcd173084e92a

To use the most recent version of FP (the nightly, which has not yet been released), you can download the produced artifact: https://github.com/AICAN-Research/FAST-Pathology/actions/runs/1666348776

Simply download the file "Windows package", unzip, and run the installer to update your FP version.

That means that, if you'd like, you can run inference directly on the raw MRXS-format WSIs for deployment. However, FAST does not currently have a color deconvolution method. This is something we will surely add in the near future, but currently it does not exist. I have a potential workaround until said solution exists, but it is not ready yet.

However, I believe that if you train your models with smart color augmentation, your final model should work well on both H&E and H & DAB pink & DAB brown, as epithelium segmentation is an easy task and not that dependent on color/staining information.
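
(By color augmentation I mean something along these lines, here sketched with torchvision; this is independent of FAST/FP, and the parameters are illustrative only:)

import torchvision.transforms as T  # assuming a PyTorch training setup

# Jitter the color properties of training patches so the model becomes
# less dependent on the exact stain appearance (H&E vs. H & DAB).
color_augment = T.Compose([
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.4, hue=0.1),
    T.ToTensor(),
])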

Conclusion: You have to do the QuPath preprocessing workaround to run inference with our H&E model to create annotations on your data, but when you have your trained model ready (trained on your stain type with some color augmentation), you can run inference in FP directly on the raw MRXS format, without doing color deconvolution. I believe this should work rather well.

andreped avatar Jan 07 '22 06:01 andreped

Awesome! Thanks for that. I'd be keen on the workaround when it's ready. In regard to training a model, I think hematoxylin may be the only consistent marker across all preparations; I need to confirm this.

Appreciate the prompt responses!

Thanks a lot.

pr4deepr avatar Jan 07 '22 06:01 pr4deepr

I can have a workaround quite fast, today even. But in order to properly test it, it would be nice to have a trained model I could use with the pipeline. It does not need to be a state-of-the-art model in any way, just a model that seems to be doing OK and finds both classes somewhat well.

Then I need to see, for comparison, the predictions you get when you run inference in FP on the H-image generated in QuPath.

Hopefully, running my new pipeline (which does color deconvolution and uses the resulting H-image as input for your trained model) will work well on your raw WSIs (.mrxs), without the need to jump to QuPath each time for deployment. The whole point of deployment is that it should be seamless, requiring little to no user interaction ;) (at least somewhat)

andreped avatar Jan 07 '22 06:01 andreped

Sorry, with the workaround do you mean running your new pipeline (including color deconvolution, using the resulting H-image as input for the trained model) directly on the raw WSIs (.mrxs)?

If so, the published Epithelium_CD3 model does quite well on the hematoxylin-only colour-deconvolved images. So my plan is to use the H-image and then the Epithelium_CD3 model in FP. Essentially, when you have the pipeline ready, I would use it for hematoxylin-only predictions for now. This would simplify my workflow considerably.

The annotations generated by the model above are not perfect, but they still work quite well. The plan is to import them into QuPath and then split the annotations into multiple classes, which can then be used to train a model in the future. This may not be in the near future, though.

No rush on the pipeline; I may only be able to get to it next week.

pr4deepr avatar Jan 07 '22 07:01 pr4deepr

Oh, I see. But then, if you have a couple of WSIs in the MRXS format, I could try to set up the pipeline, using the published Epithelium_CD3 model to generate predictions in FP and using my workaround to provide the hematoxylin-only image as input to the network. It would be great if you could send me the corresponding predictions you get from FP (the produced .tiff files) when you use the H-image generated from QuPath as input to the model.

Or you could also send me the H pyramidal TIFF images that you generated in your QuPath-vips workflow, and I could generate the predictions myself.

andreped avatar Jan 07 '22 07:01 andreped

Yup, that sounds great, I'll get onto that. Also, due to issues with it being clinical data, I can't share the data publicly. The link can be emailed to you. Would you mind sharing your email, and I'll send the data early next week?

pr4deepr avatar Jan 07 '22 08:01 pr4deepr

Oh, yeah, to do all that we also need to create some type of data sharing agreement, which is annoying.

OK, I know that FP can read your MRXS WSIs. That works, at least with the public MRXS images I tried. I also tried running inference on one; that works too. Therefore, the only thing we need to test is whether you get the same predictions when using QuPath's color deconvolution method to generate the H-image as when using my workaround. If that works, then I can share the workaround with you.

Therefore, if you generate H-images for some selected WSIs from this public dataset, that should suffice for testing this idea: https://dataverse.no/dataset.xhtml?persistentId=doi:10.18710/TLA01U

Then I don't need access to your clinical data.

andreped avatar Jan 07 '22 08:01 andreped

Hi @andreped, I've uploaded the colour-deconvolved images from QuPath and the FP predictions: https://cloudstor.aarnet.edu.au/plus/s/WaKwYICmASsV4TY. The images have been downsampled by 2 for fast prediction using the CD3 model.

I used these 3 datasets from the datasets you shared:

  • ID-82_HE_active
  • H_ID-93_HE_inactive
  • H_ID-114_HE

Hope this helps. Let me know if you need anything else.

Cheers Pradeep

pr4deepr avatar Jan 10 '22 02:01 pr4deepr

@pr4deepr A separate issue was created a while back for this: https://github.com/AICAN-Research/FAST-Pathology/issues/13

Just recently, the issue with the MRXS file format that you were observing has been resolved: https://github.com/AICAN-Research/FAST-Pathology/issues/38#issuecomment-1671333129

It would be great if you could verify that it now works as intended. You can download the latest artifact to get this fix, or wait until a new release is made.

The original issue in this thread was likely due to FP not being compatible with AMD processors/GPUs. As most of this thread revolved around a separate issue, I will close this one to avoid clutter. A separate issue can be made if AMD support is of concern in the future.

andreped avatar Aug 10 '23 13:08 andreped