BraTS-Toolkit

BraTS-preprocess stuck at "status reveived: { 'code': 201, 'message': 'nifti examination queued!'}."

Open Lucas-rbnt opened this issue 2 years ago • 32 comments

Hi everyone, I work on a computing server, and when trying to use single-exam preprocessing I get stuck at:

status reveived: {'code': 201, 'message': 'input inspection queued!'}
status reveived: { 'code': 201, 'message': 'nifti examination queued!'}.

I assume the BraTS server is running locally? If yes then I guess it's due to the lack of a web server on the computing server, is it possible to disable it and use the Python API only?

Sorry for the inconvenience, Lucas Robinet.

Lucas-rbnt avatar Feb 06 '23 17:02 Lucas-rbnt

Thanks for your interest in BraTS Toolkit.

"I assume the BraTS server is running locally?" it should 🙈

What happens if you start the nvidia docker hello-world? https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

neuronflow avatar Feb 08 '23 22:02 neuronflow

Hi,

Sorry for the delayed answer. I use Docker daily, so I stuck with my usual workflow.

I assumed it would work, since NVIDIA Docker is a wrapper around regular Docker, but maybe I am wrong here?

Thanks again for your answer, Lucas.

Lucas-rbnt avatar Feb 10 '23 17:02 Lucas-rbnt

What happens if you start the nvidia docker hello-world? https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

neuronflow avatar Feb 10 '23 19:02 neuronflow

Hello from Docker!
This message shows that your installation appears to be working correctly.

....

Lucas-rbnt avatar Feb 13 '23 08:02 Lucas-rbnt

Can you please show the full output, including GPU and CUDA version?

"If yes then I guess it's due to the lack of a web server on the computing server, is it possible to disable it and use the Python API only?"

Can you elaborate what you mean here? :)

neuronflow avatar Feb 13 '23 10:02 neuronflow

I'm working on a computing server with no web server; I thought that maybe the problem comes from there?

You mean the BraTS output? Even when trying CPU-only mode, I'm still stuck at this part of the process.

Otherwise, the compute server has 4 GeForce RTX 2080 Ti GPUs and CUDA 11.6.

Lucas-rbnt avatar Feb 13 '23 11:02 Lucas-rbnt

"I'm working on a computing server with no web server" — I cannot follow you, sorry; please elaborate.

What happens internally: the backend is started in a Docker container, and it opens a local Flask server that communicates with the Python frontend via WebSockets.

We do have another preprocessing pipeline, not requiring Docker, that will be published soon.

neuronflow avatar Feb 13 '23 11:02 neuronflow

Thank you for your answer.

So I guess my problem might come from the lack of a graphics server on my compute server.

Lucas-rbnt avatar Feb 14 '23 07:02 Lucas-rbnt

No, BraTS Toolkit can run headless without trouble.

neuronflow avatar Feb 14 '23 08:02 neuronflow

Oh thanks again.

Then I have no idea why it's blocked at this stage.

Lucas-rbnt avatar Feb 14 '23 08:02 Lucas-rbnt

Are other Docker containers running on the system? Which ports are already taken? Can you show the full output from the hello-world?

https://github.com/neuronflow/BraTS-Toolkit/blob/master/0_preprocessing_single.py Did you confirm processing of the exam? Otherwise, try setting the confirm parameter to False.

neuronflow avatar Feb 14 '23 09:02 neuronflow

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Some ports are already taken, of course, but not port 5000, which is dedicated to Flask. No containers are currently running on the system. I tried both CPU and GPU mode, with both confirm=True and confirm=False.
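As an aside, whether a given port is actually free can be checked directly from Python. A minimal sketch; port 5000 is taken from this thread's discussion of the local Flask server, and your setup may use a different port:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the TCP connection succeeds,
        # i.e. when a server is already listening on that port.
        return s.connect_ex((host, port)) == 0

print("port 5000 taken:", port_in_use(5000))
```

If this prints True before BraTS Toolkit is started, some other process holds the port and the backend's Flask server cannot bind to it.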

Is BraTS-Toolkit using a Docker image that requires a login to be pulled?

Lucas-rbnt avatar Feb 14 '23 09:02 Lucas-rbnt

This appears to be the wrong hello world.

What happens if you start the nvidia docker hello-world? https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

neuronflow avatar Feb 14 '23 11:02 neuronflow

Can you elaborate on "wrong hello world"?

I tried my regular Docker installation, and I also changed my config to do it the nvidia-ctk way. Everything works as expected in their documentation, and my outputs (including the hello-world one) match the ones in the documentation.

Lucas-rbnt avatar Feb 14 '23 11:02 Lucas-rbnt

Please read the link: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

Perhaps the hello world is confusing you; please run with and without sudo:

sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

and post the output.

neuronflow avatar Feb 14 '23 12:02 neuronflow

Ah yes, I thought you wanted the output of the hello-world container, haha; I didn't quite understand why.

It seems to work fine; I'm back on my GPUs:

Unable to find image 'nvidia/cuda:11.6.2-base-ubuntu20.04' locally
11.6.2-base-ubuntu20.04: Pulling from nvidia/cuda
[PULLING PROCESS]
Status: Downloaded newer image for nvidia/cuda:11.6.2-base-ubuntu20.04
Tue Feb 14 11:56:02 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.161.03   Driver Version: 470.161.03   CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| 25%   41C    P2    57W / 250W |   1774MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  Off  | 00000000:43:00.0 Off |                  N/A |
| 29%   30C    P8     1W / 250W |      8MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce ...  Off  | 00000000:81:00.0 Off |                  N/A |
| 29%   26C    P8     2W / 250W |      8MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA GeForce ...  Off  | 00000000:C1:00.0 Off |                  N/A |
| 29%   26C    P8    15W / 250W |      8MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

Lucas-rbnt avatar Feb 14 '23 12:02 Lucas-rbnt

Okay, your Docker installation seems to be fine. Which data are you trying to process? What do you see if you type docker ps?

neuronflow avatar Feb 14 '23 13:02 neuronflow

I'm trying to process private data. Every sample is a *.nii file. docker ps shows my greedy_elephant container running.

Lucas-rbnt avatar Feb 14 '23 13:02 Lucas-rbnt

What happens if you process the example data?

neuronflow avatar Feb 14 '23 13:02 neuronflow

Well, I'm sorry, I think I finally know the cause. Rereading your BraTS Toolkit paper, the registration is done on the T1 (and not on the T1ce as in similar tools). In my case I don't have all four modalities; I use only two (FLAIR and T1ce), and I supplied the T1ce file as the T1, so I guess the registration fails. Trying the BraTS Toolkit on the four-modality BraTS data, it worked.

Is that the problem?
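Since the default pipeline expects all four modalities (T1, T1c, T2, FLAIR), a quick sanity check before launching can catch a missing file early instead of hanging. A minimal sketch; the per-modality file names (`t1.nii.gz`, etc.) are an assumed convention, not the toolkit's required naming, so adapt them to your data:

```python
from pathlib import Path

# Modality tags the default pipeline expects; file naming below is
# a hypothetical convention (one <tag>.nii or <tag>.nii.gz per exam folder).
REQUIRED = ("t1", "t1c", "t2", "fla")

def missing_modalities(exam_dir: str) -> list:
    """Return the modality tags with no <tag>.nii or <tag>.nii.gz file."""
    d = Path(exam_dir)
    return [m for m in REQUIRED
            if not (d / f"{m}.nii").exists()
            and not (d / f"{m}.nii.gz").exists()]
```

If this returns a non-empty list for an exam, the pipeline is being fed incomplete input, which matches the symptom above.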

Lucas-rbnt avatar Feb 14 '23 13:02 Lucas-rbnt

Yes, very likely.

I have an alternative T1c-centric preprocessing pipeline that can deal with fewer modalities, which we hope to publish soon.

neuronflow avatar Feb 14 '23 13:02 neuronflow

Yes, sorry to have wasted your time on this issue. Do you have a date for the T1c-centric alternative? We try to harmonise our preprocessing as much as possible, and the Python API of your tool offers a considerable advantage, which makes it a big plus in our processing phases.

Lucas-rbnt avatar Feb 14 '23 13:02 Lucas-rbnt

No worries.

Would you be interested in investing time and serving as a beta tester? If so, we can set up a call and discuss :)

neuronflow avatar Feb 14 '23 13:02 neuronflow

Yes, of course, that could be very interesting. I would also really like to integrate the tool into my Python routine for my research.

Lucas-rbnt avatar Feb 14 '23 13:02 Lucas-rbnt

@Lucas-rbnt still interested? It would be ready for the first tests now.

neuronflow avatar Oct 18 '23 21:10 neuronflow

Yes, I am!

Lucas-rbnt avatar Oct 23 '23 13:10 Lucas-rbnt

I wrote to you on LinkedIn; let's coordinate there :)

neuronflow avatar Oct 23 '23 14:10 neuronflow

@Lucas-rbnt please see the post above. Also:

Trying to process private data. Every sample is a *.nii file. docker ps returns my greedy_elephant container running

can you try with .nii.gz files?

see: https://github.com/neuronflow/BraTS-Toolkit/issues/18

neuronflow avatar Oct 30 '23 20:10 neuronflow

Hey, sorry to bother you, but I have the same problem, although I have all the modalities. It hangs at

status received: {'code': 201, 'message': 'input inspection queued!'}
status received: {'code': 201, 'message': 'nifti examination queued!'}

I don't know what to do. My files are .nii, not .nii.gz.

abdullahbas avatar Jan 14 '24 21:01 abdullahbas

Until this issue https://github.com/neuronflow/BraTS-Toolkit/issues/18 is closed, you need .nii.gz files. Just renaming is enough; you don't actually need to compress them.

You can also try our new preprocessing toolkit, which is much more capable and under active development: https://github.com/BrainLesion/preprocessing

You can use it like this: https://github.com/BrainLesion/preprocessing/blob/main/example_modality_centric_preprocessor.py
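The rename-only workaround described above can be scripted. A minimal sketch, assuming all affected files sit in one folder; no actual compression happens, matching the maintainer's note that renaming alone is enough:

```python
from pathlib import Path

def rename_nii_to_nii_gz(folder: str) -> list:
    """Rename every *.nii file in folder to *.nii.gz (no compression)."""
    renamed = []
    for f in sorted(Path(folder).glob("*.nii")):
        target = f.with_name(f.name + ".gz")  # exam.nii -> exam.nii.gz
        f.rename(target)
        renamed.append(target)
    return renamed
```

Files already ending in .nii.gz are not matched by the `*.nii` glob, so running the function twice is safe.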

neuronflow avatar Jan 14 '24 21:01 neuronflow