
Docker error when performing inference in b0


Hi,

I'm new to the Synb0 pipeline and keep running into an error where the process gets killed/aborted during the inference steps (extra/pipeline.sh, line 38), which results in no b0_u and no b0_all images, and therefore no correction being performed.

All my input files (T1.nii.gz, b0.nii.gz) are in the right folder, and the acqparams.txt also seems fine to me:

-1 0 0 0.097
-1 0 0 0.000

(the distorted b0 was acquired L>>R)
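
For reference, here is my understanding of the two rows (the annotations are my own reading of the FSL topup acqparams convention and are not part of the file itself):

```
-1 0 0 0.097    <- row 1: the acquired (distorted) b0; phase-encode vector and total readout time in seconds
-1 0 0 0.000    <- row 2: the synthesized, undistorted b0, so the readout time is set to 0
```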

I'm attaching everything that was printed to the terminal after I called the function:

error_synb0.txt

I gave Docker 4 CPUs, 8 GB RAM, 2 GB swap, and 72 GB of disk image size to use.

Best, Jonatan

jottinog · May 11 '21 21:05

Hello Jonatan,

I apologize for the late reply. The pipeline can be resource-hungry, so usually the best thing to try is to give Docker all possible resources. My guess is that 8 GB isn't enough RAM, but I'm not sure.
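
For example, on the plain Docker CLI the limits can be raised explicitly. This is only a rough sketch - the image name and mount paths are placeholders rather than the exact command from our README, and on Docker Desktop these flags can only use what the VM itself has been given under Settings > Resources:

```sh
docker run --rm \
  --memory=16g --memory-swap=20g --cpus=4 \
  -v /path/to/INPUTS:/INPUTS \
  -v /path/to/OUTPUTS:/OUTPUTS \
  your/synb0-disco-image
```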

Colin

hanscol · Jun 11 '21 18:06

Hello, this issue is a bit old, but I was wondering if @jottinog solved the problem by increasing the RAM size, as suggested.

alestella · Jan 18 '23 09:01

Hello - yes, when the program is killed during inference it is always a RAM problem. If you can give your Docker container access to more RAM it will complete successfully!
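
If you want to double-check that it really was an out-of-memory kill (these are generic Docker commands, not something the pipeline prints), you can inspect the stopped container:

```sh
# Replace <container> with the container name or ID from "docker ps -a".
docker inspect --format '{{.State.OOMKilled}}' <container>   # prints "true" after an out-of-memory kill
docker stats --no-stream                                      # snapshot of memory use for running containers
```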

schillkg · Jan 18 '23 12:01

Thanks so much for your tip! I could not increase the RAM because I was using Docker Desktop on Windows 10 backed by Windows Subsystem for Linux 2, so I would have had to change the .wslconfig file, which would have taken RAM away from other processes. However, I managed to pull the Docker image successfully on an Ubuntu 20.04 operating system and ran the script, which gave me the expected outputs. Thanks so much for sharing the code and the instructions, and for your help!
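
For anyone else stuck on Docker Desktop with WSL 2, this is roughly the change I decided against making (the values are only an example, it does reserve that RAM away from Windows, and WSL has to be restarted afterwards, e.g. with wsl --shutdown):

```
# %UserProfile%\.wslconfig
[wsl2]
memory=16GB   # RAM available to the WSL 2 VM, and therefore to Docker Desktop
swap=8GB
```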


alestella · Jan 18 '23 14:01

Hello, I am running into this same error: the process gets killed during the inference step. I have given Docker access to 12 GB of RAM as suggested in the README, and all 4 CPUs on my machine. It still does not run.

For reference, I'm on macOS 12.5 with a 1.4 GHz quad-core Intel processor and 16 GB of RAM. My OS is also not compatible with the PyTorch CUDA GPU acceleration that is required to run the un-containerized version of the pipeline.

Are there any other workarounds to run this successfully on my OS?

Thanks.

esphilli · Apr 14 '23 23:04

Hello - this is almost certainly a RAM issue. While all of our tests and our own pipelines run with 8-12 GB of RAM, it seems some systems require more. I suggest increasing the RAM (give it access to 14 or 16 GB) and the Docker run will complete successfully. In our next iteration we hope to make this more lightweight.
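
A quick way to verify how much memory Docker actually has to work with after raising the limit (again, generic Docker commands rather than part of the pipeline):

```sh
docker info --format '{{.MemTotal}}'   # total memory available to Docker, in bytes
docker run --rm alpine free -m         # or check from inside a throwaway container
```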

schillkg · Apr 15 '23 15:04