How to Perform Inference on the Fly Instead of Using Files in nnUNet v2?
Hello,
I am currently using nnUNet v2 for inference and would like to know if there is a way to perform inference on the fly instead of relying on files. My goal is to directly input data and get predictions in real-time without the intermediate step of writing and reading files.
Questions:
- Is there a built-in method in nnUNet v2 that allows for on-the-fly inference?
- If not, what would be the best approach to modify the current prediction pipeline to achieve this?
- Are there any examples or documentation available that could guide me through setting up real-time inference with nnUNet v2?
Additional Context:
I am working on a project that requires rapid inference times and minimal latency. Any guidance on how to implement this, including code snippets or pointers to relevant parts of the codebase, would be greatly appreciated. Thank you for your assistance!
Hi,
I was wondering the same thing.
The authors have some nice example code snippets in the readme file of the inference folder:
https://github.com/MIC-DKFZ/nnUNet/blob/master/nnunetv2/inference/readme.md
It covers several options that are quite useful, including predicting directly from numpy arrays that are already in memory; see the sketch below.
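For the file-free case specifically, here is a minimal sketch based on the `nnUNetPredictor` snippets in that readme. The model folder path, fold selection, checkpoint name, input shape, and spacing are all placeholders you need to adapt to your setup, and constructor keyword names may vary slightly between nnUNet v2 versions:

```python
import numpy as np
import torch

from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor

# Instantiate the predictor. These are the common arguments from the
# inference readme; exact keyword names may differ slightly across
# nnUNet v2 versions.
predictor = nnUNetPredictor(
    tile_step_size=0.5,
    use_gaussian=True,
    use_mirroring=True,
    device=torch.device('cuda', 0),
    verbose=False,
)

# Load the trained model once at startup; the folder, fold selection, and
# checkpoint name below are placeholders for your own training output.
predictor.initialize_from_trained_model_folder(
    '/path/to/nnUNet_results/DatasetXXX_Name/nnUNetTrainer__nnUNetPlans__3d_fullres',
    use_folds=(0,),
    checkpoint_name='checkpoint_final.pth',
)

# Build the input as a 4D numpy array (channels first, then the spatial
# axes) plus a properties dict carrying at least the voxel spacing.
# In a real pipeline this would come straight from your acquisition code
# rather than from disk; dummy data is used here for illustration.
image = np.random.rand(1, 128, 128, 128).astype(np.float32)
props = {'spacing': (1.0, 1.0, 1.0)}

# Predict directly from the in-memory array. With no output file given and
# save_or_return_probabilities=False, the segmentation comes back as a
# numpy array and nothing is written to disk.
segmentation = predictor.predict_single_npy_array(image, props, None, None, False)
print(segmentation.shape)
```

One practical note for the latency concern you mention: keep a single predictor instance alive for the lifetime of your process, so the network weights are loaded once rather than on every request.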
I assume this answers your questions, so I will close this issue. Feel free to re-open if necessary.