medicaldetectiontoolkit
Hey, does this support inference on CPU after the model is deployed?
I am working on an LVO detection task. After creating a custom model, can I convert it to TorchScript, deploy it with Docker, and use it for inference on a device with no GPU?
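For reference, the general TorchScript-export-then-CPU-inference flow looks like the sketch below. The `TinyNet` model here is a hypothetical stand-in, not the medicaldetectiontoolkit network; the toolkit's actual detectors may rely on custom CUDA extensions that do not trace or script without modification, so this only illustrates the plain-PyTorch path:

```python
# Hedged sketch: export a PyTorch model to TorchScript and run it on CPU.
# TinyNet is a placeholder; the real detection model may need changes before tracing.
import torch
import torch.nn as nn

class TinyNet(nn.Module):  # stand-in for a trained detection model
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.head = nn.Conv2d(8, 1, kernel_size=1)

    def forward(self, x):
        return self.head(torch.relu(self.conv(x)))

model = TinyNet().eval()

# Trace with an example input and save the TorchScript archive.
example = torch.randn(1, 1, 64, 64)
scripted = torch.jit.trace(model, example)
scripted.save("model_ts.pt")

# Inside a CPU-only Docker container, load and run without a GPU:
cpu_model = torch.jit.load("model_ts.pt", map_location="cpu")
with torch.no_grad():
    out = cpu_model(torch.randn(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```

The `map_location="cpu"` argument is what lets a model saved on a GPU machine load on a box with no CUDA at all.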