Results: 17 comments by Thibescobar

> I think you can already technically run nnDetection on CPU, but I wouldn't recommend it for now (at least not in the default configuration) since the inference time will...
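For context, here is a minimal, generic PyTorch sketch of what CPU-only inference looks like. The toy `nn.Conv3d` model, the patch shape, and the variable names are illustrative stand-ins and are not nnDetection's actual code:

```python
import torch
import torch.nn as nn

# Stand-in 3D conv model, NOT nnDetection's actual network.
model = nn.Conv3d(1, 8, kernel_size=3, padding=1)

device = torch.device("cpu")          # force CPU instead of "cuda"
model = model.to(device).eval()

patch = torch.randn(1, 1, 64, 64, 64, device=device)  # dummy input patch

with torch.no_grad():                 # inference only, no autograd overhead
    out = model(patch)

print(out.shape)
```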

> [Please note this is highly experimental and not recommended!] You need to change
>
> https://github.com/MIC-DKFZ/nnDetection/blob/d41b5c0d64b6c7c85ca238373dd8d53121aa194d/nndet/inference/predictor.py#L53
>
> which can be passed here
>
> https://github.com/MIC-DKFZ/nnDetection/blob/d41b5c0d64b6c7c85ca238373dd8d53121aa194d/scripts/predict.py#L101C28-L101C44 ...
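A hypothetical sketch of how a device argument might be threaded from a prediction script into a predictor class, roughly mirroring the pattern the linked lines describe. The `Predictor` class, its `predict` method, and the dummy network below are invented for illustration and are not the real `nndet` API:

```python
import torch

class Predictor:
    """Hypothetical stand-in for the class in nndet/inference/predictor.py."""

    def __init__(self, model: torch.nn.Module, device: str = "cuda"):
        # The linked predictor.py line is where the device default lives;
        # here it is simply a constructor argument.
        self.device = torch.device(device)
        self.model = model.to(self.device).eval()

    @torch.no_grad()
    def predict(self, batch: torch.Tensor) -> torch.Tensor:
        return self.model(batch.to(self.device))


if __name__ == "__main__":
    net = torch.nn.Conv3d(1, 8, kernel_size=3, padding=1)   # dummy network
    predictor = Predictor(net, device="cpu")                 # pass "cpu" here
    print(predictor.predict(torch.randn(1, 1, 32, 32, 32)).shape)
```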

> To my knowledge, GPUs are significantly quicker (factor 100-200) than CPUs, so the results seem somewhat reasonable. There are many different factors influencing the total processing time, like patch...
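A rough back-of-the-envelope sketch of why per-patch sliding-window inference adds up quickly on CPU, given a 100-200x slowdown factor. All numbers (image size, patch size, overlap, per-patch latency) are assumed for illustration, not measured:

```python
import math

# Hypothetical numbers purely to illustrate the scaling:
image_shape = (512, 512, 256)    # voxels
patch_shape = (128, 128, 64)     # sliding-window patch size (example value)
overlap = 0.5                    # 50% overlap between neighbouring patches

def num_patches(image, patch, overlap):
    steps = 1
    for img_dim, patch_dim in zip(image, patch):
        stride = max(1, int(patch_dim * (1 - overlap)))
        steps *= math.ceil(max(img_dim - patch_dim, 0) / stride) + 1
    return steps

n = num_patches(image_shape, patch_shape, overlap)

gpu_s_per_patch = 0.05                     # assumed GPU latency per patch
cpu_s_per_patch = gpu_s_per_patch * 150    # factor ~100-200 from the comment

print(f"{n} patches -> GPU ~{n * gpu_s_per_patch:.0f}s, "
      f"CPU ~{n * cpu_s_per_patch / 60:.0f}min")
```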

> hi [@Thibescobar](https://github.com/Thibescobar), has there been progress in production/deployment of the model?
>
> * the first step of creating the docker image and then using tensorRT. Is this...
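One common route for the Docker + TensorRT step is exporting the trained network to ONNX first and then building a TensorRT engine from that file. The sketch below uses a toy network and assumed shapes, and it does not cover nnDetection's detection-specific post-processing (anchor decoding, NMS), which would need separate handling:

```python
import torch

# A toy network stands in for the trained detection model, which in practice
# would be loaded from a checkpoint.
net = torch.nn.Sequential(
    torch.nn.Conv3d(1, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
).eval()

dummy = torch.randn(1, 1, 64, 64, 64)    # must match the expected patch shape

torch.onnx.export(
    net,
    dummy,
    "model.onnx",                        # intermediate artifact
    input_names=["patch"],
    output_names=["features"],
    opset_version=17,
    dynamic_axes={"patch": {0: "batch"}},
)
# The resulting model.onnx can then be handed to `trtexec --onnx=model.onnx`
# (or the TensorRT Python API) inside the Docker image to build an engine.
```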