gukwon.ku

@miroslavLalev I tried `responseTimeout=5` (my model's inference time is `10s`). After calling the TorchServe inference endpoint, I found a log like this:

```
2024-03-13T23:40:02,848 [ERROR] W-9004-bert4rec_240314-083734 org.pytorch.serve.wlm.WorkerThread - Number or consecutive unsuccessful inference...
```
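
Since `5` seconds is below the `10s` inference time, the worker is expected to be killed mid-request, which matches the `WorkerThread` error above. For anyone hitting the same issue, here is a minimal sketch of raising the timeout in a per-model `model-config.yaml` (the worker counts and the `120`-second value are illustrative choices, not taken from the comment above):

```yaml
# model-config.yaml — per-model settings packaged with the model archive.
# All values below are examples; only responseTimeout is the relevant knob here.
minWorkers: 1
maxWorkers: 1
batchSize: 1
maxBatchDelay: 100     # ms to wait while assembling a batch
responseTimeout: 120   # seconds; must exceed the worst-case inference time (10s here)
```

The key point is that `responseTimeout` must be larger than the slowest expected inference, with some headroom; a server-wide default can also be set via `default_response_timeout` in `config.properties`.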