
work with torchserve: ./tmp/CLIPTextModel/model-generated.h:3327: Pending model run did not finish successfully. Error: an illegal memory access was encountered

Open · ericlormul opened this issue · 0 comments

I built a torchserve docker image on top of the AITemplate docker image. The demo code works fine inside my container. However, when I package the AITemplate Stable Diffusion model with the torchserve model archiver and invoke an inference request, it fails with the errors shown further down.
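For context, my custom handler is roughly the sketch below. The `inference` entry point and the `prompt`/`random_seed`/`bs`/`disable_nsfw` parameters match my handler.py in the traceback further down; the wrapper class name and the initialization details are simplified placeholders, not my exact code.

```python
# Rough sketch of my torchserve handler (details simplified).
from ts.torch_handler.base_handler import BaseHandler


class SDHandler(BaseHandler):
    def initialize(self, context):
        # model.py wraps the demo's StableDiffusionAITPipeline from
        # examples/05_stable_diffusion/pipeline_stable_diffusion_ait.py and
        # exposes __call__(prompt, random_seed, bs, disable_nsfw).
        from model import SDModel  # hypothetical wrapper name; see model.py in the traceback

        self.model = SDModel()
        self.initialized = True

    def inference(self, data, *args, **kwargs):
        prompt, random_seed, bs, disable_nsfw = data  # unpacked in preprocess
        # This is handler.py line 62 in the traceback below; the call chain
        # ends in AITemplate's run_with_tensors and hits the CUDA error.
        pil_imgs = self.model(prompt, random_seed, bs, disable_nsfw)
        return pil_imgs
```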

The first highlighted log lines indicate that the AITemplate SD model is being loaded, which looks OK. But when I invoke an inference task through torchserve, the second highlighted log line says "./tmp/CLIPTextModel/model-generated.h:3327: Pending model run did not finish successfully. Error: an illegal memory access was encountered". Any ideas on how to debug this? Or is it possible at all for AITemplate models to work with torchserve at the moment? Thanks!
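For completeness, this is roughly how I send the request (the endpoint matches the ACCESS_LOG entries below; the payload shape is a hypothetical simplification of what my preprocess step expects):

```python
import requests

resp = requests.post(
    "http://localhost:8080/predictions/sd-v1-5",   # endpoint from the ACCESS_LOG below
    json={"prompt": "an astronaut riding a horse"},  # hypothetical payload shape
)
print(resp.status_code)  # comes back 503 once the worker hits the CUDA error
```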

```
2022-10-29T00:31:01,400 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG - Set target to CUDA
2022-10-29T00:31:03,385 [WARN ] W-9000-sd-v1-5_2.2-stderr MODEL_LOG - [00:31:03] ./tmp/CLIPTextModel/model-generated.h:275: Init AITemplate Runtime.
2022-10-29T00:31:03,634 [WARN ] W-9000-sd-v1-5_2.2-stderr MODEL_LOG - [00:31:03] ./tmp/UNet2DConditionModel/model-generated.h:3262: Init AITemplate Runtime.
2022-10-29T00:31:03,662 [WARN ] W-9000-sd-v1-5_2.2-stderr MODEL_LOG - [00:31:03] ./tmp/AutoencoderKL/model-generated.h:678: Init AITemplate Runtime.
2022-10-29T00:31:06,315 [INFO ] W-9000-sd-v1-5_2.2 org.pytorch.serve.wlm.WorkerThread - Backend response time: 6164
2022-10-29T00:31:06,316 [DEBUG] W-9000-sd-v1-5_2.2 org.pytorch.serve.wlm.WorkerThread - W-9000-sd-v1-5_2.2 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2022-10-29T00:31:06,316 [INFO ] W-9000-sd-v1-5_2.2 TS_METRICS - W-9000-sd-v1-5_2.2.ms:7131|#Level:Host|#hostname:7435eee333e7,timestamp:1667003466
2022-10-29T00:31:06,316 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - /172.17.0.1:45166 "PUT /models/sd-v1-5?min_worker=1&synchronous=true HTTP/1.1" 200 7136
2022-10-29T00:31:06,317 [INFO ] W-9000-sd-v1-5_2.2 TS_METRICS - WorkerThreadTime.ms:14|#Level:Host|#hostname:7435eee333e7,timestamp:1667003466
2022-10-29T00:31:06,317 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:7435eee333e7,timestamp:1667003409
2022-10-29T00:31:48,406 [INFO ] W-9000-sd-v1-5_2.2 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1667003508406
2022-10-29T00:31:48,409 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG - Backend received inference at: 1667003508
2022-10-29T00:31:49,477 [WARN ] W-9000-sd-v1-5_2.2-stderr MODEL_LOG - [00:31:49] ./tmp/CLIPTextModel/model-generated.h:3327: Pending model run did not finish successfully. Error: an illegal memory access was encountered
2022-10-29T00:31:49,477 [WARN ] W-9000-sd-v1-5_2.2-stderr MODEL_LOG - [00:31:49] ./tmp/CLIPTextModel/model-generated.h:248: Got error: no error enum: 700 at ./tmp/CLIPTextModel/model-generated.h: 617
2022-10-29T00:31:49,477 [WARN ] W-9000-sd-v1-5_2.2-stderr MODEL_LOG - [00:31:49] ./tmp/CLIPTextModel/model_interface.cu:92: Error: Got error: no error enum: 700 at ./tmp/CLIPTextModel/model-generated.h: 617
2022-10-29T00:31:49,479 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG - Invoking custom service failed.
2022-10-29T00:31:49,479 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG - Traceback (most recent call last):
2022-10-29T00:31:49,479 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/ts/service.py", line 102, in predict
2022-10-29T00:31:49,479 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -     ret = self._entry_point(input_batch, self.context)
2022-10-29T00:31:49,479 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/ts/torch_handler/base_handler.py", line 232, in handle
2022-10-29T00:31:49,480 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -     output = self.inference(data_preprocess)
2022-10-29T00:31:49,480 [INFO ] W-9000-sd-v1-5_2.2 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1071
2022-10-29T00:31:49,480 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -   File "/home/model-server/tmp/models/de091f5ad82f4a5dafed8e5d35304dfe/handler.py", line 62, in inference
2022-10-29T00:31:49,480 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -     pil_imgs = self.model(prompt, random_seed, bs, disable_nsfw)
2022-10-29T00:31:49,480 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -   File "/home/model-server/tmp/models/de091f5ad82f4a5dafed8e5d35304dfe/model.py", line 23, in __call__
2022-10-29T00:31:49,481 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -     images = self.model(prompt=[prompt]*4, generator=generator, num_images_per_prompt=4 if bs is None else bs).images
2022-10-29T00:31:49,481 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
2022-10-29T00:31:49,481 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -     return func(*args, **kwargs)
2022-10-29T00:31:49,481 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -   File "/home/model-server/AITemplate/examples/05_stable_diffusion/pipeline_stable_diffusion_ait.py", line 262, in __call__
2022-10-29T00:31:49,482 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -     uncond_embeddings = self.clip_inference(
2022-10-29T00:31:49,482 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -   File "/home/model-server/AITemplate/examples/05_stable_diffusion/pipeline_stable_diffusion_ait.py", line 139, in clip_inference
2022-10-29T00:31:49,482 [INFO ] W-9000-sd-v1-5_2.2 ACCESS_LOG - /172.17.0.1:54318 "POST /predictions/sd-v1-5 HTTP/1.1" 503 1089
2022-10-29T00:31:49,482 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -     exe_module.run_with_tensors(inputs, ys, graph_mode=False)
2022-10-29T00:31:49,482 [INFO ] W-9000-sd-v1-5_2.2 TS_METRICS - Requests5XX.Count:1|#Level:Host|#hostname:7435eee333e7,timestamp:1667003409
2022-10-29T00:31:49,482 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/aitemplate/compiler/model.py", line 483, in run_with_tensors
2022-10-29T00:31:49,483 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -     outputs_ait = self.run(
2022-10-29T00:31:49,483 [DEBUG] W-9000-sd-v1-5_2.2 org.pytorch.serve.job.Job - Waiting time ns: 295810, Inference time ns: 1077028904
2022-10-29T00:31:49,483 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/aitemplate/compiler/model.py", line 438, in run
2022-10-29T00:31:49,483 [INFO ] W-9000-sd-v1-5_2.2 TS_METRICS - WorkerThreadTime.ms:6|#Level:Host|#hostname:7435eee333e7,timestamp:1667003509
2022-10-29T00:31:49,483 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -     return self._run_impl(
2022-10-29T00:31:49,483 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/aitemplate/compiler/model.py", line 377, in _run_impl
2022-10-29T00:31:49,483 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -     self.DLL.AITemplateModelContainerRun(
2022-10-29T00:31:49,484 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/aitemplate/compiler/model.py", line 192, in _wrapped_func
2022-10-29T00:31:49,484 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG -     raise RuntimeError(f"Error in function: {method.__name__}")
2022-10-29T00:31:49,485 [INFO ] W-9000-sd-v1-5_2.2-stdout MODEL_LOG - RuntimeError: Error in function: AITemplateModelContainerRun
```
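For what it's worth, the "no error enum: 700" in the log is cudaErrorIllegalAddress. One thing I can try to localize it is forcing synchronous kernel launches so the failure is reported at its real call site; a minimal sketch, assuming the env var is set before the worker process creates any CUDA context:

```python
# Hypothetical debugging tweak at the very top of handler.py: make CUDA
# launches synchronous so the failing kernel surfaces where it is launched.
# Must run before the first CUDA API call in the torchserve worker.
import os

os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```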

ericlormul · Oct 29 '22 00:10