Ruonan Wang
Hi @krishung5 , I'm hitting the same problem here. Step 3 fails for me with the following output, and the Triton server seems to be working but receives no requests: ...
Hi @krishung5 , I just followed `Serve a Model in 3 Easy Steps` and ran the following commands:
```
# Step 1: Create the example model repository
git clone -b r22.07 https://github.com/triton-inference-server/server.git
...
```
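For what it's worth, a quick way to confirm the server from Step 2 is actually reachable before digging into Step 3 is to poke the HTTP endpoint from Python. This is only a sketch: it assumes the server is listening on `localhost:8000` and that the quickstart `densenet_onnx` model was fetched into the model repository.
```python
# Sketch: sanity-check the Triton server started in Step 2.
# Assumes tritonclient is installed (pip install tritonclient[http])
# and the quickstart model repository contains densenet_onnx.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

print("server live: ", client.is_server_live())   # HTTP /v2/health/live
print("server ready:", client.is_server_ready())  # HTTP /v2/health/ready
print("model ready: ", client.is_model_ready("densenet_onnx"))
```
If all three checks pass, the problem is more likely on the client side of Step 3 than in the server itself.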
@TheaperDeng What do you think of this?
Doc test: https://ruonantetdoc.readthedocs.io/en/forecaster_save_doc/doc/Chronos/Howto/index.html
Maybe use InferenceOptimizer instead of Trainer here.
```
# ipex
model_ipex = Trainer.trace(model, accelerator=None, use_ipex=True, ...)
model_ipex(*args)

# jit
model_jit = Trainer.trace(model, accelerator="jit", input_sample=input_sample, ...)
model_jit(*args)

# jit + ipex
model_jit_ipex = Trainer.trace(model, accelerator="jit", input_sample=input_sample, use_ipex=True, ...)
...
```
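A sketch of the same three variants written against `InferenceOptimizer` instead of `Trainer`. As far as I know, `InferenceOptimizer.trace` in bigdl-nano keeps the same `accelerator` / `use_ipex` / `input_sample` keywords, but treat the exact signature as an assumption; `model` and `input_sample` are the same objects as in the snippet above.
```python
# Sketch: same three variants via InferenceOptimizer instead of Trainer.
# Assumes InferenceOptimizer.trace mirrors Trainer.trace's keywords.
from bigdl.nano.pytorch import InferenceOptimizer

# ipex only
model_ipex = InferenceOptimizer.trace(model, accelerator=None, use_ipex=True)

# jit only
model_jit = InferenceOptimizer.trace(model, accelerator="jit",
                                     input_sample=input_sample)

# jit + ipex
model_jit_ipex = InferenceOptimizer.trace(model, accelerator="jit",
                                          input_sample=input_sample,
                                          use_ipex=True)
```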
**Reproduce result**: not only Autoformer; if you carry out the same process on TCN, the same issue happens when **forecaster.num_processes = 1** (presumably all forecasters inherited from BasePytorchForecaster, e.g. S2S, TCN, ...
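A minimal sketch of the repro I mean, using `TCNForecaster` with synthetic data; the constructor arguments, shapes, and the `num_processes` attribute here are assumptions based on the Chronos forecaster API, not the exact script I ran.
```python
# Sketch: reproduce the issue on TCNForecaster with num_processes = 1.
# Shapes and constructor args are assumptions from the Chronos docs.
import numpy as np
from bigdl.chronos.forecaster import TCNForecaster

x = np.random.randn(128, 24, 2).astype(np.float32)  # (samples, past_seq_len, input_feature_num)
y = np.random.randn(128, 5, 2).astype(np.float32)   # (samples, future_seq_len, output_feature_num)

forecaster = TCNForecaster(past_seq_len=24, future_seq_len=5,
                           input_feature_num=2, output_feature_num=2)
forecaster.num_processes = 1  # the setting under which the issue shows up
forecaster.fit((x, y), epochs=1)
```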
Hi @avitial, I hit the same error when I use the GPU/VPU plugin with a model that works fine on CPU:
```bash
File "C:\Users\ruonanw1\Miniconda3\envs\ruonan_nano\lib\site-packages\openvino\runtime\ie_api.py", line 387, in compile_model
    super().compile_model(model, device_name, {} if config is...
```
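For reference, a minimal sketch of the call path that triggers this; `"model.xml"` is a placeholder for the actual IR file, and the device names are the standard OpenVINO plugin names.
```python
# Sketch: the same compile_model call that works on CPU but fails on GPU.
# "model.xml" is a placeholder for the actual IR file.
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")

compiled_cpu = core.compile_model(model, "CPU")  # works
compiled_gpu = core.compile_model(model, "GPU")  # raises the error above
```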
Hi @rahulunair , if you follow the steps below to create an llm-cpp conda env and pip install ipex-llm[cpp]:
```bash
conda create -n llm-cpp python=3.9
conda activate llm-cpp
pip install --pre...
```
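After the install, a quick import check can confirm the wheel landed in the env (a sketch; it only verifies the Python package is importable, not the cpp binaries):
```python
# Sketch: confirm ipex-llm is importable inside the llm-cpp env.
import importlib.metadata

import ipex_llm  # package installed by `pip install ipex-llm[cpp]`

print("ipex-llm version:", importlib.metadata.version("ipex-llm"))
```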