Junrui@Intel_SH
> It should have been fixed by #3356

Hi @suquark, I just installed FastChat 0.2.36 and Pydantic 2.9.0 using `pip3 install "fschat[model_worker,webui]"`, but I'm still unable to start the inference...
@sgwhat Thanks for your reply. My platform details are:
- CPU: 13th Gen Intel(R) Core(TM) i5-13600HRE
- dGPU: Intel Arc A770 16G

Sorry, the image was deleted. I downgraded...
@sgwhat Thanks. Looking forward to your feedback.
Does fastspeech now support inference with onnxruntime-openvino?
@dmatveev Does LLM performance on the NPU rely on the "remote tensors" feature? I have also observed that performance on the NPU is worse than on the CPU.
Same error here.