Excuse me, my friend, my 5060 Ti also reports an error: `RuntimeError: CUDA error: no kernel image is available for execution on the device` (logged 2025-06-12 20:49:44, followed by "CUDA kernel errors might be...")
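For reference, this error usually means the installed PyTorch build does not ship kernels for the GPU's compute capability. A minimal check, assuming a working CUDA driver and a Python environment with torch installed (device index 0 below just means the first visible GPU):

```bash
# Print the GPU's compute capability and the architectures the torch wheel was built for.
# If the device's sm_XX is not in the arch list, CUDA raises
# "no kernel image is available for execution on the device".
python -c "import torch; print(torch.cuda.get_device_capability(0)); print(torch.cuda.get_arch_list())"
```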
Excuse me, has this problem been resolved?
> > Will frequently mass-sending messages be detected by WeChat's risk-control mechanism? What is the minimum interval between mass messages to avoid detection?
> >
> > I only sent one message and my account got banned.

Indeed, same for me. It just can't be done these past few days: I log in, send one message, and log out, and after a while WeChat Security flags the account.
> Same issue with me; installing torch==2.7.0+cu128 and the vLLM nightly directly with pip worked for me.

Excuse me, how can I solve this problem in Docker?
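For anyone following along, the pip route quoted above would look roughly like this. The cu128 index URL is the standard PyTorch wheel index; the vLLM nightly index URL is an assumption based on vLLM's wheel hosting, so check the project docs before relying on it:

```bash
# Install a CUDA 12.8 build of PyTorch (needed for Blackwell-generation GPUs such as the 5060 Ti).
pip install torch==2.7.0 --index-url https://download.pytorch.org/whl/cu128

# Then install a vLLM nightly wheel on top of it (index URL is an assumption; see vLLM's docs).
pip install --pre vllm --extra-index-url https://wheels.vllm.ai/nightly
```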
> > Excuse me, how can I solve this problem in Docker?
>
> [@l137295](https://github.com/l137295) Try this image `nvcr.io/nvidia/tritonserver:25.05-vllm-python-py3` or take a look at this [Dockerfile](https://github.com/fanyang89/my-vllm/blob/main/Dockerfile).

Thank you very much, I'll try...
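If the Triton image route works for you, a minimal sketch of running it (assuming the NVIDIA Container Toolkit is installed; dropping into a shell is just for testing and is not taken from the linked Dockerfile):

```bash
# Pull and start the suggested image with GPU access, then test vLLM interactively inside it.
docker run --gpus all --rm -it \
  nvcr.io/nvidia/tritonserver:25.05-vllm-python-py3 \
  /bin/bash
```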