SmallShark
> Hello, the format is as follows: -----BEGIN PRIVATE KEY----- XXXXXXXXXXXXXXXXXXXXXXXXXXXX -----END PRIVATE KEY----- Thanks, that solved it.
Yes, that works. The -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY----- lines also need to be copied. Use the string option; when selecting a file, electerm doesn't seem to recognize hidden files.

> Does it work in the latest version now?
try adjusting the `--inference_tp_size` to a lower number; it may be that you don't have enough GPUs across your nodes. [[bug]AttributeError: 'DeepSpeedHybridEngine' object has no attribute 'mp_group' #525](url)
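For what it's worth, in DeepSpeed-Chat that flag feeds the `hybrid_engine` section of the generated DeepSpeed config. Below is a rough sketch of keeping the tensor-parallel size within the GPUs you actually have; the key names follow DeepSpeed's hybrid-engine config schema as I understand it, and the specific values are illustrative assumptions, not a known-good setup:

```python
import torch

# The tensor-parallel size for the hybrid engine must not exceed the number
# of visible GPUs, and the total GPU count should be divisible by it.
num_gpus = max(torch.cuda.device_count(), 1)
inference_tp_size = min(2, num_gpus)  # e.g. drop from 8 to 2 on a small node

# Illustrative hybrid-engine section of a DeepSpeed config; values are assumptions.
hybrid_engine_config = {
    "enabled": True,
    "inference_tp_size": inference_tp_size,
    "max_out_tokens": 512,
    "release_inference_cache": False,
    "pin_parameters": True,
}
print(hybrid_engine_config)
```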
It is probably caused by insufficient GPU or RAM memory. I ran into this problem at first as well; my configuration was: torch: 2.2.1, cuda: 12.1, cudnn: 8, python: 3.10, GPU: A40 48G (with deepspeed enabled, using ZeRO-3 and bf16), RAM: 52G, model: Llama-2-7b-chat-hf. After changing the configuration it worked; the new setup is: CPU cores: 56, RAM size: 256G, GPU: V100 16G * 8. I kept deepspeed enabled but turned off bf16 and TF32 and used fp16 instead, so the official bash script and the deepspeed json need to be modified accordingly: `{ "fp16":...`
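For reference, a minimal sketch of what such an fp16 ZeRO-3 DeepSpeed config could look like, written out from Python. The keys follow DeepSpeed's documented config schema, but the file name and every value here are illustrative assumptions rather than the commenter's actual settings (V100s lack bf16 support, hence fp16):

```python
import json

# Illustrative DeepSpeed ZeRO-3 config using fp16 instead of bf16
# (e.g. for V100 GPUs, which do not support bf16). All values are assumptions.
ds_config = {
    "fp16": {
        "enabled": True,
        "loss_scale": 0,           # 0 = dynamic loss scaling
        "initial_scale_power": 16,
    },
    "bf16": {"enabled": False},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},
        "offload_param": {"device": "cpu"},
    },
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,
}

# Hypothetical file name; point the training script's deepspeed flag at it.
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```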
Changing the openai version solved the problem for me: pip install openai==0.28.0
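For context, openai 0.28.0 is the last release with the pre-1.0 module-level API, so pinning it avoids porting code to the 1.x client. A minimal sketch of the 0.28-style call; the model name, prompt, and key are placeholders:

```python
import openai  # assumes openai==0.28.0, i.e. the pre-1.0 API surface

openai.api_key = "sk-..."  # placeholder; supply your own key

# Module-level call style that openai>=1.0 replaced with
# OpenAI().chat.completions.create(...)
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response["choices"][0]["message"]["content"])
```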
> try adjusting the `--inference_tp_size` to a lower number; it may be that you don't have enough GPUs across your nodes.

thanks, it works
Fixed in PyMuPDF-1.24.14. Thanks for the update
Yes, I've encountered the same problem as you. I used the PI extension to extend gemma2's context length to 16K. Currently, the first issue is that vLLM does not...