zhengkai
You can first convert the markdown to HTML, then in the article editor's TinyMCE go to Tools --> Source Code and paste the HTML in.
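For example, a minimal sketch using the Python `markdown` package (the package choice and the `article.md` filename are just for illustration):

```python
# Convert a markdown file to HTML, then paste the result into TinyMCE
# via Tools --> Source Code.
import markdown  # pip install markdown

with open("article.md", encoding="utf-8") as f:
    html = markdown.markdown(f.read(), extensions=["tables", "fenced_code"])

print(html)  # copy this output into the Source Code dialog
```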
tv-s.info is only partly written. I'm fairly busy right now and haven't kept working on it.
Baichuan2 works fine; I haven't tested Baichuan1, you can give it a try. **Note**: you need to change this line to Baichuan-13B-Chat: https://github.com/billvsme/my_openai_api/blob/main/my_openai_api.py#L85
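For illustration only (the actual code at `my_openai_api.py#L85` may look different), loading Baichuan-13B-Chat with transformers is roughly:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "baichuan-inc/Baichuan-13B-Chat"  # swap the Baichuan2 model id for this

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, trust_remote_code=True)
```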
Hi, the fields in the request body don't need to be changed; just keep them the same as in the apidoc documentation.
Sorry, I can't reproduce this in my environment. Judging from the error message, it may be related to the version or path of some library. Possibly useful: [https://github.com/tensorflow/tensorflow/issues/6968](https://github.com/tensorflow/tensorflow/issues/6968)
It's possible that the server is using some kind of hypervisor, which makes the link between the GPUs very slow and can seriously affect performance. I'm in a...
After switching to a machine that doesn't use KVM, the speed is normal. Note: when I changed machines, PHB -> SYS
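The PHB/SYS labels come from `nvidia-smi topo -m`. A rough sketch for timing the inter-GPU link with PyTorch (assumes at least 2 visible CUDA GPUs; not the exact benchmark I ran):

```python
import time
import torch

# ~1 GiB of float32 on GPU 0
x = torch.randn(1024, 1024, 256, device="cuda:0")

torch.cuda.synchronize("cuda:0")
start = time.time()
y = x.to("cuda:1")  # device-to-device copy over the inter-GPU link
torch.cuda.synchronize("cuda:1")
elapsed = time.time() - start

gib = x.numel() * x.element_size() / 1024**3
print(f"copied {gib:.2f} GiB in {elapsed:.3f}s ({gib / elapsed:.2f} GiB/s)")
```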
To convert ```meta-llama/Meta-Llama-3.1-70B-Instruct```, transformers must be upgraded to 4.43.x. When I use 4.43.3, I get the same error.
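A quick way to check the installed version before converting (the 4.43 bound is just what worked for me; `packaging` usually comes with pip but may need installing separately):

```python
import transformers
from packaging import version

# Llama 3.1 checkpoints need the rope_scaling handling added in transformers 4.43
assert version.parse(transformers.__version__) >= version.parse("4.43.0"), (
    f"transformers {transformers.__version__} is too old for Llama 3.1; upgrade to 4.43.x"
)
```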
@sayakpaul 👌, thanks. But I found one difference between **train_text_to_image.py** and **train_text_to_image_lora.py**: **train_text_to_image_lora.py** doesn't reassign **args.mixed_precision**. Because of this, if you specify ```accelerate launch --mixed_precision="fp16"``` in the accelerator,...
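For reference, the reassignment in **train_text_to_image.py** follows roughly this pattern (a sketch from memory wrapped in a helper; `resolve_mixed_precision` is my name, not the script's, and the real code may differ):

```python
import torch
from accelerate import Accelerator


def resolve_mixed_precision(args, accelerator: Accelerator) -> torch.dtype:
    # Keep args.mixed_precision in sync with what the Accelerator actually
    # resolved from `accelerate launch --mixed_precision=...`.
    weight_dtype = torch.float32
    if accelerator.mixed_precision == "fp16":
        weight_dtype = torch.float16
        args.mixed_precision = accelerator.mixed_precision
    elif accelerator.mixed_precision == "bf16":
        weight_dtype = torch.bfloat16
        args.mixed_precision = accelerator.mixed_precision
    return weight_dtype
```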
Maybe the example in the docs needs to be updated: https://github.com/huggingface/diffusers/tree/main/examples/text_to_image