YSLLYW

Results 21 comments of YSLLYW

> As mentioned by @zcaicaros, `GATConv` supports mini-batch computation by wrapping each `data` object into a batch via `torch_geometric.data.DataLoader`. However, it does not support static graph computation yet (different feature...

> I ran into the same problem: the 13B model performs noticeably worse than the 7B one. I converted the models myself and I am not sure where the problem lies. I used the following two models: https://huggingface.co/minlik/chinese-alpaca-7b-merged https://huggingface.co/minlik/chinese-alpaca-13b-merged I downloaded the 7B model you released, but its question-answering ability is almost nonexistent; it only repeats the question: ![1682922992194](https://user-images.githubusercontent.com/104432582/235417524-293ee360-8b04-4765-b806-4fcd9ade5b49.png)

> I ran into the same problem: the 13B model performs noticeably worse than the 7B one. I converted the models myself and I am not sure where the problem lies. I used the following two models: https://huggingface.co/minlik/chinese-alpaca-7b-merged https://huggingface.co/minlik/chinese-alpaca-13b-merged May I ask how your test results turned out?

> hi @JoeStrout Can you add `device_map={"":0}` when calling `PeftModel.from_pretrained`? I have encountered the same problem; my version is peft==0.2.0. I wonder if you have resolved this issue?

> I just tried to do it via Spaces (HuggingFace), on GPU-enabled hardware, using the limerick dataset. I get the exact same error, a traceback ending in: > > python3.8/site-packages/bitsandbytes/autograd/_functions.py",...

I have encountered the same error as you. Have you managed to resolve it?

Thank you very much for your answer. Did you also encounter this error in a multi-GPU environment? Which specific part of the code should I modify?

Yes, I edited the tokenizer_config and set the value of `tokenizer_class` to "LLaMATokenizer".
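That edit can be scripted with the standard library; the sketch below is hypothetical (it creates a stand-in config in a temporary directory, whereas in practice the file would live in the converted checkpoint's folder):

```python
import json
import tempfile
from pathlib import Path

# Stand-in for the model directory; in practice this would be the
# converted chinese-alpaca checkpoint's folder.
model_dir = Path(tempfile.mkdtemp())
config_path = model_dir / "tokenizer_config.json"
config_path.write_text(json.dumps({"tokenizer_class": "LlamaTokenizer"}))

# Patch the class name as described in the comment above.
config = json.loads(config_path.read_text())
config["tokenizer_class"] = "LLaMATokenizer"
config_path.write_text(json.dumps(config, indent=2))

print(json.loads(config_path.read_text())["tokenizer_class"])  # → LLaMATokenizer
```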

> Yes, if you add new tokens to the tokenizer, you should resize the model: `model.resize_token_embeddings(len(tokenizer))` Hello, an error is reported in the multi-GPU case. How can I resolve this issue? Following your suggestion, it still does not work. RuntimeError: CUDA error: device...
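The resize call quoted above can be tried end-to-end on a tiny, randomly initialized model (a sketch only: the small GPT-2 config is a CPU-friendly stand-in, not the LLaMA model from this thread, and the vocabulary sizes are made up):

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Tiny random model so the example runs on CPU without any downloads.
config = GPT2Config(vocab_size=10, n_positions=8, n_embd=8, n_layer=1, n_head=1)
model = GPT2LMHeadModel(config)
assert model.get_input_embeddings().weight.shape[0] == 10

# After adding new tokens to a tokenizer, grow the embedding matrix to match;
# normally the argument would be len(tokenizer).
model.resize_token_embeddings(12)

print(model.get_input_embeddings().weight.shape[0])  # → 12
```

The newly added rows are randomly initialized, so the model usually needs further fine-tuning before the new tokens are useful.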

> @YSLLYW I met the same problem, have you resolved it? Were you training on multiple GPUs?