BeerTai
Sorry, I'm not very familiar with GERBIL and don't know how to restart the web service. All GERBIL services have been shut down.
I deployed GERBIL successfully on my Mac. When I tried to configure an experiment on the page http://localhost:1234/gerbil/config and run it, the log showed some ERROR entries and the experiment result tell...
Thank you for your reply. I want to reproduce the results of REL (https://github.com/informagi/REL/blob/master/tutorials/03_Evaluate_Gerbil.md) on the GERBIL platform. The code and data are on a remote server, but when I...
Thanks for your reply! It helps me a lot. For running GERBIL locally, is the process similar to https://github.com/dalab/end2end_neural_el#gerbil-evaluation? Are there other ways? I'm not very familiar with Java.
Has this been solved? Same here: no error on a single GPU, but the same error appears when using accelerate. My setup is 2× 24 GB RTX 3090s, and batch_size=1 doesn't help either. The error is raised at:

```
File "/root/miniconda3/envs/llm/lib/python3.8/site-packages/peft/tuners/lora.py", line 565, in forward
    result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling "cublasCreate(handle)"
```

The training script is:

```
accelerate launch src/train_sft.py \
    --do_train...
```
> Please show the script you use to launch multi-GPU training with DeepSpeed.

The script:

```
deepspeed --num_gpus=2 src/train_sft.py \
    --deepspeed ds_config.json \
    --do_train \
    --dataset adgen_train \
    --finetuning_type lora \
    --output_dir adgen_lora \
    --overwrite_cache \
    --per_device_train_batch_size 8 \
    --gradient_accumulation_steps 1 \...
```
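The `--deepspeed ds_config.json` flag above points at a DeepSpeed configuration file that the snippet does not show. A minimal sketch of such a file (illustrative values only, not the poster's actual config) might look like:

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "zero_optimization": {
    "stage": 2
  },
  "fp16": {
    "enabled": "auto"
  }
}
```

ZeRO stage 2 partitions optimizer states and gradients across GPUs, a common choice when fine-tuning with LoRA on two 24 GB cards; the `"auto"` values are filled in by the Hugging Face Trainer integration from its own command-line arguments.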