Daze Lu

5 issues by Daze Lu

I use the following command to run the triviaqa task: `lm_eval --model hf --model_args pretrained=../llama/models_hf/7B --tasks triviaqa --num_fewshot 1 --device cuda:2 --batch_size 8` I...
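As a cross-check, roughly the same evaluation can be run through the harness's Python entry point. This is a minimal sketch assuming `lm_eval.evaluator.simple_evaluate` with the keyword names below; they can differ between lm-evaluation-harness releases, so treat it as illustrative rather than the exact API.

```python
from lm_eval import evaluator

# Mirrors the CLI call above; keyword names may vary across
# lm-evaluation-harness versions -- check the installed release.
results = evaluator.simple_evaluate(
    model="hf",
    model_args="pretrained=../llama/models_hf/7B",
    tasks=["triviaqa"],
    num_fewshot=1,
    batch_size=8,
    device="cuda:2",
)

# The returned dict typically carries per-task metrics under "results".
print(results["results"].get("triviaqa"))
```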

bug

I use a Tesla K80 to train the bert_base model with the command `GPU1 python train.py train_bert_base`, and I change the bert_base section of experiments.conf as shown in the following image: ![image](https://user-images.githubusercontent.com/69768456/226658432-3b6ed781-9347-44a2-bb16-805eacc3dd80.png) And...

I wonder if anyone else has run into this problem: the code runs fine on Ubuntu and also on one Windows machine, and the file is opened with encoding utf-8, but on another machine abroad it throws this error: ![image](https://user-images.githubusercontent.com/69768456/228180035-20d0f7f9-a67b-4f3f-8fca-0a86ed507f12.png)
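A plausible culprit is that some other `open()` call on the failing path still relies on Python's default text encoding, which on Windows follows the system locale rather than UTF-8. Below is a minimal sketch of the difference; the file path is a placeholder, and the failing call may live inside the repository's data-loading code rather than in your own script.

```python
import locale

# On Windows the default encoding for text-mode open() follows the system
# locale (e.g. gbk on a Chinese-locale machine, cp1252 on many Western-locale
# ones), so a UTF-8 file that decodes fine on Linux can raise
# UnicodeDecodeError elsewhere.
print(locale.getpreferredencoding(False))

# Passing the encoding explicitly on every open() removes the locale
# dependence. "data.txt" is just a placeholder path.
with open("data.txt", encoding="utf-8") as f:
    text = f.read()
```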

I trained on the Chinese and the English OntoNotes data with a single RTX 4090 and got the results below. Chinese: RoBERTa_zh_L12_PyTorch ![image](https://user-images.githubusercontent.com/69768456/228712981-f4b4357d-ba0f-4101-aaa3-74e44438d219.png) English: spanbert_base ![image](https://user-images.githubusercontent.com/69768456/228712779-6616aaf4-29b8-46ff-99c6-a1cd9f257f1a.png) The parameters used were: `# Computation limits. max_top_antecedents = 50 max_training_sentences = 11 top_span_ratio = 0.4 max_num_speakers = 20 max_segment_len = 128 # Learning bert_learning_rate = 1e-05 task_learning_rate...`
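If it helps with reproducing these numbers, here is a minimal sketch of reading that experiment section back out of experiments.conf, assuming the repository follows the usual HOCON layout with pyhocon (as the e2e-coref family of repos does); the file path and the experiment name are placeholders.

```python
from pyhocon import ConfigFactory

# Placeholder path and experiment name -- adjust to your checkout
# (e.g. "spanbert_base" for the English run).
config = ConfigFactory.parse_file("experiments.conf")["bert_base"]

# Echo the computation limits and learning rates listed above.
for key in ("max_top_antecedents", "max_training_sentences", "top_span_ratio",
            "max_num_speakers", "max_segment_len",
            "bert_learning_rate", "task_learning_rate"):
    print(key, "=", config.get(key, None))
```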

I have seen quite a few people asking about this, so having worked through it myself, here is an issue for reference.
1. Open the directory containing the file and change its settings as follows: ![image](https://user-images.githubusercontent.com/69768456/227858961-7a118e75-95a3-48b2-a936-bed9a9b3739c.png)
2. I also ran `sudo chmod 777 ./scorer.pl`. I am not sure whether that step is strictly necessary, but after both steps everything ran through: ![image](https://user-images.githubusercontent.com/69768456/227859367-4908f780-9a02-4400-ab4c-21d72e67d367.png)

Note: I have not tried this on Windows, but changing the read/write permissions should work the same way there.
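For anyone who prefers to do this from a script instead of the file-manager checkbox or `sudo chmod 777`, here is a minimal sketch using the Python standard library; the scorer path is a placeholder, and adding execute permission is narrower than 777 (which also grants everyone write access).

```python
import os
import stat

# Placeholder -- point this at scorer.pl in your checkout.
scorer = "./scorer.pl"

# Add execute permission for user/group/other on top of the current mode.
# Note: on Windows, os.chmod only toggles the read-only flag.
mode = os.stat(scorer).st_mode
os.chmod(scorer, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

print(oct(os.stat(scorer).st_mode & 0o777))
```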