whcao

Results 53 comments of whcao

May I ask what gradient accumulation value you set? Also, how much GPU memory does your card have? The log shows a memory footprint of less than 8 GB.

Let me confirm: the configuration you are using is Yi34B + 32k seq length + sequence parallel size 4 (8) + deepspeed zero3 offload, right? I will try to reproduce your issue on my end.

I have reproduced your issue. I ran two experiments, both using yi 34b + deepspeed zero3 offload + 8 * A100 80G (1T mem): 1. 8k seq len + sequence parallel 4 + grad acc 4 2. 2k seq len...
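For context on how the settings in these experiments interact, here is a minimal sketch of the standard relationship between gradient accumulation, sequence parallelism, and the effective batch size per optimizer step. The function name and arguments are illustrative, not part of xtuner's API:

```python
def effective_batch_size(micro_batch_size: int,
                         grad_acc: int,
                         world_size: int,
                         sequence_parallel: int) -> int:
    """Samples consumed per optimizer step.

    With sequence parallelism, each group of `sequence_parallel` GPUs
    shares one sequence, so the data-parallel size shrinks accordingly.
    """
    data_parallel_size = world_size // sequence_parallel
    return micro_batch_size * grad_acc * data_parallel_size

# Experiment 1 above: 8 GPUs, sequence parallel 4, grad acc 4,
# one sequence per data-parallel rank per forward pass
print(effective_batch_size(1, 4, 8, 4))  # → 8
```

Raising the sequence-parallel size lowers the per-GPU activation memory for a fixed sequence length, but it also shrinks the data-parallel size, so gradient accumulation is typically increased to keep the effective batch size unchanged.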

Apologies for any inconvenience. Please install xtuner>=0.1.18 and rerun the experiment. The issue mentioned above may have been addressed in pull request #567. If you...

> Loading the model & chat example: `xtuner/model/auto.py`
>
> Training alpaca:
>
> ```
> # Convert the alpaca dataset to OpenAI-format json
> python xtuner/tools/convert_dataset.py tatsu-lab/alpaca alpaca --save-dir converted_alpaca
> xtuner train...
> ```

The changes from [pr 567](https://github.com/InternLM/xtuner/pull/567) need to be synced.

Hi @amulil ! Please provide the config or log file corresponding to this picture. BTW, have you installed flash_attn? ![image](https://github.com/InternLM/xtuner/assets/41630003/fac89657-c1b6-4852-8422-54e2fe7ec289)

Hi! Currently, we only support reading cfg files from installed packages. Maybe you can install mmcls from source:

```
cd /home/s316/workspace/xionggl/Experiment/mmclassification
pip install -v -e .
```

and then use...

Apologies for the delayed response. At present, we recommend using the code from the main branch. To use the inheritance strategy, you can first copy the model config...
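For readers unfamiliar with the inheritance strategy mentioned above: OpenMMLab configs inherit from a base file through the `_base_` field, so a new config only needs to override the fields that change. A minimal sketch, where the base file path and override values are illustrative, not taken from this thread:

```python
# my_config.py — a hypothetical child config
_base_ = './resnet50_8xb32_in1k.py'  # illustrative base config path

# Only the overridden fields are listed; everything else
# (backbone, schedule, pipelines, ...) is inherited from _base_.
model = dict(
    head=dict(num_classes=10),
)
```

This keeps local experiments small and readable: the copied model config serves as the base, and each variant is a short file of overrides.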

Sorry for the delayed response. It seems that you ran the code from the master branch. Currently, we recommend using the code from the main branch, and...