xtuner

An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)

265 xtuner issues

I am only using the yi_34b_200k_full_alpaca_enzh_32k_sp8 config file provided on GitHub, with the deepspeed option zero3_offload, but I get the error below. Does sequence parallel currently not support offload, or is there some other cause? Thanks.

Traceback (most recent call last):
  File "/opt/ml/job/xtuner/tools/train.py", line 342, in ...

deepspeed
sequence parallel
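For context on the reported setup, here is a sketch of the settings such a config would carry. xtuner configs are plain Python files; the values below are inferred only from the config file's name ("32k", "sp8") and the stated DeepSpeed preset, not copied from the repository:

```python
# Hypothetical excerpt of yi_34b_200k_full_alpaca_enzh_32k_sp8.py, an xtuner
# config (a plain Python file). Values are inferred from the config's name
# ("_32k", "_sp8"), not verified against the repository.
max_length = 32768            # "_32k": per-sample sequence length
sequence_parallel_size = 8    # "_sp8": each sequence is split across 8 GPUs
# The reporter then trains with the zero3_offload DeepSpeed preset, which
# offloads optimizer state (and optionally parameters) to CPU memory.
```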

Will llava 1.6 + llama3 70B models be supported?

Requirements:
1. transformers >= 4.39.0
2. peft >= 0.10.0
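A quick way to verify an environment meets these pins is a runtime check. This is a generic sketch using importlib.metadata and the packaging library (a dependency of transformers), not something shipped by xtuner:

```python
# Minimal sketch: fail fast if the installed versions are too old.
from importlib.metadata import version
from packaging.version import Version

for pkg, minimum in [("transformers", "4.39.0"), ("peft", "0.10.0")]:
    installed = Version(version(pkg))
    if installed < Version(minimum):
        raise RuntimeError(f"{pkg} {installed} < required {minimum}")
```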

To enhance the flexibility and usability of the multimodal model, the existing llava code needs to be refactored.

support InternVL1.5

feature request

llava 1.6 is really popular right now. Is there any plan to support fine-tuning the llava 1.6 34B model in the near future?

For example, would it be possible to train on samples of the form one dialogue turn + 2 images + bbox?

feature request
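One way such a sample (one turn, two images, boxes) could be expressed is a LLaVA-style record extended to multiple images, with the boxes serialized into the answer text. This is purely a hypothetical sketch of a data format; it is not a format xtuner is documented to accept:

```python
# Hypothetical training sample: one dialogue turn, two images, bounding boxes
# serialized as normalized [x1, y1, x2, y2] in the answer text. Field names
# follow the common LLaVA JSON convention; multi-image support is assumed.
sample = {
    "id": "example-0",
    "image": ["scene_a.jpg", "scene_b.jpg"],   # two images for one turn
    "conversations": [
        {"from": "human",
         "value": "<image>\n<image>\nLocate the dog in each image."},
        {"from": "gpt",
         "value": "Image 1: <box>[0.12, 0.30, 0.45, 0.80]</box>; "
                  "Image 2: <box>[0.55, 0.20, 0.90, 0.75]</box>"},
    ],
}
```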