EvelynRYW

Results: 15 comments by EvelynRYW

Hello, @[ThiloteE](https://github.com/ThiloteE), our group is interested in your problem and would like to work on this issue. Can we have a try? ——SE 2022 group haha, SUSTech

And here are the results: ![result screenshot 1](https://user-images.githubusercontent.com/74488290/170689766-c5c469bd-3588-4584-970d-ef83f631da52.JPG) ![result screenshot 2](https://user-images.githubusercontent.com/74488290/170689768-5f159bcd-3691-49c1-bbe3-dc1510e3846e.JPG)

Thanks! But note that I only gave an example; it is not complete.

Hello, @[alycecil](https://github.com/alycecil), our group is interested in your problem and would like to work on this issue. Can we have a try? ——SE 2022 group haha, SUSTech

Hello, @[claell](https://github.com/claell), our group is interested in your problem and would like to work on this issue. Can we have a try? ——SE 2022 group haha, SUSTech

I'd like to ask: for fine-tuning the VG (visual grounding) task on the current 1.5 version, is the prompt format still the same? In the uploaded playground data, the prompts for the COCO grounding task in the sharegpt4v_mix665k_cap23k_coco-ap9k_lcs3k_sam9k_div2k.jsonl file don't seem to contain the corresponding tag?

@hjh0119 I'd like to ask: following your tutorial, I tried to fine-tune the int8 version by changing the model name to --model_type internvl-chat-v1_5-int8 and loading the model weights downloaded from huggingface, but I got the error below, which seems related to int8. The 1.5 (non-int8) version does not raise this error and fine-tunes normally. Is there some special parameter I need to add?

```
/data2/renyw/InstallationPackage/anaconda3/envs/swift/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py:316: UserWarning: MatMul8bitLt: inputs will be cast from torch.bfloat16 to float16 during quantization
  warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
...
```

> @MVP-D77 You don't need to replace model_type; to use local model files, specify the path with --model_id_or_path and then specify --model_type internvl-chat-v1_5-int8.

@hjh0119 Hello, this is my fine-tuning command; could you please check whether anything is wrong with it? It still fails with the same error message as above. With the same dataset, internvl-1.5 does not raise the error. Also, if I want to enable flash_attn, do I just add a flag set to true? The printed log shows use_flash_attn as null.

```
CUDA_VISIBLE_DEVICES=0,1 swift sft --model_type internvl-chat-v1_5-int8 --dataset coco-mini-en-2 --model_id_or_path xxxxxxxxx/InternVL/pretrained/InternVL-Chat-V1-5-Int8
```

> > @hjh0119 Hello, this is my fine-tuning command; could you please check whether anything is wrong with it? It still fails with the same error message as above. With the same dataset, internvl-1.5 does not raise the error. Also, if I want to enable flash_attn, do I just add a flag set to true? The printed log shows use_flash_attn as null.
>
> Reproducible, fix in progress. To enable flash-attention: --use_flash_attn true. For swift-related questions, feel free to open an issue in the swift repository.

@hjh0119 Thank you for the reply and for your work. I have two more new questions about InternVL fine-tuning and have opened an issue in the swift repository: [https://github.com/modelscope/swift/issues/925](https://github.com/modelscope/swift/issues/925). Looking forward to your reply.
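For reference, a minimal sketch of what the earlier command might look like with flash-attention enabled, assuming the --use_flash_attn flag from the reply above is simply appended to the same swift sft invocation (the model path placeholder is kept as-is):

```
# Sketch only: the earlier fine-tuning command with the --use_flash_attn flag
# suggested above appended; the local model path placeholder is unchanged.
CUDA_VISIBLE_DEVICES=0,1 swift sft \
  --model_type internvl-chat-v1_5-int8 \
  --dataset coco-mini-en-2 \
  --model_id_or_path xxxxxxxxx/InternVL/pretrained/InternVL-Chat-V1-5-Int8 \
  --use_flash_attn true
```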

> Thanks for the attention, everyone. I will prepare an example fine-tuning script in the next couple of days.

Is there a script available now for fine-tuning 1.5 or 1.5-int8?