1SingleFeng
I have some similar questions. Can anyone help me solve them?
Hi, I'm facing the same problem.
Is there any progress on this matter? I am very curious as to what led to this result.
I have the same problem
> > ref:
> >
> > * https://github.com/modelscope/swift/blob/main/docs/source_en/Multi-Modal/internvl-best-practice.md
> > * [support ORPO algorithm modelscope/swift#854](https://github.com/modelscope/swift/pull/854)
>
> thx, it works!

By the way, is ModelScope Swift good to use?
Has this problem been solved? I'm also running into some issues now: after LoRA fine-tuning, the knowledge the model has learned performs poorly and it often hallucinates, especially on parametric information such as an object's dimensions, where it frequently reports arbitrary values. A value that was 5.4 cm in the training data may come out as some other size at inference time.
Hi, does AutoAWQ 0.2.5 support LLaVA 1.5? Could you share some example code, and what is the minimum required transformers version?
> > Hi, does AutoAWQ 0.2.5 support LLaVA 1.5? Could you share some example code, and what is the minimum required transformers version?
>
> Why not just try the official example together with my code? It should be able to run. Here are two links: the original llava-v1.5 PR #250, and the new PR #471 that is waiting to be merged.

OK, thank you very much. I will give it a try.
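For reference, the standard AutoAWQ quantization flow looks roughly like the sketch below. The model id, output path, and quant settings are placeholders; whether a LLaVA checkpoint actually loads this way depends on the patched build from the PRs linked above, so treat this only as an outline of the general API.

```python
# Minimal AutoAWQ quantization sketch (placeholder paths/settings; LLaVA
# support assumes the patched fork from the linked PRs, not stock AutoAWQ).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "llava-hf/llava-1.5-7b-hf"   # placeholder model id
quant_path = "llava-1.5-7b-awq"           # output directory for quantized weights
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run AWQ calibration and quantization.
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized model and tokenizer.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```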
@WanBenLe Hi, I ran into the following problem while trying [AutoAWQ-with-llava-v1.6](https://github.com/WanBenLe/AutoAWQ-with-llava-v1.6). Do you know how to resolve it?

```
Traceback (most recent call last):
  File "/home/common/singlefeng/AIGC_TRAIN/AutoAWQ-with-llava-v1.6_20240624/quantize_llava.py", line 22, in <module>
    model.quantize(tokenizer, quant_config=quant_config)
  File "/home/common/anaconda3/envs/auto_awq/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/common/singlefeng/AIGC_TRAIN/AutoAWQ-with-llava-v1.6_20240624/awq/models/base.py", line 181, in quantize
...
```