LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
Where are the beginning and end tokens of the text added? I am looking for where the start token and end token are added in the code, but I can...
I would like to train a chatbot with LoRA fine-tuning on my own datasets. I used the 'text2text' structure, putting all questions in order as input and all answers...
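For reference, a minimal sketch of the "text2text" dataset layout, assuming the JSON schema described in the LMFlow dataset documentation (a top-level `type` plus an `instances` list of input/output pairs); the example questions and file name are illustrative, not from the issue above.

```python
import json

# A minimal "text2text" dataset in the JSON layout LMFlow expects:
# one question per "input", its paired answer in "output".
# Field names follow the LMFlow dataset docs; adjust if your version differs.
dataset = {
    "type": "text2text",
    "instances": [
        {
            "input": "What is LMFlow?",
            "output": "An extensible toolkit for finetuning large foundation models.",
        },
        {
            "input": "Which finetuning methods does it support?",
            "output": "Full finetuning and parameter-efficient methods such as LoRA.",
        },
    ],
}

# Write the dataset to disk so a finetune script can point at it.
with open("train.json", "w", encoding="utf-8") as f:
    json.dump(dataset, f, ensure_ascii=False, indent=2)
```

Each question/answer pair goes in its own instance; the finetune script is then pointed at the directory containing this JSON file.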
Hello, can we finetune a model using QLoRA with LMFlow? (Cf. https://github.com/artidoro/qlora) Thanks
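As background for the question: QLoRA combines 4-bit NF4 quantization of the base model with LoRA adapters on top. A config sketch of the two pieces using the `transformers` and `peft` APIs is below; whether LMFlow wires these in for you is exactly what this issue asks, so treat the parameter values and target modules as illustrative assumptions.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization settings in the style of the QLoRA paper:
# NF4 data type, double quantization, bf16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter settings; rank, alpha, and target modules are
# illustrative and depend on the base model's architecture.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```

In a standalone setup you would pass `bnb_config` as `quantization_config` to `AutoModelForCausalLM.from_pretrained(...)`, then wrap the model with `peft.get_peft_model(model, lora_config)` before training.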
Hi @shizhediao, thanks for sharing your open-source code. **Is your feature request related to a problem? Please describe.** Yes, there is currently no easy way to interact with LoRA-based chatbots...
Hello, when I tried to run the RAFT demo, the above problem occurred while running run_raft_align.sh. I did not change the dataset or any paths. What is the cause of this problem?
After finetuning bigscience/bloomz-7b1, I encountered this issue during evaluation, at envs/lmflow/lib/python3.9/site-packages/peft/utils/save_and_load.py:74 in set_peft_model_state_dict: peft_model_state_dict (`dict`): The state dict of...
Pretraining fails with a trust_remote_code error.
The link to join Slack in the README is broken. It would be great if you could fix it, for discussing lightweight improvements or for communication related...
I use WSL2 with Ubuntu 22.04 to set up the environment. When I run ./scripts/run_finetune.sh it prints the above error, but I can successfully run ./scripts/run_finetune_with_lora.sh with both gpt2 and robin-7B...
When running the app.py script, I got this error: "There is something wrong, please query again". The error message is as follows: the files generated by fine-tuning are in the directory shown in the screenshot. So when running app.py, how should model_name_or_path point to the fine-tuned model, and what should its value be? And what if the fine-tuning was done with LoRA?