LMFlow

An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.

Results 177 LMFlow issues

Do you have plans to support InternLM-7B and InternLM-20B, which are similar to the LLaMA model? (https://github.com/InternLM/InternLM) Thanks!

Recently I cloned "https://github.com/OptimalScale/LMFlow" onto my desktop computer. I ran "git clone XXXXX" on my "D://" drive, then got ``Error downloading object: assets/multimodal-chatbot-demo.gif (2062965): Smudge error: Error...

Hello. I tried to deploy the model locally; app.py only ran after I added the following line: model_args.model_name_or_path = '/home/xfwl/huggingface/galactica-1.3b'. After deployment, when I type a question the bot just echoes my input back, and sometimes it replies with code. Also, the WeChat group and Discord links on http://lmflow.com are all dead, and I would like to join the community to learn. The chatbot on that page cannot hold a conversation either.

Hello, I used the project's bundled dataset to run LoRA training on Llama-7b; the dataset is shown in the figure below. After training, the app.py parameters were set as shown below, and the resulting conversation is shown below.

Hi, I am using LMFlow to invoke the model codellama/CodeLlama-7b-Instruct-hf. However, I found that the output was very repetitive. It seems that the default temperature setting is currently set to...
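For context on why a low (or near-zero) temperature tends to produce repetitive output: temperature rescales the logits before the softmax, so as the temperature approaches zero the sampling distribution collapses onto the argmax token. A pure-Python sketch of that effect (the logit values are made up for illustration; this is not LMFlow's actual sampler):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]  # hypothetical next-token logits
rng = random.Random(0)

# Near-zero temperature: the distribution collapses, so every draw
# picks the argmax token -- the "repetitive output" symptom.
greedy = [sample_with_temperature(logits, 0.01, rng) for _ in range(20)]

# Higher temperature: the distribution flattens and draws vary.
varied = [sample_with_temperature(logits, 1.5, rng) for _ in range(20)]
```

Raising the temperature (or adding a repetition penalty) in the generation arguments is the usual first thing to try for this symptom.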

Hi, I have a private large model of my own. During pretraining, the separator between input and output was "[SEP]" and the end-of-output token was "\". Now I want to fine-tune it with LMFlow. I noticed that the data format can only be text_only or text2text; what is the difference between text_only and text2text in practice? And how should I construct my dataset so that I can use your fine-tuning and inference scripts? For example, my data looks like:

```
Q: Can you write a Python snippet to check whether a number is even?
A: is_even = lambda x: x % 2 == 0
```
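The short answer is that text2text keeps the input and output as separate fields (suited to instruction tuning, where the loss is typically taken on the output side), while text_only stores each instance as one flat string. A minimal sketch of converting the Q/A pair above into both formats; the top-level "type"/"instances" layout follows LMFlow's JSON dataset schema, and the "[SEP]"/"\" tokens are the ones described in the question, not LMFlow defaults:

```python
import json

# One Q/A pair from the example above.
pairs = [{
    "q": "Can you write a Python snippet to check whether a number is even?",
    "a": "is_even = lambda x: x % 2 == 0",
}]

# text2text: input and output are separate fields per instance.
text2text = {
    "type": "text2text",
    "instances": [{"input": p["q"], "output": p["a"]} for p in pairs],
}

# text_only: one concatenated string per instance; here we splice in
# the custom "[SEP]" separator and "\" stop token from the question.
text_only = {
    "type": "text_only",
    "instances": [{"text": p["q"] + "[SEP]" + p["a"] + "\\"} for p in pairs],
}

with open("train_text2text.json", "w", encoding="utf-8") as f:
    json.dump(text2text, f, ensure_ascii=False, indent=2)
```

Either file can then be pointed at by the fine-tuning scripts' dataset path argument; which format to pick depends on whether you want the model trained on the question text as well.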

**Is your feature request related to a problem? Please describe.** I am very interested in this project; you guys did a great job. I wonder if there is a...

Hey, our team is trying to recreate the HH-RLHF benchmarks from the RAFT (RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment) paper with Llama-7b. We successfully did the SFT step, however,...

After tons of experiments and tests, we finally support iterative DPO within a Python script. Other useful features come along with iterative DPO: 1. Multi-instance vLLM inference (using Ray)...

Hi, I attempted to use speculative decoding but encountered some errors; may I ask for your assistance? I used the parameters from the first example: `python ./examples/speculative_inference.py \ --model gpt2-xl...`
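For readers unfamiliar with the technique behind that example: speculative decoding has a small draft model propose a few tokens cheaply, then the large target model verifies them and keeps the longest agreeing prefix. A toy pure-Python sketch of that draft-and-verify control flow in its simple greedy form (the two "models" here are stand-in functions, not the gpt2/gpt2-xl checkpoints the example script uses):

```python
def speculative_step(prefix, draft_next, target_next, k=4):
    """One draft-and-verify step of greedy speculative decoding.

    draft_next / target_next map a token sequence to its next token.
    Returns the tokens accepted this step (always at least one).
    """
    # 1. The draft model proposes k tokens autoregressively.
    proposal = []
    ctx = list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        proposal.append(t)
        ctx.append(t)

    # 2. The target model verifies; keep the longest agreeing prefix.
    accepted = []
    ctx = list(prefix)
    for t in proposal:
        if target_next(ctx) == t:
            accepted.append(t)
            ctx.append(t)
        else:
            break

    # 3. If the very first draft token disagrees, fall back to the
    #    target model's own token so decoding always advances.
    if not accepted:
        accepted.append(target_next(list(prefix)))
    return accepted

# Toy next-token functions: both emit len(seq) % 3, except the
# target model diverges once the sequence grows past 4 tokens.
draft = lambda seq: len(seq) % 3
target = lambda seq: len(seq) % 3 if len(seq) < 5 else 9

out = speculative_step([0, 1], draft, target, k=4)
```

The speedup comes from step 2: the target model can score all k draft tokens in one batched forward pass instead of k sequential ones, while the accept-longest-prefix rule keeps the output identical to what the target model alone would have produced.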