LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
call stack diagram for dataset
# Description
1. LMFlow now defaults to using Accelerate (i.e., run scripts using `accelerate launch ... finetune.py ...`). If you prefer to use DeepSpeed (`deepspeed ... finetune.py ...`) or accelerate...
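The two launch styles mentioned above can be sketched as below. These commands are illustrative only: the script path (`finetune.py`) is taken from the description, while the flags shown are placeholders borrowed from the other issues on this page; adapt them to your actual LMFlow checkout and config.

```shell
# Default launcher: Accelerate (flags are placeholders, adjust to your setup)
accelerate launch finetune.py \
    --model_name_or_path ./Llama-2-7b-hf \
    --dataset_path data/alpaca/train \
    --output_dir output_models/finetune

# Alternative launcher: DeepSpeed, same training script
deepspeed finetune.py \
    --model_name_or_path ./Llama-2-7b-hf \
    --dataset_path data/alpaca/train \
    --output_dir output_models/finetune
```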
**Describe the bug**

```
bash vllm_inference.sh --model_name_or_path ./Llama-2-7b-hf --dataset_path data/alpaca/test_conversation --output_dir data/inference_results
[2025-04-03 14:16:08,168] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Consider install flash_attn for better performance.
Checking dataset keys:...
```
> ```json
> {
>     "input": "###Instruction: ....\n\n###human: ....\n\n###chatbot: ....\n\n###human: ....\n\n###chatbot: ....\n\n###human: .....\n\n###chatbot:",
>     "output": ".....###"
> }
> ```
>
> Thank you very much for...
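A dataset in the flattened format quoted above can be converted into role-tagged turns by splitting on the `###human:` / `###chatbot:` markers. The helper below is hypothetical (it is not part of LMFlow) and only sketches that conversion:

```python
import re

def split_turns(flat: str):
    """Split a flattened '###human: ... ###chatbot: ...' string into
    role-tagged turns, keeping the original role order.

    Hypothetical helper, not part of LMFlow's API.
    """
    # re.split with a capturing group keeps the matched role names,
    # yielding [preamble, role, text, role, text, ...]
    parts = re.split(r"###(human|chatbot):", flat)
    turns = []
    for role, text in zip(parts[1::2], parts[2::2]):
        turns.append({"role": role, "content": text.strip()})
    return turns

turns = split_turns("###Instruction: hi\n\n###human: hello\n\n###chatbot: hey there")
# → [{'role': 'human', 'content': 'hello'}, {'role': 'chatbot', 'content': 'hey there'}]
```

Any leading `###Instruction:` preamble is dropped here; a real converter would decide whether to map it to a system prompt.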
~/LMFlow$ cd data && ./download.sh alpaca && cd - downloading alpaca dataset --2025-03-23 00:08:19-- http://lmflow.org:5000/alpaca.tar.gz Resolving lmflow.org (lmflow.org)... 107.23.182.175 Connecting to lmflow.org (lmflow.org)|107.23.182.175|:5000... failed: Connection timed out. Retrying. --2025-03-23 00:10:31--...
**Describe the bug**
When using Qwen 2.5 series templates, `tokenizer.apply_chat_template` throws a Jinja error.

**To Reproduce**
Steps to reproduce the behavior:
1. Create a conversation dataset without a system prompt
2. ...
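The real Qwen 2.5 chat template is a Jinja template inside `tokenizer_config.json`; the pure-Python sketch below only illustrates the failure mode reported above, under the assumption that the template reads `messages[0]` as a system turn and breaks when the conversation starts directly with a user message. All function names here are hypothetical.

```python
def render_brittle(messages):
    # Unconditionally treats messages[0] as the system turn; on a
    # conversation with no system prompt this surfaces as an error
    # (in the real template, a Jinja TemplateError).
    system = messages[0]
    if system["role"] != "system":
        raise ValueError("template assumed a leading system message")
    body = "".join(f"<|{m['role']}|>{m['content']}" for m in messages[1:])
    return f"<|system|>{system['content']}{body}"

def render_fixed(messages):
    # Defensive variant: inject a default system prompt when missing.
    if messages and messages[0]["role"] == "system":
        system, rest = messages[0]["content"], messages[1:]
    else:
        system, rest = "You are a helpful assistant.", messages
    body = "".join(f"<|{m['role']}|>{m['content']}" for m in rest)
    return f"<|system|>{system}{body}"

convo = [{"role": "user", "content": "hi"}]
render_fixed(convo)      # works with or without a system prompt
# render_brittle(convo)  # raises ValueError
```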
**Describe the bug**
Running RAFT alignment on Qwen2.5 fails with:

```
File "LMFlow/src/lmflow/pipeline/auto_pipeline.py", line 68, in get_pipeline
[rank5]: raise NotImplementedError(
[rank5]: NotImplementedError: Please install the necessary dependencies to use...
```
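The traceback above comes from `get_pipeline` raising `NotImplementedError` when a pipeline's optional dependencies are missing. A minimal sketch of that gating pattern (the registry contents and dependency names here are hypothetical, not LMFlow's actual code):

```python
import importlib.util

# Hypothetical registry mapping pipeline names to optional dependencies.
PIPELINE_DEPS = {
    "finetuner": [],
    "raft_aligner": ["trl"],  # illustrative dependency, not LMFlow's real list
}

def get_pipeline(name):
    # find_spec returns None when a module is not importable, letting us
    # report every missing dependency before doing any heavy imports.
    missing = [dep for dep in PIPELINE_DEPS[name]
               if importlib.util.find_spec(dep) is None]
    if missing:
        raise NotImplementedError(
            f"Please install the necessary dependencies to use {name}: {missing}"
        )
    return f"<{name} pipeline>"
```

Checking `find_spec` up front keeps the error message actionable instead of failing deep inside an import chain.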
Hi, thank you for your contributions. I'm considering aligning multimodal models with LMFlow. However, the only multimodal-related script I found in the GitHub repository is run_finetune_multi_modal_stage1.sh, and I didn't find multimodal training...
Thank you for your wonderful work. It is really an amazing framework for finetuning LLMs. However, I am curious about the difference between LMFlow and LLaMA Factory, which is another...
**Describe the bug**
As I mentioned in this [issue](https://github.com/huggingface/transformers/issues/35045), the default values of `top_p` and `temperature` are not guaranteed to be `1`. Therefore, the code below will get a modified...
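A pure-Python sketch of why this happens (not transformers' actual code; names are hypothetical): sampling parameters the caller leaves unset fall back to whatever the checkpoint's `generation_config.json` ships with, not to the documented default of `1.0`, so passing them explicitly is the safe workaround.

```python
# Values shipped in a checkpoint's generation_config.json (illustrative).
checkpoint_generation_config = {"top_p": 0.7, "temperature": 0.9}

def resolve_sampling_params(user_args):
    """Hypothetical merge: checkpoint defaults first, explicit user args win."""
    params = dict(checkpoint_generation_config)
    params.update(user_args)
    return params

# Caller assumed top_p defaults to 1.0 but never set it:
assert resolve_sampling_params({})["top_p"] == 0.7      # not 1.0!
# Workaround: pass the value explicitly.
assert resolve_sampling_params({"top_p": 1.0})["top_p"] == 1.0
```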