LMFlow

An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.

177 LMFlow issues

Added a new evaluation metric, ROUGE-L (https://github.com/yizhongw/self-instruct). To apply, run run_evaluation_with_rougel.sh; test case: rougel_test_case.sh.
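For reference, a minimal sketch of computing ROUGE-L with the `rouge-score` package is below; it illustrates the metric itself and is not necessarily the exact code behind run_evaluation_with_rougel.sh. The example strings are placeholders.

```python
# Minimal ROUGE-L sketch using the rouge-score package (pip install rouge-score).
# Illustrative only; not the exact implementation in run_evaluation_with_rougel.sh.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

reference = "The quick brown fox jumps over the lazy dog."   # placeholder target
prediction = "A quick brown fox jumped over a lazy dog."      # placeholder model output

scores = scorer.score(reference, prediction)
rl = scores["rougeL"]
print(f"precision={rl.precision:.3f} recall={rl.recall:.3f} f1={rl.fmeasure:.3f}")
```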

**Is your feature request related to a problem? Please describe.** I notice there are scripts for LoRA-based finetuning and evaluation, but not for prompt tuning. **Describe the solution you'd...
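While no official script exists, a hedged sketch of prompt tuning with Hugging Face PEFT is shown below; the base model name, initialization text, and hyperparameters are illustrative assumptions, not LMFlow defaults.

```python
# Hedged sketch: prompt tuning via Hugging Face PEFT. Model name and
# hyperparameters are illustrative assumptions, not LMFlow defaults.
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Answer the instruction below:",  # assumed prompt
    num_virtual_tokens=8,
    tokenizer_name_or_path=model_name,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the soft prompt embeddings are trainable
```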

Support an image encoder, with image captioning as an example. Try model: BLIP with Salesforce/blip-image-captioning-bas Discussion:
+ the name of arch_type: visionEncoder_decoder
+ format the data with image_text
+ Should we generate...
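As a rough sketch of the proposed vision-encoder path, BLIP captioning in plain transformers looks like the following. The full checkpoint id is assumed to be Salesforce/blip-image-captioning-base (the snippet above is truncated), and the image path is a placeholder.

```python
# Rough sketch of BLIP image captioning in plain transformers.
# Checkpoint id is assumed (the issue snippet is truncated); image path is a placeholder.
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

checkpoint = "Salesforce/blip-image-captioning-base"  # assumed full model id
processor = BlipProcessor.from_pretrained(checkpoint)
model = BlipForConditionalGeneration.from_pretrained(checkpoint)

image = Image.open("example.jpg").convert("RGB")  # placeholder image
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```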

... Loading extension module cpu_adam...
Traceback (most recent call last):
  File "/home/mahongli/LMFlow/examples/finetune.py", line 61, in <module>
    main()
  File "/home/mahongli/LMFlow/examples/finetune.py", line 57, in main
    tuned_model = finetuner.tune(model=model, dataset=dataset)
  File "/home/mahongli/LMFlow/src/lmflow/pipeline/finetuner.py", line 285,...

bug

**Describe the bug** The tokenizer `map` in `hf_decoder_model` with multiple `preprocessing_num_workers` returns `TypeError: cannot pickle 'torch._C._distributed_c10d.ProcessGroup' object`. **To Reproduce** Steps to reproduce the behavior: add `--preprocessing_num_workers 20 \` to `scripts/run_finetune.sh`...
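For context, a hedged sketch of the usual multiprocessing-tokenization pattern with `datasets.map` is below: the mapped function should only close over picklable objects such as the tokenizer, since captured distributed state like a ProcessGroup cannot be pickled by worker processes. The tokenizer name and data file are placeholders, not LMFlow's actual preprocessing code.

```python
# Hedged sketch: multiprocessing tokenization with datasets.map. The mapped
# function closes only over the tokenizer, which is picklable; capturing
# distributed objects such as a ProcessGroup triggers the TypeError above.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder tokenizer

def tokenize_fn(examples):
    return tokenizer(examples["text"], truncation=True, max_length=512)

ds = load_dataset("text", data_files={"train": "train.txt"})  # placeholder file
tokenized = ds.map(
    tokenize_fn,
    batched=True,
    num_proc=20,               # analogous to --preprocessing_num_workers 20
    remove_columns=["text"],
)
```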

I use scripts/run_evaluation_with_lora.sh:
CUDA_VISIBLE_DEVICES=0 \
deepspeed examples/evaluate.py \
  --answer_type text \
  --model_name_or_path output_models/llama-7b-hf \
  --lora_model_path output_models/instruction_ckpt/llama7b-lora \
  --dataset_path data/alpaca/test \
  --prompt_structure "Input: {input}" \
  --deepspeed examples/ds_config.json
Then the result: 2023-06-10...
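For context, loading a base model and attaching a LoRA adapter (what `--lora_model_path` points at) can be sketched with PEFT as below. The paths mirror the command above; the prompt is a placeholder, and this is not the actual examples/evaluate.py pipeline.

```python
# Hedged sketch: attach a LoRA adapter to a base model with PEFT and generate
# for one prompt. Paths mirror the command above; this is not examples/evaluate.py.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_path = "output_models/llama-7b-hf"
lora_path = "output_models/instruction_ckpt/llama7b-lora"

tokenizer = AutoTokenizer.from_pretrained(base_path)
base = AutoModelForCausalLM.from_pretrained(base_path)
model = PeftModel.from_pretrained(base, lora_path)

prompt = "Input: List three uses of a paper clip."  # mirrors --prompt_structure "Input: {input}"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```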

I see that run_finetune_with_lora_save_aggregated_weights.sh has the following arguments: `--do_train \ --do_eval \ --evaluation_strategy "steps" \ --eval_steps 1000 \ --eval_dataset_path ${eval_dataset_path} \` Can run_finetune_with_lora.sh be modified to add these arguments, so that evaluation runs alongside training? I tried it and got the error below: ![image](https://github.com/OptimalScale/LMFlow/assets/102452590/59df0995-80d6-4b12-adef-a9f57e77b79d) Could you clarify? Thank you!
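As a generic illustration of evaluation-during-training with the same flags (not the LMFlow script, and not a fix for the error in the screenshot), a hedged Hugging Face Trainer sketch is below. The tiny model and toy dataset are placeholders, and the option is named `evaluation_strategy` in 2023-era transformers (`eval_strategy` in newer releases).

```python
# Generic sketch of evaluation-during-training with the HF Trainer, using the
# same flags as above. Model and toy data are placeholders; this is not the
# LMFlow finetuning pipeline.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

def tok(batch):
    return tokenizer(batch["text"], truncation=True, max_length=32)

raw = Dataset.from_dict({"text": ["hello world"] * 64})  # toy data
train_ds = raw.map(tok, batched=True, remove_columns=["text"])
eval_ds = raw.map(tok, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="tmp_eval_during_training",
    do_train=True,
    do_eval=True,
    evaluation_strategy="steps",   # mirrors --evaluation_strategy "steps"
    eval_steps=10,                 # mirrors --eval_steps (1000 in the script)
    per_device_train_batch_size=4,
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,          # an eval split must be supplied when do_eval is set
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```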

I edit the configuration of finetune.py in PyCharm as below:
--model_name_or_path facebook/galactica-1.3b
--dataset_path /root/LMFlow/data/alpaca/train
--output_dir /root/LMFlow/output_models/finetune_with_lora
--overwrite_output_dir
--num_train_epochs 0.01
--learning_rate 1e-4
--block_size 512
--per_device_train_batch_size 1
--use_lora 1
--lora_r 8...
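For what `--use_lora 1 --lora_r 8` roughly corresponds to under the hood, here is a hedged PEFT sketch; the alpha and dropout values are illustrative defaults, not LMFlow's exact settings.

```python
# Hedged sketch of the LoRA setup that --use_lora 1 --lora_r 8 roughly maps to
# when written directly with PEFT. Alpha/dropout are illustrative, not LMFlow's
# exact configuration.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/galactica-1.3b")
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                 # matches --lora_r 8
    lora_alpha=32,       # assumed value
    lora_dropout=0.05,   # assumed value
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```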

When you meet this problem, you should first check your transformers version. Further, you may need to change the following code in file transformers/src/transformers/models/llama/modeling_llama.py: **change** ``` def apply_rotary_pos_emb(q, k, cos, sin,...
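Since the actual change above is truncated, below is only a generic, hedged sketch of applying rotary position embeddings (RoPE) in plain PyTorch; it is neither the transformers implementation nor the specific edit this comment refers to.

```python
# Generic RoPE sketch in plain PyTorch, for context on what apply_rotary_pos_emb
# does. This is not the transformers code and not the truncated change above.
import torch

def rotate_half(x):
    # Split the last dimension in half and rotate: (x1, x2) -> (-x2, x1).
    x1, x2 = x[..., : x.shape[-1] // 2], x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary_pos_emb(q, k, cos, sin):
    # q, k: (batch, heads, seq, head_dim); cos, sin: (seq, head_dim), broadcast over batch/heads.
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed

# Toy shapes to show the call pattern.
b, h, s, d = 1, 2, 4, 8
q = torch.randn(b, h, s, d)
k = torch.randn(b, h, s, d)
inv_freq = 1.0 / (10000 ** (torch.arange(0, d, 2).float() / d))
freqs = torch.outer(torch.arange(s).float(), inv_freq)   # (seq, head_dim/2)
emb = torch.cat((freqs, freqs), dim=-1)                   # (seq, head_dim)
q_rot, k_rot = apply_rotary_pos_emb(q, k, emb.cos(), emb.sin())
```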

Hi author, my error is shown in the image below. My GPU is a 3090, and I am running inside a Docker container using the image pulled from Docker Hub. ![image](https://github.com/OptimalScale/LMFlow/assets/86297268/d6d66022-ea53-4551-98b7-194fa8c14e08)

bug