
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.

Results: 177 LMFlow issues

```python
if not data_args.streaming:
    lm_datasets = tokenized_datasets.map(
        group_texts,
        batched=True,
        batch_size=group_batch_size,
        num_proc=data_args.preprocessing_num_workers,
        load_from_cache_file=not data_args.overwrite_cache,
        desc=f"Grouping texts in chunks of {block_size}",
    )
```
The `group_texts` step in finetuner.py gets stuck while processing the last batch; the progress bar stays above 90% and never finishes. ![image](https://user-images.githubusercontent.com/51204375/230752796-3d2993a0-fed9-47fc-a1d4-3ee3ff2622bd.png)
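For reference, the `group_texts` passed to this `map` call follows the standard Hugging Face language-modeling recipe; a minimal sketch (assuming `block_size` is computed earlier by the surrounding pipeline), not LMFlow's exact implementation:

```python
from itertools import chain

def group_texts(examples):
    # Concatenate every tokenized column (input_ids, attention_mask, ...) across the batch.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the tail that does not fill a whole block.
    total_length = (total_length // block_size) * block_size
    # Split into consecutive chunks of block_size tokens.
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
```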

Added new features:
1. encoder-decoder architecture fine-tuning (e.g., T5-based models; see the sketch after this list)
2. ChatGLM inference
3. Vicuna inference
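As a rough, self-contained illustration of the encoder-decoder (seq2seq) path mentioned in item 1, and not LMFlow's actual wiring, loading and running a T5-style model with the Hugging Face API looks roughly like this:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Illustrative only: any T5-family checkpoint works the same way.
model_name = "t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("translate English to German: Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```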

Add the finetuner without the Trainer API, mainly src/lmflow/pipeline/finetuner_no_trainer.py and the corresponding scripts/run_finetune_no_trainer.sh. To switch between finetuner and finetuner_no_trainer, you currently need to change examples/finetune.py line 36. I think there should be...
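A hedged sketch of how such a switch could look as a command-line flag instead of an in-file edit; the flag name `--use_no_trainer` and the pipeline names below are illustrative assumptions, not LMFlow's actual interface:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--use_no_trainer",
    action="store_true",
    help="Select the no-Trainer finetuning pipeline instead of the default one.",
)
args, _ = parser.parse_known_args()

# Hypothetical dispatch: pick the pipeline name from the flag rather than editing finetune.py.
pipeline_name = "finetuner_no_trainer" if args.use_no_trainer else "finetuner"
print(f"Selected pipeline: {pipeline_name}")
```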

The demos you show all use an English interface; is it possible to train in Chinese? A related question: if training is based on LLaMA, will Chinese instructions be effective? Looking forward to your reply. Thanks.

```
[2023-04-07 13:57:13,994] [WARNING] [runner.py:186:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
Detected CUDA_VISIBLE_DEVICES=0: setting --include=localhost:0
[2023-04-07 13:57:14,006] [INFO] [runner.py:550:main] cmd = /opt/miniconda3/bin/python -u -m deepspeed.launcher.launch...
```

1. Update DeepSpeed inference.
2. Only the main process requires input (see the sketch below).
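A rough sketch of the "only the main process takes input" pattern under torch.distributed (as used in a DeepSpeed launch); the helper name and prompt string are illustrative assumptions:

```python
import torch.distributed as dist

def gather_prompt() -> str:
    # Only rank 0 blocks on stdin; the other ranks receive the prompt via a
    # broadcast so every process generates from the same input.
    rank = dist.get_rank() if dist.is_initialized() else 0
    holder = [None]
    if rank == 0:
        holder[0] = input("User >>> ")
    if dist.is_initialized():
        dist.broadcast_object_list(holder, src=0)
    return holder[0]
```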

just for reference

In some circumstances, Hugging Face models cannot be downloaded directly, e.g. behind a firewall. Is it feasible to add an option to load a local Hugging Face model rather than downloading it from the Hub directly?...
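For what it's worth, loading from a local directory already works with the plain Hugging Face API, assuming the model files were copied there beforehand (the path below is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

local_path = "/path/to/local/llama-7b"  # placeholder for a pre-downloaded model directory

# local_files_only=True prevents any attempt to reach the Hub.
tokenizer = AutoTokenizer.from_pretrained(local_path, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(local_path, local_files_only=True)
```

Setting the environment variable `HF_HUB_OFFLINE=1` has the same effect globally.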

![image](https://github.com/OptimalScale/LMFlow/assets/102452590/92aad9fa-2d60-4b3c-ab8f-52b298c31db2)
```bash
CUDA_VISIBLE_DEVICES=0 \
deepspeed examples/chatbot.py \
    --deepspeed configs/ds_config_chatbot.json \
    --use_ram_optimized_load False \
    --model_name_or_path ${model} \
    --max_new_tokens 100 \
    --lora_model_path /home/cd/ai/LMFlow/output_models/finetune_with_lora \
    --prompt_structure "A chat between a curious human and an artificial...
```
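For comparison, a minimal peft-based sketch of what the command above does when attaching a LoRA adapter to a base model; the paths are placeholders and this bypasses LMFlow's own loading code:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_path = "/path/to/base/model"         # corresponds to --model_name_or_path
lora_path = "output_models/finetune_with_lora"  # corresponds to --lora_model_path

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
model = AutoModelForCausalLM.from_pretrained(base_model_path)
# Wrap the base model with the trained LoRA adapter weights.
model = PeftModel.from_pretrained(model, lora_path)
```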

**Describe the bug** Passing `--disable_group_texts True` results in the following error

bug

Hi, I briefly described the problem in the following issue comment, and I am submitting it here as a new issue for a more complete description. https://github.com/OptimalScale/LMFlow/issues/431#issuecomment-1596966261 While initiating the...