zhanglv0209

Results: 4 issues by zhanglv0209

# The run failed, so it was re-executed # cmd /mntenv/llama_etuning/bin/deepspeed --include localhost:4,5,6,7 --master_port=9101 src/train_bash.py \ --deepspeed ds_config.json \ --stage rm \ --do_train \ --model_name_or_path /mnodel/llama2-Chinese-7b-Chat \ --dataset comparison_gpt4_zh \ --template llama2 \ --finetuning_type lora \...

Labels: good first issue, pending
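The `--deepspeed ds_config.json` file referenced by the command is not included in the issue. As a point of reference, a minimal sketch of what such a config often looks like for a multi-GPU LoRA run is below (ZeRO stage 2 with the HF Trainer `"auto"` placeholders); the actual contents of the reporter's file are an assumption.

```python
# Hypothetical sketch: write a minimal ZeRO-2 DeepSpeed config like the
# ds_config.json referenced above (the real file is not shown in the issue).
import json

ds_config = {
    "train_micro_batch_size_per_gpu": "auto",  # "auto" values are filled in by the HF Trainer
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "zero_allow_untested_optimizer": True,
    "fp16": {"enabled": "auto"},
    "bf16": {"enabled": "auto"},
    "zero_optimization": {
        "stage": 2,                  # shard optimizer states and gradients across the 4 GPUs
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```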

![image](https://github.com/huggingface/trl/assets/16017651/02321ee4-d85f-4140-841c-bea1f539442a) If no data_collator is passed, DataCollatorForCompletionOnlyLM is never used. Additionally, because packing=True, data_collator does not fall back to DataCollatorForLanguageModeling either, so _prepare_dataset returns early. Since the trainer inherits from the transformers Trainer, it's observed that...
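For context, trl's completion-only loss masking only takes effect when the collator is passed in explicitly and packing is disabled. A minimal sketch (the model name, prompt format, and hyperparameters are placeholders, not taken from the issue):

```python
# Minimal sketch: pass DataCollatorForCompletionOnlyLM explicitly and disable
# packing, since completion-only masking does not work with packing=True.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM, SFTTrainer

model_name = "facebook/opt-125m"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

train_dataset = Dataset.from_dict(
    {"text": ["### Question: 1+1?\n### Answer: 2", "### Question: 2+2?\n### Answer: 4"]}
)

# Only tokens after the response template contribute to the loss.
collator = DataCollatorForCompletionOnlyLM(response_template="### Answer:", tokenizer=tokenizer)

trainer = SFTTrainer(
    model=model,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=128,
    data_collator=collator,
    packing=False,  # packing=True would bypass the completion-only collator
)
trainer.train()
```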

```
from typing import Any

from pydantic import BaseModel
from unstructured.partition.pdf import partition_pdf

# Path to save images
path = "/home/zhanglv/code/small-project/data/"
filename = path + "LLaVA.pdf"

# Get elements...
```

Label: pdf
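The snippet above is cut off right at the "Get elements" step. A hedged sketch of how that partitioning call is typically written follows; the keyword names reflect common usage of unstructured's partition_pdf at the time, may differ between versions, and are not taken from the issue.

```python
from collections import Counter

from unstructured.partition.pdf import partition_pdf

path = "/home/zhanglv/code/small-project/data/"  # same directory as in the snippet above
filename = path + "LLaVA.pdf"

# Split the PDF into text/table elements and extract embedded images to `path`.
# Parameter values are illustrative only.
raw_pdf_elements = partition_pdf(
    filename=filename,
    extract_images_in_pdf=True,       # write figures found in the PDF to disk
    infer_table_structure=True,       # keep table elements with inferred structure
    chunking_strategy="by_title",     # group elements under section titles
    max_characters=4000,
    new_after_n_chars=3800,
    combine_text_under_n_chars=2000,
    image_output_dir_path=path,
)

# Quick look at what was extracted.
print(Counter(type(el).__name__ for el in raw_pdf_elements))
```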

sft cmd: /mnt/nvme0n1/zhang/venv/small_project/bin/torchrun --nproc_per_node=4 supervised-fine-tune-qlora.py \ --model_name_or_path /mnt/nvme0n1/zhang/model/llama-2-7b-chat-hf \ --bf16 True \ --output_dir /mnt/nvme1n1/zhang/model/out/sft/llama-2-7b-chat-hf-qlore-20231120 \ --model_max_length 32768 \ --use_flash_attn True \ --data_path /mnt/nvme1n1/zhang/data/LongAlpaca-12k.json \ --low_rank_training True \ --num_train_epochs 3 \...
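For reference, a QLoRA script invoked with --low_rank_training True generally loads the base model in 4-bit and attaches LoRA adapters before training. A minimal sketch of that setup is below; the LoRA hyperparameters and target modules are illustrative and are not taken from supervised-fine-tune-qlora.py.

```python
# Sketch (not the script above): the usual QLoRA setup such a script performs
# internally -- load the base model in 4-bit NF4 and wrap it with LoRA adapters.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_path = "/mnt/nvme0n1/zhang/model/llama-2-7b-chat-hf"  # base model from the command above

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches --bf16 True
)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```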