20191864218
I am using the LLaVA framework and replaced the LLM with a different large language model. During training I get the error `TypeError: pad_sequence(): argument 'padding_value' (position 3) must be float, not NoneType` (full trace below). Could anyone tell me where the problem is?
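A likely cause is that the padding value the collator receives is `None`: LLaVA's supervised data collator pads `input_ids` with `tokenizer.pad_token_id`, and tokenizers for some replacement LLMs (Qwen among them) ship without a pad token, so `pad_sequence()` is handed `None` instead of a number. The sketch below only reproduces the symptom and shows one possible fix under that assumption; the tensors and the fallback id are placeholders, not LLaVA code.

```
import torch
from torch.nn.utils.rnn import pad_sequence

# What an unconfigured tokenizer would hand to the collator.
pad_token_id = None
input_ids = [torch.tensor([1, 2, 3]), torch.tensor([4, 5])]

try:
    pad_sequence(input_ids, batch_first=True, padding_value=pad_token_id)
except TypeError as e:
    # TypeError: pad_sequence(): argument 'padding_value' (position 3)
    # must be float, not NoneType
    print(e)

# Possible fix (assumption): give the tokenizer an explicit pad token before
# training, e.g. tokenizer.pad_token_id = tokenizer.eos_token_id, or pick an
# explicit id yourself. Here 0 stands in for that configured id.
pad_token_id = 0
batch = pad_sequence(input_ids, batch_first=True, padding_value=pad_token_id)
print(batch)
```

If that is the cause, setting `tokenizer.pad_token_id` (or `tokenizer.pad_token`) right after loading the replacement tokenizer, before the collator is built, should make the error go away.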
### Question I replaced LLaVA's LLM with Qwen, and this error appears when I rerun pretraining. How can I fix it?
### Question When I replace the vision encoder of LLaVA with MedCLIP, the following error occurs: `RuntimeError: Error(s) in loading state_dict for MedCLIPModel: Unexpected key(s) in state_dict: "text_model.model.embeddings.position_ids".`
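This error pattern usually comes from a `transformers` version mismatch: newer releases stopped saving `position_ids` as a persistent buffer in BERT-style embeddings, so a checkpoint produced with an older version carries a key that a freshly built model no longer expects. Below is a minimal sketch of a tolerant loader under that assumption; `load_checkpoint_tolerant` and the checkpoint path are illustrative, not part of MedCLIP's API.

```
import torch
from torch import nn

def load_checkpoint_tolerant(model: nn.Module, ckpt_path: str) -> nn.Module:
    """Drop stale position_ids buffers and load the rest non-strictly."""
    state_dict = torch.load(ckpt_path, map_location="cpu")
    # Remove keys that current transformers versions no longer register.
    state_dict = {k: v for k, v in state_dict.items()
                  if not k.endswith("embeddings.position_ids")}
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print("missing keys:", missing)
    print("unexpected keys:", unexpected)
    return model
```

Pinning `transformers` to the version the MedCLIP weights were saved with is the other way to avoid the mismatch.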
### Question If I introduce a new package in clip_encoder.py, I get this error. What should I do? Thanks!
Hello, after fine-tuning VisualGLM with the openi dataset you provide, I tested the model's inference ability and the following happens. Is the overfitting too severe? These are my fine-tuning parameters:

```
#! /bin/bash
NUM_WORKERS=1
NUM_GPUS_PER_WORKER=4
MP_SIZE=1

script_path=$(realpath $0)
script_dir=$(dirname $script_path)
main_dir=$(dirname $script_dir)

MODEL_TYPE="visualglm-6b"
MODEL_ARGS="--max_source_length 64 \
    --max_target_length 256 \
    --lora_rank 10 \
    --pre_seq_len 4"
# OPTIONS_SAT="SAT_HOME=$1" #"SAT_HOME=/raid/dm/sat_models"
...
```
```
Traceback (most recent call last):
  File "/root/VisualGLM-6B/finetune_XrayGLM.py", line 194, in <module>
    training_main(args, model_cls=model, forward_step_function=forward_step, create_dataset_function=create_dataset_function, collate_fn=data_collator)
  File "/root/miniconda3/lib/python3.10/site-packages/sat/training/deepspeed_training.py", line 67, in training_main
    train_data, val_data, test_data = make_loaders(args, hooks['create_dataset_function'], collate_fn=collate_fn)
  File "/root/miniconda3/lib/python3.10/site-packages/sat/data_utils/configure_data.py", ...
```
Hi everyone, after fine-tuning VisualGLM with the openi dataset (about 6,500 samples), I tested the model's inference ability and the following happens. Is the overfitting too severe? These are my fine-tuning parameters:

```
#! /bin/bash
NUM_WORKERS=1
NUM_GPUS_PER_WORKER=4
MP_SIZE=1

script_path=$(realpath $0)
script_dir=$(dirname $script_path)
main_dir=$(dirname $script_dir)

MODEL_TYPE="visualglm-6b"
MODEL_ARGS="--max_source_length 64 \
    --max_target_length 256 \
    --lora_rank 10 \
    --pre_seq_len 4"
# OPTIONS_SAT="SAT_HOME=$1" #"SAT_HOME=/raid/dm/sat_models"
...
```