
[Bug] TypeError: TrainerState.__init__() got an unexpected keyword argument 'stateful_callbacks'

Open · humphery0sh opened this issue 1 year ago • 1 comment

Checklist

  • [ ] 1. I have searched related issues but cannot get the expected help.
  • [ ] 2. The bug has not been fixed in the latest version.
  • [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

After fine-tuning was interrupted unexpectedly, training cannot be resumed from the checkpoint. The `stateful_callbacks` field mentioned in the log appears in the `trainer_state.json` file as follows:

```json
{
  ... ...
  "logging_steps": 1.0,
  "max_steps": 71856,
  "num_input_tokens_seen": 0,
  "num_train_epochs": 2,
  "save_steps": 3000,
  "stateful_callbacks": {
    "TrainerControl": {
      "args": {
        "should_epoch_stop": false,
        "should_evaluate": false,
        "should_log": false,
        "should_save": true,
        "should_training_stop": false
      },
      "attributes": {}
    }
  },
  "total_flos": 4326123668658176.0,
  "train_batch_size": 25,
  "trial_name": null,
  "trial_params": null
}
```

Reproduction

```shell
deepspeed --include localhost:1 \
    llava/train/train_mem.py \
    --deepspeed ./scripts/zero3.json \
    --model_name_or_path $DATA_HOME/pretrained_mm_projector/vicuna-7b-v1.5 \
    --version v1 \
    --lora_enable True \
    --data_path $DATA_HOME/LLaVA-Finetune/enhanced_llava_sft_data_898K-fixed.json \
    --image_folder $DATA_HOME/LLaVA-Finetune/images \
    --vision_tower $DATA_HOME/pretrained_mm_projector/InternViT-300M-448px \
    --pretrain_mm_mlp_adapter $PRETRAIN_OUTPUT_DIR/mm_projector.bin \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -4 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --bf16 True \
    --output_dir ${OUTPUT_DIR}/sft \
    --num_train_epochs 2 \
    --per_device_train_batch_size 25 \
    --per_device_eval_batch_size 2 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 3000 \
    --save_total_limit 3 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2560 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to "tensorboard" \
    --resume_from_checkpoint ${OUTPUT_DIR}/sft/checkpoint-45000 \
    | tee ${OUTPUT_DIR}/train.log
```

Environment

sys.platform: linux
Python: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5: NVIDIA A100-SXM4-80GB
CUDA_HOME: /usr/local/cuda-11.7
NVCC: Cuda compilation tools, release 11.7, V11.7.99
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.3.1+cu121
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.6 (Git Hash 86e6af5974177e513fd3fee58425e1063e7f1361)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.4.1  (built against CUDA 11.6)
    - Built with CuDNN 8.9.2
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.3.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,

TorchVision: 0.18.1+cu121
LMDeploy: 0.5.3+6a230b3
transformers: 4.37.2
gradio: 3.35.2
fastapi: 0.111.1
pydantic: 2.8.2
triton: 2.3.1
NVIDIA Topology:
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    NIC0    NIC1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV12    SYS     SYS     SYS     SYS     SYS     SYS     0-23,48-71      0               N/A
GPU1    NV12     X      SYS     SYS     SYS     SYS     SYS     SYS     0-23,48-71      0               N/A
GPU2    SYS     SYS      X      NV12    SYS     SYS     SYS     SYS     0-23,48-71      0               N/A
GPU3    SYS     SYS     NV12     X      SYS     SYS     SYS     SYS     0-23,48-71      0               N/A
GPU4    SYS     SYS     SYS     SYS      X      NV12    SYS     SYS     24-47,72-95     1               N/A
GPU5    SYS     SYS     SYS     SYS     NV12     X      SYS     SYS     24-47,72-95     1               N/A
NIC0    SYS     SYS     SYS     SYS     SYS     SYS      X      PIX
NIC1    SYS     SYS     SYS     SYS     SYS     SYS     PIX      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1

Error traceback

use LN for projection:  False
Loading mm_projector weights...
Formatting inputs...Skip in lazy mode
/home/anaconda3/envs/internvl/lib/python3.11/site-packages/accelerate/accelerator.py:451: FutureWarning: Passing the following arguments to `Accelerator` is deprecated and will be removed in version 1.0 of Accelerate: dict_keys(['dispatch_batches', 'split_batches']). Please pass an `accelerate.DataLoaderConfiguration` instead:
dataloader_config = DataLoaderConfiguration(dispatch_batches=None, split_batches=False)
  warnings.warn(
work_dirs/pretrain_internvit6b_448_vicuna7b/sft/checkpoint-48000/trainer_state.json
8439839
<class 'transformers.trainer_callback.TrainerState'>
Traceback (most recent call last):
  File "/home/rag/internvl/internvl_chat_llava/llava/train/train_mem.py", line 13, in <module>
    train(attn_implementation="flash_attention_2")
  File "/home/rag/internvl/internvl_chat_llava/llava/train/train.py", line 969, in train
    trainer.train(resume_from_checkpoint=True)
  File "/home/anaconda3/envs/internvl/lib/python3.11/site-packages/transformers/trainer.py", line 1513, in train
    state = TrainerState.load_from_json(os.path.join(resume_from_checkpoint, TRAINER_STATE_NAME))
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/anaconda3/envs/internvl/lib/python3.11/site-packages/transformers/trainer_callback.py", line 126, in load_from_json
    return cls(**tmp)
           ^^^^^^^^^^
TypeError: TrainerState.__init__() got an unexpected keyword argument 'stateful_callbacks'
[2024-09-04 15:25:48,350] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 2878546
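
For context, `TrainerState` in `transformers` is a dataclass, and `load_from_json` passes every key of the JSON file straight to the constructor (`cls(**tmp)` in the traceback above), so any field that the installed version's dataclass does not declare raises exactly this `TypeError`. The `stateful_callbacks` field was added to `TrainerState` in a later `transformers` release (around v4.41), which suggests the checkpoint was written under a newer `transformers` than the 4.37.2 shown in the environment above. A minimal sketch of the mechanism, using an illustrative stub rather than the real class:

```python
from dataclasses import dataclass

# Illustrative stand-in for transformers 4.37.2's TrainerState dataclass,
# which declares no `stateful_callbacks` field (the real class is larger).
@dataclass
class TrainerState:
    logging_steps: float = 500.0

# load_from_json effectively does `cls(**json.loads(text))`, so a key saved
# by a newer transformers release arrives as an unknown keyword argument:
TrainerState(**{"logging_steps": 1.0, "stateful_callbacks": {}})
# TypeError: TrainerState.__init__() got an unexpected keyword argument 'stateful_callbacks'
```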

humphery0sh · Sep 04 '24 10:09

Hello, we have not encountered this error when resuming. It looks like you could try removing the `stateful_callbacks` field from the `trainer_state.json` file.

Weiyun1025 · Sep 10 '24 16:09
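
A minimal sketch of that workaround, assuming the checkpoint path from the traceback above (adjust it to your own run, and repeat for any other checkpoint you resume from):

```python
import json

# Path taken from the traceback above; adjust to your own checkpoint.
path = "work_dirs/pretrain_internvit6b_448_vicuna7b/sft/checkpoint-48000/trainer_state.json"

with open(path, encoding="utf-8") as f:
    state = json.load(f)

# Drop the field that transformers 4.37.2's TrainerState does not accept.
state.pop("stateful_callbacks", None)

with open(path, "w", encoding="utf-8") as f:
    json.dump(state, f, indent=2)
```

Alternatively, upgrading `transformers` to a release that knows about `stateful_callbacks` (i.e., the version that wrote the checkpoint) should also let `load_from_json` succeed without editing the file.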