
[BUG/Help] Error when running ptuning: Unknown argument(s): {'delay': 5}

Open · gaofeiseu opened this issue on Apr 25, 2023 · 0 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

Current Behavior

Running `bash trainer.sh` produces the following log:

```
2023-04-25 11:10:57.993074: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
04/25/2023 11:10:59 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
04/25/2023 11:10:59 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
  _n_gpu=1, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08,
  auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None,
  dataloader_drop_last=False, dataloader_num_workers=0, dataloader_pin_memory=True,
  ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[],
  deepspeed=None, disable_tqdm=False, do_eval=False, do_predict=False, do_train=True,
  eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no,
  fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[],
  fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},
  fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False,
  generation_max_length=None, generation_num_beams=None, gradient_accumulation_steps=16,
  gradient_checkpointing=False, greater_is_better=None, group_by_length=False,
  half_precision_backend=auto, hub_model_id=None, hub_private_repo=False,
  hub_strategy=every_save, hub_token=<HUB_TOKEN>, ignore_data_skip=False,
  include_inputs_for_metrics=False, jit_mode_eval=False, label_names=None,
  label_smoothing_factor=0.0, learning_rate=0.02, length_column_name=length,
  load_best_model_at_end=False, local_rank=-1, log_level=passive, log_level_replica=warning,
  log_on_each_node=True,
  logging_dir=/root/chatglm/output/adgen-chatglm-6b-pt-128-2e-2/runs/Apr25_11-10-59_b22a5e66ea7a,
  logging_first_step=False, logging_nan_inf_filter=True, logging_steps=10,
  logging_strategy=steps, lr_scheduler_type=linear, max_grad_norm=1.0, max_steps=3000,
  metric_for_best_model=None, mp_parameters=, no_cuda=False, num_train_epochs=3.0,
  optim=adamw_hf, optim_args=None, output_dir=/root/chatglm/output/adgen-chatglm-6b-pt-128-2e-2,
  overwrite_output_dir=True, past_index=-1, per_device_eval_batch_size=1,
  per_device_train_batch_size=1, predict_with_generate=True, prediction_loss_only=False,
  push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None,
  push_to_hub_token=<PUSH_TO_HUB_TOKEN>, ray_scope=last, remove_unused_columns=True,
  report_to=['tensorboard'], resume_from_checkpoint=None,
  run_name=/root/chatglm/output/adgen-chatglm-6b-pt-128-2e-2, save_on_each_node=False,
  save_steps=1000, save_strategy=steps, save_total_limit=None, seed=42, sharded_ddp=[],
  skip_memory_metrics=True, sortish_sampler=False, tf32=None, torch_compile=False,
  torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None,
  tpu_metrics_debug=False, tpu_num_cores=None, use_ipex=False,
  use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=0,
  weight_decay=0.0, xpu_backend=None,
)
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-efe4449fed184d09/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e...
Downloading data files: 100%|██████████| 2/2 [00:00<00:00, 16384.00it/s]
Traceback (most recent call last):
  File "main.py", line 431, in <module>
    main()
  File "main.py", line 100, in main
    raw_datasets = load_dataset(
  File "/root/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1791, in load_dataset
    builder_instance.download_and_prepare(
  File "/root/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 891, in download_and_prepare
    self._download_and_prepare(
  File "/root/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 964, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 49, in _split_generators
    data_files = dl_manager.download_and_extract(self.config.data_files)
  File "/root/miniconda3/lib/python3.8/site-packages/datasets/download/download_manager.py", line 564, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/root/miniconda3/lib/python3.8/site-packages/datasets/download/download_manager.py", line 442, in download
    self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)
  File "/root/miniconda3/lib/python3.8/site-packages/datasets/download/download_manager.py", line 341, in _record_sizes_checksums
    for url, path in tqdm(
  File "/root/miniconda3/lib/python3.8/site-packages/datasets/utils/logging.py", line 206, in __call__
    return tqdm_lib.tqdm(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/tqdm/asyncio.py", line 21, in __init__
    super(tqdm_asyncio, self).__init__(iterable, *args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 997, in __init__
    raise (
tqdm.std.TqdmKeyError: "Unknown argument(s): {'delay': 5}"
```
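The traceback bottoms out in tqdm rejecting the `delay` keyword that `datasets` passes to its progress bars, which suggests the installed tqdm predates support for that argument. Below is a minimal sketch to check this outside the training script; it is an illustration under that assumption, not a confirmed fix, and it assumes the same Python environment that `trainer.sh` uses:

```python
# Sketch: reproduce the failure in isolation.
# Assumption: the installed tqdm is older than the release that introduced
# the `delay` keyword, so passing it raises TqdmKeyError, exactly as in
# the traceback above.
import tqdm

print("tqdm version:", tqdm.__version__)

try:
    # Newer tqdm accepts delay= (seconds to wait before drawing the bar).
    for _ in tqdm.tqdm(range(3), delay=5):
        pass
    print("tqdm accepts `delay`; the incompatibility must lie elsewhere.")
except tqdm.std.TqdmKeyError as exc:
    print("tqdm is too old for this `datasets` release:", exc)
```

If the check raises, upgrading tqdm (e.g. `pip install -U tqdm`) or moving to a `datasets` release that does not pass `delay` should remove the mismatch; the exact compatible version pair is not verified here.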

Expected Behavior

No response

Steps To Reproduce

As above — the error occurs when running `bash trainer.sh`.

Environment

- OS: Ubuntu 7.5.0-3ubuntu1~18.04
- Python: Python 3.8.5
- Transformers: 4.27.1
- PyTorch: 2.0.0+cu117
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) : True
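The versions of `datasets` and `tqdm`, the two packages named in the traceback, would also be relevant here; assuming both import cleanly in the same interpreter, `python -c "import datasets, tqdm; print(datasets.__version__, tqdm.__version__)"` prints them.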

Anything else?

No response
