
Error with `from transformers import Seq2SeqTrainingArguments`

Open · freeexit2002 opened this issue 1 year ago · 1 comment

I would greatly appreciate your help with this error. Here is the [tutorial](https://huggingface.co/blog/fine-tune-whisper) that I followed. Thanks in advance.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-el",  # change to a repo name of your choice
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,  # increase by 2x for every 2x decrease in batch size
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=4000,
    gradient_checkpointing=True,
    fp16=True,
    evaluation_strategy="steps",
    per_device_eval_batch_size=8,
    predict_with_generate=True,
    generation_max_length=225,
    save_steps=1000,
    eval_steps=1000,
    logging_steps=25,
    report_to=["tensorboard"],
    load_best_model_at_end=True,
    metric_for_best_model="wer",
    greater_is_better=False,
    push_to_hub=True,
)
```



```
ImportError                               Traceback (most recent call last)
in <cell line: 3>()
      1 from transformers import Seq2SeqTrainingArguments
      2
----> 3 training_args = Seq2SeqTrainingArguments(
      4     output_dir="./whisper-small-hi",  # change to a repo name of your choice
      5     per_device_train_batch_size=16,

4 frames

/usr/local/lib/python3.9/dist-packages/transformers/training_args_seq2seq.py in __init__(self, output_dir, overwrite_output_dir, do_train, do_eval, do_predict, evaluation_strategy, prediction_loss_only, per_device_train_batch_size, per_device_eval_batch_size, per_gpu_train_batch_size, per_gpu_eval_batch_size, gradient_accumulation_steps, eval_accumulation_steps, eval_delay, learning_rate, weight_decay, adam_beta1, adam_beta2, adam_epsilon, max_grad_norm, num_train_epochs, max_steps, lr_scheduler_type, warmup_ratio, warmup_steps, log_level, log_level_replica, log_on_each_node, logging_dir, logging_strategy, logging_first_step, logging_steps, logging_nan_inf_filter, save_strategy, save_steps, save_total_limit, save_safetensors, save_on_each_node, no_cuda, use_mps_device, seed, data_seed, jit_mode_eval, use_ipex, bf16, fp16, fp16_opt_level, half_precision_backend, bf16_full_eval, fp16_full_eval, tf32, local_rank, xpu_backend, tpu_num_cores, tpu_metrics_debug, debug, dataloader_drop_last, eval_steps, dataloader_num_workers, past_index, run_name, disable_tqdm, remove_unused_columns, label_names, load_best_model_at_end, metric_for_best_model, greater_is_better, ignore_data_skip, sharded_ddp, fsdp, fsdp_min_num_params, fsdp_config, fsdp_transformer_layer_cls_to_wrap, deepspeed, label_smoothing_factor, optim, optim_args, adafactor, group_by_length, length_column_name, report_to, ddp_find_unused_parameters, ddp_bucket_cap_mb, dataloader_pin_memory, skip_mem...

/usr/local/lib/python3.9/dist-packages/transformers/training_args.py in __post_init__(self)
   1253             self.framework == "pt"
   1254             and is_torch_available()
-> 1255             and (self.device.type != "cuda")
   1256             and (get_xla_device_type(self.device) != "GPU")
   1257             and (self.fp16 or self.fp16_full_eval)

/usr/local/lib/python3.9/dist-packages/transformers/training_args.py in device(self)
   1631         """
   1632         requires_backends(self, ["torch"])
-> 1633         return self._setup_devices
   1634
   1635     @property

/usr/local/lib/python3.9/dist-packages/transformers/utils/generic.py in __get__(self, obj, objtype)
     52         cached = getattr(obj, attr, None)
     53         if cached is None:
---> 54             cached = self.fget(obj)
     55             setattr(obj, attr, cached)
     56         return cached

/usr/local/lib/python3.9/dist-packages/transformers/training_args.py in _setup_devices(self)
   1533         logger.info("PyTorch: setting up devices")
   1534         if not is_sagemaker_mp_enabled() and not is_accelerate_available(check_partial_state=True):
-> 1535             raise ImportError(
   1536                 "Using the Trainer with PyTorch requires accelerate: Run pip install --upgrade accelerate"
   1537             )

ImportError: Using the Trainer with PyTorch requires accelerate: Run pip install --upgrade accelerate
```


NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the "Open Examples" button below.
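
For context, the final frame shows where the check happens: `_setup_devices` in `training_args.py` calls `is_accelerate_available(...)` before touching any device, so the failure is about the `accelerate` package being missing (or too old) in the Colab environment, not about the `Seq2SeqTrainingArguments` values themselves. A minimal sketch of a sanity check you can run in a fresh cell (the printout is informational only; the exact minimum `accelerate` version required depends on your `transformers` release):

```python
# Sanity check: is accelerate importable, and which version is installed?
# This only mirrors the availability check transformers performs; it does
# not fix anything by itself.
import importlib.util

if importlib.util.find_spec("accelerate") is None:
    print("accelerate is not installed; install it and restart the runtime")
else:
    import accelerate
    print("accelerate version:", accelerate.__version__)
```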

freeexit2002 · Apr 25 '23, 21:04

Firstly, run:

```
!pip uninstall -y accelerate
!pip install accelerate==0.23.0
```

After that, you have to restart the runtime.
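
Spelled out as a single Colab cell, here is a minimal sketch of that fix; the `==0.23.0` pin just mirrors the suggestion above (a plain `--upgrade` also satisfies the check), and the `os.kill` line is simply one common way to force the restart, equivalent to Runtime > Restart runtime in the menu:

```python
# Reinstall accelerate, then force-restart the Colab runtime so the
# already-imported transformers modules pick up the new package.
!pip uninstall -y accelerate
!pip install accelerate==0.23.0   # or: !pip install --upgrade accelerate

import os
os.kill(os.getpid(), 9)  # crashes the kernel on purpose; Colab reconnects with a fresh runtime
```

After the restart, re-run the notebook cells from the top; the `Seq2SeqTrainingArguments` cell should then construct without the ImportError.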

bbietzsche · Oct 19 '23, 07:10