Fine-tuning wav2vec 2.0 with `torch.compile`
System Info
- `transformers` version: 4.28.1
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.0
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Who can help?
No response
Information
- [x] The official example scripts
- [ ] My own modified scripts
Tasks
- [x] An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
Reproduction
python run_audio_classification.py \
--model_name_or_path facebook/wav2vec2-base \
--dataset_name superb \
--dataset_config_name ks \
--output_dir wav2vec2-base-ft-keyword-spotting \
--overwrite_output_dir \
--remove_unused_columns False \
--do_train \
--do_eval \
--fp16 \
--learning_rate 3e-5 \
--max_length_seconds 1 \
--attention_mask False \
--warmup_ratio 0.1 \
--num_train_epochs 5 \
--per_device_train_batch_size 32 \
--gradient_accumulation_steps 4 \
--per_device_eval_batch_size 32 \
--dataloader_num_workers 4 \
--logging_strategy steps \
--logging_steps 10 \
--evaluation_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--metric_for_best_model accuracy \
--save_total_limit 3 \
--seed 0 \
+ --torch_compile True
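(For reference, as far as I understand, `--torch_compile True` corresponds to the `torch_compile` field of `TrainingArguments`, which makes `Trainer` wrap the model with `torch.compile`. A rough programmatic equivalent, with all other arguments omitted:)

```python
from transformers import TrainingArguments

# sketch: the same setting expressed in code (all other arguments omitted)
args = TrainingArguments(
    output_dir="wav2vec2-base-ft-keyword-spotting",
    torch_compile=True,  # Trainer calls torch.compile on the model before training
)
```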
Expected behavior
I followed the example to fine-tune wav2vec 2.0 for audio classification, with the exception of using `torch.compile`, aiming to get faster training. However, I ran into the following issue.
Error Log
[INFO|trainer.py:1769] 2023-04-19 05:28:50,832 >> ***** Running training *****
[INFO|trainer.py:1770] 2023-04-19 05:28:50,832 >> Num examples = 51,094
[INFO|trainer.py:1771] 2023-04-19 05:28:50,832 >> Num Epochs = 5
[INFO|trainer.py:1772] 2023-04-19 05:28:50,832 >> Instantaneous batch size per device = 32
[INFO|trainer.py:1773] 2023-04-19 05:28:50,832 >> Total train batch size (w. parallel, distributed & accumulation) = 128
[INFO|trainer.py:1774] 2023-04-19 05:28:50,833 >> Gradient Accumulation steps = 4
[INFO|trainer.py:1775] 2023-04-19 05:28:50,833 >> Total optimization steps = 1,995
[INFO|trainer.py:1776] 2023-04-19 05:28:50,834 >> Number of trainable parameters = 90,371,212
0%| | 0/1995 [00:00<?, ?it/s]/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/feature_extraction_utils.py:165: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
tensor = as_tensor(value)
[warning above repeated three more times]
[2023-04-19 05:28:54,741] torch._inductor.utils: [WARNING] using triton random, expect difference from eager
Traceback (most recent call last):
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 670, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/debug_utils.py", line 1055, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/__init__.py", line 1390, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 455, in compile_fx
return aot_autograd(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/backends/common.py", line 48, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2805, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2498, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1713, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2087, in aot_dispatch_autograd
fx_g = make_fx(joint_forward_backward, aot_config.decompositions)(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 714, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 443, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 778, in trace
(self.create_arg(fn(*args)),),
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 652, in flatten_fn
tree_out = root_fn(*tree_args)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1156, in traced_joint
return functionalized_f_helper(primals, tangents)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1108, in functionalized_f_helper
f_outs = flat_fn_no_input_mutations(fn, f_primals, f_tangents, meta, keep_input_mutations)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1076, in flat_fn_no_input_mutations
outs = flat_fn_with_synthetic_bases_expanded(fn, primals, primals_after_cloning, maybe_tangents, meta, keep_input_mutations)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1048, in flat_fn_with_synthetic_bases_expanded
outs = forward_or_joint(fn, primals_before_cloning, primals, maybe_tangents, meta, keep_input_mutations)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1017, in forward_or_joint
backward_out = torch.autograd.grad(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/autograd/__init__.py", line 269, in grad
return handle_torch_function(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/overrides.py", line 1534, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_inductor/overrides.py", line 38, in __torch_function__
return func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/autograd/__init__.py", line 303, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 487, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 512, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 345, in proxy_call
out = func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_ops.py", line 287, in __call__
return self._op(*args, **kwargs or {})
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 987, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1162, in dispatch
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 453, in index_tensor
check_no_bool_index_tensors(func, *args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 432, in check_no_bool_index_tensors
raise DynamicOutputShapeException(func)
torch._subclasses.fake_tensor.DynamicOutputShapeException: aten.index.Tensor
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/wilson_bookbotkids_com/run_audio_classification.py", line 418, in <module>
main()
File "/home/wilson_bookbotkids_com/run_audio_classification.py", line 392, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 2699, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 2731, in compute_loss
outputs = model(**inputs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1817, in forward
outputs = self.wav2vec2(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1316, in forward
hidden_states = self._mask_hidden_states(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1249, in _mask_hidden_states
if not getattr(self.config, "apply_spec_augment", True):
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1259, in <graph break in _mask_hidden_states>
mask_time_indices = _compute_mask_indices(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1266, in <graph break in _mask_hidden_states>
mask_time_indices = torch.tensor(mask_time_indices, device=hidden_states.device, dtype=torch.bool)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 337, in catch_errors
return callback(frame, cache_size, hooks)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 445, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1726, in run
super().run()
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in run
and self.step()
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 540, in step
getattr(self, inst.opname)(inst)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1792, in RETURN_VALUE
self.output.compile_subgraph(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 517, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 588, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 675, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised DynamicOutputShapeException: aten.index.Tensor
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
I suspect that wav2vec 2.0 is not yet supported by `torch.compile` in PyTorch 2.0 and needs some modification to ensure compatibility. The same error occurred when fine-tuning for automatic speech recognition.
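For what it's worth, here is a minimal reduction (my own sketch, not code from the modelling file) of the data-dependent indexing pattern the traceback points at: boolean advanced indexing yields an output whose shape depends on tensor values, which the compiler's fake tensors cannot represent.

```python
import torch

@torch.compile(fullgraph=True)
def select(x, mask):
    # the output length depends on how many entries of `mask` are True,
    # i.e. on tensor values -> dynamic output shape for aten.index.Tensor
    return x[mask]

x = torch.randn(4, 8)
select(x, x > 0)  # fails to compile on PyTorch 2.0.0
```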
cc @sanchit-gandhi
Hi @w11wo, thanks for raising this issue!
Please note that whilst we aim to support a wide variety of use cases with our examples, `torch_compile` is an experimental flag and not one we guarantee will work for all of our models, as support is progressively rolled out in PyTorch.
Hi @amyeroberts, no worries and thanks for the heads up. Looking forward to seeing wav2vec 2.0 supported. Cheers.
Hey @w11wo! Sorry for the late reply here and thanks for the detailed issue description! I had a quick look, and the issue seems to lie in the `_compute_mask_indices` function:
https://github.com/huggingface/transformers/blob/4baa34c18f18274fe028ad5a5511ea3fba9eeece/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L132
The function is both dynamic and implemented in NumPy - we'd need to make it static (fixed shapes) for it to be compatible with torch compile. I sadly won't have time to look into this myself, but feel free to open a PR if you want to take a stab at updating it!
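For anyone picking this up, the rough direction would be a pure-PyTorch variant that draws a fixed number of mask spans, so no shape depends on tensor values. An untested sketch (simplified compared to the real `_compute_mask_indices`, which also handles attention masks and per-sequence lengths):

```python
import torch

def compute_mask_indices_static(batch_size, seq_len, mask_prob=0.05, mask_length=10, device="cpu"):
    """Static-shape SpecAug-style mask: every sequence gets the same fixed
    number of spans, so all tensor shapes are value-independent."""
    num_spans = max(1, int(mask_prob * seq_len / mask_length))  # plain Python int, fixed
    # random span start positions, shape (batch_size, num_spans)
    starts = torch.randint(0, seq_len - mask_length + 1, (batch_size, num_spans), device=device)
    # expand each start into mask_length consecutive indices
    offsets = torch.arange(mask_length, device=device)
    span_idx = (starts.unsqueeze(-1) + offsets).reshape(batch_size, -1)
    mask = torch.zeros(batch_size, seq_len, dtype=torch.bool, device=device)
    mask.scatter_(1, span_idx, True)  # overlapping spans simply merge
    return mask
```

Applying the mask would then also need to avoid boolean assignment (`hidden_states[mask] = ...`), e.g. via `torch.where(mask.unsqueeze(-1), masked_embed, hidden_states)`, so that no `aten.index` op with a bool index is emitted.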
In the meantime, you can set SpecAug to 0 to avoid calling this dynamic function - you'll lose regularisation on the feature encoder outputs, but you should be able to torch compile the model. To do this, you simply need to set `apply_spec_augment` to `False` in the config: https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self/blob/54074b1c16f4de6a5ad59affb4caa8f2ea03a119/config.json#L4
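Roughly like this (a hypothetical snippet - the example script builds its config internally, so you'd adapt the idea there):

```python
import torch
from transformers import AutoConfig, AutoModelForAudioClassification

config = AutoConfig.from_pretrained("facebook/wav2vec2-base")
config.apply_spec_augment = False  # skip the dynamic _compute_mask_indices path
model = AutoModelForAudioClassification.from_pretrained("facebook/wav2vec2-base", config=config)
model = torch.compile(model)  # should now trace without the dynamic-shape error
```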
cc @hollance
Hey @w11wo - any luck here? Did it work with specaug set to 0?
Hi @sanchit-gandhi, unfortunately I haven't been able to test it out without SpecAugment, since my use case requires it to be used. I will try and test it out when I can.
Hey @w11wo - sure, sounds good! The offer still stands for opening a PR to fix this if you feel like having a go at re-working the SpecAug logic in the modelling file, think this could make for a nice PR :)
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Extending the offer of opening a PR to fix the SpecAug logic in the modelling file to the community! It would be a nice PR addition to re-work the SpecAug function so that it's compatible with torch compile (note that `torch.compile` support is not guaranteed for the transformers library, but it is a nice feature if it can be done without backwards-breaking changes).
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.