Liger-Kernel
Unable to use FLCE with FSDP+PEFT+embeddings layers
🐛 Describe the bug
When training LoRA layers on the base model and also setting modules_to_save in the LoRA config so that the embedding layers become trainable (my assumption is that it also applies a LoRA linear adapter to the nn.Embedding modules), training fails with the following error:
[rank0]: File "/home/ubuntu/miniconda3/envs/pytorch/lib/python3.11/site-packages/liger_kernel/transformers/fused_linear_cross_entropy.py", line 13, in forward
[rank0]: return LigerFusedLinearCrossEntropyFunction.apply(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/ubuntu/miniconda3/envs/pytorch/lib/python3.11/site-packages/torch/autograd/function.py", line 598, in apply
[rank0]: return super().apply(*args, **kwargs) # type: ignore[misc]
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: RuntimeError: Output 0 of ViewBackward0 is a view and its base or another view of its base has been modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
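For context, this is a generic autograd restriction rather than something Liger-specific; a minimal, self-contained sketch of the same error class (an illustration only, not the Liger/PEFT code path) looks like this:

```python
import torch

# Toy example of the rule the error message describes: views produced by a
# multi-view function (split/chunk/unbind) become unusable for autograd once
# their shared base is modified in place.
base = torch.randn(4, 8, requires_grad=True).clone()  # non-leaf, so in-place ops are permitted
v1, v2 = base.split(2, dim=0)                          # split returns multiple views of `base`
base.mul_(2.0)                                         # in-place modification of the shared base

# Using one of the now-stale views in a differentiable op raises a RuntimeError of the form
# "Output 0 of ... is a view and its base or another view of its base has been modified
# inplace. This view is the output of a function that returns multiple views. ..."
loss = v1.sum()
```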
Reproduce
base_model: NousResearch/Meta-Llama-3.1-8B
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
plugins:
- axolotl.integrations.liger.LigerPlugin
# liger_rope: true
# liger_rms_norm: true
# liger_swiglu: true
liger_fused_linear_cross_entropy: true
datasets:
- path: tatsu-lab/alpaca
type: alpaca
dataset_prepared_path:
val_set_size: 0.005
output_dir: ./outputs/lora-out
sequence_len: 4096
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_modules_to_save:
- embed_tokens
- lm_head
wandb_project: l8b-peft-dora-rslora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
special_tokens:
pad_token: <|end_of_text|>
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_limit_all_gathers: true
fsdp_sync_module_states: true
fsdp_offload_params: true
fsdp_use_orig_params: false
fsdp_cpu_ram_efficient_loading: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sharding_strategy: FULL_SHARD
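For readers not using axolotl, here is a rough Python sketch of what the relevant parts of this config amount to. This is a hedged approximation, not the exact axolotl/LigerPlugin code path; module names follow the config above, and the FSDP wrapping itself is left to the trainer.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model
from liger_kernel.transformers import apply_liger_kernel_to_llama

# Patch only the fused linear cross-entropy, mirroring
# `liger_fused_linear_cross_entropy: true` above.
apply_liger_kernel_to_llama(
    rope=False,
    rms_norm=False,
    swiglu=False,
    cross_entropy=False,
    fused_linear_cross_entropy=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Meta-Llama-3.1-8B", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3.1-8B")

# lora_target_linear: true  +  lora_modules_to_save: [embed_tokens, lm_head]
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    modules_to_save=["embed_tokens", "lm_head"],  # makes these full layers trainable
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# FSDP wrapping (full_shard, auto_wrap on LlamaDecoderLayer) is then applied by the
# trainer/accelerate according to the fsdp/fsdp_config entries above.
```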
Versions
No response
Thank you @winglian for raising the issue! Looks like FLCE fails at torch autograd when we make embedding + peft layer trainable together. We are happy to take a look! Would be nice if you can provide complete reproducible script. Thanks!
Can we create a list of what is working and what is not, with the corresponding dates, and keep it updated? For example, the DPO training labels issue, the PEFT + embedding training issue, etc. Also, indicate which ones are going to be fixed and which ones are not.
Any progress out there? I have the same problem with torch 2.5.0 and liger-kernel 0.3.1 when FSDP training involves the lm_head layer:
[rank4]: File "/usr/local/lib/python3.10/dist-packages/liger_kernel/transformers/fused_linear_cross_entropy.py", line 13, in forward
[rank4]: return LigerFusedLinearCrossEntropyFunction.apply(
[rank4]: File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 575, in apply
[rank4]: return super().apply(*args, **kwargs) # type: ignore[misc]
[rank4]: RuntimeError: Output 0 of ViewBackward0 is a view and its base or another view of its base has been modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
It seems this problem arises when
fsdp_use_orig_params: false
fsdp_use_orig_params: If True, allows non-uniform requires_grad during init, which means support for interspersed frozen and trainable parameters. This setting is useful in cases such as parameter-efficient fine-tuning as discussed in [this post](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019). This option also allows one to have multiple optimizer param groups. This should be True when creating an optimizer before preparing/wrapping the model with FSDP.
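To make that concrete, here is a toy sketch of where use_orig_params comes in when frozen and trainable parameters are mixed, as in PEFT. This uses plain PyTorch FSDP rather than the accelerate/axolotl wrapping path, and assumes a torchrun launch.

```python
import os
import torch
import torch.distributed as dist
from torch import nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Assumes launch via: torchrun --nproc_per_node=<N> this_script.py
dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# Toy stand-in for a PEFT model: a frozen layer next to a trainable one,
# i.e. non-uniform requires_grad among the parameters FSDP will flatten.
model = nn.Sequential(nn.Linear(16, 16), nn.Linear(16, 16)).cuda()
for p in model[0].parameters():
    p.requires_grad_(False)

# use_orig_params=False flattens parameters and expects uniform requires_grad per
# flat parameter; use_orig_params=True keeps the original parameters visible to
# autograd/optimizers, which is what interspersed frozen/trainable setups
# (LoRA + modules_to_save) need.
fsdp_model = FSDP(model, use_orig_params=True)
```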
When it's True, there is a different problem:
[rank4]: File "/usr/local/lib/python3.10/dist-packages/liger_kernel/ops/fused_linear_cross_entropy.py", line 221, in forward
[rank4]: loss, grad_input, grad_weight, grad_bias = fused_linear_cross_entropy_forward(
[rank4]: File "/usr/local/lib/python3.10/dist-packages/liger_kernel/ops/fused_linear_cross_entropy.py", line 63, in fused_linear_cross_entropy_forward
[rank4]: logits_chunk = _input_chunk @ weight.t() # chunk_size x V
[rank4]: RuntimeError: size mismatch, got input (2), mat (2x4096), vec (65667072)
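For what it's worth, the shapes in that last error look consistent with the kernel receiving a flattened, sharded FSDP parameter instead of the 2-D lm_head weight: 128256 × 4096 / 8 ranks = 65,667,072 elements (an inference from the numbers in the traceback, not confirmed). A tiny illustration of how a 1-D weight turns the chunked matmul into a failing matrix-vector product:

```python
import torch

hidden_size = 4096
input_chunk = torch.randn(2, hidden_size)  # chunk_size x H, as in fused_linear_cross_entropy_forward

# Hypothetical: what the weight looks like if FSDP hands over a flattened shard
# (128256 * 4096 / 8 = 65,667,072 elements) rather than the (V, H) lm_head matrix.
flat_weight = torch.randn(65_667_072)

# .t() is a no-op on a 1-D tensor, so this becomes a matrix-vector product and
# fails with a size mismatch like the one in the traceback above.
logits_chunk = input_chunk @ flat_weight.t()
```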
FYI @winglian