
Vicuna 13B forward method is very slow in FSDP mode.

Open · yurkoff-mv opened this issue on Apr 10, 2023 · 4 comments

System Info

  • transformers version: 4.28.0.dev0
  • Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.29
  • Python version: 3.8.10
  • Huggingface_hub version: 0.13.3
  • Safetensors version: not installed
  • PyTorch version (GPU?): 1.13.1+cu117 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: yes
  • Using distributed or parallel set-up in script?: yes (FSDP)

Who can help?

@sgugger, @ArthurZucker, @younesbelkada

Information

  • [X] The official example scripts
  • [ ] My own modified scripts

Tasks

  • [ ] An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • [ ] My own task or dataset (give details below)

Reproduction

from functools import partial

import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy

from transformers import LlamaTokenizer, LlamaForCausalLM
from transformers.models.llama.modeling_llama import LlamaDecoderLayer


# WORLD_RANK, WORLD_SIZE and model_dir are placeholders for the actual rank,
# world size and local model path used when launching the script.
torch.distributed.init_process_group(
    "nccl",
    rank=WORLD_RANK,
    world_size=WORLD_SIZE,
)
llama_auto_wrap_policy = partial(
    transformer_auto_wrap_policy,
    transformer_layer_cls={
        LlamaDecoderLayer,
    },
)

tokenizer = LlamaTokenizer.from_pretrained(model_dir)
model = LlamaForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)

model = FSDP(
    model,
    auto_wrap_policy=llama_auto_wrap_policy,
    device_id=torch.cuda.current_device(),
    # sharding_strategy=sharding_strategy,
)
inputs = tokenizer(["Who is Dalai?"], return_tensors="pt")
logits = model(**inputs).logits[:, -1, :]

The execution time of the forward method is more than a minute.

Expected behavior

The execution time of the forward method should be a few seconds.

yurkoff-mv · Apr 10 '23 11:04
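As a point of reference, the forward latency can be measured a bit more precisely with the sketch below. This is not part of the original report: it reuses the model and tokenizer built above, moves the inputs to the current CUDA device, and adds a warm-up call, since the first forward pass under FSDP also performs one-time lazy setup and is expected to be slower than steady state.

# Timing sketch (assumes the FSDP-wrapped model and tokenizer from above).
from time import perf_counter

device = torch.device(f"cuda:{torch.cuda.current_device()}")
inputs = tokenizer(["Who is Dalai?"], return_tensors="pt").to(device)

with torch.no_grad():
    model(**inputs)               # warm-up: the first FSDP forward also does lazy init
    torch.cuda.synchronize()
    start = perf_counter()
    logits = model(**inputs).logits[:, -1, :]
    torch.cuda.synchronize()      # wait for queued GPU work before stopping the clock
    print(f"steady-state forward time: {perf_counter() - start:.3f} s")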

I also want to attach a link to the discussion topic about the generate method in FSDP mode.

yurkoff-mv · Apr 10 '23 11:04

cc @pacman100

sgugger · Apr 10 '23 12:04

I forgot to mention that I'm running the model on two RTX 3090 GPUs.

yurkoff-mv · Apr 10 '23 19:04
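On a two-GPU setup like this, one thing worth ruling out first is a slow or broken interconnect between the two cards. Below is a minimal standalone NCCL check (my own sketch, not from the thread; the filename nccl_check.py is hypothetical, and it assumes the script is launched with torchrun --nproc_per_node=2 nccl_check.py).

# nccl_check.py: verify that both ranks see their own GPU and can communicate
# over NCCL before loading the 13B model.
import os
import torch
import torch.distributed as dist

local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
dist.init_process_group("nccl")  # rank and world size come from the torchrun env

x = torch.ones(1, device=f"cuda:{local_rank}")
dist.all_reduce(x)  # should finish almost instantly; the result equals the world size
print(f"rank {dist.get_rank()} on {torch.cuda.get_device_name(local_rank)}: {x.item()}")

dist.destroy_process_group()

If this hangs or takes noticeably long, the slowdown is more likely in the interconnect than in transformers or FSDP itself.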

Here is a working example you can try:

from functools import partial
import os
from time import perf_counter

import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy

from transformers import LlamaTokenizer, LlamaForCausalLM
from transformers.models.llama.modeling_llama import LlamaDecoderLayer

model_dir = "<insert your path to model here>"

# torchrun sets LOCAL_RANK and LOCAL_WORLD_SIZE for every process it spawns.
local_rank = int(os.environ["LOCAL_RANK"])
local_world_size = int(os.environ["LOCAL_WORLD_SIZE"])

# Bind each process to its own GPU before initializing NCCL.
torch.cuda.set_device(torch.device(f"cuda:{local_rank}"))

torch.distributed.init_process_group(
    "nccl",
    rank=local_rank,
    world_size=local_world_size,
)

# Shard at the granularity of LlamaDecoderLayer blocks.
llama_auto_wrap_policy = partial(
    transformer_auto_wrap_policy,
    transformer_layer_cls={
        LlamaDecoderLayer,
    },
)

print(torch.cuda.current_device())

tokenizer = LlamaTokenizer.from_pretrained(model_dir)
model = LlamaForCausalLM.from_pretrained(model_dir, torch_dtype=torch.float16, low_cpu_mem_usage=True)

model = FSDP(
    model,
    auto_wrap_policy=llama_auto_wrap_policy,
    device_id=torch.device(f"cuda:{local_rank}"),
    # sharding_strategy=sharding_strategy,
)
inputs = tokenizer(["Who is Dalai?"], return_tensors="pt")

print(inputs)
t1_start = perf_counter()
logits = model(**inputs).logits[:, -1, :]
t1_stop = perf_counter()
print("forward time:", t1_stop - t1_start)
print(torch.cuda.max_memory_allocated() / 1e9)

Run with torchrun --nproc_per_node=2 --master_port=56718 run_forward.py.

For me this prints a forward runtime of ~0.8 seconds on two A100 GPUs and a peak GPU memory of ~14.5 GB (using llama-13b, current transformers main branch).

maxidl · Apr 21 '23 20:04

I think you are getting such good performance because the model is being placed on a single GPU.

yurkoff-mv · May 15 '23 15:05
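One way to check whether the model is actually sharded rather than replicated is to count the parameters each rank holds locally. This is a minimal sketch, assuming the FSDP-wrapped model and the local_rank variable from the example above; with the default full sharding, FSDP keeps only the local shard of each flat parameter outside a forward pass.

# Count the parameters held locally by this rank after FSDP sharding.
local_numel = sum(p.numel() for p in model.parameters())
print(f"rank {local_rank}: {local_numel / 1e9:.2f}B parameters held locally, "
      f"~{local_numel * 2 / 1e9:.1f} GB in fp16")

For a 13B-parameter model fully sharded across two ranks this should come out to roughly 6.5B parameters (about 13 GB in fp16) per rank, which would be consistent with the ~14.5 GB peak memory reported above rather than with the whole model sitting on one GPU.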

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] · Jun 09 '23 15:06