
`hub_utils.py` assumes a different sharding convention

EIFY opened this issue · 3 comments

🐛 Bug

`hub_utils.py` at commit baa4e6d840042404e51a60efbe9d65ad62c80fca (current main) assumes a sharding convention different from the one followed by the sharded checkpoint files on disk.

To Reproduce

  1. Set up everything as before:
$ ls /home/jason_chou/redspot_home/66b/
dict.txt         gpt2-vocab.json          reshard-model_part-1.pt  reshard-model_part-3.pt  reshard-model_part-5.pt  reshard-model_part-7.pt
gpt2-merges.txt  reshard-model_part-0.pt  reshard-model_part-2.pt  reshard-model_part-4.pt  reshard-model_part-6.pt  restored.pt
$
$ cat metaseq/service/constants.py
# Copyright (c) Meta Platforms, Inc. and affiliates. All Rights Reserved.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

import os

MAX_SEQ_LEN = 2048
BATCH_SIZE = 2048  # silly high bc we dynamically batch by MAX_BATCH_TOKENS
MAX_BATCH_TOKENS = 3072
DEFAULT_PORT = 6010
MODEL_PARALLEL = 8
TOTAL_WORLD_SIZE = 8
MAX_BEAM = 16

try:
    # internal logic denoting where checkpoints are in meta infrastructure
    from metaseq_internal.constants import CHECKPOINT_FOLDER
except ImportError:
    # CHECKPOINT_FOLDER should point to a shared drive (e.g. NFS) where the
    # checkpoints from S3 are stored. As an example:
    # CHECKPOINT_FOLDER = "/example/175B/reshard_no_os"
    # $ ls /example/175B/reshard_no_os
    # reshard-model_part-0.pt
    # reshard-model_part-1.pt
    # reshard-model_part-2.pt
    # reshard-model_part-3.pt
    # reshard-model_part-4.pt
    # reshard-model_part-5.pt
    # reshard-model_part-6.pt
    # reshard-model_part-7.pt
    CHECKPOINT_FOLDER = "/home/jason_chou/redspot_home/66b/"

# tokenizer files
BPE_MERGES = os.path.join(CHECKPOINT_FOLDER, "gpt2-merges.txt")
BPE_VOCAB = os.path.join(CHECKPOINT_FOLDER, "gpt2-vocab.json")
MODEL_FILE = os.path.join(CHECKPOINT_FOLDER, "reshard.pt")


LAUNCH_ARGS = [
    f"--model-parallel-size {MODEL_PARALLEL}",
    f"--distributed-world-size {TOTAL_WORLD_SIZE}",
    # If using FSDP shards, replace ddp-backend and add use-sharded-state
    "--ddp-backend fully_sharded",
    "--use-sharded-state",
    "--task language_modeling",
    f"--bpe-merges {BPE_MERGES}",
    f"--bpe-vocab {BPE_VOCAB}",
    "--bpe hf_byte_bpe",
    f"--merges-filename {BPE_MERGES}",  # TODO(susanz): hack for getting interactive_hosted working on public repo
    f"--vocab-filename {BPE_VOCAB}",  # TODO(susanz): hack for getting interactive_hosted working on public repo
    f"--path {MODEL_FILE}",
    "--beam 1",
    "--checkpoint-shard-count 1",
    f"--batch-size {BATCH_SIZE}",
    f"--buffer-size {BATCH_SIZE * MAX_SEQ_LEN}",
    f"--max-tokens {BATCH_SIZE * MAX_SEQ_LEN}",
    "/tmp",  # required "data" argument.
]

# Optional arg overrides which influence model loading during inference
INFERENCE_ARG_OVERRIDES = {}
  2. Run metaseq-api-local
  3. See error:
(...)
FileNotFoundError: [Errno 2] No such file or directory: '/home/jason_chou/redspot_home/66b/reshard.pt'

Taking a closer look at the current hub_utils.py (https://github.com/facebookresearch/metaseq/blob/baa4e6d840042404e51a60efbe9d65ad62c80fca/metaseq/hub_utils.py#L121-L125), it seems to assume a sharding convention different from the one described in the constants.py comment and assumed by the OPT download path. Indeed, changing the linked lines to the following gets around it:

        if len(sharded_files) > 0 and "reshard" in sharded_files[0]:
            # We are loading a sharded checkpoint
            suffix = f"-model_part-{r}"
        else:
            suffix += ""

With that, metaseq-api-local works.
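
To make the mismatch concrete, here is a small sketch in plain Python (illustrative only, not the actual metaseq loading code) of the filename each convention produces for model-parallel rank r:

import os

CHECKPOINT_FOLDER = "/home/jason_chou/redspot_home/66b/"
r = 0  # model-parallel rank

# Convention actually on disk (OPT download path / constants.py comment):
# one file per model-parallel part.
on_disk = os.path.join(CHECKPOINT_FOLDER, f"reshard-model_part-{r}.pt")

# Path hub_utils.py ends up opening when it never applies the
# "-model_part-{r}" suffix: the bare MODEL_FILE from constants.py.
looked_for = os.path.join(CHECKPOINT_FOLDER, "reshard.pt")

print(on_disk)     # .../reshard-model_part-0.pt  -> exists
print(looked_for)  # .../reshard.pt               -> the FileNotFoundError above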

Expected behavior

metaseq-api-local just works...?

Environment

  • metaseq Version: baa4e6d840042404e51a60efbe9d65ad62c80fca (current main)
  • PyTorch Version: 1.12.1+cu113
  • OS: Ubuntu 18.04.6 LTS
  • How you installed metaseq: pip
  • Build command you used (if compiling from source): N.A.
  • Python version: 3.10
  • CUDA/cuDNN version: CUDA 12.0
  • GPU models and configuration: 8 x V100 SXM2 32 GB

EIFY (Dec 16, 2022)

@punitkoura Could you take a look?

EIFY (Jan 3, 2023)

@EIFY I think this issue might be related to #407, which you encountered before, where the distributed process groups were not initialized correctly. Have you tried removing "--distributed-port 13000" from LAUNCH_ARGS?
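
A minimal sketch of that change, assuming your local constants.py still carries the port override from #407 (the values below are hypothetical):

# Hypothetical LAUNCH_ARGS still carrying the override from #407:
LAUNCH_ARGS = [
    "--model-parallel-size 8",
    "--distributed-world-size 8",
    "--distributed-port 13000",  # <- the entry to remove
    "--task language_modeling",
]

# Dropping the port override lets the distributed process groups
# be initialized the default way:
LAUNCH_ARGS = [arg for arg in LAUNCH_ARGS if not arg.startswith("--distributed-port")]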

tangbinh (Jan 5, 2023)

> @EIFY I think this issue might be related to #407, which you encountered before, where the distributed process groups were not initialized correctly. Have you tried removing "--distributed-port 13000" from LAUNCH_ARGS?

@tangbinh You are right! Sorry, I didn't realize that because the error message was different. We still have the sharding convention issue, though, and I just found out that line 123 of hub_utils.py (in the snippet linked above) has to be

            suffix = f"-model_part-{r}"

instead of

            suffix += f"-model_part-{r}"

otherwise, we get

$ metaseq-api-local
2023-01-06 23:56:41 | INFO | metaseq.distributed.utils | initialized host i-0e514830a5eba94e8 as rank 0
2023-01-06 23:56:41 | INFO | metaseq.distributed.utils | initialized host i-0e514830a5eba94e8 as rank 6
2023-01-06 23:56:41 | INFO | metaseq.distributed.utils | initialized host i-0e514830a5eba94e8 as rank 7
2023-01-06 23:56:41 | INFO | metaseq.distributed.utils | initialized host i-0e514830a5eba94e8 as rank 1
2023-01-06 23:56:41 | INFO | metaseq.distributed.utils | initialized host i-0e514830a5eba94e8 as rank 4
2023-01-06 23:56:41 | INFO | metaseq.distributed.utils | initialized host i-0e514830a5eba94e8 as rank 2
2023-01-06 23:56:41 | INFO | metaseq.distributed.utils | initialized host i-0e514830a5eba94e8 as rank 3
2023-01-06 23:56:41 | INFO | metaseq.distributed.utils | initialized host i-0e514830a5eba94e8 as rank 5
> initializing tensor model parallel with size 8
> initializing pipeline model parallel with size 1
> initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 2719 and data parallel seed: 1
2023-01-06 23:56:45 | INFO | metaseq.hub_utils | loading model(s) from /home/jason_chou/redspot_home/66b/reshard.pt
Traceback (most recent call last):
  File "/home/jason_chou/.conda/envs/user/bin/metaseq-api-local", line 8, in <module>
    sys.exit(cli_main())
  File "/home/default_user/metaseq/metaseq/cli/interactive_hosted.py", line 389, in cli_main
    distributed_utils.call_main(cfg, worker_main, namespace_args=args)
  File "/home/default_user/metaseq/metaseq/distributed/utils.py", line 287, in call_main
    return _spawn_helper(main, cfg, kwargs)
  File "/home/default_user/metaseq/metaseq/distributed/utils.py", line 265, in _spawn_helper
    retval = distributed_main(-1, main, cfg, kwargs)
  File "/home/default_user/metaseq/metaseq/distributed/utils.py", line 227, in distributed_main
    retval = main(cfg, **kwargs)
  File "/home/default_user/metaseq/metaseq/cli/interactive_hosted.py", line 188, in worker_main
    models = generator.load_model()  # noqa: F841
  File "/home/default_user/metaseq/metaseq/hub_utils.py", line 167, in load_model
    models, _model_args, _task = _load_checkpoint()
  File "/home/default_user/metaseq/metaseq/hub_utils.py", line 152, in _load_checkpoint
    return checkpoint_utils.load_model_ensemble_and_task(
  File "/home/default_user/metaseq/metaseq/checkpoint_utils.py", line 489, in load_model_ensemble_and_task
    state = load_checkpoint_to_cpu(filename, arg_overrides)
  File "/home/default_user/metaseq/metaseq/checkpoint_utils.py", line 423, in load_checkpoint_to_cpu
    paths_to_load = get_paths_to_load(path, suffix="shard")
  File "/home/default_user/metaseq/metaseq/checkpoint_utils.py", line 377, in get_paths_to_load
    if not _is_checkpoint_sharded(checkpoint_files):
  File "/home/default_user/metaseq/metaseq/checkpoint_utils.py", line 319, in _is_checkpoint_sharded
    sd = torch_load_cpu(checkpoint_files[0])
  File "/home/default_user/metaseq/metaseq/file_io/__init__.py", line 57, in torch_load_cpu
    state = torch.load(path, map_location=torch.device("cpu"))
  File "/home/default_user/.conda/envs/user/lib/python3.10/site-packages/torch/serialization.py", line 699, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/default_user/.conda/envs/user/lib/python3.10/site-packages/torch/serialization.py", line 230, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/default_user/.conda/envs/user/lib/python3.10/site-packages/torch/serialization.py", line 211, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/home/jason_chou/redspot_home/66b/reshard-model_part-0-model_part-0.pt'

I have edited the title and comment accordingly.

EIFY (Jan 7, 2023)

@EIFY I think you needed to set suffix = f"-model_part-{r}" previously because the distributed process groups were not initialized correctly. The model_part string is supposed to be added to the suffix during this initialization, so you don't need to add it yourself. This explains why you got '/home/jason_chou/redspot_home/66b/reshard-model_part-0-model_part-0.pt' after fixing the distributed initialization.
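
As a toy illustration (plain Python, not the actual metaseq code path) of how that double append produces the path in the traceback above:

r = 0  # model-parallel rank

# The distributed setup already puts the model_part suffix in place:
suffix = f"-model_part-{r}"

# Appending it again, as the patched line did, duplicates it:
suffix += f"-model_part-{r}"

print(f"reshard{suffix}.pt")  # reshard-model_part-0-model_part-0.pt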

I'm closing the issue now but please let us know if you need further help.

tangbinh (Apr 5, 2023)