
[BUG] Wrong output for batch input for OPT model inference.

Open yingapple opened this issue 2 years ago • 4 comments

Describe the bug I hit a problem when using batch inference with an OPT model: only the first output of the batch is correct, and the remaining outputs are all wrong. Could you please tell me what is going wrong? The result looks like this:

[screenshot: batch generation output where only the first sequence is correct]
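For reference, here is a minimal sketch that isolates what I am seeing. It assumes the public facebook/opt-1.3b checkpoint as a stand-in for my local /mnt/opt-13b path and is launched with the deepspeed launcher; all four rows of the batch use the same prompt, so every row should decode to the same continuation, but only the first one does.

# launched with: deepspeed repro_opt.py   (file name is just an example)
import torch
import deepspeed
from transformers import AutoTokenizer, OPTForCausalLM

# assumption: a small public OPT checkpoint stands in for my local /mnt/opt-13b
model_name = "facebook/opt-1.3b"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = OPTForCausalLM.from_pretrained(model_name).half().cuda()

# same init_inference call as in my full script below, but on a single GPU
model = deepspeed.init_inference(model,
                                 mp_size=1,
                                 dtype=torch.half,
                                 replace_method='auto',
                                 replace_with_kernel_inject=True).module

prompt = "In 1991, the remains of Russian Tsar Nicholas II and his family"
batch = tokenizer([prompt] * 4, return_tensors="pt").to("cuda")

with torch.no_grad():
    out = model.generate(**batch,
                         max_length=batch["input_ids"].shape[1] + 20,
                         do_sample=False,
                         num_beams=1)

# expected: four identical continuations; observed: only index 0 is correct
for i, text in enumerate(tokenizer.batch_decode(out, skip_special_tokens=True)):
    print(i, text)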

@RezaYazdaniAminabadi may I know your thoughts on this issue?

yingapple avatar Sep 01 '22 08:09 yingapple

Any update on this issue?

yingapple avatar Sep 02 '22 06:09 yingapple

@yingapple can you please confirm whether this is only related to the OPT model or you have observed a similar issue with other models as well? Could you also please provide us with the code you are running? Thanks.

arashb avatar Sep 02 '22 15:09 arashb

> @yingapple can you please confirm whether this is only related to the OPT model or you have observed a similar issue with other models as well? Could you also please provide us with the code you are running? Thanks.

The code I am running is:

import argparse
import logging
import time
import numpy as np
import torch
from transformers.models.gpt2.modeling_gpt2 import GPT2Block as gpt2_transformer
from transformers.models.opt.modeling_opt import OPTDecoderLayer as opt_transformer
from transformers import (
    CTRLLMHeadModel,
    CTRLTokenizer,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    GPTNeoModel,
    OpenAIGPTLMHeadModel,
    OpenAIGPTTokenizer,
    TransfoXLLMHeadModel,
    TransfoXLTokenizer,
    XLMTokenizer,
    XLMWithLMHeadModel,
    XLNetLMHeadModel,
    XLNetTokenizer,
    OPTForCausalLM
)
import os

logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s -   %(message)s",
    datefmt="%m/%d/%Y %H:%M:%S",
    level=logging.INFO,
)

#os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

logger = logging.getLogger(__name__)

MAX_LENGTH = int(10000)  # Hardcoded max length to avoid infinite loop



local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))

MODEL_CLASSES = {
    "gpt2": (GPT2LMHeadModel, GPT2Tokenizer),
    "gptneo": (GPTNeoModel, GPT2Tokenizer),
    "ctrl": (CTRLLMHeadModel, CTRLTokenizer),
    "openai-gpt": (OpenAIGPTLMHeadModel, OpenAIGPTTokenizer),
    "xlnet": (XLNetLMHeadModel, XLNetTokenizer),
    "transfo-xl": (TransfoXLLMHeadModel, TransfoXLTokenizer),
    "xlm": (XLMWithLMHeadModel, XLMTokenizer),
    "opt": (OPTForCausalLM, GPT2Tokenizer)


PREFIX = """In 1991, the remains of Russian Tsar Nicholas II and his family
(except for Alexei and Maria) are discovered.
The voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the
remainder of the story. 1883 Western Siberia,
a young Grigori Rasputin is asked by his father and a group of men to perform magic.
Rasputin has a vision and denounces one of the men as a horse thief. Although his
father initially slaps him for making such an accusation, Rasputin watches as the
man is chased outside and beaten. Twenty years later, Rasputin sees a vision of
the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous,
with people, even a bishop, begging for his blessing. """


lrc_prompt = PREFIX

def set_seed(args):
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    if args.n_gpu > 0:
        torch.cuda.manual_seed_all(args.seed)


def prepare_ctrl_input(args, _, tokenizer, prompt_text):
    if args.temperature > 0.7:
        logger.info("CTRL typically works better with lower temperatures (and lower top_k).")

    encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False)
    if not any(encoded_prompt[0] == x for x in tokenizer.control_codes.values()):
        logger.info("WARNING! You are not starting your generation from a control code so you won't get good results")
    return prompt_text

def prepare_xlm_input(args, model, tokenizer, prompt_text):
    # kwargs = {"language": None, "mask_token_id": None}

    # Set the language
    use_lang_emb = hasattr(model.config, "use_lang_emb") and model.config.use_lang_emb
    if hasattr(model.config, "lang2id") and use_lang_emb:
        available_languages = model.config.lang2id.keys()
        if args.xlm_language in available_languages:
            language = args.xlm_language
        else:
            language = None
            while language not in available_languages:
                language = input("Using XLM. Select language in " + str(list(available_languages)) + " >>> ")

        model.config.lang_id = model.config.lang2id[language]
        # kwargs["language"] = tokenizer.lang2id[language]

    return prompt_text


def prepare_xlnet_input(args, _, tokenizer, prompt_text):
    prefix = args.prefix if args.prefix else args.padding_text if args.padding_text else PREFIX
    prompt_text = prefix + prompt_text
    return prompt_text


def prepare_transfoxl_input(args, _, tokenizer, prompt_text):
    prefix = args.prefix if args.prefix else args.padding_text if args.padding_text else PREFIX
    prompt_text = prefix + prompt_text
    return prompt_text


PREPROCESSING_FUNCTIONS = {
    "ctrl": prepare_ctrl_input,
    "xlm": prepare_xlm_input,
    "xlnet": prepare_xlnet_input,
    "transfo-xl": prepare_transfoxl_input,
}


def adjust_length_to_model(length, max_sequence_length):
    if length < 0 and max_sequence_length > 0:
        length = max_sequence_length
    elif 0 < max_sequence_length < length:
        length = max_sequence_length  # No generation bigger than model size
    elif length < 0:
        length = MAX_LENGTH  # avoid infinite loop
    return length


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model_type",
        default=None,
        type=str,
        required=True,
        help="Model type selected in the list: " + ", ".join(MODEL_CLASSES.keys()),
    )
    parser.add_argument(
        "--model_name_or_path",
        default=None,
        type=str,
        required=True,
        help="Path to pre-trained model or shortcut name selected in the list: " + ", ".join(MODEL_CLASSES.keys()),
    )
    parser.add_argument(
        "--sample_input",
        default=None,
        type=str,
        required=False,
        help="Path to pre-trained model or shortcut name selected in the list: " + ", ".join(MODEL_CLASSES.keys()),
    )

    parser.add_argument("--prompt", type=str, default="")
    parser.add_argument("--length", type=int, default=20)
    parser.add_argument("--stop_token", type=str, default=None, help="Token at which text generation is stopped")
    parser.add_argument(
        "--temperature",
        type=float,
        default=1.0,
        help="temperature of 1.0 has no effect, lower tend toward greedy sampling",
    )
    parser.add_argument(
        "--repetition_penalty", type=float, default=1.0, help="primarily useful for CTRL model; in that case, use 1.2"
        )
    parser.add_argument("--k", type=int, default=0)
    parser.add_argument("--p", type=float, default=0.9)

    parser.add_argument("--prefix", type=str, default="", help="Text added prior to input.")
    parser.add_argument("--padding_text", type=str, default="", help="Deprecated, the use of `--prefix` is preferred.")
    parser.add_argument("--xlm_language", type=str, default="", help="Optional language when used with the XLM model.")

    parser.add_argument("--local_rank", type=int, default=0, help="local rank")
    parser.add_argument("--seed", type=int, default=42, help="random seed for initialization")
    parser.add_argument("--no_cuda", action="store_true", help="Avoid using CUDA when available")
    parser.add_argument("--num_return_sequences", type=int, default=1, help="The number of samples to generate.")
    parser.add_argument(
        "--fp16",
        action="store_true",
        help="Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit",
    )
    parser.add_argument('--ds-inference', action="store_true", help="Use deepspeed")
    args = parser.parse_args()
    args.device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
    args.n_gpu = 0 if args.no_cuda else torch.cuda.device_count()
    #args.n_gpu = 1 
    logger.warning(
        "device: %s, n_gpu: %s, 16-bits training: %s",
        args.device,
        args.n_gpu,
        args.fp16,
    )
    set_seed(args)
    try:
        args.model_type = args.model_type.lower()
        model_class, tokenizer_class = MODEL_CLASSES[args.model_type]
    except KeyError:
        raise KeyError("the model {} you specified is not supported. You are welcome to add it and open a PR :)")
    tokenizer = tokenizer_class.from_pretrained(args.model_name_or_path, use_fast=False)
    model = model_class.from_pretrained(args.model_name_or_path)
    # model.cuda(torch.cuda.current_device())
    if args.fp16:
        model.half()
    model.cuda(local_rank)
    print("############")
    print(local_rank)
    print("############")

    if args.ds_inference:

        import deepspeed.module_inject as module_inject
        import deepspeed
        injection_policy={gpt2_transformer:
                          module_inject.replace_policy.HFGPT2LayerPolicy,
                          opt_transformer: module_inject.replace_policy.HFOPTLayerPolicy}
        model = deepspeed.init_inference(model,
                                         mp_size=2,
                                         dtype=(torch.half if args.fp16 else torch.float),
                                         # injection_policy=injection_policy,
                                         replace_method='auto',
                                         replace_with_kernel_inject=True)
        model = model.module
    args.length = adjust_length_to_model(args.length, max_sequence_length=model.config.max_position_embeddings)
    logger.info(args)
    if args.sample_input:
        with open(args.sample_input, "r", encoding="utf8") as fname:
            prompt_text = fname.readlines()
    else:
        prompt_text = (args.prompt if args.prompt else input("Model prompt >>> "),)
    requires_preprocessing = args.model_type in PREPROCESSING_FUNCTIONS.keys()
    eprompt = []
    preprocessed_prompt_text = []
    if requires_preprocessing:
        prepare_input = PREPROCESSING_FUNCTIONS.get(args.model_type)
        for input_text in prompt_text:
            preprocessed_prompt_text.append(prepare_input(args, model, tokenizer, input_text))

            if model.__class__.__name__ in ["TransfoXLLMHeadModel"]:
                tokenizer_kwargs = {"add_space_before_punct_symbol": True}
            else:
                tokenizer_kwargs = {}
            # encode only the prompt preprocessed in this iteration
            eprompt.append(tokenizer.encode(
                preprocessed_prompt_text[-1], add_special_tokens=False, return_tensors="pt", **tokenizer_kwargs
            ))
    else:
        prefix = args.prefix if args.prefix else args.padding_text
        for ppt in prompt_text:
            ppt = lrc_prompt  # override with the hard-coded prefix so the 4 batch rows are identical
            eprompt.append(tokenizer([ppt] * 4, add_special_tokens=False, return_tensors="pt"))
    latencies = []
    t1 = time.time()
    for encoded_prompt, ppt in zip(eprompt, prompt_text):
        encoded_prompt = encoded_prompt.to(local_rank)
        input_ids = encoded_prompt
        torch.cuda.synchronize()
        t0 = time.time()
        output_sequences = model.generate(
            **input_ids,  # supplies both input_ids and attention_mask
            max_length=input_ids["input_ids"].shape[1] + 20,
            temperature=0.7,
            top_p=0.5,
            top_k=0,
            repetition_penalty=1.0,
            do_sample=False,
            num_return_sequences=1,
            num_beams=1
        )
        torch.cuda.synchronize()
        for seq in tokenizer.batch_decode(output_sequences, clean_up_tokenization_spaces=True, skip_special_tokens=False):
            print(seq)
    t2 = time.time()
    print("#########cost time: ", t2-t1, "\n"
    print(time.time())
    return whole_strs

if __name__ == "__main__":
    generated_sequences = main()

I just modified the example code by adding the OPT configuration, and I start the script with the command "deepspeed test_deepspeed.py --model_type opt --model_name_or_path /mnt/opt-13b --ds-inference --sample_input /mnt/noll/input_prompt.txt --num_return_sequences 1 --fp16".

yingapple avatar Sep 03 '22 02:09 yingapple

> @yingapple can you please confirm whether this is only related to the OPT model or you have observed a similar issue with other models as well? Could you also please provide us with the code you are running? Thanks.

I'll also try GPT batch inference today. Thank you for your reply.

After testing, I found that the same error also occurs for GPT-2: the first result of the batch is correct, while the other results are identical to each other but wrong.
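This is roughly how I checked the GPT-2 case, as a sketch; I am assuming the stock "gpt2" checkpoint here rather than my local model, and launching with the deepspeed launcher as before.

import torch
import deepspeed
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# assumption: the stock "gpt2" checkpoint shows the same behavior as my local model
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").half().cuda()
model = deepspeed.init_inference(model,
                                 mp_size=1,
                                 dtype=torch.half,
                                 replace_method='auto',
                                 replace_with_kernel_inject=True).module

# four identical prompts, so the four generations should also be identical
batch = tokenizer(["Twenty years later, Rasputin sees a vision of"] * 4,
                  return_tensors="pt").to("cuda")
out = model.generate(**batch,
                     max_length=batch["input_ids"].shape[1] + 20,
                     do_sample=False,
                     num_beams=1)
print(tokenizer.batch_decode(out, skip_special_tokens=True))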

yingapple avatar Sep 03 '22 02:09 yingapple

@yingapple I'm trying to reproduce your issue. The script you provided does not run as it has syntax and other errors. Can you please verify the script you are running for opt and gpt2 models?

molly-smith avatar Oct 28 '22 23:10 molly-smith

> @yingapple I'm trying to reproduce your issue. The script you provided does not run as it has syntax and other errors. Can you please verify the script you are running for opt and gpt2 models?

After some updates to DeepSpeed, the script's results are correct, but it still goes wrong when my prompt's length is bigger than 500 tokens. I will also clean up the script as soon as possible.
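To show what I mean by the long-prompt case, here is a rough sketch; it assumes the same tokenizer, DeepSpeed-wrapped model, and PREFIX variables as in the script above. It repeats the prefix until the tokenized prompt passes 500 tokens and then runs the same greedy generate call.

# build a prompt longer than 500 tokens by repeating the prefix
long_prompt = PREFIX
while len(tokenizer.encode(long_prompt)) <= 500:
    long_prompt += PREFIX

batch = tokenizer([long_prompt] * 4, return_tensors="pt").to("cuda")
print("prompt length in tokens:", batch["input_ids"].shape[1])

out = model.generate(**batch,
                     max_length=batch["input_ids"].shape[1] + 20,
                     do_sample=False,
                     num_beams=1)
print(tokenizer.batch_decode(out, skip_special_tokens=True))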

mind-ying avatar Nov 02 '22 15:11 mind-ying