
running the project.

Open valiantlynx opened this issue 2 years ago • 6 comments

So I downloaded and installed the requirements. I noticed utils.py is not written in normal Python, or at least I'm getting a syntax error. When I run the code I get this error:

```
valiantlynx@DESKTOP-3EGT6DL:~/stanford_alpaca$ /usr/bin/python3 /home/valiantlynx/stanford_alpaca/train.py
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.15) or chardet (3.0.4) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
2023-03-17 01:44:16.830274: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-17 01:44:17.316763: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-03-17 01:44:18.295000: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-03-17 01:44:18.295070: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-03-17 01:44:18.295091: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Traceback (most recent call last):
  File "/home/valiantlynx/stanford_alpaca/train.py", line 25, in <module>
    import utils
  File "/home/valiantlynx/stanford_alpaca/utils.py", line 40, in <module>
    prompts: Union[str, Sequence[str], Sequence[dict[str, str]], dict[str, str]],
TypeError: 'type' object is not subscriptable
```

What I ran was train.py; I didn't edit anything. The error comes from this function:

```python
def openai_completion(
    prompts: Union[str, Sequence[str], Sequence[dict[str, str]], dict[str, str]],
    decoding_args: OpenAIDecodingArguments,
    model_name="text-davinci-003",
    sleep_time=2,
    batch_size=1,
    max_instances=sys.maxsize,
    max_batches=sys.maxsize,
    return_text=False,
    **decoding_kwargs,
) -> Union[Union[StrOrOpenAIObject], Sequence[StrOrOpenAIObject], Sequence[Sequence[StrOrOpenAIObject]],]:
    """Decode with OpenAI API.

    Args:
        prompts: A string or a list of strings to complete. If it is a chat model the strings should be formatted
            as explained here: https://github.com/openai/openai-python/blob/main/chatml.md. If it is a chat model
            it can also be a dictionary (or list thereof) as explained here:
            https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
        decoding_args: Decoding arguments.
        model_name: Model name. Can be either in the format of "org/model" or just "model".
        sleep_time: Time to sleep once the rate-limit is hit.
        batch_size: Number of prompts to send in a single request. Only for non chat model.
        max_instances: Maximum number of prompts to decode.
        max_batches: Maximum number of batches to decode. This argument will be deprecated in the future.
        return_text: If True, return text instead of full completion object (which contains things like logprob).
        decoding_kwargs: Additional decoding arguments. Pass in `best_of` and `logit_bias` if you need them.

    Returns:
        A completion or a list of completions.
        Depending on return_text, return_openai_object, and decoding_args.n, the completion type can be one of
            - a string (if return_text is True)
            - an openai_object.OpenAIObject object (if return_text is False)
            - a list of objects of the above types (if decoding_args.n > 1)
    """
    is_single_prompt = isinstance(prompts, (str, dict))
    if is_single_prompt:
        prompts = [prompts]

    if max_batches < sys.maxsize:
        logging.warning(
            "`max_batches` will be deprecated in the future, please use `max_instances` instead."
            "Setting `max_instances` to `max_batches * batch_size` for now."
        )
        max_instances = max_batches * batch_size

    prompts = prompts[:max_instances]
    num_prompts = len(prompts)
    prompt_batches = [
        prompts[batch_id * batch_size : (batch_id + 1) * batch_size]
        for batch_id in range(int(math.ceil(num_prompts / batch_size)))
    ]

    completions = []
    for batch_id, prompt_batch in tqdm.tqdm(
        enumerate(prompt_batches),
        desc="prompt_batches",
        total=len(prompt_batches),
    ):
        batch_decoding_args = copy.deepcopy(decoding_args)  # cloning the decoding_args

        while True:
            try:
                shared_kwargs = dict(
                    model=model_name,
                    **batch_decoding_args.__dict__,
                    **decoding_kwargs,
                )
                completion_batch = openai.Completion.create(prompt=prompt_batch, **shared_kwargs)
                choices = completion_batch.choices

                for choice in choices:
                    choice["total_tokens"] = completion_batch.usage.total_tokens
                completions.extend(choices)
                break
            except openai.error.OpenAIError as e:
                logging.warning(f"OpenAIError: {e}.")
                if "Please reduce your prompt" in str(e):
                    batch_decoding_args.max_tokens = int(batch_decoding_args.max_tokens * 0.8)
                    logging.warning(f"Reducing target length to {batch_decoding_args.max_tokens}, Retrying...")
                else:
                    logging.warning("Hit request rate limit; retrying...")
                    time.sleep(sleep_time)  # Annoying rate limit on requests.

    if return_text:
        completions = [completion.text for completion in completions]
    if decoding_args.n > 1:
        # make completions a nested list, where each entry is a consecutive decoding_args.n of original entries.
        completions = [completions[i : i + decoding_args.n] for i in range(0, len(completions), decoding_args.n)]
    if is_single_prompt:
        # Return non-tuple if only 1 input and 1 generation.
        (completions,) = completions
    return completions
```

I'm not the best at Python, but I've never seen this syntax before: `prompts: Union[str, Sequence[str], Sequence[dict[str, str]], dict[str, str]]` with a return annotation of `-> Union[Union[StrOrOpenAIObject], Sequence[StrOrOpenAIObject], Sequence[Sequence[StrOrOpenAIObject]],]:`.

I'm very new to ML, so maybe I'm doing everything wrong.
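(For anyone else puzzled by that syntax: those are ordinary type annotations from the `typing` module, not special ML code. `name: type` annotates a parameter, `-> type` annotates the return value, and `Union[A, B]` means "either A or B". A minimal sketch, with a hypothetical function name not taken from the repo:)

```python
from typing import Sequence, Union

# `names: Union[str, Sequence[str]]` means the argument may be a single
# string or a sequence of strings; `-> str` means the function returns a string.
def join_names(names: Union[str, Sequence[str]]) -> str:
    if isinstance(names, str):
        names = [names]
    return ", ".join(names)

print(join_names("alice"))     # alice
print(join_names(["a", "b"]))  # a, b
```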

valiantlynx avatar Mar 17 '23 00:03 valiantlynx

Upgrade to Python 3.10.
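The reason an upgrade fixes this: the annotation subscripts the builtin `dict` directly (`dict[str, str]`), which is PEP 585 syntax and only evaluates on Python 3.9+. On 3.8 and earlier you get exactly `TypeError: 'type' object is not subscriptable` at import time. If you can't upgrade, a backward-compatible sketch is to use `typing.Dict` instead (the function name here is a hypothetical stand-in, not a patch to the repo):

```python
from typing import Dict, Sequence, Union

# typing.Dict is subscriptable on all supported Python 3 versions,
# unlike the builtin dict before 3.9.
PromptType = Union[str, Sequence[str], Sequence[Dict[str, str]], Dict[str, str]]

def openai_completion_stub(prompts: PromptType):
    """Stand-in showing the annotation evaluates without error."""
    return [prompts] if isinstance(prompts, (str, dict)) else list(prompts)

print(openai_completion_stub("hello"))  # ['hello']
```

Alternatively, adding `from __future__ import annotations` at the top of utils.py defers evaluation of all annotations, which also avoids the error on 3.7/3.8.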

jahkelr avatar Mar 21 '23 01:03 jahkelr

```
Traceback (most recent call last):
  File "train.py", line 25, in <module>
    import utils
  File "/home/lizhaohui/text2sql/stanford_alpaca/utils.py", line 40, in <module>
    prompts: Union[str, Sequence[str], Sequence[dict[str, str]], dict[str, str]],
TypeError: 'type' object is not subscriptable
```

I have encountered the same error. My Python environment is 3.10 (`python 3.10.10 he550d4f_0_cpython` from http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge).
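Since this error cannot occur on a real 3.9+ interpreter, it's worth confirming which Python actually runs train.py; a conda env can report 3.10 while the shell still launches an older system `/usr/bin/python3`. A quick check:

```python
import sys

# Print the interpreter that is actually executing this script, then
# verify it is new enough for builtin generics like dict[str, str].
print(sys.executable)
print(sys.version)
assert sys.version_info >= (3, 9), "dict[str, str] annotations need Python 3.9+"
```

Run it with the exact same command you use for train.py (e.g. `python3 check.py` vs `/usr/bin/python3 check.py`) to see which interpreter each invocation picks up.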

LZH-0225 avatar Mar 27 '23 07:03 LZH-0225

```
Traceback (most recent call last):
  File "train.py", line 25, in <module>
    import utils
  File "/home/yewang/stanford_alpaca/utils.py", line 47, in <module>
    return_text=False,
TypeError: 'type' object is not subscriptable
```

Same error, need help ~~~~

clm971910 avatar Mar 29 '23 02:03 clm971910

I tried Python 3.10.11 and it worked for me.

s0yabean avatar Apr 25 '23 22:04 s0yabean

On 3.8.1, `TypeError: 'type' object is not subscriptable` — same here.

JustAHippo avatar Apr 27 '23 21:04 JustAHippo

Duplicate of https://github.com/tatsu-lab/stanford_alpaca/issues/171. Problem solved.

JustAHippo avatar Apr 27 '23 21:04 JustAHippo