
Code and documentation to train Stanford's Alpaca models, and generate the data.

Results: 224 stanford_alpaca issues

https://github.com/pointnetwork/point-alpaca

module 'transformers' has no attribute 'LLaMATokenizer', or missing key 'llama'. First install SentencePiece, then install transformers from the Hugging Face git repo, i.e., pip install sentencepiece, then pip install git+https://github.com/huggingface/transformers.git...
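As a quick sanity check after installing from the git repo, a minimal sketch is shown below; the checkpoint path ./llama-7b-hf is a hypothetical converted directory, and note that recent transformers releases name the class LlamaTokenizer, while older forks used LLaMATokenizer, which is one common cause of this attribute error.

```python
# Minimal check that the tokenizer class is importable after installing
# transformers from git (pip install sentencepiece, then
# pip install git+https://github.com/huggingface/transformers.git).
# "./llama-7b-hf" is a hypothetical path to a converted LLaMA checkpoint.
from transformers import LlamaTokenizer

# Older forks exposed LLaMATokenizer instead of LlamaTokenizer, so mixing
# configs and library versions produces the "no attribute" error above.
tokenizer = LlamaTokenizer.from_pretrained("./llama-7b-hf")
print(tokenizer("Hello, Alpaca!"))
```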

I want to follow the guide below. > Given Hugging Face hasn't officially supported the LLaMA models, we fine-tuned LLaMA with Hugging Face's transformers library by installing it from a...

This minor PR adds a note to address the training slowdown issue mentioned in https://github.com/tatsu-lab/stanford_alpaca/issues/32

A [member](https://github.com/huggingface/transformers/pull/21955#issuecomment-1471973195) on this PR claims the PR is merged and that GitHub is showing the wrong status for the merge, but the blog says that a specific commit was used...

I fixed three spelling mistakes in the prompt.

Is the following installation method correct? pip install git+https://github.com/zphang/transformers.git@llama_push Each version is as follows: numpy==1.24.2, rouge-score==0.1.2, fire==0.5.0, openai==0.27.2, sentencepiece==0.1.97, wandb==0.14.0
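For what it's worth, a minimal sketch for checking that a local environment matches those pins; the package names and versions below are taken verbatim from the question, and the zphang fork itself would still be installed via the pip command above.

```python
# Compare installed package versions against the pins listed in the question.
from importlib.metadata import version, PackageNotFoundError

pins = {
    "numpy": "1.24.2",
    "rouge-score": "0.1.2",
    "fire": "0.5.0",
    "openai": "0.27.2",
    "sentencepiece": "0.1.97",
    "wandb": "0.14.0",
}

for name, expected in pins.items():
    try:
        installed = version(name)
    except PackageNotFoundError:
        installed = "not installed"
    status = "ok" if installed == expected else "mismatch"
    print(f"{name}: expected {expected}, installed {installed} ({status})")
```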

Is it possible to finetune the 7B model using 8x3090s? I set --per_device_train_batch_size 1 and --per_device_eval_batch_size 1 but still got OOM: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to...
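Not a definitive fix, but a minimal sketch of the memory-saving knobs commonly tried in this situation, assuming the Hugging Face Trainer setup used by the repo's train.py; flag availability depends on the installed transformers version, and the wrapped layer class name LlamaDecoderLayer is an assumption.

```python
# Sketch of TrainingArguments aimed at reducing per-GPU memory when
# fine-tuning the 7B model. Assumes the Hugging Face Trainer as in train.py;
# exact flags depend on the transformers version installed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./output",                    # hypothetical output path
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,           # keep the effective batch size up
    gradient_checkpointing=True,              # trade recompute for activation memory
    bf16=True,                                # RTX 3090 (Ampere) supports bf16
    fsdp="full_shard auto_wrap",              # shard params/grads/optimizer state
    fsdp_transformer_layer_cls_to_wrap="LlamaDecoderLayer",  # assumed layer class
)
```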