stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
Towards the end of training, I see the following exception thrown: ``` 100%|██████████| 203/203 [08:00
When I ran the training process for Llama2-7b-hf, I encountered the following error: Keyword arguments {'add_special_tokens': False} not recognized. May I know how to solve this problem? Thank you!
Dear Developers, I'm delighted to inform you that the documentation update for the Python scripts has been successfully completed. The updated documentation provides clear explanations of function parameters, return types,...
openai>=1.0 is incompatible when running weight_diff.py; openai==0.28.0 works. This solved #303 @lxuechen
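If you hit this incompatibility, one way to apply the fix above is a requirements pin (the version number comes from the issue itself; whether your environment needs other pins is an assumption):

```
openai==0.28.0
```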
When I use 4 A100 40G GPUs to train Alpaca, I encounter an OOM error during training. These are my training arguments: ``` #!/bin/bash module load compilers/cuda/11.8 compilers/gcc/9.3.0 cudnn/8.4.0.27_cuda11.x anaconda...
Can we use an A100 40G to finetune llama-7B? Has anyone tried that?
When running the below command: python weight_diff.py recover --path_raw /models/Llama-2-7b-hf --path_diff /models/alpaca-7b-wdiff --path_tuned ./llama-alpaca-7b-hf it shows the error: RuntimeError: The size of tensor a (32001) must match the size of...
I am reading the code in ```generate_instruction.py```. According to the docs for Google's rouge_scorer, ```def _score_lcs(target_tokens, prediction_tokens):``` takes target_tokens as its first argument — see https://github.com/google-research/google-research/blob/master/rouge/rouge_scorer.py#L186 In...
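To see why the argument order matters, here is a minimal pure-Python sketch of LCS-based ROUGE-L (not the actual rouge_scorer implementation, and the token lists are made up): precision divides the LCS length by the prediction length, recall by the target length, so swapping the arguments swaps precision and recall even though the F-measure stays the same.

```python
def lcs_length(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            table[i][j] = (table[i - 1][j - 1] + 1 if x == y
                           else max(table[i - 1][j], table[i][j - 1]))
    return table[len(a)][len(b)]

def rouge_l(target_tokens, prediction_tokens):
    # Precision is measured against the prediction, recall against the target,
    # which is why the two arguments are not interchangeable.
    lcs = lcs_length(target_tokens, prediction_tokens)
    precision = lcs / len(prediction_tokens)
    recall = lcs / len(target_tokens)
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

target = ["the", "cat", "sat"]
prediction = ["the", "cat", "sat", "down", "today"]
p, r, f = rouge_l(target, prediction)
# LCS = 3, so precision = 3/5 and recall = 3/3; calling
# rouge_l(prediction, target) swaps these two numbers.
```

The F-measure is symmetric in precision and recall, so a bug that swaps the arguments only shows up if the code reads the precision or recall fields individually.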
I am using a single GPU (A10) to fine-tune the Bloom-560m model and I get an error. How can I solve it? I found similar problems in other projects, but I didn't know how to solve the...