stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
Hello, we are trying to locate the log files for the model but could not find them. We checked the default locations such as the User folder and the Windows folder,...
I noticed that all of the data can be classified. Could you share the label for each data point? Thanks a lot!
During fine-tuning, how do you determine whether the model has been fine-tuned enough? How do you know when it is appropriate to stop? I am very concerned about how to...
How to fine-tune using custom data?
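A minimal sketch of one way to plug in custom data, assuming it follows the same instruction/input/output schema as alpaca_data.json. The file name my_custom_data.json is a placeholder, and the hyperparameters are copied from the README's example launch, so they may need adjustment for your hardware:

```bash
# Assumption: my_custom_data.json is a JSON list shaped like alpaca_data.json:
#   [{"instruction": "...", "input": "...", "output": "..."}, ...]
# Point --data_path at it and launch training as in the README.
torchrun --nproc_per_node=4 --master_port=29500 train.py \
    --model_name_or_path /path/to/llama-7b \
    --data_path ./my_custom_data.json \
    --bf16 True \
    --output_dir ./output_custom \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --learning_rate 2e-5 \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer'
```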
Hi, thank you for your great work. I have been taking an interested look at the code. By the way, I have a question for you. According to README.md, it is...
pytorch_model-00001-of-00002.bin (13G), pytorch_model-00002-of-00002.bin (13G)
I encountered the following problem when fine-tuning the model following the guidance in README.md. Here is the detailed error: (alpaca) root@iZwz95ccn6prjs8ioz8bbdZ:/data/stanford_alpaca# sh order.sh WARNING:torch.distributed.run: Setting OMP_NUM_THREADS environment variable...
Has anyone reproduced this result? Why is the Alpaca obtained by applying the weight_diff so much better than the Alpaca from my own reproduction? I hope someone can answer me, thanks!
I only have one A100. How should I set the parameters to train LLaMA-Alpaca?
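A rough sketch of a single-GPU launch, assuming an 80GB A100. Full fine-tuning of a 7B model on one GPU is memory-bound, so this relies on a small per-device batch size with larger gradient accumulation plus FSDP CPU offload (as suggested in the README's OOM notes); it is slower and may still run out of memory depending on the setup:

```bash
# Hypothetical single-GPU launch: one process, small per-device batch,
# larger gradient accumulation to keep the effective batch size,
# and FSDP CPU offload to reduce GPU memory pressure (slower but lighter).
torchrun --nproc_per_node=1 --master_port=29500 train.py \
    --model_name_or_path /path/to/llama-7b \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir ./output_single_gpu \
    --num_train_epochs 3 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 32 \
    --learning_rate 2e-5 \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --fsdp "full_shard auto_wrap offload" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer'
```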
All of the instructions describe fine-tuning only on a single machine with one or more GPUs. How can I run fine-tuning on multiple machines?
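A minimal sketch of a two-node launch using torchrun's standard multi-node flags (--nnodes, --node_rank, --master_addr, --master_port). The address, port, and GPU counts are placeholders; the same command is run on every node, each with its own --node_rank:

```bash
# Hypothetical two-node, 4-GPU-per-node launch. Run this on every node,
# changing only --node_rank (0 on the master node, 1 on the second node).
# 10.0.0.1 and 29500 are placeholder values for the master node address/port.
torchrun --nnodes=2 --node_rank=0 --nproc_per_node=4 \
    --master_addr=10.0.0.1 --master_port=29500 train.py \
    --model_name_or_path /path/to/llama-7b \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir ./output_multinode \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --learning_rate 2e-5 \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer'
```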