danucalovj
Would recommend Titanium Web Proxy for that. This repository doesn't seem to be updated often.
Possibly related to the newest peft release. I had a similar issue (see the other issues posted here about peft breaking things left and right). Fix: https://github.com/tloen/alpaca-lora/issues/293: `pip uninstall peft -y`...
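In case that link goes stale, a minimal sketch of the workaround, assuming it amounts to removing the latest peft and pinning an older release (the exact version is in the linked issue; the pin below is a hypothetical placeholder):

```
pip uninstall peft -y
pip install peft==0.2.0  # hypothetical pin; use the version named in issue #293
```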
You're getting a 401 error on the Hugging Face model and dataset. Try downloading the model locally into your working directory using:

```
git lfs install
git clone https://huggingface.co/decapoda-research/llama-7b-hf
```

And also the...
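Once the clone finishes, a minimal sketch of loading from the local directory instead of the hub id, which avoids the remote 401 entirely (the path assumes the clone landed in your working directory):

```python
# Load tokenizer and model from the local clone; nothing is fetched
# remotely, so the 401 on the hub never comes into play.
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("./llama-7b-hf")
model = LlamaForCausalLM.from_pretrained("./llama-7b-hf")
```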
Assuming you're using a GPU (cuda), try changing/adding the value of device_map in the following functions of generate.py:

```python
if device == "cuda":
    model = LlamaForCausalLM.from_pretrained(
        ...
        device_map={'': 0},
        ...
    )
    model = ...
```
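For context, a fuller sketch of what those calls can look like with the change applied, assuming the usual generate.py structure (the model and LoRA weight names are this repo's defaults; your argument values may differ):

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM

# device_map={'': 0} pins every module to GPU 0, instead of letting
# accelerate shard the model across devices as device_map="auto" would.
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map={"": 0},
)
model = PeftModel.from_pretrained(
    model,
    "tloen/alpaca-lora-7b",
    torch_dtype=torch.float16,
)
```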
See: https://github.com/tloen/alpaca-lora/issues/293
```
python generate.py \
    --load_8bit \
    --base_model 'decapoda-research/llama-7b-hf' \
    --lora_weights './lora-alpaca'
```
Same here :(
@kartset try this. Change this:

`model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])`

to this:

`model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])`

Not sure of the results yet, running epochs right now; at least got it...
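For context, a minimal sketch of why that change can matter, assuming integer class labels: binary_crossentropy pairs with a single sigmoid output and 0/1 labels, while sparse_categorical_crossentropy pairs with a softmax output and integer label ids. The layer sizes and num_classes below are illustrative, not from the original thread:

```python
import tensorflow as tf

num_classes = 3  # hypothetical; match the number of label ids in your data

# A softmax over num_classes outputs pairs with sparse_categorical_crossentropy,
# which expects labels as integer ids (0..num_classes-1) rather than one-hot.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(
    loss="sparse_categorical_crossentropy",
    optimizer="adam",
    metrics=["accuracy"],
)
```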
Nevermind. That didn't work.