Maxwell
> > > trying to generate with 4 RTX 3090:
> > >
> > > ```
> > > fairseq-generate \
> > >     bin \
> > >     --batch-size 1 \...
> > > ```
> Nope, I tried setting `fontLigatures` to `true` but it didn't help

WARNING: This is not a solution, it's a hack, but it works. If you REALLY want to use your...
This is definitely caused by the version of `transformers`. Here are my versions, which are clearly higher:

```
accelerate    0.20.0.dev0
bitsandbytes  0.39.0
transformers  4.30.0.dev0
peft          0.4.0.dev0
```

Please try upgrading...
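For anyone comparing, here is a minimal sketch (my own addition, using only the standard library) for printing the locally installed versions of these packages so they can be checked against the combination above:

```python
# Print the installed versions of the relevant packages for comparison
# against the working combination listed above.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("accelerate", "bitsandbytes", "transformers", "peft"):
    try:
        print(f"{pkg:<14} {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg:<14} not installed")
```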
🥰 Your "workaround" is a very good fix; it clearly works and should be merged ASAP. I can confirm it's working. I was trying to run a prediction using...
I don't know if this is the RIGHT way, but this simple modification at [L275](https://github.com/tloen/alpaca-lora/blob/8bb8579e403dc78e37fe81ffbb253c413007323f/finetune.py#L275) produces an `adapter_model.bin` with the right size:

```diff
- model.save_pretrained(output_dir)
+ model.save_pretrained(output_dir, state_dict=old_state_dict())
```
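If it helps, here is a hedged sketch of why passing a `state_dict` matters, using a small stand-in base model (`gpt2` and its `c_attn` modules are my assumptions, not the alpaca-lora setup): handing `save_pretrained` an adapter-only state dict makes `adapter_model.bin` contain the LoRA weights instead of an almost-empty file.

```python
# Sketch only: gpt2 and the LoRA target modules are stand-ins for illustration,
# not the alpaca-lora configuration itself.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, get_peft_model_state_dict

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base, config)

# get_peft_model_state_dict() returns only the adapter (LoRA) weights,
# so the saved adapter_model.bin holds the trained adapter rather than
# being empty or full-model sized.
model.save_pretrained("lora-out", state_dict=get_peft_model_state_dict(model))
```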
According to Nvidia, the V100 DOES NOT support the int4 data type.
Maybe you should check the BNB debug messages. If bnb is loading `libbitsandbytes_cpu.so` instead of `libbitsandbytes_cuda117.so`, then it can only run in CPU RAM. Here is the output when I...
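A small check (my own sketch, not an official bitsandbytes API) that surfaces those setup messages and whether PyTorch can see the GPU at all:

```python
# Importing bitsandbytes prints its CUDA setup/debug messages; watch that
# output for libbitsandbytes_cpu.so vs libbitsandbytes_cudaXXX.so.
import torch
import bitsandbytes as bnb  # noqa: F401

print("torch sees CUDA:", torch.cuda.is_available())
print("torch CUDA version:", torch.version.cuda)
```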
For everyone's reference, here is my solution for loading the `alpaca` dataset from local JSON files, in an experiment to make modifications to it:

```python
elif "/" in args.dataset:
    print(f'Using local...
```
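As a related sketch (assuming the Hugging Face `datasets` library and a local `alpaca_data.json`; both the library choice and the file name are my assumptions, not part of the snippet above), loading an alpaca-style JSON file locally can look like this:

```python
# Load an alpaca-style instruction dataset from a local JSON file.
# "alpaca_data.json" is a placeholder name for illustration.
from datasets import load_dataset

data = load_dataset("json", data_files="alpaca_data.json")
print(data["train"][0])  # e.g. {"instruction": ..., "input": ..., "output": ...}
```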
Same here, but the trained model seems fine? (sort of)