zyb8543d

Results 95 comments of zyb8543d

Modified the code

```python
WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2", load_in_8bit=True, device_map="auto")
```

to

```python
WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2", load_in_8bit=True, device_map={"": 0})
```

but got the following error:

```
Traceback (most recent call last):
  File "finetune.py", line 172, in whisper_finetune(traindir,devdir,outdir)
  File "finetune.py", line 167, ...
```

Setting `device_map={"": 0}` solves that, but then I hit a new error: `RuntimeError: mat1 and mat2 shapes cannot be multiplied (12000x1 and 2x1280)`
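For context, that `RuntimeError` follows the standard matrix-multiplication rule: a matmul of shapes `(m, k)` and `(k2, n)` is only defined when `k == k2`. A minimal pure-Python check (the helper name is mine, not from any library):

```python
def can_matmul(a_shape, b_shape):
    """Matmul of (m, k) @ (k2, n) is only defined when the inner dims match."""
    return a_shape[1] == b_shape[0]

# The shapes from the error above: inner dims 1 and 2 differ, so matmul fails.
print(can_matmul((12000, 1), (2, 1280)))     # False
print(can_matmul((12000, 1280), (1280, 2)))  # True
```

A shape mismatch this early in the forward pass usually points at mangled inputs or weights rather than the loss computation itself.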

Has anyone run the Whisper fine-tuning recipe successfully?

> Hi @v-yunbin @nd7141
> Can you share more details about your environment? What are your `peft`, `accelerate` & `bitsandbytes` versions?

```
peft         0.2.0
accelerate   0.18.0
bitsandbytes 0.37.2
```

I followed your steps and installed everything from source, but it still doesn't work for me.

```
accelerate==0.18.0.dev0
transformers==4.28.0.dev0
peft==0.3.0.dev0
```

> ```shell
> pip install git+https://github.com/huggingface/transformers.git
> ```

@younesbelkada it still doesn't work. My code is as follows:

```python
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
dummy_accelerator = Accelerator()
current_device = dummy_accelerator.process_index
device_map = {"": current_device}
do_lower_case...
```

> In the traceback I can see
>
> ```shell
> File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 171, in forward
>     outputs = self.parallel_apply(replicas, inputs, kwargs)
> ```
>
> Which means that...
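That `data_parallel.py` frame suggests the model is being wrapped in `torch.nn.DataParallel`, which happens when more than one GPU is visible to the process; that wrapping conflicts with an 8-bit model already placed via `device_map`. A minimal sketch of one common workaround, assuming a single-GPU fine-tuning run (set the environment variable before `torch` is first imported):

```python
import os

# Assumption: single-GPU run. Pinning CUDA_VISIBLE_DEVICES before torch
# initializes leaves only one device visible, so the trainer has no reason
# to wrap the model in DataParallel.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# The "" key maps the entire model onto (visible) device 0.
device_map = {"": 0}
print(os.environ["CUDA_VISIBLE_DEVICES"], device_map)
```

The same `device_map={"": 0}` dict is what the earlier comments in this thread pass to `from_pretrained`.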

I uninstalled `peft`, `accelerate`, and `transformers` and reinstalled them all, but I still get the same errors. Does this have anything to do with the `bitsandbytes` version (mine is 0.37.2)?

@younesbelkada with `load_in_8bit=False` it works. Why?

> Hm never seen that. What OS do you use? Have you tried a fresh venv?

CentOS 7.9, Python 3.8.3. Yes; after installing with `pip install git+https://github.com/suno-ai/bark.git --user`, the above error disappeared, but I get...