Brando Miranda
I get a similar issue with Falcon, but not on their official Colab: ``` ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes and submit this...
Same issue for me: ``` If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so',...
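The fix quoted above is to make sure only one copy of `libcudart.so` is visible to the dynamic loader. A minimal sketch for checking that, assuming a typical Linux setup with `ldconfig` available (this is my own check, not a bitsandbytes-provided diagnostic):

```shell
# Count how many libcudart.so entries the dynamic loader knows about.
# More than one distinct CUDA runtime on the path is the suspected cause
# of the "CUDA error: invalid device function" failures above.
count=$(ldconfig -p 2>/dev/null | grep -c 'libcudart\.so' || true)
echo "libcudart entries: ${count}"
```

If the count is above one, removing or de-prioritizing the extra CUDA install (e.g. via `LD_LIBRARY_PATH`) should leave a single runtime visible.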
This error also happens on my A100 (of course).
Official PEFT issue I opened for this: https://github.com/huggingface/peft/issues/685
Can we use the old models, or how does this work? Do we just load the old model with the new tokenizer? ----- Brando Miranda Ph.D. Student Computer Science, Stanford University...
Got it, thanks! I will assume OpenLLaMA v1 is basically unusable for code generation (what I want) and use only v2.
Also, the full proof term would be awesome. Would it also be possible to extract/request the proof terms of the holes?
@ejgallego how is this going? :) Is any help needed, maybe?
It seems that I just need to wait for the official HF permission, not only Meta's?
https://github.com/IBM/powerai/issues/268