llama-recipes
Support for AMD GPUs in the Llama Recipes notebook quickstart
🚀 The feature, motivation and pitch
I'm using an RX 7900 XTX with PyTorch 2.2.0.dev20231005+rocm5.7. Judging from the error message below, the problem is that bitsandbytes does not support AMD GPUs (yet). I would like to ask for a version of the Quick Start guide that supports AMD GPUs as well, not just Nvidia GPUs.
I get the following error when I try to run:

model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", load_in_8bit=True, device_map='auto', torch_dtype=torch.float16)
Traceback (most recent call last):
File "/home/gabriel/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1282, in _get_module
return importlib.import_module("." + module_name, self.name)
File "/usr/lib/python3.10/importlib/init.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "
Alternatives
No response
Additional context
No response
@PatchouliPatch we are looking into AMD GPUs and will keep you posted. As for the bitsandbytes issue with AMD, it might help to open an issue on their repo.
We have tried this script on AMD GPUs and it works for LoRA and full fine-tuning. We have not tried bitsandbytes.
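For reference, here is a minimal sketch of a way around the failure above (not the maintainers' script): drop load_in_8bit=True, the flag that pulls in bitsandbytes, and load the weights in fp16 instead. It assumes a ROCm build of PyTorch, the accelerate package installed for device_map="auto", and enough VRAM for the fp16 weights (roughly 14 GB for Llama-2-7b, which fits on an RX 7900 XTX).

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "meta-llama/Llama-2-7b-hf"

tokenizer = LlamaTokenizer.from_pretrained(model_id)

# Loading in fp16 avoids the 8-bit quantization path entirely, so
# bitsandbytes (CUDA-only at the time of this issue) is never imported.
model = LlamaForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # requires accelerate; places weights on the ROCm device
    torch_dtype=torch.float16,  # half-precision weights instead of load_in_8bit
)

# Quick smoke test.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

True 8-bit loading on AMD would still need ROCm support in bitsandbytes itself, which is why opening an issue on their repo was suggested above.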
Can you share your flow/script for how to fix the bitsandbytes issue?
Thanks!