Using bos_token, but it is not set yet.
Hello,
I'm trying this on 2x 40GB A100s. I'm not sure whether I should be ignoring this message, or why I'm getting it in the first place. Also, is ~16 hours reasonable on this setup, or should I expect faster?
5%|▍ | 499/10000 [58:36<16:14:29, 6.15s/it] Using bos_token, but it is not set yet.
Per the multi-GPU instructions, I'm supposed to set something like this:
device_map = "auto"
max_memory = {i: '46000MB' for i in range(torch.cuda.device_count())}
In qlora.py I see the following:
max_memory = f'{args.max_memory_MB}MB'
max_memory = {i: max_memory for i in range(n_gpus)}
device_map = "auto"
Should I change this to:
device_map = "auto"
max_memory = {i: '40000MB' for i in range(torch.cuda.device_count())}
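For what it's worth, the two snippets should end up equivalent if `args.max_memory_MB` is set to 40000, since `n_gpus` in qlora.py is derived from the visible CUDA devices. A minimal sketch of how that per-GPU memory map is built (the `from_pretrained` call is commented out because it needs model weights; the model name shown is just an example):

```python
# Sketch: build the per-GPU max_memory map that qlora.py passes to
# from_pretrained via accelerate's device_map machinery.

def build_max_memory(n_gpus: int, max_memory_mb: int = 40000) -> dict:
    """Map each GPU index to its memory cap, e.g. {0: '40000MB', 1: '40000MB'}."""
    per_gpu = f'{max_memory_mb}MB'
    return {i: per_gpu for i in range(n_gpus)}

max_memory = build_max_memory(2)  # 2x A100 40GB
device_map = "auto"

# model = AutoModelForCausalLM.from_pretrained(
#     "huggyllama/llama-7b",        # example model name
#     device_map=device_map,
#     max_memory=max_memory,
# )
print(max_memory)
```

So rather than editing the dict comprehension, passing `--max_memory_MB 40000` on the command line should have the same effect.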
Thanks
For the bos_token setting, you could refer to qlora.py line 671.
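The warning itself comes from the tokenizer having no `bos_token` attribute set. A hedged sketch of the usual fix (not verified against that exact line of qlora.py; `<s>` is LLaMA's conventional BOS string, and the `add_special_tokens` call is commented out because it needs a loaded tokenizer):

```python
# Sketch: add a BOS token explicitly when the tokenizer lacks one,
# which is what silences "Using bos_token, but it is not set yet."
DEFAULT_BOS_TOKEN = "<s>"  # assumption: LLaMA-style BOS string

bos_token = None  # stand-in for tokenizer.bos_token being unset
special_tokens = {}
if bos_token is None:
    special_tokens["bos_token"] = DEFAULT_BOS_TOKEN

# tokenizer.add_special_tokens(special_tokens)  # requires a loaded tokenizer
print(special_tokens)
```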