etemiz
Any news on this? My setup:
```
>>> torch.cuda.is_available()
True
>>> torch.version.cuda
>>> torch.cuda.device_count()
3
>>> torch.__version__
'2.3.1+rocm6.0'
```
Open `substack_scraper.py` and add `time.sleep(5)` after the `essays_data.append( .... )` call.
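A minimal sketch of that change, assuming the scraper loops over essay URLs and appends each result (the loop structure, `urls`, and `fetch_essay` are assumptions for illustration; only the file name and `essays_data.append(...)` come from the comment):

```python
import time

essays_data = []

def scrape_all(urls, fetch_essay):
    for url in urls:
        essays_data.append(fetch_essay(url))
        # Pause between requests so the site doesn't rate-limit or block us.
        time.sleep(5)
    return essays_data
```

The point is simply to throttle requests; 5 seconds is a conservative delay and can be tuned down if the site tolerates faster scraping.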
Tried a lot of things and I'm still getting that "Failed to create dynamic compiled" error. It's not related to llama-factory. When I run `pip install unsloth==2025.2.14 unsloth_zoo==2025.2.7`, I get a different error: `_pickle.UnpicklingError: Weights...`
I found it:
```
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0
pip install marker-pdf
```
Yes I see them in the VRAM.
Patiently waiting for the merge
Getting a very similar error with a bitsandbytes 4-bit quant on 2x A6000 GPUs, same model (Llama-4-Scout-17B-16E-Instruct):
```
llm = LLM(model=fn, trust_remote_code=True, quantization="bitsandbytes")
```
Someone did: https://x.com/eliebakouch/status/1815773597744451955
Can I train a 72B model with 2x A6000 (2x 48GB)?