Teerapol Saengsukhiran
I cannot use TaskWeaver with the API hosted by oobabooga/text-generation-webui. However, it does work with the llama.cpp model loader in oobabooga/text-generation-webui, which I cannot use well because AMD ROCm 5.7 has...
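As a rough sketch (the URL, port, and model name below are placeholders, not from the original report), the webui's OpenAI-compatible endpoint can be probed directly with the `openai` client; this is the same kind of endpoint TaskWeaver would need to reach:

```python
# Minimal sketch: query text-generation-webui's OpenAI-compatible API directly
# to confirm the server responds before pointing TaskWeaver at it.
# Assumes the webui was started with its API enabled; adjust host/port as needed.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # placeholder; the webui serves whatever model is currently loaded
    messages=[{"role": "user", "content": "Say hello."}],
    max_tokens=32,
)
print(resp.choices[0].message.content)
```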
Please consider adding ROCm support for AMD GPUs.
Please consider supporting AMD GPUs, including consumer-grade cards like the gfx1100 (7900 XTX).
Please consider adding ROCm support for AMD RDNA3 and other AMD GPUs.
Can you provide the code you used for the testing in Appendix F (representation analysis)? I want to test it with Llama 3 and Mistral v0.3.
Please consider adding ROCm support for AMD GPUs (consumer 7900 XTX and MI series).
### Problem Description
rocminfo shows "ROCk module is NOT loaded, possibly no GPU devices". I previously used ROCm 6.1 on Ubuntu 24.04.4 (a previous version, but the number is somehow larger...
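In case it helps narrow things down, here is a small sketch (not from the original report) for checking whether a ROCm build of PyTorch can see the device at all, independently of rocminfo:

```python
# Hedged diagnostic sketch: on ROCm builds of PyTorch, torch.version.hip is set
# and ROCm devices are exposed through the torch.cuda API.
import torch

print("HIP/ROCm build:", torch.version.hip)          # None on CUDA/CPU-only builds
print("Device visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device name:", torch.cuda.get_device_name(0))
```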

I tried to load LoRA training adapters from a DeepSpeed checkpoint. Directory listing:
```
ls Bunny/checkpoints-llama3-8b/bunny-lora-llama3-8b-attempt2/checkpoint-6000
total 696M
-rw-r--r-- 1 [email protected] CU 775 Nov 18 11:03 adapter_config.json
-rw-r--r-- 1 [email protected] CU 686M...
```
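For reference, a minimal sketch of loading a PEFT-format LoRA adapter from such a checkpoint folder; the base model name below is a placeholder, and a DeepSpeed ZeRO checkpoint may first need converting with the zero_to_fp32.py script DeepSpeed places in the checkpoint directory:

```python
# Minimal sketch, assuming the folder holds a standard PEFT adapter
# (adapter_config.json plus adapter weights); base model name is a placeholder.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = PeftModel.from_pretrained(
    base,
    "Bunny/checkpoints-llama3-8b/bunny-lora-llama3-8b-attempt2/checkpoint-6000",
)
model = model.merge_and_unload()  # optionally fold the LoRA weights into the base model
```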
With continuous_training on the Bunny VLM, do we still need to specify the vision_tower path? If we do point it to the SigLIP path, will it use those untrained weights instead of the vision_tower that...