V. K. Cody Bumgardner
@haotian-liu What is the process to convert float32 to float16? I have a 13B fine-tuned model that is 50G.
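For a Hugging Face checkpoint the usual route is to load the model and call `model.half()` (or pass `torch_dtype=torch.float16` to `from_pretrained`), then `save_pretrained`. The storage math itself is simple, as this minimal sketch shows with a toy NumPy tensor standing in for a real weight matrix:

```python
import numpy as np

# Toy stand-in for one weight tensor; a 13B model is a large collection of these.
weights_fp32 = np.ones((1024, 1024), dtype=np.float32)

# Downcast to half precision; on-disk/in-memory size drops by exactly half,
# which is why a ~50G float32 checkpoint lands near ~25G in float16.
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4194304 bytes
print(weights_fp16.nbytes)  # 2097152 bytes
```

Note the cast is lossy (float16 has ~3 decimal digits of precision), which is normally acceptable for inference weights.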
@anonymous-atom Here is an example dataset: https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/detail_23k.json You just need to take your data and make it conform to this set. You can then use the build scripts, substituting your...
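A minimal sketch of conforming custom data to that layout, assuming the `detail_23k.json` schema (a list of records with `id`, `image`, and a `conversations` list alternating `human`/`gpt` turns); the source record fields here are hypothetical placeholders for your own dataset:

```python
import json

# Hypothetical record from your own dataset (field names are placeholders).
my_record = {
    "img_path": "images/0001.png",
    "question": "Describe the image in detail.",
    "answer": "The image shows ...",
}

# Map it onto the LLaVA-Instruct style seen in detail_23k.json
# (schema assumed from inspecting that file).
llava_record = {
    "id": "0001",
    "image": my_record["img_path"],
    "conversations": [
        {"from": "human", "value": "<image>\n" + my_record["question"]},
        {"from": "gpt", "value": my_record["answer"]},
    ],
}

# The build scripts expect a JSON list of such records.
print(json.dumps([llava_record], indent=2))
```

From there you can point the training/build scripts at the resulting JSON file in place of the released dataset.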
Using multiple Lora weights independently and merged would be a very important feature.
Information on fine-tuning would be great.
I have access to a 4 x A100 w/ 80G if you want me to test something.
> > @hu-po > > Hi, nice work sharing the finetune script. Is "FragmentDataset" the dataset released officially at https://segment-anything.com/dataset/index.html? > > No, it's a custom dataset...
> Hi Vishaal, thanks for sharing the code. Indeed the fork has changed quite a fair bit. Seems like the mm-projector is removed, and the pretrained model as well. I...
With the same config I get OOM while training on 5 nodes with 8 x H100 each. Every config I have tried other than the example 4-bit QLoRA results...
The only thing I can get working is the provided 4-bit QLoRA config; anything else (LoRA, QLoRA 8-bit, etc., or fp16) results in OOM or some other error. I...
@manthey I think I have finally poked around enough to see what is going on. I have a test suite that replicates in part how the Girder (file -> file...