0xbitches
@ItsLogic Right now I am forcing device_map to use only the GPU, i.e. adding `device_map={'': 0}` to `PeftModel.from_pretrained`, which worked. Looks like the issue is that PEFT's load will auto...
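For reference, a minimal sketch of what I mean (the base checkpoint and adapter names here are placeholders, not a recommendation; the point is passing `device_map={'': 0}` to both loads so nothing gets auto-offloaded to CPU/disk):

```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

# Force every module onto GPU 0 instead of letting accelerate's
# auto device mapping spill layers to CPU or disk.
base_model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",   # placeholder base checkpoint
    torch_dtype=torch.float16,
    device_map={"": 0},
)
model = PeftModel.from_pretrained(
    base_model,
    "tloen/alpaca-lora-7b",            # placeholder LoRA adapter
    device_map={"": 0},
)
```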
@collant Were the screenshots from a custom Gradio UI? Looks neat. Sadly, I agree that 7B (and 13B) are nowhere near sufficient for anything too serious.
@baleksey Trained a 13B LoRA with one epoch as well; my lowest loss was around 0.75. Text-gen results don't feel much different from 7B to me, though.
Can report the same as @ItsLogic: e1 and e3 feel roughly the same, probably because the losses are both ~0.78. Somewhat related, when I was trying the model out with...
Is there a 30B 4-bit LoRA out there? I think I read somewhere that fine-tuning in 4-bit might not be supported?
The paper mentioned they used CLIP to handle text prompts:

> We represent points and boxes by positional encodings [95] summed with learned embeddings for each prompt type and free-form...
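To make the quoted scheme concrete, here is a toy sketch of "positional encodings summed with learned embeddings for each prompt type" for point prompts. This is my own illustration, not SAM's actual code: the class, dimensions, and the random-Fourier-feature positional encoding are all assumptions.

```python
import torch
import torch.nn as nn

class PointPromptEncoder(nn.Module):
    """Toy prompt encoder: a positional encoding of a point's (x, y)
    coordinates summed with a learned embedding per prompt type
    (e.g. foreground point vs. background point)."""

    def __init__(self, embed_dim: int = 256, num_prompt_types: int = 2):
        super().__init__()
        # Random Fourier features as the positional encoding
        # (an assumption here; just one common way to embed coordinates).
        self.register_buffer("freqs", torch.randn(2, embed_dim // 2))
        self.type_embed = nn.Embedding(num_prompt_types, embed_dim)

    def forward(self, coords: torch.Tensor, prompt_type: torch.Tensor) -> torch.Tensor:
        # coords: (N, 2) normalized to [0, 1]; prompt_type: (N,) int labels
        proj = 2 * torch.pi * coords @ self.freqs           # (N, embed_dim // 2)
        pos = torch.cat([proj.sin(), proj.cos()], dim=-1)   # (N, embed_dim)
        return pos + self.type_embed(prompt_type)

encoder = PointPromptEncoder()
tokens = encoder(torch.rand(3, 2), torch.tensor([1, 0, 1]))
print(tokens.shape)  # torch.Size([3, 256])
```

Free-form text prompts would instead go through the off-the-shelf CLIP text encoder the quote refers to.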
> Right, I ran it in the base "Grounded-Segment-Anything" directory. After doing so, I noticed the GroundingDino/groundingdino directory now has a "_C.cp310-win_amd64.pyd" file which was previously not there. Here's how my...
@Andy1621 Can you specify what you did?