chavinlo
                                    Lowest I managed to get was 32GB with batch size 1... but that was ages ago
> Same, I was hoping there would be some flag configuration that would allow using a GPU (or multiple GPUs) with less memory. Since it's based on PyTorch you might...
> Confirmed that everything works as expected and issue was with the size of my dataset. How large was it, just wondering?
one sec
Hm... I don't think it uses much vram, although I haven't measured it. Could you send me your amount of VRAM, and how much it tries to allocate? (the error)
That is way too large. I will try writing a script to separate and reconstruct in chunks.
> i have 8gb. The dimensions of the png I'm trying to convert are 512x71680px. `torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.36 GiB (GPU 0; 8.00 GiB total...
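The chunked approach mentioned above could look something like this: a rough sketch (not the actual script, which wasn't written yet at this point) that splits a tall image into horizontal strips with Pillow, so each strip can be converted separately and the results stitched back together. The 512-pixel chunk height is just an illustrative choice matching the image width reported above.

```python
from PIL import Image

def split_vertical(img, chunk_height=512):
    """Yield horizontal strips of the image, top to bottom."""
    w, h = img.size
    for top in range(0, h, chunk_height):
        yield img.crop((0, top, w, min(top + chunk_height, h)))

def reconstruct(chunks, width):
    """Stack processed strips back into one tall image."""
    chunks = list(chunks)
    total_h = sum(c.height for c in chunks)
    out = Image.new("RGB", (width, total_h))
    y = 0
    for c in chunks:
        out.paste(c, (0, y))
        y += c.height
    return out
```

Each strip would then fit comfortably in 8 GB of VRAM instead of allocating the whole 512x71680 image at once.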
I... see... nothing...? If it's interesting enough then sure, I could try looking into it. But anyone's free to fork this repo and implement new features.
Quick example of how to use attachments with GPT-4 Vision:

```python
from UnlimitedGPT import ChatGPT

session_token = "token_goes_here"
api = ChatGPT(
    session_token=session_token,
    clipboard_retrival=False,  # WSL
    model=2,  # GPT-4
)
x...
```
and so...