Bekhzod

Results: 13 comments of Bekhzod

I had the same issue when I wanted to load the model in 8-bit. Loading the model in 4-bit solved my problem: `load_in_4bit=True`
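A minimal sketch of what that flag change looks like, assuming the Hugging Face `transformers` + `bitsandbytes` stack (the model name is a placeholder; the actual `from_pretrained` call is commented out because it needs a GPU and the downloaded weights):

```python
# Quantization kwargs for AutoModelForCausalLM.from_pretrained.
# Switching load_in_8bit=True to load_in_4bit=True is the fix described above.
quant_kwargs = {
    "load_in_4bit": True,   # 4-bit loading worked where 8-bit raised an error
    "device_map": "auto",   # let accelerate place layers across available devices
}

# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained("some/model-id", **quant_kwargs)

print(quant_kwargs["load_in_4bit"])  # prints True
```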

@NiuDaVinci Try running the code with a smaller mini-batch size
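For illustration, a tiny framework-agnostic sketch of mini-batching: shrinking `batch_size` is the usual first fix for GPU out-of-memory errors, because peak memory scales with the number of samples processed at once (the data here is a placeholder list, not the original dataset):

```python
def batches(data, batch_size):
    """Yield successive mini-batches; a smaller batch_size lowers peak memory."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

# Halving the batch size doubles the number of steps but halves per-step memory.
print(list(batches(list(range(10)), 4)))  # prints [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```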

Hi, @marco54654! Can you please share the downloaded pretrained models (unet_128_standard & unet_512_standard)? I cannot download them since the provided link in the repo does not function properly :(

Hi, guys! @taizan, @kanekirsten07, @Ulf3000, @6DammK9, @vhall413, could anyone please share the downloaded pretrained models (unet_128_standard & unet_512_standard)? I cannot download them since the provided link in the repo does...

Hey guys, @karolzak, @Abbsalehi! Could you please provide a link for the "e-commerce" dataset described in the blog? The link provided on Kaggle does not seem to exist anymore...

Oh, I guess the reason was that the argument passed to the Upsamle2 function (ngf * mult) is a very big number, so it raised an error. Changing it to a scale_factor of...
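A hypothetical, torch-free illustration of the mix-up described above, assuming an upsampling layer in the style of `nn.Upsample` (the names and values here are placeholders, not the repo's actual code): a channel count like `ngf * mult` passed as the target *size* forces a huge output, whereas a `scale_factor` only multiplies the input's spatial size.

```python
# Placeholder values: ngf * mult is a channel count, h is the input height.
ngf, mult, h = 64, 8, 32

size_arg = ngf * mult        # 512, mistakenly interpreted as the target spatial size
out_with_size = size_arg     # output forced to 512 pixels regardless of input
out_with_scale = h * 2       # scale_factor=2 just doubles the 32-pixel input

print(out_with_size, out_with_scale)  # prints 512 64
```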

@wzh0328, because when the model or data is moved to the GPU (model.cuda() or data.cuda()), it will be placed on GPU 0 by default. If you want to change the GPU, the...
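A short sketch of one common way to pick a different GPU, assuming PyTorch: restrict the visible devices via the `CUDA_VISIBLE_DEVICES` environment variable before `torch` is imported (the torch calls are commented out since they require a CUDA machine):

```python
import os

# Make only physical GPU 1 visible; inside this process it appears as cuda:0.
# This must be set before importing torch.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# Alternatively, select a device explicitly after import:
# import torch
# device = torch.device("cuda:1")
# model = model.to(device)

print(os.environ["CUDA_VISIBLE_DEVICES"])  # prints 1
```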

@AlessandroMondin, thank you very much for your reply! As you mentioned, debugging the __getitem__ function solved the problem. It was due to box_format: by default, box_format is "coco", and changing it...
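For context, a minimal sketch of why box_format matters (the function name is hypothetical, not from the repo): COCO boxes are `[x_min, y_min, width, height]`, while corner-style formats (e.g. Pascal VOC) are `[x1, y1, x2, y2]`, so mixing them up silently corrupts every IoU computation.

```python
def coco_to_corners(box):
    """Convert a COCO box [x_min, y_min, w, h] to corners [x1, y1, x2, y2]."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

# A 30x40 box anchored at (10, 20):
print(coco_to_corners([10, 20, 30, 40]))  # prints [10, 20, 40, 60]
```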

@AlessandroMondin I am afraid these weights are for v8 (the ones below are for v7). When I tried these weights, it threw an error as below. ![image](https://user-images.githubusercontent.com/50166164/233308584-ac39c347-9951-4c7e-92ae-46801171c205.png) I thought it is because...