Basu Jindal

Results: 39 comments by Basu Jindal

> I had success at last, after rebooting the laptop, with `--precision full`. But I can do only one sample each time (I think it is pretty good for my configuration)...

Hi, you can specify the GPU as `--device cuda:1` or `--device cuda:0` in the CLI version. For the GUI, just enter `cuda:1` or `cuda:0` in the webUI textbox.
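For reference, a minimal sketch of the CLI usage; the script path and prompt below are illustrative placeholders, only the `--device` flag comes from the comment above:

```
# Run txt2img on the second GPU; use cuda:0 for the first GPU
python optimizedSD/optimized_txt2img.py --prompt "a photo of a lighthouse" --device cuda:1
```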

Hi, you can use a larger batch size to reduce the inference time per image. Although I am working on reducing the inference time further, it's not very straightforward. Hoping...
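As a rough sketch of what a larger batch size looks like on the CLI, assuming the same illustrative script path and that `--n_samples` is the batch-size flag referred to further down:

```
# Generate 4 images in one run; the per-image time drops versus 4 separate runs
python optimizedSD/optimized_txt2img.py --prompt "a photo of a lighthouse" --n_samples 4
```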

Hi, I have added an optional argument `--turbo`. Using it will reduce the inference time to 25 sec per image for txt2img and 15 sec per image for img2img (excluding the...

Hi, you can use the `--small_batch` flag. Currently, the model sends the images to the UNet model one by one irrespective of the `--n_samples` value; using this flag will change...
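A minimal sketch combining the two flags named in this comment (the script path and prompt remain placeholders):

```
# Batch the samples through the UNet instead of sending them one by one
python optimizedSD/optimized_txt2img.py --prompt "a photo of a lighthouse" --n_samples 4 --small_batch
```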

Hi, I have added an optional argument `--turbo`. This is most effective when using a small batch size. It will reduce the inference time to 25 sec per image for...
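A hedged example of `--turbo` with a small batch, matching the advice above (script path and prompt are again placeholders):

```
# --turbo reduces the per-image inference time; most effective at small batch sizes
python optimizedSD/optimized_txt2img.py --prompt "a photo of a lighthouse" --n_samples 1 --turbo
```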

I am getting a 403 forbidden error: ``` HTTP request sent, awaiting response... 403 Forbidden 2023-03-02 05:39:58 ERROR 403: Forbidden. ``` Any suggestions would be appreciated. :/

> > I am getting a 403 forbidden error: ``` HTTP request sent, awaiting response... 403 Forbidden 2023-03-02 05:39:58 ERROR 403: Forbidden. ``` ...

> > > > I am getting a 403 forbidden error: ``` HTTP request sent, awaiting response... 403 Forbidden ...