Gaurav Parmar
Hi! Unfortunately, I have not tested the code on a Windows machine. Feel free to document it if you are able to get multi-GPU training working on Windows! I am sure...
We used A6000 GPUs for training all of our models. However, it should be possible to train with smaller GPUs if you reduce the batch size and use other memory...
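The original reply is truncated above, so here is a minimal sketch of the kind of generic memory-saving switches that are commonly used with diffusers UNets; the base model checkpoint is an assumption on my part, and these are not necessarily the exact techniques the truncated comment went on to name:
```
from diffusers import UNet2DConditionModel

# Assumed base checkpoint for illustration; use whatever your training setup loads.
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/sd-turbo", subfolder="unet"
)

# Trade extra compute for memory by recomputing activations during the backward pass.
unet.enable_gradient_checkpointing()

# Memory-efficient attention kernels (requires the xformers package to be installed).
unet.enable_xformers_memory_efficient_attention()
```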
Ah, this is because we set three different LoRA adapters for the UNet, which requires calling the `set_adapter` method explicitly. Consider a code block like the one below for example:
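The code block in the original comment is truncated, so this is a minimal sketch of the pattern it describes: attaching multiple named LoRA adapters to a diffusers UNet and activating them with `set_adapter`. The adapter names, rank, target modules, and base checkpoint here are all hypothetical placeholders, not the ones used in this repo:
```
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/sd-turbo", subfolder="unet"
)

# Hypothetical LoRA config; the repo uses its own rank and target modules.
lora_config = LoraConfig(r=8, target_modules=["to_q", "to_k", "to_v", "to_out.0"])

# Attach three separate adapters under distinct names.
unet.add_adapter(lora_config, adapter_name="adapter_a")
unet.add_adapter(lora_config, adapter_name="adapter_b")
unet.add_adapter(lora_config, adapter_name="adapter_c")

# With more than one adapter registered, you must explicitly select
# which adapter(s) are active before running a forward pass.
unet.set_adapter(["adapter_a", "adapter_b", "adapter_c"])
```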
We have not uploaded the day2foggy model. -Gaurav
I ran the unpaired training example again this morning and got the expected results. This is the accelerate config file I use:
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
...
```
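For reference, a config like this is typically generated interactively with `accelerate config` and is then picked up by `accelerate launch` automatically, or passed explicitly via `accelerate launch --config_file <path>` followed by the training script from this repo's training docs.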
Hi @tfriedel, based on your dataset and task, you can try training your model on random crops at training time and on the full resolution at test time. This should enable you...
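A minimal sketch of that train/test split of transforms, using torchvision; the crop size is a hypothetical value and the rest of the pipeline is omitted:
```
import torchvision.transforms as T

# Hypothetical crop size; pick one that fits your GPU memory
# (input images must be at least this large).
train_transform = T.Compose([
    T.RandomCrop(512),
    T.ToTensor(),
])

# At test time, skip the crop so the model sees the full-resolution image.
test_transform = T.Compose([
    T.ToTensor(),
])
```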
Yeah, we plan to release the training code as well!
The training code for both the paired (pix2pix-turbo) and unpaired (CycleGAN-Turbo) models is uploaded! Check out the corresponding docs: [README-paired](https://github.com/GaParmar/img2img-turbo/blob/main/docs/training_pix2pix_turbo.md) [README-unpaired](https://github.com/GaParmar/img2img-turbo/blob/main/docs/training_cyclegan_turbo.md) @soroush-abbasi @radames @LLSean @kulikovv @yj7082126 @Joyies @joansc @DanielG1010 @Justones
We did not investigate this question in this paper.
Yup, the results will not suffer if the LoRA adapter is not added to the `conv_in` and `skip_conv` layers.