vadim epstein


@johndpope alas, that mega.nz storage is not permanent; it expired after a while, leaving only 10gb

pardon for not looking into the answered issues; i found [this](https://github.com/vlomme/Multi-Tacotron-Voice-Cloning/issues/6). can you elaborate a bit on why it's not suitable? (or better, what should be done to use it?)

are there specific recommendations about lambda_ds in general? should it be equal to the domain count (minus one?)? training 7 domains with lambda_ds=2 results in rather subtle variety
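
for context, here's roughly what lambda_ds scales as i understand it (a sketch in my own words, not a quote of the repo's solver code): the diversity-sensitive term pushes two translations of the same input, made with different style codes, apart; iirc its weight is also decayed towards zero over training, so larger values mainly push diversity harder early on.

```python
import torch

def diversity_loss(x_fake, x_fake2):
    # mean L1 distance between two outputs generated from the same input
    # but different style codes; the generator tries to maximize it, so it
    # enters the total generator loss with a minus sign, scaled by lambda_ds
    return torch.mean(torch.abs(x_fake - x_fake2))
```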

@iPsych i have trained about a dozen models with 7-8 domains on two configs: `img_size 512 batch 1` and `img_size 256 batch 3` (these are the maximum batch sizes for the original code on 11gb...

try smaller batching; as an example, i've trained a 512x512 model on an 8gb gpu with batch_size=1

@LeonJWH alas, performance-wise batch size seems to be quite important. i've ended up at size 256 and batch 4 (the most i could get on an 11gb geforce card) - the...

@doantientai i've added `del x_fake2, y_org, y_trg, s_trg2, s_trg, s_pred, out` before [this line](https://github.com/clovaai/stargan-v2/blob/2a74f3dcd7ad3d1f7f018e7d97bd7622787acc16/core/solver.py#L259), that's probably all
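
the idea in a minimal self-contained form (a generic pytorch loop, not the actual stargan-v2 code; the variable names only echo the ones above): drop references to large intermediates so the allocator can reuse their memory during whatever heavy step runs next in the iteration.

```python
import torch
import torch.nn as nn

device = 'cuda' if torch.cuda.is_available() else 'cpu'
net = nn.Linear(512, 512).to(device)
opt = torch.optim.Adam(net.parameters())

for step in range(100):
    x = torch.randn(4, 512, device=device)
    out = net(x)
    loss = out.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

    # analogous to the `del x_fake2, y_org, ...` line above: without this,
    # `out` and `loss` stay alive through the rest of the loop body
    # (sampling, evaluation, logging), raising peak gpu memory
    del x, out, loss
```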

thanks for the info! alas, i'm on windows, and it's not that straightforward to run docker with gpu support on it (afaik, it still requires some special windows version from the...

@ppries the number of latent layers in progressive networks depends on the network resolution: 18 is for 1024x1024 models (such as ffhq), 14 is for 256x256 (such as bedrooms). you need to change...
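
that count follows from the progressive architecture: two per-layer latent vectors for every resolution block from 4x4 up to the output size. a quick sanity check (my own sketch, not code from the repo):

```python
import math

def num_latent_layers(resolution: int) -> int:
    # two per-layer latent vectors for each resolution block from 4x4 up to `resolution`
    return 2 * (int(math.log2(resolution)) - 1)

assert num_latent_layers(1024) == 18  # e.g. ffhq
assert num_latent_layers(256) == 14   # e.g. lsun bedrooms
```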