Yili Zhao

Results 31 comments of Yili Zhao

@akanimax I will do image resizing with the code below:
```
datadir = os.path.join(args.data, 'images')
dataset = datasets.ImageFolder(
    datadir,
    transforms.Compose([
        transforms.Resize((128, 128)),
        transforms.ToTensor(),
    ]))
```
- May I ask how can...
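`transforms.Resize((128, 128))` interpolates every input image down to a fixed 128 × 128 grid. As a rough illustration of what fixed-size downsampling means (not the library's implementation — torchvision uses proper interpolation), a nearest-neighbour sketch in plain Python:

```python
def resize_nearest(pixels, out_w, out_h):
    """Nearest-neighbour resize of a 2D grid (list of rows) to out_w x out_h."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# A 4x4 "image" downsampled to 2x2: each output pixel samples one source pixel.
grid = [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [3, 3, 4, 4],
        [3, 3, 4, 4]]
small = resize_nearest(grid, 2, 2)
print(small)  # [[1, 2], [3, 4]]
```

The real `Resize` transform additionally handles multi-channel PIL images and anti-aliasing; this sketch only shows the index mapping.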

I use pytorch 1.0, and below is my training code:
```
device = th.device("cuda" if th.cuda.is_available() else "cpu")
data_path = "data/"

def setup_data():
    datadir = os.path.join(data_path, 'images')
    dataset = datasets.ImageFolder(...
```

Hi @akanimax The explanation of the `depth` parameter is really helpful to me, and I also added `Normalize` following your suggestion. I will let you know when the training is complete. Thanks!

@akanimax The training completed, and here is one of the last generated samples: ![gen_5_30_366](https://user-images.githubusercontent.com/319796/50714990-f3633180-10b5-11e9-97bc-7caee432722c.png) My dataset has 100 classes, with 60 images per class. The images are all birds, but every...

@akanimax I made these changes to the hyper-parameters:

- `num_epochs = 256`
- `batch_sizes = 32`
- `latent_size = 512`

I have 2 1080Ti GPUs, but I can only...
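The changed values can be kept together in one place; a minimal sketch (the dict name and grouping are mine, only the values come from the comment above):

```python
# Hyper-parameter overrides for this run. The values are from the comment;
# the dict itself is just an illustrative way to collect them.
hparams = {
    "num_epochs": 256,
    "batch_sizes": 32,
    "latent_size": 512,
}

for name, value in hparams.items():
    print(f"{name} = {value}")
```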

@akanimax The training has still not completed (2 days), and it encountered a GPU memory error:
```
Currently working on Depth: 5
Current resolution: 128 x 128
Epoch: 1
RuntimeError:...
```

@akanimax @jiwidi I think a GTX 1080 Ti with 11 GB of memory still can't support training on my custom dataset. :disappointed: I will try to decrease `batch_sizes` to 16 and...

@akanimax The sample below was generated with these settings:
```
depth = 6
num_epochs = [256, 256, 256, 256, 256, 256]
fade_ins = [50, 50, 50, 50, 50, 50]
batch_sizes...
```

@akanimax There were some errors with this setting:
```
depth = 7
num_epochs = [256, 256, 256, 256, 256, 256, 256]
fade_ins = [50, 50, 50, 50, 50, 50, 50]...
```
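From the earlier training log ("Currently working on Depth: 5 / Current resolution: 128 x 128"), the resolution at each depth stage appears to follow `resolution = 2 ** (depth + 2)`, so `depth = 7` would push training to 512 × 512, which explains the much higher memory demand. A sketch of that relationship (the formula is inferred from the log line, not taken from the library's source):

```python
# Resolution per depth stage, inferred from the log above:
# depth 5 -> 128 x 128, so resolution = 2 ** (depth + 2).
def resolution_at(depth):
    return 2 ** (depth + 2)

for d in range(1, 8):
    print(f"depth {d}: {resolution_at(d)} x {resolution_at(d)}")
```

If the inference holds, each extra depth stage doubles the side length and roughly quadruples per-image activation memory.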

I confirmed the segmentation fault issue is solved with numpy 1.26.4 and open3d installed with pip on Windows 11.