mxkai
Hi, I have a question about CelebA-HQ. Can you tell me how the dataset is divided into a training set and a test set?
I want to calculate the scores of the original labels, but I am running into some difficulties. I use the [cityscapesScripts](https://github.com/mcordts/cityscapesScripts) script to generate the "_gtFine_labelTrainIds.png" files, and then replace the FCN output...
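For reference, here is a minimal sketch of generating a "_gtFine_labelTrainIds.png" image from a "_gtFine_labelIds.png" annotation with the cityscapesscripts helpers (essentially what the repo's preparation/createTrainIdLabelImgs.py script does). The pip install and the file names below are placeholders/assumptions, not the exact setup used here:

```python
# Sketch: map Cityscapes label IDs to train IDs (19 evaluated classes, 255 = ignore).
# Assumes numpy, Pillow, and cityscapesscripts (pip install cityscapesscripts) are installed.
import numpy as np
from PIL import Image
from cityscapesscripts.helpers.labels import id2label

def label_ids_to_train_ids(label_id_png, train_id_png):
    ids = np.array(Image.open(label_id_png), dtype=np.uint8)
    train_ids = np.full_like(ids, 255)        # start with the ignore index everywhere
    for label_id, label in id2label.items():
        if 0 <= label.trainId < 255:          # keep only the evaluated classes
            train_ids[ids == label_id] = label.trainId
    Image.fromarray(train_ids).save(train_id_png)

label_ids_to_train_ids(
    "aachen_000000_000019_gtFine_labelIds.png",       # placeholder input path
    "aachen_000000_000019_gtFine_labelTrainIds.png",  # placeholder output path
)
```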
I want to train photo2label on Cityscapes. Can you tell me whether the training label is "_gtFine_color.png" or "_gtFine_labelTrainIds.png"? (Example images of _gtFine_color.png and _gtFine_labelTrainIds.png were attached.)
Is the semantic color of the generated image random? Can it be made to correspond to the label?
I wanted to evaluate small-scale images, but there was a problem: `could not convert BatchDescriptor {count: 1 feature_map_count: 2048 spatial: 0 1 value_min: 0.000000 value_max: 0.000000 layout: BatchDepthYX} to cudnn...`
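For context on the quoted error, `spatial: 0 1` indicates that the network's final 2048-channel feature map ended up with a zero-sized spatial dimension. A rough sketch of how a very small input can cause this, assuming a backbone with a total downsampling stride of 32 (the actual stride of the evaluation network may differ):

```python
# Rough sketch: a small input can collapse to a zero-sized feature map after downsampling.
# The total stride of 32 is an assumption, not the verified value for this network.
def feature_map_size(height, width, total_stride=32):
    return height // total_stride, width // total_stride

print(feature_map_size(256, 256))  # (8, 8) -> valid feature map
print(feature_map_size(24, 48))    # (0, 1) -> matches the "spatial: 0 1" in the error
```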