Results 33 comments of Wizyoung

@fariquelme I don't think YOLO is insensitive to unbalanced classes; otherwise the author would not have designed the ignore_mask in the loss function. I've said that in the readme file: "These...

@fariquelme Yes, I guess the author implemented it incorrectly or did not tune it to better parameters. In my experiments, alpha matters more than gamma, and alpha needs to...

Your data is not clean. Maybe your image path is not valid. Check it yourself.

Your data is not "clean". Perhaps some boxes lie outside the image? Many people have hit this problem before; please search the existing issues.

I think that might be sort of difficult.

Can you leave your WeChat number for communication?

That's weird because I'm also using Python 3.6.7 on Ubuntu.

You can refer to [gluon-cv](https://github.com/dmlc/gluon-cv/tree/master/scripts/detection/yolo) to add data augmentation. Also, gluon's loss function differs slightly from the original; anyone who needs to train can take a look at it.

This work was done during my internship at Baidu last year, so let me explain why reproductions tend to perform poorly on OULU. Generative models that mine subtle textures are quite sensitive to resolution: for high-resolution phone captures like OULU's, cropping the face and then resizing loses too much texture detail. So for the numbers reported in the paper, we did not brute-force resize the cropped face at input time. Instead, during training we sampled random 224 patches at the original resolution, and at test time we resized the face region so that each side is a multiple of 32 (otherwise the ResNet-18 downsampling breaks) and fed in the whole region, preserving as much texture detail as possible. SIW is easier; using patches or not makes little difference there, because its recapture artifacts remain visually obvious even after resizing.

Also, the visualizations in this reproduction look quite similar to my own experiments. You may have noticed some imperfect bright spots on the generated cues; these are caused by instance norm. Take a look at StyleGAN2, which proposed a fix specifically for this problem.
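The input pipeline described above (random 224 patches at native resolution for training; test-time resize to side lengths divisible by 32 so ResNet-18's five stride-2 stages divide evenly) can be sketched roughly as follows. This is a minimal NumPy illustration of the cropping and rounding logic only, not the paper's actual code; the function names are my own.

```python
import numpy as np

def random_patch(img, size=224):
    """Training: crop a random size x size patch at the ORIGINAL resolution,
    so fine recapture textures are not destroyed by a global resize."""
    h, w = img.shape[:2]
    y = np.random.randint(0, h - size + 1)
    x = np.random.randint(0, w - size + 1)
    return img[y:y + size, x:x + size]

def snap_to_multiple_of_32(h, w):
    """Test time: round the face-region dimensions to the nearest multiple
    of 32 (ResNet-18 downsamples by 32 overall), with a floor of 32."""
    return (max(32, round(h / 32) * 32),
            max(32, round(w / 32) * 32))
```

For example, a 100 x 250 face region would be resized to 96 x 256 before being fed to the network whole.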