FFHQ_eye_mouth_landmarks_512.pth contains the landmarks for the FFHQ dataset. If I want to train on my own dataset, how do I generate this file? For example, with the CelebA dataset.
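A minimal sketch of one way to generate such a file for a custom dataset, assuming dlib's 68-point shape predictor is available and assuming the .pth file is simply a saved mapping from image path to eye/mouth landmark coordinates (the exact format the repository expects is not confirmed here):

```python
# Sketch: extract eye/mouth landmarks for 512x512 CelebA-style crops with
# dlib's 68-point predictor and save them as a .pth file. The output layout
# (dict of path -> (32, 2) float array) is an assumption, not necessarily
# the format FFHQ_eye_mouth_landmarks_512.pth uses.
import glob
import dlib
import numpy as np
import torch

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# In the 68-point scheme, eyes are points 36-47 and the mouth is points 48-67.
EYE_MOUTH_IDX = list(range(36, 68))

landmarks = {}
for path in sorted(glob.glob("celeba_512/*.png")):  # hypothetical folder of crops
    img = dlib.load_rgb_image(path)
    faces = detector(img, 1)
    if not faces:
        continue
    shape = predictor(img, faces[0])
    pts = np.array([[shape.part(i).x, shape.part(i).y] for i in EYE_MOUTH_IDX],
                   dtype=np.float32)
    landmarks[path] = pts

torch.save(landmarks, "CelebA_eye_mouth_landmarks_512.pth")
```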
layer_bottleneck = self.DenseBlock(15, 408, layer_5a_down, enc_model_layers, 'layer_bottleneck') # m = 348 + 5*12 = 408. In DenseBlock(15, 408, ...), should the first argument be 15 or 5? When we count m = 348 + 5*12 =...
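For reference, the usual dense-block channel arithmetic behind that comment, shown with a hypothetical helper (the real DenseBlock signature is not assumed here):

```python
# Sketch of standard dense-block channel counting: each dense layer appends
# `growth_rate` feature maps to the running tensor.
def dense_block_out_channels(in_channels: int, num_layers: int, growth_rate: int = 12) -> int:
    return in_channels + num_layers * growth_rate

print(dense_block_out_channels(348, 5))   # 408, matching the "m = 348 + 5*12" comment
print(dense_block_out_channels(348, 15))  # 528, what 15 layers of growth 12 would give
```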
Constant arrays in a PyTorch model are saved into the ONNX model, which is then converted to an MNN model with the model-convert tool. When MNN is called for inference on the native side, it reports Acquire buffer size = -1237319680 and the inference results are wrong. The MNN version is 1.2. The two constant arrays each hold 48 float32 values and are reshaped at runtime to [3,1,1,16] and [1,3,1,1,16]. Log from the ONNX-to-MNN conversion: [2022/02/17 08:22:46.541][INFO][11304] -- convert receive data c { code: 0, msg: '0', type: 'finish' } [2022/02/17...
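One workaround worth trying, assuming the constants can be baked into the graph with their final shapes so the export carries fixed-shape initializers instead of runtime Reshape ops; whether this removes the MNN buffer error is not guaranteed, and the module below is hypothetical:

```python
# Sketch: register the two 48-element constants as buffers already reshaped to
# [3,1,1,16] and [1,3,1,1,16] before ONNX export, so the converter sees them as
# fixed-shape initializers. The forward computation is a placeholder.
import torch
import torch.nn as nn

class WithConstants(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("const_a", torch.zeros(48, dtype=torch.float32).reshape(3, 1, 1, 16))
        self.register_buffer("const_b", torch.zeros(48, dtype=torch.float32).reshape(1, 3, 1, 1, 16))

    def forward(self, x):
        # Placeholder use of the constants; replace with the real computation.
        return x * self.const_a.sum() + self.const_b.sum()

model = WithConstants().eval()
dummy = torch.randn(1, 3, 16, 16)
torch.onnx.export(model, dummy, "model.onnx", opset_version=11,
                  input_names=["input"], output_names=["output"])
```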
With a combined loss like total_loss = gender_loss + age_loss + L2_loss and the Adam optimizer, the gender accuracy reaches 0.90+, but the age accuracy is only about 0.50+. How to solve...
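A minimal sketch of one common remedy, re-weighting the multi-task loss so the weaker age task receives more gradient; the weights and the weight-decay value below are illustrative assumptions, not tuned settings:

```python
# Sketch: weighted multi-task loss, with L2 regularization moved into the
# optimizer's weight_decay instead of being summed into the loss by hand.
import torch

gender_weight, age_weight = 1.0, 2.0  # hypothetical task weights

def total_loss(gender_loss: torch.Tensor, age_loss: torch.Tensor) -> torch.Tensor:
    return gender_weight * gender_loss + age_weight * age_loss

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```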
Thank you for open-sourcing your code. I have cropped some faces; how do I generate the training dataset?
Hello, after reading your code I have an idea I would like to discuss. From the training code, the LUT relation is roughly: result = w1*lut1(img) + ... + wn*lutn(img) (1). From demo_val, the relation is roughly: blend_lut = w1*lut1 + ... + wn*lutn, result = blend_lut(img) (2). If (1) and (2) are equivalent, does that mean that with enough training a single LUT could represent the whole mapping? Another question: how do you evaluate the training result, i.e., when can training be stopped? Looking forward to your reply, thanks.
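A small check of the equivalence in question: since (tri)linear LUT lookup is linear in the LUT entries, blending the outputs and blending the LUTs give the same result for fixed weights. The 1D np.interp stand-in for the repository's trilinear 3D lookup is an assumption made to keep the sketch short. Note that if the weights w_i are predicted per image, the blended LUT changes per image, so a single fixed LUT would only suffice when the weights are constant.

```python
# Sketch: demonstrate that blending LUT outputs equals applying the blended LUT,
# because interpolation is linear in the LUT values (1D stand-in for 3D LUTs).
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 33)             # LUT sample points
lut1, lut2 = rng.random(33), rng.random(33)  # two hypothetical 1D LUTs
w1, w2 = 0.7, 0.3
img = rng.random(1000)                       # flattened image values in [0, 1]

# (1) blend the outputs of the individual LUTs
out1 = w1 * np.interp(img, grid, lut1) + w2 * np.interp(img, grid, lut2)
# (2) blend the LUTs first, then apply once
out2 = np.interp(img, grid, w1 * lut1 + w2 * lut2)

print(np.allclose(out1, out2))  # True: lookup is linear in the LUT entries
```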
Negative values in the LUT
How can the values in the LUT be kept within [0, 1]? Also, are the values of the blended LUT guaranteed to lie in [0, 1]? When they fall outside this range, especially when they are negative, how should they be handled?
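A minimal sketch of one common handling, assuming simple clamping is acceptable (whether the repository prefers clamping, renormalization, or a training-time penalty is not established here):

```python
# Sketch: clip the blended LUT (or the output image) back into [0, 1]
# before it is applied or displayed.
import torch

def clamp_lut(blended_lut: torch.Tensor) -> torch.Tensor:
    # Negative entries and entries above 1 are clipped to the valid range.
    return blended_lut.clamp(0.0, 1.0)
```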
Thanks for the great work. How can the three LUT models and the classifier model be packed together and then called from C++ to process a single image? Thanks.
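A minimal sketch, assuming TorchScript is an acceptable packaging route, of wrapping the classifier and the three LUTs into one exported module that libtorch can load from C++ (torch::jit::load); the classifier and LUT modules, and the way a LUT is applied, are placeholders for the repository's real ones:

```python
# Sketch: bundle classifier + LUTs into a single module, then export with
# TorchScript so the C++ side only deals with one file.
import torch
import torch.nn as nn

class PackedModel(nn.Module):
    def __init__(self, classifier: nn.Module, luts: nn.ModuleList):
        super().__init__()
        self.classifier = classifier
        self.luts = luts

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        weights = self.classifier(img)            # assumed shape [1, n_luts]
        out = torch.zeros_like(img)
        for i, lut in enumerate(self.luts):
            out = out + weights[0, i] * lut(img)  # placeholder LUT application
        return out

# packed = torch.jit.script(PackedModel(classifier, nn.ModuleList([lut1, lut2, lut3])))
# packed.save("packed_luts.pt")  # load in C++ with torch::jit::load("packed_luts.pt")
```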
When I train my own task with the parameters from the paper, the result is not good enough. How can NAS and NetAdapt be used to optimize the MobileNet blocks and layers to adapt...
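For the NetAdapt part, a rough sketch of the search loop as described in the NetAdapt paper: repeatedly tighten a latency budget, generate one pruned candidate per layer, short-fine-tune each, and keep the most accurate one. Every helper below is a stub that would need to be replaced with the real model, latency estimator, and trainer.

```python
# Sketch of a NetAdapt-style outer loop; all helpers are stubs.
from copy import deepcopy

def measure_latency(model) -> float:           # stub: e.g. lookup-table estimate
    return getattr(model, "latency", 100.0)

def prune_one_layer(model, layer_idx: int, budget: float):
    # Stub: remove just enough filters from layer `layer_idx` to fit `budget`.
    candidate = deepcopy(model)
    candidate.latency = budget
    return candidate

def short_finetune_and_eval(model) -> float:   # stub: brief fine-tune + val accuracy
    return 0.0

def netadapt(model, n_layers: int, target_latency: float, step: float):
    while measure_latency(model) > target_latency:
        budget = measure_latency(model) - step
        # One candidate per layer; keep the candidate with the best accuracy.
        candidates = [prune_one_layer(model, i, budget) for i in range(n_layers)]
        model = max(candidates, key=short_finetune_and_eval)
    return model  # the full procedure ends with a long fine-tune
```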