GuardSkill
The initializers dict doesn't have the weight key.
Thanks for your reply; I have solved this problem. The issue is caused by the Conv's weight being inferred by the conv operation during the forward pass. I modified `op_code_generator/Conv.py...
Me too. We altered the code to do multi-label binary segmentation, but every pixel in the binary predictions has the same value.
But the outputs of the forward pass are not a single tensor; can it be added to the forward pass like this? `self.output_align(inputs, outputs)`
Maybe it should be this?
```
def total_variation_loss(image, mask):
    hole_mask = 1 - mask
    loss = torch.sum(torch.abs(hole_mask[:, :, :, :-1] * (image[:, :, :, 1:] - image[:, :, :, :-1]))) + \
           torch.sum(hole_mask[:, :, :-1:, :] * (torch.abs(image[:, :,...
```
More seriously, it should be this code rather than the above (the code above doesn't handle the subtraction at the topmost/leftmost dilated pixels):
```
def dialation_holes(hole_mask):
    b, ch, h, w = hole_mask.shape
    dilation_conv...
```
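For reference, here is a self-contained sketch of the masked total-variation loss that the two truncated snippets above are reaching for, assuming a PyTorch setup. The names `dilate_holes` and the pairwise-mask logic follow the common partial-convolution inpainting implementations, but the exact kernel size and the reduction (mean vs. sum) are assumptions, not the original code:

```python
import torch
import torch.nn.functional as F

def dilate_holes(hole_mask):
    """Grow the binary hole mask by one pixel with a 3x3 all-ones kernel,
    so pixels bordering a hole also contribute to the loss."""
    b, ch, h, w = hole_mask.shape
    kernel = torch.ones(ch, 1, 3, 3, device=hole_mask.device)
    dilated = F.conv2d(hole_mask.float(), kernel, padding=1, groups=ch)
    return (dilated > 0).float()

def total_variation_loss(image, mask):
    """TV loss restricted to the (dilated) hole region.
    `mask` is 1 on valid pixels and 0 inside holes."""
    hole_mask = dilate_holes(1 - mask)
    # A difference pair contributes only when both of its pixels lie in the
    # dilated hole region, which sidesteps the boundary issue raised above.
    cols_in_p = hole_mask[:, :, :, 1:] * hole_mask[:, :, :, :-1]
    rows_in_p = hole_mask[:, :, 1:, :] * hole_mask[:, :, :-1, :]
    loss = torch.mean(torch.abs(cols_in_p * (image[:, :, :, 1:] - image[:, :, :, :-1]))) + \
           torch.mean(torch.abs(rows_in_p * (image[:, :, 1:, :] - image[:, :, :-1, :])))
    return loss
```

On a constant image the loss is zero, and the single-channel mask broadcasts across the image channels in the products.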
> It is possible to convert the custom ops to regular ops. StyleGAN's authors have implemented this. You can pass `impl='ref'` to each call of e.g. [this function](https://github.com/zsyzzsoft/co-mod-gan/blob/34e31a80c304d6681377007935ed7c08aa650fe8/dnnlib/tflib/ops/fused_bias_act.py#L34). Many thanks...
Nice job! Same issue.
I got the same error using 1.7.3 to do the conversion for the RKNN 3399Pro.
I just updated the torch version in the 1.7.3 docker image and solved this problem: `pip install torch==1.9`. The RKNN docker image never accounts for TorchScript input models.