`None` entries in `nn.ModuleList` break the JIT in higher versions of PyTorch. This problem is tracked in this issue: https://github.com/pytorch/pytorch/issues/30459.
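A common workaround (a minimal sketch, not taken from this repo) is to use `nn.Identity()` as the placeholder instead of `None`, so every entry in the list is a scriptable module:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.Identity() keeps every entry a real Module, which
        # torch.jit.script can handle; a `None` entry here is what
        # breaks scripting in newer PyTorch versions.
        self.layers = nn.ModuleList([
            nn.Linear(8, 8),
            nn.Identity(),  # was: None
            nn.Linear(8, 8),
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return x

scripted = torch.jit.script(Block())  # scripts without the None error
```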
Hi @gsrujana, this has not yet been tested on a live camera feed, but there is a good [open-source project](https://github.com/stefanopini/simple-HRNet) based on a webcam feed, HRNet, and YOLOv3. You can replace...
We present a [colab demo](https://colab.research.google.com/drive/1v2LY_rAZXqexPjiePmqgma4aw-qmRek6?usp=sharing) for estimating multi-person poses in an image.
You can take the heatmap values at the maximum activation positions as the confidences for the keypoints.
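A minimal sketch of that idea (the function name and the `[num_joints, H, W]` heatmap layout are assumptions, not this repo's API):

```python
import torch

def keypoints_from_heatmaps(heatmaps: torch.Tensor):
    """heatmaps: [num_joints, H, W] -> (coords [num_joints, 2], confidences [num_joints])."""
    num_joints, h, w = heatmaps.shape
    flat = heatmaps.view(num_joints, -1)
    # The peak activation of each heatmap serves as that keypoint's confidence.
    confidences, idx = flat.max(dim=1)
    ys = torch.div(idx, w, rounding_mode='floor').float()
    xs = (idx % w).float()
    coords = torch.stack([xs, ys], dim=1)  # (x, y) per joint
    return coords, confidences
```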
Can you share more details about this bug? I have not encountered it.
I will deal with this problem later.
```python
self.alpha_normal = torch.randn(k, num_ops)
self.alpha_reduce = torch.randn(k, num_ops)
```
will make `self.alpha_normal` and `self.alpha_reduce` always be `torch.FloatTensor`, sometimes causing errors with `model.cuda()`. This is a little trouble. Maybe ...
Yeah, you got it. @zh583007354
```python
import torch
import torch.nn as nn

self.alpha_normal = nn.Parameter(torch.randn(k, num_ops))

def weight_params(model):
    # Yield every parameter except the architecture alphas.
    for name, param in model.named_parameters():
        if 'alpha' in name:
            continue
        yield param

# Put the network weights and the alphas in separate parameter groups.
optimizer = torch.optim.Adam([
    {'params': weight_params(model)},
    {'params': [model.alpha_normal]},
])
```
...
I think the necessity of this `clip_grad_norm_()` is hard to judge, because we can't know the gradient range of the parameters in advance, but it should be done to avoid gradient explosion (just...
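For reference, a minimal sketch of where the call usually sits in a training step (the model, data, and `max_norm=5.0` are illustrative assumptions, not tuned values):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 10), torch.randn(4, 1)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
# Rescale gradients so their global L2 norm is at most max_norm; this
# caps gradient explosions without changing the update direction.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
optimizer.step()
```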
Yes. We trained it on the Penn Action dataset.