Aiden Nibali
Thanks for your interest! Coal is currently a work in progress, and as such somewhat of a moving target, so more docs are planned for the future. However, here is...
I have contemplated writing a more Ruby-like (and hence nicer) language, but the current focus is on being low-level and static. If I start moving too far towards Rubyesque...
This should just work. Here's an example:

```python
>>> unnormalized_heatmaps = torch.randn(1, 2, 7, 7, 7)
>>> heatmaps = dsntnn.flat_softmax(unnormalized_heatmaps)
>>> heatmaps[0, 0].sum()
tensor(1.)
>>> heatmaps[0, 1].sum()
tensor(1.0000)
>>> coords...
```
I have not done this myself, but you could try to derive a confidence score from the normalised heatmap by quantifying how "spread out" it is.
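One concrete way to quantify "spread" would be the entropy of the heatmap: a sharply peaked heatmap has low entropy (suggesting high confidence), while a spread-out one has high entropy. Here's a rough sketch of the idea in NumPy; note that `heatmap_confidence` is a name I've made up for illustration, not part of dsntnn, and this is untested as a confidence measure for any particular task:

```python
import numpy as np

def heatmap_confidence(heatmap):
    """Hypothetical confidence score in [0, 1] derived from heatmap entropy.

    A perfectly peaked heatmap (all mass on one pixel) scores ~1;
    a uniform heatmap (maximally spread out) scores ~0.
    """
    p = np.asarray(heatmap, dtype=np.float64).ravel()
    p = p / p.sum()  # re-normalize defensively
    entropy = -np.sum(p * np.log(p + 1e-12))  # small epsilon avoids log(0)
    max_entropy = np.log(p.size)              # entropy of a uniform heatmap
    return 1.0 - entropy / max_entropy

# A peaked heatmap should score higher than a uniform one.
peaked = np.zeros((7, 7))
peaked[3, 3] = 1.0
uniform = np.full((7, 7), 1.0 / 49)
print(heatmap_confidence(peaked), heatmap_confidence(uniform))
```

Other simple options along the same lines include the heatmap's maximum value, or its variance around the predicted coordinates.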
Please read the [basic usage guide](https://github.com/anibali/dsntnn/blob/master/examples/basic_usage.md). > Importantly, the target coordinates are normalized so that they are in the range (-1, 1). The DSNT layer always outputs coordinates in this...
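If you need to convert your own ground-truth pixel coordinates into that (-1, 1) range, a common pixel-centre convention maps index `i` in a dimension of size `n` to `(2*i + 1)/n - 1`. The sketch below assumes that convention; please verify it against the basic usage guide for your dsntnn version, and note these helper names are mine, not part of the library:

```python
def pixel_to_normalized(i, size):
    # Map a 0-based pixel index to a coordinate in (-1, 1),
    # placing the coordinate at the pixel's centre.
    # Assumed convention: (2*i + 1)/size - 1.
    return (2 * i + 1) / size - 1

def normalized_to_pixel(x, size):
    # Inverse mapping: normalized coordinate back to a pixel index.
    return (x + 1) * size / 2 - 0.5

# e.g. for a 7-pixel axis, the centre pixel (index 3) maps to 0.0
print(pixel_to_normalized(3, 7))
```

The important thing is that whatever convention you use for the targets matches the one the DSNT layer uses for its outputs, otherwise the loss will be biased.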
> In fact, I normalized the heatmap with heatmap = dsntnn.flat_softmax(heatmap) based on the example. Maybe because my torch is 1.2.0? I found that you mentioned in the answer to...
> does the output look right now ? The output is valid, but I can't say whether they are the right answers for your problem :wink:
> Is it because I am now directly using dsnt to find the value on the heatmap previously trained without dsnt? Do I need to retrain with dsnt? I didn't...
I'm not sure---aren't you better off asking the HRNet authors why they are missing predictions? How do you expect evaluation on an incomplete set of predictions to work?
Oh, I think I understand now---test.json is [the processed version of the MPII test set metadata used for HRNet](https://github.com/leoxiaobin/deep-high-resolution-net.pytorch/tree/ba50a82dce412df97f088c572d86d7977753bf74#data-preparation). Could it be that their file is for the multi-person prediction...