FAU_CVPR2021
action unit maps extraction
Hi,
Can you please confirm that gh2 holds the attention maps?
https://github.com/rakutentech/FAU_CVPR2021/blob/0bfb778526908f36b6136e836d8b382877bacfa4/inference.py#L53
It is of size (batch_size, 12, 12, number_action_units), where number_action_units = 12.
The attention maps are the output of the arrow in Fig. 3 of https://openaccess.thecvf.com/content/CVPR2021/papers/Jacob_Facial_Action_Unit_Detection_With_Transformers_CVPR_2021_paper.pdf
When plotting one of the attention maps, I am supposed to see something similar to Fig. 1, right?
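To make the comparison with Fig. 1 concrete, this is roughly how I would overlay one 12x12 map on the face image (a minimal sketch with random stand-ins for both the map and the image; the 144x144 crop size and the blending weights are just assumptions, not values from the repo):

```python
import numpy as np

# Hypothetical stand-ins: one 12x12 slice of gh2 and a 144x144
# grayscale face crop, both random here.
att = np.random.rand(12, 12).astype(np.float32)
img = np.random.rand(144, 144).astype(np.float32)

# Nearest-neighbour upsampling of the 12x12 map to the image
# resolution, so it can be blended over the face like the
# heatmaps in Fig. 1 of the paper.
up = np.kron(att, np.ones((12, 12), dtype=np.float32))

# Normalize the map and alpha-blend it over the image.
overlay = 0.6 * img + 0.4 * (up / (up.max() + 1e-8))
print(up.shape, overlay.shape)
```

A saturated map (all 0s or all 1s) would produce a flat overlay here, which is why the unique values below look suspicious to me.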
I ran python inference.py and extracted gh2. I plotted all gh2[0, :, :, i] for i in range(12), side by side with the input image, but I am seeing something strange.
Below are the plots for maps 0 to 11.
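For reference, this is roughly how I produced the plots (a minimal sketch; the gh2 here is a random stand-in with the same shape as the real tensor from inference.py):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, save to file instead of showing
import matplotlib.pyplot as plt

# Random stand-in with the same layout as gh2 from inference.py:
# (batch_size, 12, 12, number_action_units).
gh2 = np.random.rand(1, 12, 12, 12).astype(np.float32)

# One subplot per action-unit map, 3x4 grid for the 12 channels.
fig, axes = plt.subplots(3, 4, figsize=(12, 9))
for i, ax in enumerate(axes.flat):
    ax.imshow(gh2[0, :, :, i], cmap="jet")
    ax.set_title(f"AU map {i}")
    ax.axis("off")
fig.savefig("gh2_maps.png")
```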
Here are the unique values per map:
map: 0: [1.]
map: 1: [0. 0.9982668 1. ]
map: 2: [0. 1.]
map: 3: [1.]
map: 4: [0.]
map: 5: [0. 1.]
map: 6: [1.]
map: 7: [0.]
map: 8: [0.]
map: 9: [0.0000000e+00 2.0861626e-07 9.4047523e-01 1.0000000e+00]
map: 10: [0.]
map: 11: [0.]
Also strange. The sigmoid could be causing this saturation, but with or without the sigmoid I am supposed to get attentions that point to regions of interest. Can you help? I may be missing something. Could you show how you plotted the attentions in Fig. 1? Very much appreciated, thanks.
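In case it helps to diagnose the saturation, this is how I would invert the sigmoid to look at the raw logits behind those 0/1 values (sketch; the clipping eps is my own assumption, needed so the log stays finite at exactly 0 or 1):

```python
import numpy as np

def logits_from_probs(p, eps=1e-7):
    """Invert the sigmoid: z = log(p / (1 - p)).
    Probabilities saturated at exactly 0 or 1 are clipped first
    so the log does not return -inf/+inf."""
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

# Values taken from the unique-value printout for map 1 above.
m = np.array([0.0, 0.9982668, 1.0], dtype=np.float32)
z = logits_from_probs(m)
print(z)
```

Large-magnitude logits (roughly beyond +/-6) would confirm that the network output is saturating the sigmoid rather than the plotting being wrong.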