MultiAgentPerception
Questions about Who2com
I find that the feature-fusion method in the code differs from the paper, which relies on the attention scores to select the features of the corresponding agent to fuse. In the code, during training, the fused features are obtained directly as an attention-weighted combination, and the returned agent index, i.e. action_argmax, does not seem to be used afterwards.
https://github.com/GT-RIPL/MultiAgentPerception/blob/4ef300547a7f7af2676a034f7cf742b009f57d99/ptsemseg/trainer.py#L391
You can also see that the use of torch.argmax during training does not match the paper:
https://github.com/GT-RIPL/MultiAgentPerception/blob/4ef300547a7f7af2676a034f7cf742b009f57d99/ptsemseg/models/agent.py#L625-L627
https://github.com/GT-RIPL/MultiAgentPerception/blob/4ef300547a7f7af2676a034f7cf742b009f57d99/ptsemseg/trainer.py#L384C1-L398C1
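To make the discrepancy concrete, here is a minimal toy sketch (not the repo's actual code; all shapes and names are illustrative) contrasting the soft attention-weighted fusion the training code appears to perform with the hard argmax selection the paper describes:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

B, N, C = 2, 4, 8                # toy sizes: batch, number of agents, feature dim
query = torch.randn(B, C)        # requesting agent's query vector
keys = torch.randn(B, N, C)      # keys from all supporting agents
values = torch.randn(B, N, C)    # features (values) from all supporting agents

# attention scores of the requesting agent over the supporting agents
scores = F.softmax(torch.einsum('bc,bnc->bn', query, keys), dim=-1)

# (a) what the training code seems to do: soft, weighted sum over all agents
fused_soft = torch.einsum('bn,bnc->bc', scores, values)

# (b) what the paper describes: select the single best-matching agent
action_argmax = scores.argmax(dim=-1)                 # chosen agent index per sample
fused_hard = values[torch.arange(B), action_argmax]   # only that agent's features

# with non-one-hot attention weights, the two results generally differ
print((fused_soft - fused_hard).abs().max())
```

In variant (a) the argmax index plays no role in the fused output, which is why action_argmax looks unused in training.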
Moreover, commun_label is not used during training; it is only used as a metric during validation and testing.
So log_action and action_argmax seem effectively meaningless: the features obtained during training are not the full features of one vehicle but an attention-weighted combination. And rather than selecting all the features of a particular car, at test time the new features are likewise obtained after the attention is applied.
https://github.com/GT-RIPL/MultiAgentPerception/blob/4ef300547a7f7af2676a034f7cf742b009f57d99/ptsemseg/models/agent.py#L629C1-L651C56