DA-RNN_manoeuver_anticipation
Why Binary Domain Classification with Softmax instead of Sigmoid?
What is the reason for using Dense(2, activation='softmax') instead of Dense(1, activation='sigmoid') for the binary domain classifier?
Is it related to the Gradient Reversal Layer? If so, can you explain?
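For reference, here is a minimal tf.keras sketch (not the repository's code) of the two domain-classifier heads being compared, placed after a generic Gradient Reversal Layer. The GradientReversal implementation, the feature size of 128, and the lam parameter are assumptions made purely for illustration:

```python
import tensorflow as tf
from tensorflow.keras import layers


class GradientReversal(layers.Layer):
    """Identity in the forward pass; negates (and scales) the gradient in the backward pass."""

    def __init__(self, lam=1.0, **kwargs):
        super().__init__(**kwargs)
        self.lam = lam  # scaling factor for the reversed gradient (assumed name)

    def call(self, x):
        @tf.custom_gradient
        def _reverse(x):
            def grad(dy):
                return -self.lam * dy
            return tf.identity(x), grad
        return _reverse(x)


features = layers.Input(shape=(128,))                 # shared feature vector; size assumed
reversed_feats = GradientReversal(lam=1.0)(features)  # gradients from the domain head are flipped

# Variant A: two output units + softmax, trained with categorical cross-entropy
# on one-hot domain labels [1, 0] / [0, 1].
domain_softmax = layers.Dense(2, activation='softmax')(reversed_feats)

# Variant B: one output unit + sigmoid, trained with binary cross-entropy
# on scalar domain labels 0 / 1.
domain_sigmoid = layers.Dense(1, activation='sigmoid')(reversed_feats)
```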