DeepLearningFlappyBird

The final loss gradient is 1D but the network output is (1, 2). How is the gradient propagated?

Open prateethvnayak opened this issue 5 years ago • 0 comments

I was wondering about the gradient shapes: the output of tf.reduce_sum and y are 1D, so the MSE cost term is 1D, yet the gradient to be propagated back needs the same dimensions as the network output, i.e. (1, ACTIONS) = (1, 2). Is the final loss gradient just replicated along both dimensions, i.e. (1, 1) -> (1, 2)?
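A minimal NumPy sketch may help frame the question. It assumes a cost of the form `cost = reduce_mean(square(y - reduce_sum(readout * a, axis=1)))`, where `readout` is the (1, ACTIONS) network output and `a` is a one-hot mask for the chosen action (the concrete values below are made up for illustration). Backpropagating by hand shows the scalar loss gradient is not replicated: the one-hot mask in the reduce_sum routes it entirely to the chosen action's slot, and the other slot receives zero gradient.

```python
import numpy as np

readout = np.array([[3.0, 5.0]])  # network output, shape (1, ACTIONS) = (1, 2)
a = np.array([[0.0, 1.0]])        # one-hot mask for the chosen action
y = np.array([4.0])               # TD target, shape (1,)

# Forward pass: mask out the non-chosen action, then take the MSE.
readout_action = np.sum(readout * a, axis=1)   # shape (1,) -> [5.0]
cost = np.mean((y - readout_action) ** 2)      # scalar -> 1.0

# Backward pass by hand:
# d(cost)/d(readout_action) = 2 * (readout_action - y)
d_readout_action = 2.0 * (readout_action - y)  # [2.0]

# The reduce_sum against the one-hot mask means the gradient with
# respect to readout is the scalar gradient times the mask, so only
# the chosen action's slot is non-zero.
d_readout = d_readout_action[:, None] * a      # [[0.0, 2.0]]
```

So the (1, 1) gradient becomes (1, 2) by multiplication with the action mask, not by being copied into both slots.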

prateethvnayak, Dec 18 '19