
some typos in ch 2

Open murphyk opened this issue 3 years ago • 2 comments

  • p47 you write W_{d,c} instead of W^{d,c}

  • p46 Your comment about Y_pred.ravel() could have been made earlier on p40 where it was first introduced

murphyk avatar Jan 05 '23 21:01 murphyk

Also on p56-p57 there is an inconsistency between y-hat meaning logits (colored equation for loss(sm(yhat), y)) and y-hat meaning probabilities (code snippet, y_hat = F.softmax(logits)). Maybe call the latter p_hat?
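To make the suggested renaming concrete, here is a minimal sketch (the variable names `p_hat` and the logit values are my own, not from the book):

```python
import torch
import torch.nn.functional as F

# Raw scores straight out of the model -- what the colored equation calls y-hat.
logits = torch.tensor([[2.0, 0.5, -1.0]])

# After softmax these are probabilities; calling this p_hat instead of y_hat
# keeps the symbol y_hat reserved for the logits, as in the equation.
p_hat = F.softmax(logits, dim=1)
```

With this convention the loss equation reads loss(sm(y_hat), y) and the code reads `p_hat = F.softmax(logits, ...)`, so the two uses of y-hat no longer collide.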

murphyk avatar Jan 05 '23 22:01 murphyk

Wow, murphyk is also reading this book? So cool.

I am confused about the notation too. On p56 there is:

loss(yhat, y) = -log(sm(yhat)_y)

According to the text, yhat is a vector and y is an index, so yhat is a vector and y is a scalar?

It seems odd to speak of a loss between a vector and a scalar, since a loss is usually computed between two objects of the same kind.

And what does sm(yhat)_y mean? The probability of the yth element?

If so, this is confusing: where is that subscript-y notation defined?

If not, there is an inconsistency (since on p53, sm(x)_i means the probability of the ith element).

@EdwardRaff

yebangyu avatar May 25 '23 13:05 yebangyu