Jesse Onland

6 comments by Jesse Onland

Or just auto-identify?

The simplest thing along these lines might be to make wands into staves which recharge very slowly.

p 370, par 4: "no at all association" -> "no association at all"
p 371, par 3: "illustrats" -> "illustrates"
p 372, par 2: "frequenct" -> "frequency"

Another error: for (e), when λ = 0 the penalty term becomes irrelevant, so g will be a function that interpolates the training data exactly, since the minimization is over...
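For context, here is a sketch of the criterion I take the exercise to be using (the standard penalized least-squares form with an m-th derivative roughness penalty; the exact form in the text may differ slightly):

```latex
\hat{g} = \arg\min_{g} \left[ \sum_{i=1}^{n} \bigl(y_i - g(x_i)\bigr)^2
          + \lambda \int \bigl[ g^{(m)}(x) \bigr]^2 \, dx \right]
```

With λ = 0 only the residual sum of squares remains, and since the minimization is over all sufficiently smooth functions, any g with g(x_i) = y_i for every i drives the criterion to zero, i.e. an interpolant.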

Wailing, gnashing my teeth, rending my clothing in the streets because `coalesce(across(...))` still doesn't work.
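In the meantime, a couple of workarounds that seem to do the job (a sketch only; the data and column names are made up, and this leans on `across()` returning a tibble, which behaves as a list of columns):

```r
library(dplyr)
library(purrr)

# Hypothetical data: three partially-missing columns to be coalesced.
df <- tibble(
  a = c(1, NA, NA),
  b = c(NA, 2, NA),
  c = c(NA, NA, 3)
)

df %>%
  mutate(
    # across() yields a tibble whose columns can be spliced into
    # coalesce() via do.call() ...
    first_non_na  = do.call(coalesce, across(c(a, b, c))),
    # ... or folded together pairwise with purrr::reduce().
    first_non_na2 = reduce(across(c(a, b, c)), coalesce)
  )
```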

I'm a little surprised there's apparently no way to just apply one of the regression metrics to a classification model's predicted class probabilities.
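The workaround I've landed on, for what it's worth (a sketch assuming this is yardstick; the predictions tibble and column names are hypothetical): recode the factor outcome as 0/1 and hand the predicted probability of the event class to a regression metric directly.

```r
library(dplyr)
library(yardstick)

# Hypothetical predictions: `truth` is a two-level factor and
# `.pred_yes` is the model's predicted probability of the "yes" class.
preds <- tibble(
  truth     = factor(c("yes", "no", "yes", "no"), levels = c("yes", "no")),
  .pred_yes = c(0.9, 0.2, 0.6, 0.4)
)

preds %>%
  mutate(truth_num = as.numeric(truth == "yes")) %>%
  rmse(truth = truth_num, estimate = .pred_yes)
# For a binary outcome this is just the square root of the Brier score.
```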