Egil Martinsson
There may be many reasons for numerical instability, as pointed out, so it would be very helpful if we could find a GPU/CPU-reproducible example. Can it have anything...
Sorry for the slow answer, and thanks for the great question. I replied in the blog too http://disq.us/p/1mghgb2. Gist is, the censoring indicator should be considered part of the training data...
You have a very valid question; censoring is a hard concept, and the reason why it should work has puzzled me too, which is why I've put in so much effort to...
1. *why not use some random crap value here also?* - I am! Just a crappy but numerically reasonable one. I can spot 0.95 when I'm testing, and the expected TTE is unlikely...
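To make the idea concrete, here is a hypothetical sketch (the array and helper names are made up for illustration, not code from the repo): masked or unknown targets get a sentinel value that is numerically harmless for the loss but distinctive enough to notice if it ever leaks through.

```python
import numpy as np

# Hypothetical illustration: fill masked/unknown targets with a sentinel that
# is numerically reasonable (won't blow up the loss) but easy to recognize
# (0.95) if it ever shows up where it shouldn't during testing.
SENTINEL = 0.95

y = np.array([3.0, np.nan, 7.0, np.nan])  # NaN marks masked targets
mask = np.isnan(y)
y_filled = np.where(mask, SENTINEL, y)
print(y_filled)  # [3.   0.95 7.   0.95]
```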
Great to hear! Looking forward to hearing your results. Also, check out #33; almost always there's truth being revealed (i.e. a data problem) when loss turns NaN!
Great @FedericoNutmeg, and relevant to https://github.com/ragulpr/wtte-rnn/issues/51. Currently (on develop + master) it looks like https://github.com/ragulpr/wtte-rnn/blob/26612657ee0b14fa1a33f8da6ed28018e27cbe98/python/wtte/wtte.py#L169, where `epsilon` is `K.epsilon()`, which I think defaults to whatever's in your `.keras` json. I suggest changing...
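For readers following along, a minimal sketch of how an `epsilon` term typically guards the continuous censored Weibull log-likelihood against `log(0)` when `y == 0`. This is an illustration of the standard form (numpy instead of the Keras backend, and illustrative names), not a verbatim copy of the line linked above:

```python
import numpy as np

def loglik_continuous_sketch(y, u, a, b, epsilon=1e-7):
    """Censored continuous Weibull log-likelihood (illustrative sketch).

    y: time to event, u: censoring indicator (1 = event observed),
    a: Weibull alpha (scale), b: Weibull beta (shape).
    Adding epsilon keeps log(y / a) finite when y == 0.
    """
    ya = (y + epsilon) / a
    # u * log f(y) + (1 - u) * log S(y), up to terms constant in (a, b)
    return u * (np.log(b) + b * np.log(ya)) - np.power(ya, b)
```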
Hi there, thanks for contributing! I added some comments on PR https://github.com/ragulpr/wtte-rnn/pull/59. Otherwise, I recommend reading Proposition 2.26; chapters 3.1.1 and 3.2 in particular show some alternative forms of...
Great suggestion, and many people have been asking. It's on the TODO, but I haven't focused on it since it's a fully observed (non-censored) dataset and it's pretty straightforward...
@erankor, thanks for sharing! Haven't tried it yet, but I inlined **erankor's** patch below for those wary about downloading zips 🙂

completer-s3-paths.zip

```
completer-s3-paths.patch
--- completer.py	2019-08-12 11:01:24.536333508 +0000
+++...
```
Hello @janeyx99 @malfet @seemethere ~@ZainAlt~ @ZainRizvi I saw your work on https://github.com/pytorch/pytorch/pull/78682 and more! Another iteration? 😃