Egil Martinsson
That sounds very reasonable, thanks for the description! I personally don't like precision/classification-based metrics, as they are fully dependent on the calibration of the probabilities and the threshold you...
Hi there, There are actually good ways to model bimodal hazards. The sum of cumulative hazard functions is a valid cumulative hazard function, see for example [this paper](https://openreview.net/forum?id=SyG4RiR5Ym). So if `S(Y)...
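A minimal numeric sketch of that idea, assuming Weibull components (the parameter values and function names here are illustrative, not from the library): the sum of two cumulative hazards is nondecreasing and zero at `t = 0`, so `S(t) = exp(-(H1(t) + H2(t)))` is a valid survival function.

```python
import numpy as np

def weibull_chf(t, alpha, beta):
    """Cumulative hazard of a Weibull: H(t) = (t / alpha) ** beta."""
    return (t / alpha) ** beta

def survival_from_chf_sum(t, params):
    """S(t) = exp(-sum_i H_i(t)); valid since each H_i is nondecreasing with H_i(0) = 0."""
    total = sum(weibull_chf(t, a, b) for a, b in params)
    return np.exp(-total)

t = np.linspace(0.0, 10.0, 200)
# One early-failure mode (beta < 1) plus one wear-out mode (beta > 1)
# gives a bathtub-shaped hazard from two simple components.
S = survival_from_chf_sum(t, [(2.0, 0.5), (8.0, 5.0)])
```

The point is only that the composite `S` stays a proper survival curve: it starts at 1, never increases, and stays in `[0, 1]`.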
> Log-likelihood of the mixture is log(p_mix(t)^u * s_mix(t+1)^(1-u))

The PMF of the mixture becomes p_mix(t) = r*p_1(t) + (1-r)*p_2(t). The SF of the mixture becomes the product of the...
Hi there, Sorry for the late answer, but this is very much ongoing research 😅. I think what you are talking about is doing asynchronous updates whenever we get that,...
Thanks! Please keep posting questions and good luck!
Hi, thanks for the comment! Do you have a reproducible example? I've never used `pad_sequences` myself. In any case (when it's working) the mask layer will multiply the loss function by a 0/1 mask...
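A minimal sketch of what that masking does, in plain numpy rather than the actual Keras/wtte-rnn internals (names and numbers here are illustrative): padded timesteps get a mask of 0 so they contribute nothing to the averaged loss.

```python
import numpy as np

def masked_mean_loss(per_step_loss, mask):
    """Average the per-timestep loss over real (mask == 1) steps only."""
    per_step_loss = np.asarray(per_step_loss, dtype=float)
    mask = np.asarray(mask, dtype=float)
    # Zero out padded timesteps, then divide by the number of real ones.
    return (per_step_loss * mask).sum() / mask.sum()

# Two sequences padded to length 4; the last two steps of the second are padding.
loss = np.array([[0.5, 0.2, 0.1, 0.3],
                 [0.4, 0.6, 0.0, 0.0]])
mask = np.array([[1, 1, 1, 1],
                 [1, 1, 0, 0]])
avg = masked_mean_loss(loss, mask)  # mean over the 6 unmasked steps
```

Note the denominator is `mask.sum()`, not the total number of cells, otherwise padding would silently deflate the loss.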
Since WTTE 1.1 the loss is [clipped by default](https://github.com/ragulpr/wtte-rnn/blob/e4e2343575c10876abc6ab91bd7a0eaeb7743d31/python/wtte/wtte.py#L222) at 1e-6, so this is synonymous with a NaN problem. Working from the assumption that it's a NaN problem, there are a lot of reasons for it, see...
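The clipping idea can be sketched like this (a simplified stand-in, not the exact `wtte.py` implementation): clip the likelihood away from zero before taking the log, so the loss bottoms out at `-log(1e-6)` instead of blowing up to inf/NaN.

```python
import numpy as np

EPS = 1e-6  # same floor as the wtte-rnn default mentioned above

def clipped_log_loss(likelihood):
    """Negative log-likelihood with the likelihood clipped into [EPS, 1]."""
    likelihood = np.clip(np.asarray(likelihood, dtype=float), EPS, 1.0)
    return -np.log(likelihood)

out = clipped_log_loss(np.array([0.0, 1.0]))  # finite even for likelihood 0
```

With the clip in place a likelihood of exactly zero yields a large-but-finite loss rather than poisoning the gradients with NaN.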
Thanks for the detailed response. You seem to have discrete, nice data with very little censoring. Here are some thoughts: 1. *Is there a value for the mean uncensored for which...
You can try:
```
model.add(TimeDistributed(Dense(50, activation='relu')))
model.add(TimeDistributed(Dense(200, activation='relu')))  # preferably tanh
model.add(Dense(2))
model.add(Lambda(wtte.output_lambda,
                 arguments={"init_alpha": init_alpha,
                            "max_beta_value": 4.0,
                            "scalefactor": 1 / np.log(200)}))  # note: scale by pre-output-layer width
```
Even though relu will be inherently...
Also, since you don't seem to care about history