
Leave one out

Open arjunrajanna opened this issue 2 years ago • 1 comments

@havakv Hope you're well. Many thanks for continuing to answer questions on this repo. A relatively quick question: what difference should I expect between cross-validation (CV) and leave-one-out (LOO) in a survival-model setting? The probability predictions from a model trained under CV versus under LOO are drastically different for me. The c-index under CV is pretty good, and so are the predictions, but LOO looks quite poor when I inspect some downstream work. Shouldn't these probabilities be similar? I'm not as familiar with prognostic LOO. Also, any thoughts on how to validate the LOO? I realize AUC is a common way to do so. Below are a few lines in case I'm doing something inaccurate. Your time and thoughts are greatly appreciated. Thank you very much!

    # LOO
    net = MLPVanillaCoxTime(in_features, num_nodes, batch_norm, dropout)
    model = CoxTime(net, tt.optim.Adam, labtrans=labtrans)
    log = model.fit(train_x, train_y, batch_size, epochs, callbacks, verbose=True)  # approach 1
    # approach 2: split train into train and val
    # log = model.fit(train_x, train_y, batch_size, epochs, callbacks,
    #                 val_data=val.repeat(5).cat(), val_batch_size=batch_size, verbose=True)
    _ = model.compute_baseline_hazards()
    surv_train = model.predict_surv_df(train_x)
    net.eval()
    with torch.set_grad_enabled(False):
        surv_test = model.predict_surv_df(test_x)

And

    # CV
    log = model.fit(train_x, train_y, batch_size, epochs, callbacks,
                    val_data=val.repeat(10).cat(), val_batch_size=batch_size, verbose=False)
    _ = model.compute_baseline_hazards()
    surv_train = model.predict_surv_df(train_x)
    surv_val = model.predict_surv_df(test_x)
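(For context, the snippets above each show a single fit; a full leave-one-out evaluation refits the model once per held-out individual and collects one prediction per row. A minimal sketch of just that loop structure, using a stand-in numpy "model" rather than pycox so the indexing is easy to follow:)

```python
import numpy as np

def loo_predictions(X, fit_predict):
    """Leave-one-out: refit on all rows except i, then predict for row i."""
    n = len(X)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i          # every row except the held-out one
        preds[i] = fit_predict(X[mask], X[i])
    return preds

# stand-in "model": risk score = distance of the held-out row
# from the mean of the training rows (illustrative only)
fit_predict = lambda X_tr, x: float(np.linalg.norm(x - X_tr.mean(axis=0)))

X = np.array([[0.0], [1.0], [2.0], [3.0]])
print(loo_predictions(X, fit_predict))  # one out-of-sample prediction per row
```

In the pycox setting, `fit_predict` would rebuild `net` and `model` from scratch each iteration (otherwise each fold starts from the previous fold's weights) and call `compute_baseline_hazards()` on the n-1 training rows before predicting.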

arjunrajanna avatar Jul 29 '22 18:07 arjunrajanna

Hi, and thank you for the kind words. I agree that the two approaches should produce similar results. However, I guess the non-parametric baseline hazard might be a bit different, especially if your dataset is not large. Maybe you could compare those between the approaches?
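(To make that comparison concrete: the baseline hazard in Cox-type models is a Breslow-style estimator, and a hand-rolled numpy version lets you eyeball the baseline from each approach side by side. A sketch, assuming no special tie-breaking, with `log_partial_hazards` standing in for the network's output scores:)

```python
import numpy as np

def breslow_baseline(times, events, log_partial_hazards):
    """Breslow baseline hazard increments at each observed event time:
    h0(s) = (events at s) / sum over the risk set of exp(score)."""
    order = np.argsort(times)
    t, e, lph = times[order], events[order], log_partial_hazards[order]
    risk = np.exp(lph)
    # at_risk[i] = sum of exp(score) over subjects with time >= t[i]
    at_risk = np.cumsum(risk[::-1])[::-1]
    event_times = np.unique(t[e == 1])
    h0 = []
    for s in event_times:
        d = np.sum((t == s) & (e == 1))          # number of events at time s
        denom = at_risk[np.searchsorted(t, s)]   # risk set just before s
        h0.append(d / denom)
    return event_times, np.array(h0)

times = np.array([2.0, 3.0, 5.0, 7.0])
events = np.array([1, 1, 0, 1])
scores = np.zeros(4)  # null model: all exp(0) = 1
print(breslow_baseline(times, events, scores))
```

With all-zero scores the increments reduce to 1 / (number at risk), i.e. the Nelson-Aalen estimator, which is a handy sanity check. Large differences in these increments between the CV and LOO fits would point at the baseline rather than the network.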

Another thing that might differ is cases where you have identical covariates for multiple individuals (I don't know if that is the case for you). Under LOO their predictions would differ slightly, while under CV they would be identical if they fall in the same fold. Depending on your data, this might affect the C-index.
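(To illustrate that last point: Harrell's C credits a tied pair of predictions with 1/2 rather than 0 or 1, so identical covariates that collapse to identical scores under CV can shift the C-index relative to LOO, where the scores differ slightly. A small self-contained sketch of Harrell's C, not pycox's own evaluator:)

```python
import numpy as np

def harrell_c(times, events, risk):
    """Harrell's C: among comparable pairs, the fraction where the subject
    with the earlier event has the higher risk score; ties count as 1/2."""
    conc = comp = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i has an event before j's time
            if events[i] == 1 and times[i] < times[j]:
                comp += 1
                if risk[i] > risk[j]:
                    conc += 1.0
                elif risk[i] == risk[j]:
                    conc += 0.5
    return conc / comp

times = np.array([1.0, 2.0, 3.0, 4.0])
events = np.array([1, 1, 1, 1])
distinct = np.array([4.0, 3.0, 2.0, 1.0])  # perfectly ranked scores
tied = np.array([4.0, 3.0, 3.0, 1.0])      # two identical predictions
print(harrell_c(times, events, distinct))  # perfect concordance
print(harrell_c(times, events, tied))      # one tied pair lowers it
```

Here the single tied pair drops the C-index from 1.0 to 5.5/6, which shows how duplicated predictions alone can move the metric between the two evaluation schemes.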

Other than that, I'm sorry that I can't be of more assistance, as I don't really have much experience with LOO myself.

havakv avatar Aug 01 '22 07:08 havakv

@havakv Thank you very much for your suggestions. I ran some tests based on your input. The dataset is not very large. What I find is that the choice of model doesn't matter much for the predicted risks: the two approaches still don't produce similar results, and feature selection gives the same outcome. This leads me to think the size of the dataset is probably the crucial factor here. Thank you again for your time!

arjunrajanna avatar Aug 25 '22 15:08 arjunrajanna