dgl-lifesci
Result not reproducible for MPNN on FreeSolv dataset
Hi there,
I ran the commands `python regression.py -d FreeSolv -mo MPNN -f attentivefp`
and `python regression.py -d FreeSolv -mo MPNN -f canonical`
in examples/property_prediction/moleculenet. However, the results reported in the table are not reproducible.
The performance I obtained is as follows
| Model | Featurizer | Val RMSE | Test RMSE |
|---|---|---|---|
| MPNN | attentivefp | 2.614 +/- 0.891 | 2.476 +/- 0.412 |
| MPNN | canonical | 5.673 +/- 1.096 | 3.716 +/- 0.723 |
We did not fix the random seeds for these scripts, and some underlying operator implementations can be inherently non-deterministic. If you really want to reproduce the results, you may run the script a few more times and see if you get closer results. Alternatively, just use the pre-trained models with `-p`.
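For reference, assuming `-p` is simply appended to the same invocation (please double-check against the script's argument parser), that would look like `python regression.py -d FreeSolv -mo MPNN -f canonical -p`.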
@mufeili I ran 10 runs and included the standard deviation, as seen above, and the value reported for MPNN + canonical on GitHub is not within +/- 2 standard deviations of what I get. I am not sure how an RMSE around 1.x is obtainable.
Try re-running the hyperparameter search on your side and see if you get better results. Across different random seeds, or even different hyperparameter searches, you can get very different results. This is particularly the case for FreeSolv, the smallest dataset in MoleculeNet. You might get more stable results with k-fold cross validation rather than a single train/val/test split.
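To make the k-fold suggestion concrete, here is a minimal sketch, assuming you wrap the existing training loop from regression.py in a helper. `load_freesolv_labels` and `train_and_evaluate` are hypothetical placeholders, not dgl-lifesci or script functions; substitute the actual dataset loading and training/evaluation code.

```python
# Minimal sketch of k-fold cross validation for a small dataset like FreeSolv.
import numpy as np
from sklearn.model_selection import KFold

def load_freesolv_labels():
    # Placeholder: return one target value per molecule (FreeSolv has 642 molecules).
    return np.random.randn(642)

def train_and_evaluate(train_idx, test_idx):
    # Placeholder: train an MPNN on train_idx and return the test RMSE on test_idx.
    return float(np.random.rand())

labels = load_freesolv_labels()
kf = KFold(n_splits=5, shuffle=True, random_state=0)
rmses = []
for fold, (train_idx, test_idx) in enumerate(kf.split(labels)):
    rmse = train_and_evaluate(train_idx, test_idx)
    rmses.append(rmse)
    print(f"fold {fold}: test RMSE = {rmse:.3f}")

print(f"mean test RMSE = {np.mean(rmses):.3f} +/- {np.std(rmses):.3f}")
```

Reporting the mean and standard deviation over folds gives an estimate that depends less on any single split, which matters on a dataset as small as FreeSolv.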