MolCLR
Some questions about fine-tuning
Recently I came across some papers on molecular contrastive learning, and it was my great pleasure to find the paper written by your team, Molecular Contrastive Learning of Representations via Graph Neural Networks. This paper has benefited me a lot. However, when I use the pre-trained model you provided for downstream tasks with the default configuration file config_finetune.yaml, the model's performance never reaches the results reported in the paper. Could you provide the hyperparameter configuration files required for the downstream tasks on each dataset?
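For concreteness, here is a rough sketch of the kind of per-dataset hyperparameter setup I mean. This is only an illustration, assuming PyYAML and key names such as `task_name`, `batch_size`, `init_lr`, and `epochs`; the actual schema of config_finetune.yaml may differ.

```python
# Hypothetical sketch: writing per-dataset fine-tuning configs before each
# run. The key names (task_name, batch_size, init_lr, epochs) and values
# are assumptions, not the confirmed schema of config_finetune.yaml.
import yaml

with open("config_finetune.yaml") as f:
    config = yaml.safe_load(f)

# Illustrative per-dataset overrides to try.
overrides = {
    "BBBP":  {"batch_size": 32, "init_lr": 5e-4, "epochs": 100},
    "Tox21": {"batch_size": 64, "init_lr": 1e-4, "epochs": 100},
}

for task, params in overrides.items():
    run_config = dict(config, task_name=task, **params)
    with open(f"config_finetune_{task}.yaml", "w") as f:
        yaml.safe_dump(run_config, f)
    # then launch fine-tuning with the written config,
    # e.g. by pointing the repo's finetune script at this file
```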
hi, @Shimmer8001 I fine-tuned and got results similar to the paper, with only a minor decrease. I think some random seeds need to be set to replicate exactly the same results (see the seeding sketch after this comment).
Also, did you try pre-training the model on a larger or otherwise different dataset to improve the fine-tuning results?
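In case it helps, here is a minimal seeding sketch, assuming the standard PyTorch + NumPy stack that MolCLR builds on; where exactly to call it inside the repo's fine-tuning script is an assumption.

```python
# Minimal sketch: fix the common sources of randomness before fine-tuning.
# Assumes a standard PyTorch + NumPy setup; the exact call site in the
# repo's fine-tuning code is an assumption.
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op without a GPU
    # Trade speed for determinism in cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
```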
Same here, I find that I cannot reproduce the results shown in the paper on many datasets.
hi, @danielkaifeng I can't reproduce the results shown in the paper either. I guess the hyperparameters and random seed I set may not be suitable. Could you share the hyperparameters and random seed you used? Thank you very much.