PADME
Cannot predict properly
Hello, I pulled your latest code and used the davis dataset you provided to build two scripts, one for training and one for prediction. Training runs normally, but prediction fails with an unexpected error; the details are below.
This is my training script:
CUDA_VISIBLE_DEVICES=2
spec='python3 driver.py --dataset davis \
--model graphconvreg \
--prot_desc_path davis_data/prot_desc.csv --arithmetic_mean \
--model_dir ./model_dir_tmp3 --plot --aggregate toxcast --csv_out ./outs \
--intermediate_file intermediate_cv2.csv '
eval $spec
Then I used the following script to make predictions:
CUDA_VISIBLE_DEVICES=2
spec='python3 driver.py --dataset davis --prot_desc_path davis_data/prot_desc.csv \
--model graphconvreg \
--model_dir ./model_dir_tmp3 --predict_only --restore_model \
--csv_out ./preds_all_tc_graphconv.csv '
eval $spec
But when predicting, I got the following error:
Traceback (most recent call last):
File "driver.py", line 718, in <module>
tf.app.run(main=run_analysis, argv=[sys.argv[0]] + unparsed)
File "/home/zh/anaconda3/envs/deep2.0.0/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 124, in run
_sys.exit(main(argv))
File "driver.py", line 281, in run_analysis
prediction_file=csv_out)
File "/project/git3/PADME2/dcCustom/molnet/run_benchmark_models.py", line 191, in model_regression
restore_model=restore_model)
File "/project/git3/PADME2/dcCustom/models/tensorgraph/graph_models.py", line 84, in __init__
super(WeaveModel, self).__init__(**kwargs)
File "/project/git3/PADME2/dcCustom/models/tensorgraph/tensor_graph.py", line 100, in __init__
super(TensorGraph, self).__init__(**kwargs)
File "/project/git3/PADME2/dcCustom/models/models.py", line 59, in __init__
assert os.path.exists(model_dir)
AssertionError
But in fact the ./model_dir_tmp3 folder does exist, so it seems my custom --model_dir is not being used.
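My guess is that a relative path like ./model_dir_tmp3 gets resolved against the current working directory rather than the repository root, so the assert can fail even though the folder exists. A quick check along these lines (an illustrative snippet, not PADME code) should confirm it:

import os

model_dir = './model_dir_tmp3'
print(os.getcwd())                 # where the process is actually running
print(os.path.abspath(model_dir))  # the path the assert really tests
print(os.path.exists(model_dir))   # False whenever the cwd is wrong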
Then I renamed the model_dir_tmp3 folder to model_dir and executed the following prediction script:
CUDA_VISIBLE_DEVICES=2
spec='python3 driver.py --dataset davis --prot_desc_path davis_data/prot_desc.csv \
--model graphconvreg \
--model_dir ./model_dir --predict_only --restore_model \
--csv_out ./preds_all_tc_graphconv.csv '
eval $spec
But then I ran into another error:
Traceback (most recent call last):
File "driver.py", line 717, in <module>
tf.app.run(main=run_analysis, argv=[sys.argv[0]] + unparsed)
File "/home/zh/anaconda3/envs/deep2.0.0/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 124, in run
_sys.exit(main(argv))
File "driver.py", line 281, in run_analysis
prediction_file=csv_out)
File "/project/git3/PADME2/dcCustom/molnet/run_benchmark_models.py", line 195, in model_regression
model.predict(train_dataset, transformers=transformers, csv_out=prediction_file, tasks=tasks)
File "/project/git3/PADME2/dcCustom/models/tensorgraph/tensor_graph.py", line 648, in predict
self.restore()
File "/project/git3/PADME2/dcCustom/models/tensorgraph/tensor_graph.py", line 1071, in restore
saver = tf.train.Saver(var_list=var_list)
File "/home/zh/anaconda3/envs/deep2.0.0/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1239, in __init__
self.build()
File "/home/zh/anaconda3/envs/deep2.0.0/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1248, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/zh/anaconda3/envs/deep2.0.0/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1272, in _build
raise ValueError("No variables to save")
ValueError: No variables to save
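If I read the traceback correctly, tf.train.Saver raises this whenever it finds no variables to restore, i.e. the model graph has not been built yet at the time restore() runs. A tiny TensorFlow 1.x snippet (illustrative only) reproduces the message:

import tensorflow as tf

# Building a Saver over a graph that contains no variables
# raises the same error as in the traceback above.
with tf.Graph().as_default():
    try:
        tf.train.Saver()
    except ValueError as e:
        print(e)  # "No variables to save"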
I am sure that my training completed normally and that the model checkpoint files were saved, as shown below.
[screenshot of the saved model directory contents]
Can you tell me what is wrong?
Hi, I apologize that my code is currently a bit dirty. You're welcome to make changes, but if you don't, please do as I say below, and I will try to see whether it is possible to refactor it soon. My workstation needs some setup before I can resume working on it, so it might take some time.
For now, if you want to enable --predict_only, you MUST set the dataset to nci60 in the bash script. You might also need to manually tweak molnet/load_function/nci60_dataset.py so that it suits your needs.
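For example, your second prediction script would become something like the following (an untested sketch; all other flags are kept exactly as in your post, and the nci60_dataset.py tweak above may still be needed):

CUDA_VISIBLE_DEVICES=2
spec='python3 driver.py --dataset nci60 --prot_desc_path davis_data/prot_desc.csv \
--model graphconvreg \
--model_dir ./model_dir --predict_only --restore_model \
--csv_out ./preds_all_tc_graphconv.csv '
eval $spec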
Make sure that your working directory is correct. You might need to insert a pdb.set_trace() line inside the models.py script to see what the directory is immediately before the problem occurs.
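Concretely, the breakpoint would sit right above the assertion from your traceback, something like this (a sketch; adapt it to the surrounding code in dcCustom/models/models.py):

import os
import pdb

pdb.set_trace()                   # at the prompt, inspect os.getcwd() and model_dir
assert os.path.exists(model_dir)  # the line that fails in your traceback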
If you still have any problems after changing the dataset to nci60, please ask me again.