Pretrained scatteract models?
**Is your feature request related to a problem? Please describe.**
Given the out-of-date versions of tensorflow, tensorbox, etc., I have found it challenging to retrain the models and reproduce the results of the paper.
**Describe the solution you'd like**
Is there any chance the authors could release the pretrained models (either here or via some link)?
**Describe alternatives you've considered**
I am in the process of setting up CUDA 8, tensorflow 0.12.1, python2.7 (near end of life!), etc. in a Docker image, and hopefully I can successfully train the three models, but it may no longer be possible without sweeping code changes. For example, this change was necessary just in moving from tensorflow==0.10 to 0.12.1, because the tensorflow==0.12.1 LSTM/RNN code defaults to `state_is_tuple=True`:
```diff
diff --git a/tensorbox/train_obj_model.py b/tensorbox/train_obj_model.py
index 76e5715..e18ab1b 100644
--- a/tensorbox/train_obj_model.py
+++ b/tensorbox/train_obj_model.py
@@ -30,13 +30,15 @@ def build_lstm_inner(H, lstm_input):
     '''
     build lstm decoder
     '''
-    lstm_cell = rnn_cell.BasicLSTMCell(H['lstm_size'], forget_bias=0.0)
+    lstm_cell = rnn_cell.BasicLSTMCell(H['lstm_size'], forget_bias=0.0, state_is_tuple=False)
     if H['num_lstm_layers'] > 1:
-        lstm = rnn_cell.MultiRNNCell([lstm_cell] * H['num_lstm_layers'])
+        lstm = rnn_cell.MultiRNNCell([lstm_cell] * H['num_lstm_layers'], state_is_tuple=False)
     else:
         lstm = lstm_cell
```
There is likely more of this to come as I progress towards getting things running.
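For context, the two settings differ only in how the LSTM state is packed: with `state_is_tuple=True` the cell carries a `(c, h)` pair, while the legacy `state_is_tuple=False` behavior concatenates `c` and `h` along the feature axis. A NumPy sketch of the shapes involved (sizes are illustrative, not taken from the repo):

```python
import numpy as np

batch, lstm_size = 1, 4

# Cell state c and hidden state h, each of shape [batch, lstm_size]
c = np.zeros((batch, lstm_size))
h = np.ones((batch, lstm_size))

# state_is_tuple=True: the state is carried as a (c, h) pair
state_tuple = (c, h)

# state_is_tuple=False (legacy): c and h are concatenated
# along the feature axis -> shape [batch, 2 * lstm_size]
state_concat = np.concatenate([c, h], axis=1)

print(state_concat.shape)  # (1, 8)
```

This is why code written against the old concatenated layout breaks under the new default: anything that slices the state tensor by offset assumes the `2 * lstm_size` packing.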
**Additional context**
Currently getting segfaults after training for just a few iterations (on CPU, not GPU yet; I still need to iron out some wrinkles before paying for GPU time).
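For anyone else attempting the same pinned setup, a minimal sketch of the environment is below. The versions are those named in this issue; the base image tag and install commands are assumptions and have not been tested:

```dockerfile
# Sketch only -- base image tag and pip invocation are assumptions
FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04

RUN apt-get update && apt-get install -y python2.7 python-pip

# tensorflow==0.12.1 predates current PyPI packaging conventions;
# a direct wheel URL may be required instead of this plain install
RUN pip install tensorflow==0.12.1
```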
It may also be good to link to the generated training data, in case any pseudo-randomness isn't seeded with a consistent value, so that results are exactly comparable to the paper's. I did, however, successfully generate the training data.
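On the seeding point: one mitigation (a sketch, not code from this repo; the helper name is mine, and the TensorFlow call is noted as an assumption) is to seed every PRNG source before generating data, so runs are bit-for-bit repeatable:

```python
import random
import numpy as np

def seed_everything(seed=42):
    """Seed the Python and NumPy PRNGs. With tensorflow==0.12.1 one
    would also call tf.set_random_seed(seed) (assumption, not run here)."""
    random.seed(seed)
    np.random.seed(seed)

seed_everything(0)
a = np.random.rand(3)

seed_everything(0)
b = np.random.rand(3)

# Re-seeding reproduces the exact same draws
print(np.array_equal(a, b))  # True
```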
Hi @dnm1977, I am trying to replicate the results of this work. May I ask you some questions about it? Here is my email: [email protected]. I would appreciate a reply.
@dnm1977 were you able to train the models on your generated training data?