`input_fn` called multiple times in `Estimator.train`
https://github.com/tensorflow/adanet/blob/712bc8efbcce4684cc81108ad916a735cddb4de2/adanet/core/estimator.py#L896-L900
This seems problematic because `adanet.Estimator.train` would load the data from scratch at every iteration.
As noted in https://github.com/tensorflow/tensorflow/issues/19062#issuecomment-400129963, with canned TF Estimators `train` is called only once.
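For comparison, a minimal sketch of the canned-Estimator behavior referenced above, assuming toy data and `tf.estimator.LinearClassifier` as a stand-in (none of this is from the issue itself):

```python
# Hedged sketch, not from the issue: with a canned estimator, train() is
# called once, input_fn is invoked once, and a finite (repeated) Dataset
# ends training on its own without `steps`/`max_steps`.
import numpy as np
import tensorflow as tf

def input_fn():
    features = {"x": np.random.rand(1000, 4).astype(np.float32)}
    labels = np.random.randint(0, 2, size=(1000,)).astype(np.int32)
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    # 10 epochs in total; training stops when the pipeline is exhausted.
    return dataset.shuffle(1000).repeat(10).batch(50)

estimator = tf.estimator.LinearClassifier(
    feature_columns=[tf.feature_column.numeric_column("x", shape=(4,))])
# One train() call -> one input_fn call -> 1000 / 50 * 10 = 200 steps.
estimator.train(input_fn)
```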
There seem to be two negative effects of this:
- A repeated Dataset (`dataset.repeat(10)`, for example) cannot stop training via `OutOfRangeError` or `StopIteration`; we have to set `steps` or `max_steps`, which is inconsistent with canned Estimators.
- If a user doesn't shuffle the dataset, AdaNet may repeatedly use the first `max_iteration_steps * batch_size` samples each time, thus fitting to a subset of the training data. (See the input_fn sketch after this list.)
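Under those constraints, a minimal input_fn sketch (illustrative numbers and names, not from this thread) that shuffles and bounds each iteration explicitly:

```python
# Hedged sketch with illustrative numbers: because adanet.Estimator
# re-invokes input_fn each iteration, training must be bounded by
# max_iteration_steps, and shuffling keeps each iteration from re-reading
# the same first max_iteration_steps * batch_size examples.
import numpy as np
import tensorflow as tf

NUM_EXAMPLES, BATCH_SIZE, EPOCHS_PER_ITERATION = 1000, 50, 10

def input_fn():
    features = {"x": np.random.rand(NUM_EXAMPLES, 4).astype(np.float32)}
    labels = np.random.randint(0, 2, size=(NUM_EXAMPLES,)).astype(np.int32)
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    # Reshuffle every epoch so repeated invocations don't replay one order.
    dataset = dataset.shuffle(NUM_EXAMPLES, reshuffle_each_iteration=True)
    return dataset.repeat().batch(BATCH_SIZE)

# The "extra math" discussed in the reply below bounds each iteration:
max_iteration_steps = NUM_EXAMPLES // BATCH_SIZE * EPOCHS_PER_ITERATION  # 200
```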
Am I right? @cweill
@shendiaomo: You are correct on both counts. For this reason, we request that the user configure `max_iteration_steps` to be the number of repetitions desired, which unfortunately requires the user to do some extra math (`max_iteration_steps = num_examples / batch_size * num_epochs_per_iteration`).
Assuming each AdaNet iteration trains over several epochs, the second point should be less of an issue in practice if your base learners are randomly initialized. They will tend to learn different biases and form a strong ensemble regardless.
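A minimal sketch of that math, with made-up dataset numbers (the MNIST-sized figures below are illustrative, not from this thread):

```python
# Hedged example of the extra math described above (assumed numbers).
num_examples = 60000            # e.g. a MNIST-sized training set
batch_size = 64
num_epochs_per_iteration = 10

# Steps for one AdaNet iteration to cover the desired number of epochs.
max_iteration_steps = num_examples // batch_size * num_epochs_per_iteration
print(max_iteration_steps)      # 9370

# This value is passed as adanet.Estimator(max_iteration_steps=...), and the
# overall run is bounded with train(..., max_steps=max_iteration_steps *
# total_adanet_iterations), as in the tutorial snippet quoted later in this
# thread.
```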
Great! Thanks for the explanation. However, having to do that math by hand is not very convenient: imagine someone wanting to swap the `DNNClassifier` in their application for `adanet.Estimator`; that could be a lot of work. Do you have a plan to improve this? Or will the Keras version avoid the same situation?
@cweill
From the tutorials:
`max_iteration_steps=TRAIN_STEPS // ADANET_ITERATIONS,`
If I want to train for 100 epochs over one AdaNet iteration, with `num_examples / batch_size` steps per epoch, should I set `max_iteration_steps` accordingly?
I have a sample size of 5265 and a batch size of 50, so I have about 105 update steps per epoch. Should my `max_iteration_steps` be 10500?
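For reference, just the arithmetic from the formula in the earlier reply, applied to these numbers:

```python
# Plugging this question's numbers into the formula above (arithmetic only).
num_examples = 5265
batch_size = 50
num_epochs_per_iteration = 100

steps_per_epoch = num_examples // batch_size                       # 105
max_iteration_steps = steps_per_epoch * num_epochs_per_iteration   # 10500
```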
Pinging