
Modify docs about smaller_final_batch_mode

Open fepegar opened this issue 6 years ago • 0 comments

The `smaller_final_batch_mode` parameter is ignored during training because the dataset is effectively infinite (samples are yielded by a generator), so a smaller final batch never occurs and there is nothing to drop. During inference, users will rarely want to specify whether to use `pad` or `dynamic`.
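To illustrate the point, here is a minimal pure-Python sketch (the `batched` helper is hypothetical, not NiftyNet code) of the three batch modes. With a finite sample stream the mode changes the final batch; with an infinite generator the final batch never arrives, so the setting has no effect:

```python
from itertools import islice

def batched(samples, batch_size, mode="drop", pad_value=0):
    """Group samples into batches; handle a smaller final batch per `mode`.

    Hypothetical helper illustrating the three behaviours:
      - "drop":    discard the incomplete final batch
      - "dynamic": yield the final batch at its actual (smaller) size
      - "pad":     pad the final batch up to batch_size with pad_value
    """
    it = iter(samples)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        if len(batch) < batch_size:
            if mode == "drop":
                return
            if mode == "pad":
                batch += [pad_value] * (batch_size - len(batch))
        yield batch

# Finite stream (inference): the final batch is smaller, so the mode matters.
print(list(batched(range(5), 2, mode="drop")))     # [[0, 1], [2, 3]]
print(list(batched(range(5), 2, mode="dynamic")))  # [[0, 1], [2, 3], [4]]
print(list(batched(range(5), 2, mode="pad")))      # [[0, 1], [2, 3], [4, 0]]

# "Infinite" stream (training generator): a smaller final batch never
# appears, so the mode is irrelevant -- the parameter is effectively ignored.
def infinite_samples():
    i = 0
    while True:
        yield i
        i += 1

print(list(islice(batched(infinite_samples(), 2, mode="drop"), 3)))
# [[0, 1], [2, 3], [4, 5]]
```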

Should this parameter be removed from the training config entirely and hardcoded to `pad` for inference?

@wyli

fepegar avatar Jun 05 '19 11:06 fepegar