
Split out train and update_batch_norm in Librispeech workloads

Open · znado opened this issue 3 years ago · 0 comments

Currently, setting update_batch_norm just runs the Librispeech workloads in train mode, which also runs dropout in train mode. The purpose of having separate mode and update_batch_norm kwargs to model_fn() was to let submitters control dropout behavior and batch-norm statistics updates independently, if desired. We can update Conformer.__call__ and Deepspeech.__call__ to take both train and update_batch_norm and pass them to dropout and batch norm respectively.
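
As a rough illustration of the intent (this is a minimal hypothetical Flax block, not the actual Conformer or Deepspeech code; the layer names, sizes, and dropout rate are made-up assumptions), the two flags could be routed independently like this:

```python
# Sketch only: a toy block whose __call__ takes both `train` and
# `update_batch_norm`, wiring `train` to dropout and `update_batch_norm`
# to batch-norm statistics updates. Layer sizes/names are illustrative.
import flax.linen as nn
import jax
import jax.numpy as jnp


class Block(nn.Module):
  features: int = 128
  dropout_rate: float = 0.1

  @nn.compact
  def __call__(self, x, train: bool, update_batch_norm: bool):
    x = nn.Dense(self.features)(x)
    # Batch norm uses (and updates) batch statistics only when
    # update_batch_norm=True; otherwise it reads the running averages.
    x = nn.BatchNorm(use_running_average=not update_batch_norm,
                     momentum=0.999)(x)
    x = nn.relu(x)
    # Dropout is stochastic only when train=True.
    x = nn.Dropout(rate=self.dropout_rate)(x, deterministic=not train)
    return x


# Example: run dropout in train mode while keeping batch-norm stats frozen.
model = Block()
x = jnp.ones((8, 32))
variables = model.init(jax.random.PRNGKey(0), x,
                       train=False, update_batch_norm=False)
y = model.apply(variables, x, train=True, update_batch_norm=False,
                rngs={'dropout': jax.random.PRNGKey(1)})
```

With this kind of signature, a submitter could, for example, evaluate with dropout enabled but batch-norm statistics frozen, or the reverse, rather than having both tied to a single train flag.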

znado · Oct 08 '22 17:10