
Incorrect ImageNet evals with pytorch_eval_num_workers > 0

Open · priyakasimbeg opened this issue 10 months ago · 2 comments

The AlgoPerf submitter team reports that they are no longer able to reproduce the NAdam baseline results in PyTorch using the current repo on the ImageNet workloads (both ResNet and ViT). The plot below shows the differences in training/validation loss and accuracy between the given NAdam JAX results and the current run's results on ImageNet ViT.

They did not see a change on OGBG or FastMRI.

The commits we merged range from 389fe3f823a5016289b55b48aa8061a37b18b401 to 79ccc5e860d7928cf896ffe12ec686c72fd840d4.

[Plot: training/validation loss and accuracy of the NAdam JAX baseline vs. the current PyTorch run on ImageNet ViT]

Steps to Reproduce

Run the submission runner with pytorch_eval_num_workers=4 (the default was recently changed to speed up evals).
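
For context, here is a minimal sketch of what that flag controls on the PyTorch side. The helper name and parameter values are illustrative, not the repo's actual code:

```python
# Hypothetical sketch of how an eval DataLoader consumes the flag
# (illustrative names; not the actual algorithmic-efficiency code).
import torch


def build_eval_dataloader(dataset, eval_num_workers: int):
  # num_workers > 0 spawns worker subprocesses, each holding its own copy
  # of the dataset object; num_workers == 0 loads data in the main process.
  return torch.utils.data.DataLoader(
      dataset,
      batch_size=256,
      shuffle=False,    # evals should visit a fixed set of examples
      num_workers=eval_num_workers,
      pin_memory=True,
      drop_last=False,  # dropping the last partial batch would skew metrics
  )
```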

Source or Possible Fix

Setting pytorch_eval_num_workers to 0 resolves the discrepancy in the evals. We are still investigating why.
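
One plausible mechanism (a guess, not a confirmed diagnosis): with num_workers > 0, each worker process gets its own RNG stream, so any random op left in the eval data path produces different data than the single-process case. A self-contained sketch of that failure mode:

```python
# Sketch: a stray random op in an "eval" dataset makes results depend on
# num_workers, because workers are seeded separately from the main process.
import torch
from torch.utils.data import DataLoader, Dataset


class FakeImages(Dataset):
  """Stand-in dataset whose __getitem__ contains a stray random op."""

  def __init__(self, n=64):
    self.data = torch.randn(n, 3, 32, 32)

  def __len__(self):
    return len(self.data)

  def __getitem__(self, idx):
    # A random op that should not be in an eval transform:
    noise = torch.randn_like(self.data[idx]) * 0.1
    return self.data[idx] + noise


def eval_checksum(num_workers):
  torch.manual_seed(0)  # seeds the main process, not the worker RNG streams
  loader = DataLoader(FakeImages(), batch_size=16, num_workers=num_workers)
  return sum(batch.sum().item() for batch in loader)


if __name__ == '__main__':
  print(eval_checksum(0))  # one value ...
  print(eval_checksum(4))  # ... a different value: the "eval set" changed
```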

priyakasimbeg · Mar 27 '24

Changed the default number of workers for the PyTorch data loaders to 0. Important update: for the speech workloads, the pytorch_eval_num_workers flag to submission_runner.py has to be set to > 0 to prevent a data loader crash in the JAX code.
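
For reference, the changed default would look something like this, assuming the flag is defined with absl flags in submission_runner.py (the exact definition in the repo may differ):

```python
from absl import flags

flags.DEFINE_integer(
    'pytorch_eval_num_workers',
    0,  # new default: load eval data in the main process
    'Number of workers for PyTorch evaluation data loaders. Must be set '
    '> 0 for the speech workloads to prevent a data loader crash in the '
    'JAX code.')
```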

priyakasimbeg · Mar 27 '24

I tried reproducing the issue by running the target-setting run on the current dev branch with pytorch_eval_num_workers=4, but I don't see the drop in eval metrics compared to an older reference run (this one).

If someone can share the exact command and commit they used to produce the run in the plot, I will try to rerun that instead.

runame · Apr 03 '24