
MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.

62 algorithmic-efficiency issues

Todo - [x] run jax workloads - [ ] run pytorch workloads - [ ] finalize numbers - [ ] attach data of before and after - [ ] update...

This is for the LM workload.

CIFAR workload error resolution: #889

The CIFAR dataloader no longer works properly with JAX algorithms that use jax.jit. I did not test whether PyTorch algorithms still work with CIFAR. ## Description When running jax_nadamw_full_budget.py...

* Implementation: Google AutoML's Lion * Tunable hyperparameters: learning_rate, weight_decay, one_minus_beta1, beta2 * Fixed hyperparameters: warmup_factor
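For context on the hyperparameters listed above, a minimal NumPy sketch of a Lion update step (following the published Lion rule; the function name and signature here are illustrative, not the actual submission code, and the `one_minus_beta1` parameterization mirrors the tunable hyperparameter named in the issue):

```python
import numpy as np

def lion_update(param, grad, momentum, learning_rate, weight_decay,
                one_minus_beta1, beta2):
    """One Lion step (illustrative sketch, not the submission's code).

    Tunable hyperparameters match the issue's list: learning_rate,
    weight_decay, one_minus_beta1, beta2.
    """
    beta1 = 1.0 - one_minus_beta1
    # Lion's update direction: sign of an interpolation between the
    # momentum buffer and the current gradient.
    update = np.sign(beta1 * momentum + one_minus_beta1 * grad)
    # Decoupled weight decay, applied AdamW-style.
    new_param = param - learning_rate * (update + weight_decay * param)
    # The momentum buffer is refreshed with beta2 after the step.
    new_momentum = beta2 * momentum + (1.0 - beta2) * grad
    return new_param, new_momentum
```

Because the update is only a sign vector, Lion's effective step size is set almost entirely by `learning_rate`, which is why the tuning search space above keeps it tunable.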

⚡ Submission

Speech workloads appear to be ~5x slower per update step than before. This happens with both pmap and jit. ## Steps to Reproduce In the container, run ``` python submission_runner.py --framework=jax --workload=librispeech_deepspeech --submission_path=reference_algorithms/qualification_baselines/external_tuning/jax_nadamw_target_setting.py --data_dir=/data/librispeech...

V2 Self-Tuning Budget for ResNet is Half of Benchmark Schedule-Free Run Time ## Description I reran Schedule-Free AdamW with the new self-tuning budgets to ensure that all workloads could reach...

We frequently build the latest Docker image for this project as part of our workflow, and I'd like to suggest publishing this image to Docker Hub to support...