Performance showcase on x86_64 WSL Ubuntu did not complete as expected
Bug description
While going through "Getting started", the performance showcase ran for about 6 hours on x86_64 WSL Ubuntu 22.04.4. It displayed QPS for TensorFlow and PyTorch as expected, but the MAX Engine run reported only 0.01 QPS.
Steps to reproduce
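Roughly the commands I ran, following the Getting Started guide (the repo path and model flag below are reproduced from memory, so treat this as a sketch rather than the exact invocation):

```sh
# Sketch of the performance showcase run from the Getting Started guide;
# the directory layout and the roberta model flag are assumptions from memory.
git clone https://github.com/modularml/max.git
cd max/examples/performance-showcase
python3 run.py -m roberta
```

Output: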
```
Starting inference throughput comparison

----------------------------------------System Info----------------------------------------
CPU: Intel(R) Xeon(R) CPU E5-1660 0 @ 3.30GHz
Arch: X86_64
Clock speed: 3.3000 GHz
Cores: 12

Running with TensorFlow
.......................................................................................... QPS: 4.01

Running with PyTorch
.......................................................................................... QPS: 4.93

Running with MAX Engine
Compiling model. Done!
.......................................................................................... QPS: 0.01

====== Speedup Summary ======

MAX Engine vs TensorFlow: Oh, darn that's only 0.00x stock performance.
MAX Engine vs PyTorch: Oh, darn that's only 0.00x stock performance.

Hold on a tick... We normally see speedups of roughly 2.50x on TensorFlow and 1.20x on PyTorch for roberta on X86_64. Honestly, we would love to hear from you to learn more about the system you're running on! (https://github.com/modularml/max/issues/new/choose)
```
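(For context on the summary above: 0.01 QPS ÷ 4.01 QPS ≈ 0.002, which rounds to the 0.00x shown.)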
System information
- What OS did you install MAX on?
x86_64 WSL Ubuntu 22.04.4
- Provide version information for MAX by pasting the output of `max -v`
max 24.1.0 (c176f84d)
Modular version 24.1.0-c176f84d-release
- Provide version information for Mojo by pasting the output of `mojo -v`
mojo 24.1.0 (c176f84d)
- Provide Modular CLI version by pasting the output of `modular -v`
modular 0.5.1 (1b608e3d)
Please check out this explainer
@ehsanmok's point is a good one for contextualizing performance differences on consumer hardware, but 6 hours is extreme; something is wrong here.
Is this experience reproducible, @tstoyc? Also, out of curiosity, are you running WSL 1 or WSL 2?
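If you're not sure which one you're on, here's a quick way to check (standard WSL commands, nothing specific to MAX):

```sh
# From Windows (PowerShell or cmd): list installed distros and their WSL version
wsl.exe -l -v

# Or from inside the Ubuntu shell: a WSL 2 kernel reports a
# "microsoft-standard-WSL2" suffix, while WSL 1 reports "-Microsoft"
uname -r
```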
Also, it goes without saying, but thank you for filing this @tstoyc! We really do appreciate it 🙂
Please re-open if you continue to have issues!