
M1 perf tests. If you have an M2/M3, could you run the tests and publish the results?

Open ivanrylov opened this issue 8 months ago • 14 comments

https://github.com/ivanrylov/mlx-m1-pro-speed-tests/

ivanrylov avatar Dec 17 '23 18:12 ivanrylov

(.venv) user@box Transformer_lm % python main.py
Training a transformer with 153.883 M parameters
Iter 10: Train loss 8.955, It/sec 0.347
Iter 20: Train loss 8.317, It/sec 0.355
Iter 30: Train loss 8.073, It/sec 0.354
Iter 40: Train loss 7.848, It/sec 0.354
Iter 50: Train loss 7.815, It/sec 0.353
Iter 60: Train loss 7.689, It/sec 0.354
Iter 70: Train loss 7.665, It/sec 0.353
Iter 80: Train loss 7.639, It/sec 0.353
Iter 90: Train loss 7.557, It/sec 0.354
Iter 100: Train loss 7.580, It/sec 0.353
Iter 110: Train loss 7.525, It/sec 0.351
Iter 120: Train loss 7.545, It/sec 0.350
Iter 130: Train loss 7.461, It/sec 0.349
Iter 140: Train loss 7.392, It/sec 0.349
Iter 150: Train loss 7.413, It/sec 0.353
Iter 160: Train loss 7.398, It/sec 0.350
Iter 170: Train loss 7.417, It/sec 0.351
Iter 180: Train loss 7.332, It/sec 0.350
Iter 190: Train loss 7.324, It/sec 0.351

(.venv) user@box Transformer_lm % python main.py --gpu
Training a transformer with 153.883 M parameters
Iter 10: Train loss 8.957, It/sec 1.419
Iter 20: Train loss 8.349, It/sec 1.419
Iter 30: Train loss 8.095, It/sec 1.402
Iter 40: Train loss 7.817, It/sec 1.423
Iter 50: Train loss 7.826, It/sec 1.438
Iter 60: Train loss 7.674, It/sec 1.438
Iter 70: Train loss 7.662, It/sec 1.439
Iter 80: Train loss 7.595, It/sec 1.435
Iter 90: Train loss 7.496, It/sec 1.430
Iter 100: Train loss 7.563, It/sec 1.426
Iter 110: Train loss 7.494, It/sec 1.425
Iter 120: Train loss 7.496, It/sec 1.440
Iter 130: Train loss 7.432, It/sec 1.442
Iter 140: Train loss 7.426, It/sec 1.442
Iter 150: Train loss 7.337, It/sec 1.441
Iter 160: Train loss 7.384, It/sec 1.441
Iter 170: Train loss 7.433, It/sec 1.443
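For reference, the It/sec figure in these logs is just iterations divided by wall-clock time. A minimal sketch of such a meter (hypothetical helper with a stand-in workload, not the actual main.py code):

```python
import time

def iters_per_sec(step, n=10):
    # Run the step function n times and return iterations per second.
    tic = time.perf_counter()
    for _ in range(n):
        step()
    toc = time.perf_counter()
    return n / (toc - tic)

# Stand-in workload; the benchmark times its training step the same way in spirit.
rate = iters_per_sec(lambda: sum(range(100_000)), n=10)
print(f"It/sec {rate:.3f}")
```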

(.venv) user@box mnist % python main.py
Epoch 0: Test accuracy 0.866, Time 0.097 (s)
Epoch 1: Test accuracy 0.896, Time 0.088 (s)
Epoch 2: Test accuracy 0.917, Time 0.097 (s)
Epoch 3: Test accuracy 0.928, Time 0.091 (s)
Epoch 4: Test accuracy 0.935, Time 0.089 (s)
Epoch 5: Test accuracy 0.938, Time 0.087 (s)
Epoch 6: Test accuracy 0.946, Time 0.088 (s)
Epoch 7: Test accuracy 0.941, Time 0.089 (s)
Epoch 8: Test accuracy 0.950, Time 0.086 (s)
Epoch 9: Test accuracy 0.948, Time 0.088 (s)

(.venv) user@box mnist % python main.py --gpu
Epoch 0: Test accuracy 0.866, Time 0.241 (s)
Epoch 1: Test accuracy 0.896, Time 0.136 (s)
Epoch 2: Test accuracy 0.917, Time 0.141 (s)
Epoch 3: Test accuracy 0.928, Time 0.135 (s)
Epoch 4: Test accuracy 0.935, Time 0.134 (s)
Epoch 5: Test accuracy 0.938, Time 0.152 (s)
Epoch 6: Test accuracy 0.946, Time 0.136 (s)
Epoch 7: Test accuracy 0.941, Time 0.135 (s)
Epoch 8: Test accuracy 0.950, Time 0.136 (s)
Epoch 9: Test accuracy 0.948, Time 0.135 (s)

M2 Pro 32GB

bigsnarfdude avatar Dec 17 '23 19:12 bigsnarfdude

Thanks @bigsnarfdude!

Transformer_lm % python main.py --gpu:

M1Pro 32GB - It/sec 1
M2Pro 32GB - It/sec 1.4

Someone with an M3, please run the test :)

ivanrylov avatar Dec 17 '23 20:12 ivanrylov

M1 Ultra 128GB

transformer_lm % python main.py
Training a transformer with 153.883 M parameters
Iter 10: Train loss 8.963, It/sec 0.347
Iter 20: Train loss 8.379, It/sec 0.354
Iter 30: Train loss 8.075, It/sec 0.354
Iter 40: Train loss 7.915, It/sec 0.355
Iter 50: Train loss 7.818, It/sec 0.354
Iter 60: Train loss 7.747, It/sec 0.354
Iter 70: Train loss 7.671, It/sec 0.354
Iter 80: Train loss 7.592, It/sec 0.354
Iter 90: Train loss 7.550, It/sec 0.354
Iter 100: Train loss 7.529, It/sec 0.354

transformer_lm % python main.py --gpu
Training a transformer with 153.883 M parameters
Iter 10: Train loss 8.880, It/sec 3.711
Iter 20: Train loss 8.277, It/sec 4.034
Iter 30: Train loss 8.052, It/sec 4.043
Iter 40: Train loss 7.922, It/sec 4.040
Iter 50: Train loss 7.737, It/sec 4.031
Iter 60: Train loss 7.799, It/sec 4.035
Iter 70: Train loss 7.627, It/sec 4.036
Iter 80: Train loss 7.658, It/sec 4.042
Iter 90: Train loss 7.589, It/sec 4.037
Iter 100: Train loss 7.578, It/sec 4.035

mnist % python main.py
Epoch 0: Test accuracy 0.866, Time 0.089 (s)
Epoch 1: Test accuracy 0.896, Time 0.080 (s)
Epoch 2: Test accuracy 0.917, Time 0.091 (s)
Epoch 3: Test accuracy 0.928, Time 0.089 (s)
Epoch 4: Test accuracy 0.935, Time 0.097 (s)
Epoch 5: Test accuracy 0.938, Time 0.089 (s)
Epoch 6: Test accuracy 0.946, Time 0.084 (s)
Epoch 7: Test accuracy 0.941, Time 0.092 (s)
Epoch 8: Test accuracy 0.950, Time 0.089 (s)
Epoch 9: Test accuracy 0.948, Time 0.091 (s)

mnist % python main.py --gpu
Epoch 0: Test accuracy 0.866, Time 0.277 (s)
Epoch 1: Test accuracy 0.896, Time 0.184 (s)
Epoch 2: Test accuracy 0.917, Time 0.185 (s)
Epoch 3: Test accuracy 0.928, Time 0.180 (s)
Epoch 4: Test accuracy 0.935, Time 0.182 (s)
Epoch 5: Test accuracy 0.938, Time 0.179 (s)
Epoch 6: Test accuracy 0.946, Time 0.179 (s)
Epoch 7: Test accuracy 0.941, Time 0.182 (s)
Epoch 8: Test accuracy 0.950, Time 0.182 (s)
Epoch 9: Test accuracy 0.948, Time 0.180 (s)

chimezie avatar Dec 18 '23 18:12 chimezie

transformer_lm % python main.py --gpu

M1Pro 32GB - It/sec 1
M2Pro 32GB - It/sec 1.4
M1Ultra    - It/sec 4.004

bigsnarfdude avatar Dec 18 '23 18:12 bigsnarfdude

Knowing the number of GPU cores would be useful.
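On macOS, `system_profiler SPDisplaysDataType` reports the GPU core count. A minimal sketch that parses that output (sample text is inlined here so it runs anywhere; the field name is assumed from typical Apple Silicon output):

```python
import re

# Sample of what `system_profiler SPDisplaysDataType` prints on Apple Silicon.
sample = """\
Graphics/Displays:
    Apple M2 Max:
      Chipset Model: Apple M2 Max
      Total Number of Cores: 38
"""

# Pull out the GPU core count from the report text.
match = re.search(r"Total Number of Cores:\s*(\d+)", sample)
if match:
    print(f"GPU cores: {match.group(1)}")
```

On a real machine you would feed it the output of `subprocess.run(["system_profiler", "SPDisplaysDataType"], capture_output=True, text=True).stdout` instead of the sample string.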

easp avatar Dec 21 '23 00:12 easp

Here's a silly question: where's main.py? And which model did you use for the test?

katopz avatar Dec 21 '23 02:12 katopz

Here's a silly question: where's main.py? And which model did you use for the test?

https://github.com/ml-explore/mlx-examples/blob/main/transformer_lm/main.py

ivanrylov avatar Dec 22 '23 09:12 ivanrylov

MacBook-Pro-M3-Max-128:transformer_lm$ python main.py
Training a transformer with 153.883 M parameters
Iter 10: Train loss 9.016, It/sec 0.463
Iter 20: Train loss 8.450, It/sec 0.472
Iter 30: Train loss 8.136, It/sec 0.471
Iter 40: Train loss 7.989, It/sec 0.471
Iter 50: Train loss 7.869, It/sec 0.472
Iter 60: Train loss 7.763, It/sec 0.456
Iter 70: Train loss 7.694, It/sec 0.470
Iter 80: Train loss 7.644, It/sec 0.470
Iter 90: Train loss 7.591, It/sec 0.469
Iter 100: Train loss 7.593, It/sec 0.470

MacBook-Pro-M3-Max-128:transformer_lm$ python main.py --gpu
Training a transformer with 153.883 M parameters
Iter 10: Train loss 8.954, It/sec 2.661
Iter 20: Train loss 8.365, It/sec 2.941
Iter 30: Train loss 8.066, It/sec 2.940
Iter 40: Train loss 7.904, It/sec 2.938
Iter 50: Train loss 7.802, It/sec 2.937
Iter 60: Train loss 7.749, It/sec 2.918
Iter 70: Train loss 7.673, It/sec 2.834
Iter 80: Train loss 7.578, It/sec 2.770
Iter 90: Train loss 7.608, It/sec 2.729
Iter 100: Train loss 7.548, It/sec 2.688

[16-core CPU / 40-core GPU]

dletendre-hc avatar Dec 22 '23 21:12 dletendre-hc

Note those were on mlx v0.0.5. I just upgraded to 0.0.6 and it's a bit faster:

MacBook-Pro-M3-128:transformer_lm$ python main.py --gpu
Training a transformer with 153.883 M parameters
Iter 10: Train loss 8.982, It/sec 2.902
Iter 20: Train loss 8.337, It/sec 2.957
Iter 30: Train loss 8.084, It/sec 2.957
Iter 40: Train loss 7.940, It/sec 2.952
Iter 50: Train loss 7.821, It/sec 2.943
Iter 60: Train loss 7.711, It/sec 2.937
Iter 70: Train loss 7.684, It/sec 2.937
Iter 80: Train loss 7.585, It/sec 2.938
Iter 90: Train loss 7.564, It/sec 2.909
Iter 100: Train loss 7.571, It/sec 2.847
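Since results shift between MLX versions (0.0.5 vs 0.0.6 here), it helps to record the version with each run. One way to check an installed version is standard `importlib.metadata`; a small sketch, demonstrated against an arbitrary package since mlx may not be installed:

```python
import importlib.metadata

def pkg_version(name):
    # Return the installed version of a package, or None if it's absent.
    try:
        return importlib.metadata.version(name)
    except importlib.metadata.PackageNotFoundError:
        return None

# e.g. pkg_version("mlx") would distinguish the 0.0.5 and 0.0.6 runs above.
print(pkg_version("pip"))
```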

dletendre-hc avatar Dec 22 '23 21:12 dletendre-hc

transformer_lm % python main.py --gpu

M1Pro 32GB - It/sec 1
M2Pro 32GB - It/sec 1.4
M1Ultra    - It/sec 4.004
M3Max 128GB - It/sec 2.9

ivanrylov avatar Dec 23 '23 09:12 ivanrylov

M2 Max 64GB (12-core CPU, 38-core GPU)

python main.py
Training a transformer with 153.883 M parameters
Iter 10: Train loss 8.829, It/sec 0.344
Iter 20: Train loss 8.292, It/sec 0.351
Iter 30: Train loss 8.083, It/sec 0.351
Iter 40: Train loss 7.908, It/sec 0.351
Iter 50: Train loss 7.789, It/sec 0.345
Iter 60: Train loss 7.762, It/sec 0.345
Iter 70: Train loss 7.679, It/sec 0.350
Iter 80: Train loss 7.621, It/sec 0.350
Iter 90: Train loss 7.579, It/sec 0.350
Iter 100: Train loss 7.549, It/sec 0.351

python main.py --gpu
Training a transformer with 153.883 M parameters
Iter 10: Train loss 8.892, It/sec 2.432
Iter 20: Train loss 8.284, It/sec 2.035
Iter 30: Train loss 8.019, It/sec 2.033
Iter 40: Train loss 7.944, It/sec 2.048
Iter 50: Train loss 7.791, It/sec 2.015
Iter 60: Train loss 7.737, It/sec 2.038
Iter 70: Train loss 7.657, It/sec 2.033
Iter 80: Train loss 7.593, It/sec 2.026
Iter 90: Train loss 7.538, It/sec 2.008
Iter 100: Train loss 7.572, It/sec 2.005

brendankntb avatar Dec 26 '23 04:12 brendankntb

transformer_lm % python main.py --gpu

M1Pro 32GB - It/sec 1
M2Pro 32GB - It/sec 1.4
M1Ultra    - It/sec 4.004
M3Max 128GB - It/sec 2.9
M2Max 64GB  - It/sec 2
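Normalizing the table above to the M1 Pro baseline gives the relative speedups; a quick sketch with the It/sec numbers copied from the table:

```python
# It/sec from the summary table; M1 Pro is the baseline.
results = {
    "M1Pro 32GB": 1.0,
    "M2Pro 32GB": 1.4,
    "M1Ultra": 4.004,
    "M3Max 128GB": 2.9,
    "M2Max 64GB": 2.0,
}

base = results["M1Pro 32GB"]
for chip, itps in sorted(results.items(), key=lambda kv: -kv[1]):
    # Print each chip's speedup relative to the M1 Pro.
    print(f"{chip:12s} {itps / base:.1f}x")
```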

ivanrylov avatar Dec 28 '23 08:12 ivanrylov

M3 Pro 36GB (11-core CPU, 14-core GPU) - MLX v0.0.6

python main.py --gpu
Training a transformer with 153.883 M parameters
Iter 10: Train loss 8.936, It/sec 1.043
Iter 20: Train loss 8.347, It/sec 1.072
Iter 30: Train loss 8.058, It/sec 1.075
Iter 40: Train loss 7.937, It/sec 1.069
Iter 50: Train loss 7.868, It/sec 1.069
Iter 60: Train loss 7.693, It/sec 1.071
Iter 70: Train loss 7.694, It/sec 1.071
Iter 80: Train loss 7.613, It/sec 1.074
Iter 90: Train loss 7.618, It/sec 1.074
Iter 100: Train loss 7.504, It/sec 1.077

shivamraval98 avatar Dec 29 '23 02:12 shivamraval98

These MLX numbers look like they follow roughly the same performance slope as the llama.cpp results here: https://github.com/ggerganov/llama.cpp/discussions/4167

bigsnarfdude avatar Jan 01 '24 00:01 bigsnarfdude