Sting Zhang

Results: 4 comments by Sting Zhang

You can't run Llama 3.2 with tinygrad yet; currently it only supports Llama 3.1 and Llama 3.
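For reference, here is a minimal sketch of asking a running exo node for one of the supported models through its ChatGPT-compatible HTTP API. The port (52415) and the model id ("llama-3.1-8b") are assumptions based on exo's defaults and may differ in your setup.

```python
# Minimal sketch (not exo's own code): request a tinygrad-supported model
# from a locally running exo node via its ChatGPT-compatible API.
# Port 52415 and the model id "llama-3.1-8b" are assumed defaults.
import requests

response = requests.post(
    "http://localhost:52415/v1/chat/completions",
    json={
        "model": "llama-3.1-8b",  # a model the tinygrad engine supports
        "messages": [{"role": "user", "content": "Hello from exo"}],
        "temperature": 0.7,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```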

Uhh, MLX is Apple-only (Apple silicon, not even Intel Macs). MLX supports every model, and those models are quantized to 4-bit; the only models available for Linux currently are Llama...

I can run exo on my WSL2 Ubuntu 22.04 LTS setup with a 3070 Ti Laptop GPU, but I just can't get the cluster working properly.
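One likely culprit: WSL2 sits behind a NAT'd virtual network by default, so UDP broadcast traffic from the LAN (which exo's automatic peer discovery relies on, as I understand it) often never reaches the VM. Below is a generic sketch, not exo's discovery code, for checking whether broadcast packets sent from another machine arrive inside WSL2; the port number is just an example.

```python
# Generic UDP broadcast test (illustration only, not exo's discovery protocol).
# Run "python broadcast_test.py listen" inside WSL2, then run
# "python broadcast_test.py" on another LAN machine. If nothing arrives,
# broadcast-based peer discovery is unlikely to work across the WSL2 NAT.
import socket
import sys

PORT = 52415  # example port; exo's actual discovery port may differ

def listen():
    """Run inside WSL2: print any broadcast datagrams that arrive."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    print(f"listening for broadcasts on UDP {PORT}...")
    while True:
        data, addr = sock.recvfrom(4096)
        print(f"received {len(data)} bytes from {addr}")

def send():
    """Run on another LAN machine: send one broadcast datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(b"exo-discovery-test", ("255.255.255.255", PORT))
    print("broadcast sent")

if __name__ == "__main__":
    listen() if sys.argv[1:] == ["listen"] else send()
```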