Results: 13 comments of johnykes

> You can't run llama 3.2 with tinygrad yet; currently it only supports llama 3.1 and llama 3

What about `exo --inference-engine mlx`? I have errors using both on...
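A minimal sketch of the two invocations being compared, assuming exo is installed and on PATH; only the `--inference-engine` flag itself appears in the comment above, so treat the pairing as illustrative:

```
# Sketch only: flag name taken from the comment above; per the quoted reply,
# tinygrad currently handles llama 3 / 3.1, with mlx as the alternative engine.
exo --inference-engine tinygrad
exo --inference-engine mlx
```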

Just tried this on both WSL and macOS.

macOS:
```
ModuleNotFoundError: No module named 'mlx_lm.models.cahe'
```

WSL (Ubuntu 22):
```
29.52 [ 68%] Built target mlx
29.53 make[2]: ***...
```
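One quick way to narrow down the macOS failure, assuming the 'cahe' in the traceback is a misspelling of 'cache' somewhere in the installed package (an assumption, not confirmed here):

```
# Check whether the correctly spelled module imports, and which
# mlx-lm version is installed (assumes 'cahe' should read 'cache').
python -c "import mlx_lm.models.cache; print('mlx_lm.models.cache imports OK')"
pip show mlx-lm
```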

> We resolved the issue through discord. The way to make it work is to set http://localhost:3000 in the allowed origin

'allowed origin' in which section? I can't find it anywhere.
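For whoever hits the same wall: once the setting is located, a generic CORS preflight check can confirm whether http://localhost:3000 is actually allowed. The endpoint and port below are placeholders, not exo's confirmed defaults:

```
# Hypothetical URL/port; substitute whatever the exo API actually listens on.
curl -i -X OPTIONS http://localhost:8000/v1/chat/completions \
  -H "Origin: http://localhost:3000" \
  -H "Access-Control-Request-Method: POST"
```

If the origin is allowed, the response should include an Access-Control-Allow-Origin header echoing it back.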