Prince Canuma

572 comments by Prince Canuma

Thanks @zhutao100! Quick question: since we have mlx-lm as a dependency, isn't there a simpler way to do this? For instance, check how mixed quant works.

Thanks a lot! Yes, I was using a custom RMSNorm; I changed it to `nn.RMSNorm` and it's 3.25x faster 🚀. When it comes to RoPE, I was already using `nn.RoPE`...
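For context, here is a minimal NumPy sketch of the computation an RMSNorm layer performs (normalize by the root-mean-square of the last axis, then apply a learned per-feature gain). The `rms_norm` helper below is illustrative only; the built-in `nn.RMSNorm` in mlx fuses this into an optimized kernel, which is where the speedup over a hand-rolled version comes from.

```python
import numpy as np

def rms_norm(x, weight, eps=1e-5):
    # Scale each row by the inverse RMS of its features,
    # then apply a learned per-feature gain (weight).
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * weight

x = np.random.randn(2, 4).astype(np.float32)
w = np.ones(4, dtype=np.float32)  # identity gain for the demo
y = rms_norm(x, w)
# After normalization, each row of y has RMS ~= 1.
```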

@awni I made the recommended changes, but I can't seem to run training on my machine (M3 Max 96GB). It throws an error after processing 3 samples...

Thanks! Wow, that's really weird. Here is my setup:

```
prince_canuma@MacBook-Pro-3 ~ % pip list | grep mlx
fastmlx         0.2.1
mlx             0.18.0
mlx-embeddings  0.0.1   /Users/prince_canuma/Documents/Projects/LLMs/mlx-embeddings
mlx-lm          0.19.0
mlx-vlm         0.1.0   /Users/prince_canuma/Documents/Projects/LLMs/mlx-vlm
```
...

Upgrading to v0.18.1 fixed it! 🚀

> Data loading is often the issue. And yes next release is quite reasonable.. just letting you know in case you didn't notice it....

> Also remind me what's your machine and OS?

MacBook Pro 14-inch
Chip: M3 Max
URAM: 96GB
OS: Sonoma 14.5

Awesome, thanks!

> If you preload the dataset into RAM it probably isn't the IO
> Do you do the preprocessing in MLX? If not, maybe try doing that so...

Thanks a lot, it's my pleasure! Yes, Zonos is on the way 🚀 Could you share that RVC + Kokoro example?

Indeed @chigkim, that's a great suggestion! @lucasnewman is working on it 🚀