Anthony


I can't make sense of what you wrote. Perhaps you can link to a fork with debug statements and comments pointing to exactly what you're seeing.

That looks nice! This has always been difficult for me to get right.

The Qwen 2.5 models work correctly, so I would recommend using your fine-tuned model in the LLMEval app or llm-tool, which should produce correct results, and working backward from there.
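For reference, here is a minimal sketch of that kind of check, assuming the MLXLLM/MLXLMCommon API that LLMEval and llm-tool are built on in mlx-swift-examples (the model id is a placeholder, not a real repo):

```swift
import MLXLLM
import MLXLMCommon

// Load the fine-tuned model the same way LLMEval / llm-tool do, then generate from a
// fixed prompt so the output can be compared against what your own app produces.
func checkFineTunedModel() async throws {
    let container = try await LLMModelFactory.shared.loadContainer(
        configuration: ModelConfiguration(id: "your-account/your-finetuned-qwen2.5-4bit")  // placeholder id
    )
    let result = try await container.perform { context in
        let input = try await context.processor.prepare(input: UserInput(prompt: "Hello"))
        return try MLXLMCommon.generate(
            input: input, parameters: GenerateParameters(), context: context
        ) { _ in .more }
    }
    print(result.output)
}
```

If this path produces good output, the problem is likely in how your app prepares the prompt or decodes the tokens rather than in the model itself.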

The model is now available in the MLX Community on Hugging Face:

- https://huggingface.co/mlx-community/Chatterbox-TTS-fp16
- https://huggingface.co/mlx-community/Chatterbox-TTS-8bit
- https://huggingface.co/mlx-community/Chatterbox-TTS-4bit

You can try it like this with this branch of mlx-audio:

```shell
mlx_audio.tts --model mlx-community/Chatterbox-TTS-4bit...
```

I'm closing this in favor of further development on [my own fork](https://github.com/DePasqualeOrg/mlx-audio-plus) of this repo.

The espeak-ng organization already has this Swift package, which I'm using in this PR: https://github.com/espeak-ng/espeak-ng-spm
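For context, wiring that dependency into a package manifest would look roughly like this; the product name (`ESpeakNG`), branch, platforms, and target names are assumptions for illustration, not taken from the PR:

```swift
// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "MyTTSApp",  // hypothetical package, for illustration only
    platforms: [.iOS(.v16), .macOS(.v13)],
    dependencies: [
        // The espeak-ng organization's Swift package referenced above.
        // The branch name is a placeholder; pin a version or branch as appropriate.
        .package(url: "https://github.com/espeak-ng/espeak-ng-spm", branch: "master")
    ],
    targets: [
        .executableTarget(
            name: "MyTTSApp",
            // "ESpeakNG" is an assumed product name; check the package manifest for the real one.
            dependencies: [.product(name: "ESpeakNG", package: "espeak-ng-spm")]
        )
    ]
)
```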

> Well done @DePasqualeOrg!
>
> > @Blaizzy, we could also download the Kokoro [voices](https://huggingface.co/mlx-community/Kokoro-82M-bf16/tree/main/voices) from Hugging Face instead of bundling these heavy JSON files if they are uploaded as...

@Blaizzy, I added the voices to the Hugging Face repo in .safetensors format here: https://huggingface.co/mlx-community/Kokoro-82M-bf16/discussions/1

This will allow them to be downloaded in the Swift app instead of bundling converted...
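A rough sketch of what that could look like on the Swift side, assuming swift-transformers' `HubApi` for the download and MLX Swift's `loadArrays` for reading the safetensors (the voice name is only an example):

```swift
import Foundation
import Hub
import MLX

// Download only the voice tensors from the Hugging Face repo (instead of bundling
// converted JSON files in the app) and load one voice embedding.
func loadKokoroVoice(named voice: String = "af_heart") async throws -> [String: MLXArray] {
    let hub = HubApi()
    let repo = Hub.Repo(id: "mlx-community/Kokoro-82M-bf16")

    // Fetch just the files matching the glob, not the whole repo.
    let localDir = try await hub.snapshot(from: repo, matching: ["voices/*.safetensors"])

    // "af_heart" is used here only as an example voice name.
    let voiceURL = localDir.appendingPathComponent("voices/\(voice).safetensors")
    return try MLX.loadArrays(url: voiceURL)
}
```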

Here's what the multi-platform app currently looks like on macOS and iOS:

The CI build test is failing because I've used some newer Swift syntax that requires iOS 18.4/macOS 15.4 or newer (specifically, `Atomic` and `isolated deinit`). These help a lot with...
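For anyone curious, this is roughly the kind of usage in question; the class and its fields here are hypothetical, just to illustrate the two features:

```swift
import Synchronization

@MainActor
final class DownloadTracker {
    // `Atomic` from the Synchronization module replaces a lock around a simple counter.
    let completedCount = Atomic<Int>(0)

    func markCompleted() {
        completedCount.wrappingAdd(1, ordering: .relaxed)
    }

    // `isolated deinit` lets the deinitializer run on the main actor, so it can touch
    // main-actor-isolated state without hopping executors manually.
    isolated deinit {
        print("Tracker deallocated after \(completedCount.load(ordering: .relaxed)) downloads")
    }
}
```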