llama4micro
Investigate audio output 🔈
See whether there is a way to run text-to-speech and read the generated text out loud.
Ideally this happens in parallel with the LLM generating tokens (using the second CPU core).
Potentially useful references: