# Server v2

## Context
Give the reviewer some context about the work and why this change is being made. Focus on the WHY from a product perspective.

## Description
Provide a detailed description of how this task will be accomplished. Include technical details, steps, service integrations, job logic, implementation notes, etc.

## Changes in the codebase
Describe the functionality you are adding or modifying, as well as any refactoring or improvements to existing code.

## Changes outside the codebase
Explain any changes to external services, infrastructure, third-party integrations, or database updates.

## Additional information
Provide any extra information that might be useful to the reviewer, such as performance considerations or design choices.

## Checklist
- [ ] Tests added/updated
- [ ] Documentation updated
- [ ] Issue referenced (e.g., Closes #123)
Thanks for adding the STT endpoint and making it OpenAI compatible! Let me know how I can help; I'd love to get involved. This is the missing piece of my Mac setup.
My pleasure, @arty-hlr! I'm happy you like it.
How do you plan on using it?
@Blaizzy In OpenWebUI for TTS/STT, especially voice calling, and later I'd like to make it into a Jarvis-like assistant. Also, local NotebookLM-like podcast generation would be great.
Awesome!
Both of which we are working on as well.
Would you like to contribute towards building a local Jarvis and NotebookLM?
@Blaizzy Yup, let me know how I can help!
At this stage, we need help testing the server and the UI (#154).
You can suggest changes and send PRs that will bring us closer to that goal.
@Blaizzy Maybe you can help; unfortunately I am unable to test this branch. I ran `python setup.py install` on the pc/ui-v2 branch.
It seems to go back to https://github.com/Blaizzy/mlx-audio/pull/153/commits/f6f5c0257ca5308a24f4687a47049035c17c9751#diff-3a24cb048f4f253ee17d9ba1292691a376ba9a2e697a6c296375c59a58be3e0dL367 where the `run` function and

```python
if __name__ == "__main__":
    run()
```

were removed.
Unfortunately, the commit message "fix server" doesn't help there. Can you elaborate on why those functions were removed? I don't see any tests for running `mlx_audio.server` after installing, so I'm assuming maybe it wasn't checked? Hopefully it's as simple as adding them back.
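For reference, here's a rough sketch of what re-adding the entry point might look like, assuming the FastAPI app is still exposed as `app` in `mlx_audio/server.py` (the host/port defaults below are just my guesses, not what the original code used):

```python
import uvicorn

def run():
    # Start the ASGI server; host/port here are illustrative defaults.
    uvicorn.run("mlx_audio.server:app", host="127.0.0.1", port=8000)

if __name__ == "__main__":
    run()
```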
Uhh, I see.
Try:

```shell
uvicorn mlx_audio.server:app
```
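If you need a different bind address or port, uvicorn's standard flags should work too, e.g. `uvicorn mlx_audio.server:app --host 0.0.0.0 --port 8000`.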
I will try to figure out how to get `mlx_audio.server` to work after the new UI is done.
That worked, thanks @Blaizzy!
The `/v1/models` endpoint doesn't return any models, but I'm guessing it's just not implemented yet?
My pleasure!
> The `/v1/models` endpoint doesn't return any models, but I'm guessing it's just not implemented yet?
It returns the models currently loaded into memory.
Just make some requests with different models first, then call it again; you'll see the models you used that are warm and ready to run.
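Something along these lines should show it (a sketch only; the port, the `/v1/audio/speech` payload, and the model name are illustrative and may differ on this branch):

```python
import requests

BASE = "http://localhost:8000"

# Warm a model by making a speech request first (payload shape assumed
# to follow OpenAI's /v1/audio/speech convention).
requests.post(
    f"{BASE}/v1/audio/speech",
    json={"model": "mlx-community/Kokoro-82M", "input": "hello", "voice": "af_heart"},
)

# The model that was just loaded should now appear in /v1/models.
print(requests.get(f"{BASE}/v1/models").json())
```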
Ah! I'm not sure that's consistent with other local tools. For example, ollama and LM Studio show all downloaded/available models on `/v1/models`, not only the ones loaded in memory, so the user can choose which one to use/load. What do you think, @Blaizzy?
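Something like this could work for enumerating downloaded models, assuming they live in the default Hugging Face hub cache (just a rough sketch, not tested against this repo):

```python
from pathlib import Path

def list_downloaded_models():
    """List model repos present in the default Hugging Face hub cache."""
    cache = Path.home() / ".cache" / "huggingface" / "hub"
    # Cached repos are stored as directories named models--<org>--<name>.
    return sorted(
        p.name.removeprefix("models--").replace("--", "/")
        for p in cache.glob("models--*")
        if p.is_dir()
    )

print(list_downloaded_models())
```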
I can do that.