Feature Request: Local server, system instruction presets, etc.
Thanks for your work; the elegant interface design and lightweight client are great.
I have a few new feature suggestions:
- Can start an OpenAI-compatible API server locally, so that AI coding plugins for VS Code can be used through ChatMLX (see the sketch after this list).
- Can customize a set of system instruction presets, making it easy to switch the role the model should play.
- Can customize the model save directory; the models are large, after all, and well suited to storage on an external SSD.
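For what it's worth, here is a minimal sketch of how an AI plugin (or any OpenAI SDK client) could talk to such a local server. The port, path, and dummy API key are my assumptions for illustration, not anything ChatMLX currently exposes:

```python
# Hypothetical client call against a local OpenAI-compatible ChatMLX server.
# The base_url (port 8080) and the ignored API key are assumptions for
# illustration; ChatMLX does not expose this endpoint yet.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # hypothetical local endpoint
    api_key="not-needed",                 # local servers typically ignore this
)

response = client.chat.completions.create(
    model="mlx-community/gemma-2-9b-it-4bit",
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Explain what this function does."},
    ],
)
print(response.choices[0].message.content)
```

A VS Code plugin would simply point its OpenAI base URL setting at the same address, which is why an OpenAI-compatible server would open up the whole plugin ecosystem.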
Thank you for your suggestions. Suggestion 3 is currently in development, and suggestions 1 and 2 are also being planned.
In addition, I would like to report a few issues:
- If you delete a message while the model is still replying to it, the app crashes.
- When the conversation list in the left column has been cleared, sending a message to the model crashes the app.
- mlx-community/gemma-2-9b-it-4bit appends an extra `<end_of_turn>` string at the end of its replies (a sketch of a possible workaround follows this list).
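I don't know where this happens inside ChatMLX, but `<end_of_turn>` is gemma-2's end-of-turn marker, so the general fix is either to register it as a stop token during generation or to strip it from the output afterwards. A rough Python sketch of the stripping approach (the token list and function name are mine, not from the codebase):

```python
# Sketch of trimming stray end-of-sequence markers from a model reply.
# STOP_TOKENS and clean_reply are illustrative names, not ChatMLX code.
STOP_TOKENS = ("<end_of_turn>", "<eos>")

def clean_reply(text: str) -> str:
    """Remove any trailing stop tokens the model emitted verbatim."""
    text = text.rstrip()
    for tok in STOP_TOKENS:
        if text.endswith(tok):
            text = text[: -len(tok)].rstrip()
    return text

print(clean_reply("Hello there!<end_of_turn>"))  # -> "Hello there!"
```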
I second all of these. I really like the look and feel of this, but I can't use it at the moment due to its limitations. For some reason, manually copying a model over to the download folder doesn't work either, so I can't use predownloaded models, only what ChatMLX provides itself. Hope we can get some of these features soon; I'm really liking everything else so far.