add imgs for llama_assist custom integration
Proposed change
Adds a logo and icon for this custom integration: https://github.com/M4TH1EU/llama-assist
Type of change
- [ ] Add a new logo or icon for a new core integration
- [ ] Add a missing icon or logo for an existing core integration
- [X] Add a new logo or icon for a custom integration (custom component)
- [X] I've added a link to my custom integration repository in the PR description
- [ ] I've opened up a PR for my custom integration on the Home Assistant Python wheels repository
- [ ] Replace an existing icon or logo with a higher quality version
- [ ] Replace an existing icon or logo after a branding change
- [ ] Removing an icon or logo
Additional information
- This PR fixes or closes issue: fixes #
- Link to code base pull request:
- Link to documentation pull request:
- Link to integration documentation on our website:
- Link to custom integration repository:
Checklist
- [X] The added/replaced image(s) are PNG
- [X] Icon image size is 256x256px (icon.png)
- [X] hDPI icon image size is 512x512px ([email protected])
- [X] Logo image size has min 128px, but max 256px, on the shortest side (logo.png)
- [X] hDPI logo image size has min 256px, but max 512px, on the shortest side ([email protected])
Please take a look at the requested changes, and use the Ready for review button when you are done, thanks :+1:
Hi Frank,
Thanks for the fast feedback. I'm unsure what you mean by "custom/modified" branding.
My integration has nothing to do with the official llama.cpp project or Facebook's Llama models; it's just a wrapper for any OpenAI-compatible endpoint. I've updated my integration's README to clarify this.
If I were to remove the HA logo and use the open-source Figtree font family instead of Biotif, would that be considered OK?
> OpenAI-compatible endpoint.

I guess that's why the OpenAI logos are expected. We want integrations to carry the branding of the device/service being integrated.
../Frenck
I see your point, but I really don't think this applies in this case. The goal of this integration is specifically to avoid using OpenAI services, instead allowing users to use any local LLM with the backend of their choice, the only condition being that it supports an OpenAI-like API (the /v1/completions endpoint).
I used the name "Llama" to make it more recognisable to users, as a lot of LLM-related tools also use the Llama prefix, and "Assist" because that's the name of HA's assistant feature.
Ok makes sense 👍
PS: Feel free to mark it ready for review when you are ready, as it is currently sitting in draft.
Because there hasn't been any activity on this PR for quite some time now, I've decided to close it for being stale.
Feel free to re-open this PR when you are ready to pick up work on it again 👍