"ramalama lightspeed" command
Feature request description
Connect our client to the RHEL Lightspeed endpoint over TLS:
https://github.com/search?q=repo%3Arhel-lightspeed%2Fcommand-line-assistant+endpoint&type=code
The goal is just a basic chatbot that connects to the endpoint, with no complex functionality. Leave the enhanced features to:
https://github.com/rhel-lightspeed/command-line-assistant
Basically, ramalama lightspeed should behave like ramalama run, but it just connects to the lightspeed endpoint as a client.
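To make that concrete, here is a minimal sketch of what such a thin client could look like, assuming the lightspeed service exposes an OpenAI-compatible /v1/chat/completions endpoint; the URL and model name below are placeholders, not the real service details:

```python
# Minimal sketch, assuming an OpenAI-compatible chat endpoint over HTTPS.
# requests verifies the server's TLS certificate by default.
import requests

ENDPOINT = "https://lightspeed.example.com/v1/chat/completions"  # placeholder URL

def chat():
    messages = []
    while True:
        try:
            prompt = input("> ")
        except EOFError:
            break
        messages.append({"role": "user", "content": prompt})
        resp = requests.post(
            ENDPOINT,
            json={"model": "default", "messages": messages},  # placeholder model name
            timeout=60,
        )
        resp.raise_for_status()
        reply = resp.json()["choices"][0]["message"]["content"]
        messages.append({"role": "assistant", "content": reply})
        print(reply)

if __name__ == "__main__":
    chat()
```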
Suggest potential solution
No response
Have you considered any alternatives?
No response
Additional context
No response
We should loop in @mairin @r0x0d etc. here
@ericcurtin, hi! Let me know how we can help with this.
If you need someone to implement that, we can sync on Slack and I could try; just let me know.
@r0x0d
Just enhance the client code in RamaLama to talk to the lightspeed endpoint; it's just another OpenAI endpoint at the end of the day. A reference implementation is here:
https://github.com/rhel-lightspeed/command-line-assistant
Please feel free to take this on. We don't want to reimplement all the features of command-line-assistant, just the most basic one: speaking to the lightspeed endpoint.
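Since it is just another OpenAI-style endpoint, an existing OpenAI-compatible client library could in principle be pointed at it by overriding the base URL. A minimal sketch, where the base URL, model name, and API-key handling are assumptions rather than the real service details:

```python
# Minimal sketch, assuming an OpenAI-compatible endpoint; the base URL,
# model name, and API key are placeholders, not the real service details.
from openai import OpenAI

client = OpenAI(
    base_url="https://lightspeed.example.com/v1",  # placeholder endpoint
    api_key="placeholder-token",                   # depends on the service's auth
)

response = client.chat.completions.create(
    model="default",  # placeholder model name
    messages=[{"role": "user", "content": "How do I list running containers?"}],
)
print(response.choices[0].message.content)
```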
@ericcurtin can you assign the issue to me? I will take this task and implement it when I have some spare time.
@r0x0d any updates?
@rhatdan I believe the gist of this issue was that, currently, ramalama would need to implement the same authentication mechanism we have in CLA in order to communicate with our external service. It looks too complex (in my opinion).
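For context, if the service authenticates clients with a mutual-TLS client certificate (an assumption based on how command-line-assistant talks to the backend, not something confirmed here), RamaLama would need at least something like the following, plus all the certificate discovery and refresh logic around it:

```python
# Minimal sketch, assuming mutual-TLS authentication with a client
# certificate/key pair; the URL and cert/key paths are assumptions.
import requests

resp = requests.post(
    "https://lightspeed.example.com/v1/chat/completions",  # placeholder URL
    cert=("/etc/pki/consumer/cert.pem", "/etc/pki/consumer/key.pem"),  # assumed cert/key location
    json={"model": "default",
          "messages": [{"role": "user", "content": "hello"}]},
    timeout=60,
)
print(resp.json())
```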
I agree, moving this to a discussion.