LLM-VM
Implement ephemeral store for prompts and responses
Implement an ephemeral store that records prompts and responses so telemetry can be extracted from them, with the goal of assessing response quality (among other uses).
I think we need more details but I like the direction this is going!
Hi, can I please work on this issue? Thank you!
Maybe we can use Redis? https://github.com/lucylililiwang/LLM-VM/blob/lucylililiwang-EphemeralStore/EphemeralStore.py
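For discussion, here is a minimal in-memory sketch of what such an ephemeral store could look like. The class name `EphemeralStore`, the `ttl` parameter, and the telemetry fields are illustrative assumptions, not taken from the linked file; a Redis-backed version could replace the dict/list storage with keys that carry a Redis TTL.

```python
# Illustrative sketch only -- names and fields are assumptions, not the actual LLM-VM API.
import time
from dataclasses import dataclass


@dataclass
class Record:
    prompt: str
    response: str
    timestamp: float


class EphemeralStore:
    """Holds prompt/response pairs in memory; entries expire after `ttl` seconds."""

    def __init__(self, ttl: float = 3600.0):
        self.ttl = ttl
        self._records: list[Record] = []

    def add(self, prompt: str, response: str) -> None:
        # Record the pair with the current wall-clock time for later expiry.
        self._records.append(Record(prompt, response, time.time()))

    def _evict(self) -> None:
        # Drop records older than the TTL window.
        cutoff = time.time() - self.ttl
        self._records = [r for r in self._records if r.timestamp >= cutoff]

    def telemetry(self) -> dict:
        # Simple example metrics; real quality signals would go here.
        self._evict()
        n = len(self._records)
        avg_len = sum(len(r.response) for r in self._records) / n if n else 0.0
        return {"count": n, "avg_response_length": avg_len}
```

With Redis, `add` would map to a `SETEX`-style write so expiry is handled server-side instead of by `_evict`.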