LLM-VM
irresponsible innovation. Try now at https://chat.dev/
Closes https://github.com/anarchy-ai/LLM-VM/issues/152
Keep track of detailed information about inference and data synthesis: response sizes, latency, and quality.
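A minimal sketch of what such telemetry could look like. The `InferenceTracker` and `InferenceRecord` names are hypothetical, not part of the LLM-VM API; this just shows the kind of per-call data worth recording.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InferenceRecord:
    """Telemetry for one LLM call: size, speed, and (optionally) quality."""
    prompt_tokens: int
    response_tokens: int
    latency_s: float
    quality_score: Optional[float] = None  # e.g. a downstream eval score, if available

class InferenceTracker:
    """Accumulates per-call records so aggregate statistics can be reported."""
    def __init__(self):
        self.records: list[InferenceRecord] = []

    def record(self, prompt_tokens: int, response_tokens: int,
               latency_s: float, quality_score: Optional[float] = None) -> None:
        self.records.append(
            InferenceRecord(prompt_tokens, response_tokens, latency_s, quality_score))

    def mean_latency(self) -> float:
        return sum(r.latency_s for r in self.records) / len(self.records)
```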
We're starting to add many parameters for configuring LLM invocation for data synthesis and the agents; these need to be documented and sanity-checked.
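One way to keep such parameters documented and sanity-checked is to centralize them in a validated config object. This is a sketch under assumed names (`InvocationConfig` and its fields are illustrative, not the project's actual parameters):

```python
from dataclasses import dataclass

@dataclass
class InvocationConfig:
    """Hypothetical bundle of LLM-invocation parameters with sanity checks."""
    temperature: float = 0.7     # sampling temperature
    max_tokens: int = 256        # response length cap
    data_synthesis: bool = False # whether to generate synthetic training data
    num_examples: int = 0        # synthetic examples to produce per call

    def validate(self) -> None:
        """Raise ValueError on out-of-range or inconsistent settings."""
        if not 0.0 <= self.temperature <= 2.0:
            raise ValueError("temperature must be in [0, 2]")
        if self.max_tokens <= 0:
            raise ValueError("max_tokens must be positive")
        if self.data_synthesis and self.num_examples <= 0:
            raise ValueError("data_synthesis requires num_examples > 0")
```

Validating at construction time surfaces bad combinations early instead of deep inside an agent run.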
Extend the context window using disk storage/RAM/cache/other mechanisms.
It would be great to enable users to easily fine-tune base models for their unique downstream tasks.
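A common first step for any fine-tuning workflow is preparing a prompt/completion dataset. This helper is a hypothetical sketch (not an LLM-VM function) showing the JSONL format many fine-tuning pipelines accept:

```python
import json

def prepare_finetune_dataset(pairs: list[tuple[str, str]], path: str) -> str:
    """Write (prompt, completion) pairs as JSONL and return the file path.

    Hypothetical helper: one JSON object per line, the layout most
    fine-tuning endpoints and trainers can ingest directly.
    """
    with open(path, "w") as f:
        for prompt, completion in pairs:
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
    return path
```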