
Ability to use local LLMs for privacy.

Open dstruck opened this issue 11 months ago • 3 comments

If the use of OpenAI is not possible due to privacy concerns, it would be beneficial to have the option to utilize a locally installed language model.

One possible solution is to allow customization of the OpenAI base URL and model name, making it possible to use alternatives such as Ollama, which can mimic the OpenAI API.

Alternatively, employing the Ollama API directly could offer access to its complete capabilities.
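Since Ollama exposes an OpenAI-compatible endpoint at `/v1/chat/completions`, redirecting a client is mostly a matter of swapping the base URL. A minimal stdlib-only sketch (the base URL, model name, and prompt are illustrative assumptions, not Rill's actual wiring):

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for any
    compatible backend (OpenAI itself, or a local Ollama server)."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# Point the same client code at a local Ollama instance instead of api.openai.com:
req = build_chat_request(
    "http://localhost:11434", "llama3",
    "Suggest a dashboard for this schema: ...",
)
print(req.full_url)  # http://localhost:11434/v1/chat/completions
```

Because only the base URL changes, the same request-building code would serve both the hosted and the local case.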

dstruck avatar Mar 04 '24 15:03 dstruck

Agree, that could absolutely be useful.

One thing to note with regard to privacy: no actual data is sent over the wire to either Rill or OpenAI, only metadata such as data types and column names (realising column names can be sensitive).

mindspank avatar Mar 04 '24 15:03 mindspank

I guessed that Rill would only send metadata, e.g. the schema, to OpenAI ;-) However, metadata also leaks sensitive information, such as which kinds of systems are deployed and, in the case of custom systems, what kind of data is stored.

Moreover, it could be interesting to explore various language models such as Mixtral, Gemini, or Llama to compare which one excels at creating an initial dashboard. For example, there is an LLM specifically tuned to generate SQL: https://huggingface.co/defog/sqlcoder (available in Ollama).
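Ollama's native API would make such comparisons straightforward: the same metadata-only prompt can be sent to each locally pulled model via its `/api/generate` endpoint. A sketch under those assumptions (the model list and prompt are illustrative; the requests are only built here, since sending them requires a running Ollama server):

```python
import json
import urllib.request

# Candidate local models to compare for dashboard generation.
# (Names are illustrative; each must be pulled into Ollama first.)
MODELS = ["mixtral", "llama3", "sqlcoder"]

def generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a request against Ollama's native /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# The same schema-only prompt goes to every candidate model:
prompt = "Given columns (order_id INT, amount FLOAT, ts TIMESTAMP), draft a dashboard spec."
requests = [generate_request(m, prompt) for m in MODELS]
```

Evaluating the responses side by side would then show which model produces the most usable initial dashboard.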

dstruck avatar Mar 04 '24 15:03 dstruck

@nishantmonu51 Something for us to discuss when you are back. I would guess we need a hard switch at the project level to ensure runtimes start with the correct model URL and API keys.
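Such a project-level switch could amount to a single setting that the runtime resolves into a base URL and key at startup. A hypothetical sketch (the setting name `llm_provider`, the default URLs, and the fallback behaviour are all invented for illustration, not Rill's actual configuration):

```python
import os

def resolve_llm_config(project: dict) -> dict:
    """Resolve a project's LLM provider setting into a base URL and API key.

    A "local" provider points at an OpenAI-compatible Ollama server and
    needs no key; anything else falls back to the hosted OpenAI defaults.
    (Setting names here are hypothetical.)
    """
    if project.get("llm_provider") == "local":
        return {
            "base_url": project.get("llm_base_url", "http://localhost:11434/v1"),
            "api_key": "",  # local servers typically ignore the key
        }
    return {
        "base_url": "https://api.openai.com/v1",
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
    }

print(resolve_llm_config({"llm_provider": "local"})["base_url"])
```

Resolving this once at runtime startup would give the hard switch described above: a project either talks to the local endpoint or to OpenAI, never both.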

mindspank avatar Jun 16 '24 22:06 mindspank