
Ideas around integrating local LLM

Open paramaggarwal opened this issue 9 months ago • 3 comments

Summary

Currently we expect users to figure out how to get an OpenAI API key and then configure billing etc. on OpenAI's side. Plus, people might not be comfortable sending their thoughts to OpenAI's servers. If we could run a local LLM on the device itself, we could actively generate reflections without requiring an explicit button click.

One option that I feel could be very straightforward is Ollama (https://ollama.ai/), and they have even documented this use case: https://ollama.ai/blog/llms-in-obsidian
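For a rough sense of how simple this could be: Ollama exposes a local HTTP API (on port 11434 by default), so generating a reflection would roughly come down to one POST request against its `/api/generate` endpoint. This is just a sketch; the model name and prompt below are placeholders:

```ts
// Sketch: generate a reflection by calling a locally running Ollama server.
// Assumes Ollama is running (`ollama serve`) and a model has been pulled,
// e.g. `ollama pull llama2`. Model and prompt are placeholders.
async function generateReflection(entryText: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama2",
      prompt: `Write a short reflection on this journal entry:\n\n${entryText}`,
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response; // Ollama puts the generated text in `response`
}
```

Since everything stays on localhost, there is no API key or billing to set up, and the call could be triggered automatically whenever an entry is saved.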

paramaggarwal avatar Oct 29 '23 05:10 paramaggarwal

Running a language model locally does offer real advantages in privacy and ease of use: reflections can be generated without relying on external servers or requiring users to manage API keys and billing. Ollama.ai looks like a promising way to deploy models on-device, which matters most for users who prioritize privacy or prefer not to rely on cloud-based solutions.

Generating reflections seamlessly, without explicit manual input, could also significantly improve the experience for note-taking, idea generation, and personal reflection.

The Obsidian use case you mentioned shows how local models can be integrated with existing productivity tools, letting users enhance their workflows within familiar environments.

Overall, local models via platforms like Ollama.ai are a compelling way to provide on-device AI while addressing privacy concerns and simplifying the user experience.

MSR-07 avatar Nov 28 '23 06:11 MSR-07

As far as I can see, Pile uses LlamaIndex, which does not support on-premise LLMs.


Would you consider using a different library? What is your suggestion for working around this? @UdaraJay

It would be amazing to implement this; most people are already running their LLMs locally.

Kenan7 avatar Dec 31 '23 21:12 Kenan7

@Kenan7 I can see that LlamaIndex supports Ollama now, so maybe this can be revisited. It would be great to spin up Ollama with a default model and access it on Ollama's default port (11434). Not much configuration required.

https://github.com/run-llama/LlamaIndexTS/blob/main/packages/core/src/llm/ollama.ts
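For reference, here is a rough sketch of what wiring this up through LlamaIndexTS might look like, assuming the Ollama class it exports takes a model name and implements the usual chat() interface (the exact import path, option names, and response shape may differ between versions):

```ts
// Sketch only: option names and response shape may differ by LlamaIndexTS version.
import { Ollama } from "llamaindex";

// Point the LLM at the local Ollama server (default port 11434) instead of
// OpenAI; no API key or billing involved.
const llm = new Ollama({ model: "llama2" });

async function reflect(entryText: string): Promise<string> {
  const response = await llm.chat({
    messages: [
      { role: "user", content: `Reflect briefly on this entry:\n\n${entryText}` },
    ],
  });
  // Assuming the chat response carries the generated text on `message.content`.
  return String(response.message.content);
}
```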

balamenon avatar Apr 23 '24 05:04 balamenon