
[FEATURE] Hardware requirements

Open jay377 opened this issue 10 months ago • 4 comments

Can we please confirm the hardware requirements, ideally with reasons? I am uncertain whether the 8 GB of RAM is needed only for running the recommended local models or for the apps themselves. In other words, if I were only running a tiny model and mostly using cloud APIs for the LLM, could I host this package effectively on a 2 GB (what is the minimum when disregarding models?) or 4 GB RAM VPS? Thanks

jay377 · Jun 19 '25 18:06

Agreed, this would be very useful to know and understand. Also, how can all of this be hosted on a VPS without any GPU behind it? The LLM models must be limited to the small, basic ones, am I right?

Macca138 · Jul 27 '25 11:07

> Agreed, this would be very useful to know and understand. Also, how can all of this be hosted on a VPS without any GPU behind it? The LLM models must be limited to the small, basic ones, am I right?

I'm actually running it now with everything turned on, on an EPYC 16 GB RAM / 6 vCPU VPS from Webdock for $7/month, and the resource usage is about 6 GB. So you need at least 8 GB, but then there's no headroom for the local LLMs if you want to use those, so 16 GB seems to be the minimum for running local LLMs. Webdock is excellent and has some great 50%-off deals; check their pricing page.
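For anyone who wants to check their own VPS headroom against that ~6 GB baseline before pulling the stack, here is a minimal sketch. It uses the third-party psutil package, which is my own choice here, not something this repo ships, and the thresholds are taken from the numbers in this thread:

```python
# Minimal sketch: check whether a VPS has enough free RAM for the stack.
# Assumes `pip install psutil`; the 6 GiB baseline and 8 GiB minimum are
# the figures reported in this thread, not numbers from the repo itself.
import psutil

GIB = 1024 ** 3
STACK_BASELINE_GIB = 6   # approximate usage reported above
RECOMMENDED_GIB = 8      # suggested minimum without local LLMs

mem = psutil.virtual_memory()
print(f"Total RAM:     {mem.total / GIB:.1f} GiB")
print(f"Available RAM: {mem.available / GIB:.1f} GiB")

if mem.total / GIB < RECOMMENDED_GIB:
    print("Warning: below the ~8 GiB suggested for the full stack.")
elif mem.available / GIB < STACK_BASELINE_GIB:
    print("Warning: less free RAM than the ~6 GiB the stack reportedly uses.")
else:
    print("Looks like enough headroom for the stack (without local LLMs).")
```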

jay377 · Jul 27 '25 14:07

Thanks Jay. That's good to know. But the LLM models are really tiny, aren't they? So you're going to be somewhat limited in the depth of work you can get agents to do, right?

Macca138 · Jul 27 '25 15:07

> Thanks Jay. That's good to know. But the LLM models are really tiny, aren't they? So you're going to be somewhat limited in the depth of work you can get agents to do, right?

Yeah, exactly, that's right. These are the small 3B to 8B LLM models that can run on CPU. They work fine for some basic things, but personally I don't use local LLMs in this package now; the VPS was cheap enough for me to leave the option open in case I want to do some kind of simple local LLM processing for my new SaaS later.

But for the half-decent 8B local models that can actually do useful work, I think you need 8 GB of RAM on top of what your VPS is already using. So with 16 GB, that leaves me at about 14 GB used when running the package fully with a local model running "free", plus some buffer room.
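As a rough sanity check on that extra-8-GB figure, here is a back-of-envelope sketch. The bytes-per-parameter values are common rules of thumb for quantization formats, and the 20% overhead for KV cache and runtime buffers is a guess on my part, not a number from this repo:

```python
# Back-of-envelope RAM estimate for an 8B local model, under assumptions:
# bytes-per-parameter values are typical for common quantization formats,
# and the 20% overhead for KV cache / runtime buffers is a rough guess.
PARAMS = 8e9  # e.g. an 8B model

BYTES_PER_PARAM = {
    "fp16": 2.0,
    "q8_0": 1.0,   # ~8-bit quantization
    "q4_0": 0.5,   # ~4-bit quantization (common for CPU inference)
}
OVERHEAD = 1.20  # ~20% extra for KV cache and runtime buffers (assumption)

for fmt, bpp in BYTES_PER_PARAM.items():
    gib = PARAMS * bpp * OVERHEAD / 1024 ** 3
    print(f"{fmt}: ~{gib:.1f} GiB")

# Roughly: q4 ~4.5 GiB, q8 ~9 GiB, fp16 ~18 GiB -- which is why an extra
# ~8 GB of RAM comfortably fits a 4-bit 8B model with some headroom.
```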

But if you don't need weak local models (Gemini has great free tiers, letting you do over 1,500 requests per day combined for free; even 2.5 Pro, which is excellent, now gets 100 free requests per day on the API), then you only need an 8 GB VPS to run the whole package well. I think that is what Cole recommended in one of the videos as well.
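If you go the cloud-API route, here is a minimal sketch of calling Gemini from Python with the google-generativeai package. The model id and the GOOGLE_API_KEY environment variable are assumptions based on Google's docs, not anything this package configures for you:

```python
# Minimal sketch: use a free-tier cloud LLM instead of a local model.
# Assumes `pip install google-generativeai` and that GOOGLE_API_KEY is set;
# the model id is an assumption -- check Google's docs for current names.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-2.5-pro")  # assumed model id

response = model.generate_content("Say hello in one short sentence.")
print(response.text)
```

The point being that the LLM calls move off the VPS entirely, so the VPS only needs enough RAM for n8n, the databases, and the other services in the package.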

jay377 · Jul 27 '25 16:07