Eshaan Agarwal
Hey! Is there any update on this? I really need it for a project right now. It would be great if you could provide me with some direction.
> For now, everything runs completely on the CPU.
>
> > 2. Do we have GPU support for the above models?
>
> It's a work-in-progress at this stage....
> Hey, I will provide the details in some time. I haven't tried threading. Can you please give some sample code or a way to run or perform it?...
Hi @Panquesito7, I would love to work on this. I just need some references for writing the requirements in the Foundry test file. Can you please give directions for that? Also...
How can I run it on a different port? Can you please guide me?
Hi, I am facing an out-of-memory error for context while using the GPT4All 1.3 Groovy model, on a machine with 32 CPU cores and 512 GB RAM, using CPU inference.
Hey, I was trying to run this on a RHEL 8 server with 32 CPU cores, and I am getting the same error on my second query. I am using...
Hi @ggerganov @gjmulder, I would appreciate some direction on this, please.
Hi, I was trying the GPT4All 1.3 Groovy model and I faced the same issue. I am not able to understand why this is happening. Can anybody provide me with some...
> @eshaanagarwal the only "solution" that I found was a reboot. Since rebooting is not an option, I had to switch to different models. For me, all 30B/33B LLM models...