JackBekket

Results: 29 comments of JackBekket

> ## ⚠️⚠️⚠️⚠️⚠️
>
> _Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!_
>
> _but.......

> Currently we don't count the tokens produced - although should be fairly easy now to implement it on the gRPC server level for all the backends.
>
> I...

https://github.com/mudler/LocalAI/blob/master/pkg/grpc/server.go#L140 — is this relevant?
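For what it's worth, here is a minimal sketch of what counting produced tokens at the gRPC server level could look like. The `Reply` type, the streaming channel, and `streamTokens` are hypothetical placeholders, not the actual types in `pkg/grpc/server.go`; the point is just that a counter at this layer would work for every backend that streams through it:

```go
// Hypothetical sketch: counting produced tokens inside a streaming handler.
// Type and function names are placeholders, not LocalAI's real API.
package main

import "fmt"

// Reply stands in for the gRPC reply message carrying one generated token.
type Reply struct {
	Token string
}

// streamTokens simulates a backend that yields tokens one by one.
func streamTokens(prompt string, out chan<- Reply) {
	defer close(out)
	for _, tok := range []string{"Hello", ",", " world", "!"} {
		out <- Reply{Token: tok}
	}
}

func main() {
	out := make(chan Reply)
	go streamTokens("hi", out)

	// The counter lives at the server level, so it is backend-agnostic:
	// any backend streaming through this channel gets counted the same way.
	produced := 0
	for range out {
		produced++
	}
	fmt.Printf("tokens produced: %d\n", produced)
}
```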

@mudler Can you clarify this? I want to take this issue but don't know where to look.

Can I add something like api_keys.json and make local-ai check this file for the keys?
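As a rough sketch of what that could look like (the file name, the JSON shape, and the helper names are all assumptions, not existing LocalAI code):

```go
// Hypothetical sketch: load API keys from an api_keys.json file and check
// an incoming key against them. The file format is an assumption.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// keysFile is the assumed on-disk format: {"keys": ["sk-...", "sk-..."]}
type keysFile struct {
	Keys []string `json:"keys"`
}

// loadKeys reads api_keys.json into a set for O(1) lookups.
func loadKeys(path string) (map[string]struct{}, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var kf keysFile
	if err := json.Unmarshal(data, &kf); err != nil {
		return nil, err
	}
	set := make(map[string]struct{}, len(kf.Keys))
	for _, k := range kf.Keys {
		set[k] = struct{}{}
	}
	return set, nil
}

func main() {
	keys, err := loadKeys("api_keys.json")
	if err != nil {
		fmt.Println("could not load keys:", err)
		return
	}
	// In the real server this check would run per request, e.g. against
	// the Authorization header.
	_, ok := keys["sk-example"]
	fmt.Println("key accepted:", ok)
}
```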

https://github.com/mudler/LocalAI/issues/1981 is related. You get this error because the llama-cpp backend tries to offload the whole model to the GPU and fails because you don't have enough VRAM. A workaround might be if you...
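For context, one common mitigation (not necessarily what the truncated comment goes on to suggest) is to cap how many layers get offloaded in the model's YAML config, assuming the `gpu_layers` option behaves this way; the values here are examples only:

```yaml
# Hypothetical model config sketch: offload only part of the model so it
# fits in limited VRAM instead of failing on a full offload.
name: my-model
parameters:
  model: my-model.gguf
gpu_layers: 20   # offload 20 layers instead of the whole model
```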

> I have this error with a custom model NeuralHermes. I have asked for help #1992

Have you checked that your VRAM is enough to offload all the layers? You can...

@rahullenkala I have tried this example and it worked on different networks, but I had to wait about 5 minutes for it to resolve.

Yep, will do. Also, can you comment on this: https://github.com/mudler/LocalAI/issues/806 ? I would like to do it, but have no idea where I should start. AI bot hallucinating in...