
feature: return count of used tokens

Open mudler opened this issue 2 years ago • 6 comments

Currently we don't count the tokens produced, although it should be fairly easy now to implement it at the gRPC server level for all the backends.

I think this is a great "good first issue" for those who want to start contributing to the project.
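For reference, the user-facing result would presumably be an OpenAI-style `usage` object in the completion response. A minimal sketch of that shape in Go (the numbers and the wiring are illustrative only; the real counts would have to come from each backend through the gRPC reply):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Usage mirrors the OpenAI-style usage block that an API response could carry
// once the backends report token counts. The JSON field names follow the
// OpenAI API convention.
type Usage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}

func main() {
	// Illustrative numbers only: the real values would come from the backend
	// through the gRPC reply.
	u := Usage{PromptTokens: 12, CompletionTokens: 34}
	u.TotalTokens = u.PromptTokens + u.CompletionTokens

	out, _ := json.Marshal(u)
	fmt.Println(string(out)) // {"prompt_tokens":12,"completion_tokens":34,"total_tokens":46}
}
```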

mudler avatar Jul 25 '23 18:07 mudler

:warning::warning::warning::warning::warning:

Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please be aware that I might hallucinate sometimes!

but... I can also be funny or helpful :smile_cat: and I can generally provide good tips or point to places to look in the documentation or the code, based on what you wrote in the issue.

Don't engage in conversation with me; I don't support replying (yet)!

:warning::warning::warning::warning::warning:

The issue seems to be related to the LocalAI open source project and its gRPC implementation. It appears that the project is using gRPC to create a server process, but there are some issues with the address and port allocation. The error message suggests that the process is not able to allocate a free port, which could be caused by a conflict with other processes or network issues.

To help debug this issue, we can check if the gRPC process is running and try to identify any errors in the logs related to the address and port allocation. We can also check if there are any conflicts with other processes or network issues that could be causing the problem. Additionally, we can try restarting the process and checking if it starts successfully after a fresh attempt.

If the issue persists, we may need to investigate further and provide additional debugging steps or suggest potential solutions based on the specific implementation of the GRPC server in the project.

Sources:

  • https://github.com/go-skynet/LocalAI/tree/master/pkg/grpc/proto/backend_grpc.pb.go
  • https://github.com/go-skynet/LocalAI/tree/master/pkg/model/initializers.go
  • https://github.com/go-skynet/LocalAI/tree/master/Makefile

localai-bot avatar Jul 25 '23 19:07 localai-bot

I've been interested in some memory usage / model unloading work lately that will require modifying the gRPC servers. Commenting on this issue so I remember to keep it in mind and see whether it ends up being related to this.

dave-gray101 avatar Jul 26 '23 22:07 dave-gray101

The bot above is hallucinating about address and port allocation in gRPC, which is not connected to this issue, right?

JackBekket avatar Feb 14 '24 10:02 JackBekket

> Currently we don't count the tokens produced, although it should be fairly easy now to implement it at the gRPC server level for all the backends.
>
> I think this is a great "good first issue" for those who want to start contributing to the project.

I would like to get more info about this issue: where should I look?

JackBekket avatar Feb 14 '24 10:02 JackBekket

https://github.com/mudler/LocalAI/blob/master/pkg/grpc/server.go#L140

Is it relevant?
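For context, pkg/grpc/server.go is the backend-side gRPC server, so it is a natural place to attach the counts. Below is a rough, self-contained sketch of the idea; the `Reply` fields and the backend shown here are hypothetical stand-ins, not the actual generated proto types:

```go
package main

import "fmt"

// Stand-ins for the generated proto types in pkg/grpc/proto. The real Reply
// message would need equivalent token-count fields added to the proto
// definition; the field names used here are hypothetical.
type PredictOptions struct{ Prompt string }

type Reply struct {
	Message          []byte
	PromptTokens     int32 // hypothetical field
	CompletionTokens int32 // hypothetical field
}

// fakeBackend stands in for a real backend (llama.cpp, etc.) that is able to
// report how many tokens it consumed and produced.
type fakeBackend struct{}

func (fakeBackend) Predict(opts *PredictOptions) (text string, promptTokens, completionTokens int, err error) {
	// A real backend would tokenize the prompt and count the generated tokens;
	// this crude estimate is for illustration only.
	return "hello", len(opts.Prompt) / 4, 1, nil
}

// predict shows roughly where a handler like the server's Predict could attach
// the counts before returning the reply to the API layer.
func predict(b fakeBackend, opts *PredictOptions) (*Reply, error) {
	text, promptTok, genTok, err := b.Predict(opts)
	if err != nil {
		return nil, err
	}
	return &Reply{
		Message:          []byte(text),
		PromptTokens:     int32(promptTok),
		CompletionTokens: int32(genTok),
	}, nil
}

func main() {
	reply, err := predict(fakeBackend{}, &PredictOptions{Prompt: "Count my tokens please"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("message=%q prompt_tokens=%d completion_tokens=%d\n",
		reply.Message, reply.PromptTokens, reply.CompletionTokens)
}
```

Presumably the real change would also involve extending the proto definition behind backend_grpc.pb.go and having each backend actually report its counts.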

JackBekket avatar Feb 14 '24 10:02 JackBekket

@mudler

Can you clarify this? I want to take this issue but I don't know where to look.

JackBekket avatar Mar 07 '24 12:03 JackBekket