Only store channel if use_cached_channel is true
The gRPC client exposes a parameter that disables reusing a cached channel: use_cached_channel.
Setting this to false doesn't stop the channel (and stub) from being inserted into the internal channel cache. The problem is that once the channel is in the map, it will only get destroyed when it is replaced by another channel.
In our case, though, we want the channel (and the associated connection) to be closed as soon as the client using it is destroyed, meaning we don't want to reuse the channel at all. I believe this is also consistent with the description for use_cached_channel:
use_cached_channel: "If false, a new channel is created for each new client instance. When true, re-use old channels from cache for new client instances. The default value is true."
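For reference, here is a minimal C++ sketch of the behaviour the fix introduces. This is an illustration only, not the actual grpc_client.cc implementation; the channel_cache map and GetChannel helper are hypothetical simplifications of the client's internal caching:

```cpp
// Minimal sketch of the proposed behaviour -- an illustration only, not the
// actual grpc_client.cc code. The channel_cache map and GetChannel helper
// are hypothetical simplifications of the client's internal caching.
#include <map>
#include <memory>
#include <string>

#include <grpcpp/grpcpp.h>

// Hypothetical cache of channels keyed by server URL.
static std::map<std::string, std::shared_ptr<grpc::Channel>> channel_cache;

std::shared_ptr<grpc::Channel>
GetChannel(const std::string& url, bool use_cached_channel)
{
  if (use_cached_channel) {
    auto it = channel_cache.find(url);
    if (it != channel_cache.end()) {
      return it->second;  // Re-use the cached channel.
    }
  }

  auto channel = grpc::CreateChannel(url, grpc::InsecureChannelCredentials());

  // The fix: only store the new channel in the cache when caching was
  // requested. Before the fix, the channel was inserted unconditionally,
  // so it outlived the client and was only destroyed once another channel
  // replaced it in the map.
  if (use_cached_channel) {
    channel_cache[url] = channel;
  }
  return channel;
}
```

With the guard in place, a client created with use_cached_channel set to false holds the only reference to its channel, so the channel and its connection are torn down as soon as the client is destroyed.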
@Tabrizian would you mind having a look at this? We have to keep an internal fork because of this behaviour. I already have the CLA signed (for my organisation Speechmatics).
@debermudez is this something that you could help with reviewing?
Yea. I will make a ticket to track this and look at it next week. On a surface level, it seems straightforward enough (I say hoping not to curse my future self).
@HennerM I have not forgotten about this, but was unfortunately sidetracked. I am trying to review this today. In the meantime, have you signed the CLA (https://github.com/triton-inference-server/server/blob/main/CONTRIBUTING.md#contributor-license-agreement-cla)? We need a signed CLA to accept any code contributions.
I completely missed your second message. Nevermind my ask.
@HennerM Wanted to give you a quick update. This is actively being tested via this PR: https://github.com/triton-inference-server/client/pull/465 If it passes, we should be able to get this merged shortly after.
@debermudez Thanks, is this still being worked on?
I'm also looking forward to having the changes proposed by @HennerM integrated into this repo, as this is also a need we recently had! 👍
@Tabrizian did this get lost?
Sorry, no, it did not get lost. I got pulled off for something else. I have handed this off to another teammate: @jbkyang-nvi
Hello. Looking into this now. As an FYI, this will be added to the 24.05 release instead of 24.04 since it was not prioritized for this release.
Thanks! That's okay; we've already waited some time, so one more month is not going to make a big difference :)
Merging with https://github.com/triton-inference-server/client/pull/465 and tested with https://github.com/triton-inference-server/server/pull/7123 after pre-commit passes
Thanks for getting this merged!