Does MilvusClientV2Pool still work if the Milvus server restarts after being down for a long time?
I always cache the MilvusClientV2Pool once it is created. Should I set an expiration time for it?
In my project, getClient() always returns null after I switched to the ClientPool.
For now I restart my application to fix it, so I suspect ClientPool has some behavior I have not noticed.
The PoolConfig defines the behavior of the pool:
PoolConfig poolConfig = PoolConfig.builder()
        .maxIdlePerKey(10)                                 // max idle clients per key
        .maxTotalPerKey(20)                                // max total (idle + active) clients per key
        .maxTotal(100)                                     // max total clients across all keys
        .maxBlockWaitDuration(Duration.ofSeconds(5L))      // getClient() waits up to 5 seconds if no idle client is available
        .minEvictableIdleDuration(Duration.ofSeconds(10L)) // idle clients beyond maxIdlePerKey are evicted after 10 seconds
        .build();
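For context, here is a minimal sketch of how the pool is typically wired up and how a client is borrowed and returned. The URI, the pool key, the rpcDeadlineMs value, and the listCollections() call are illustrative assumptions; check them against your SDK version. This is a config/usage fragment that needs a live Milvus server, not a standalone program.

```java
import java.time.Duration;

import io.milvus.pool.MilvusClientV2Pool;
import io.milvus.pool.PoolConfig;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;

public class PoolSetupSketch {
    public static void main(String[] args) throws Exception {
        PoolConfig poolConfig = PoolConfig.builder()
                .maxIdlePerKey(10)
                .maxTotalPerKey(20)
                .maxTotal(100)
                .maxBlockWaitDuration(Duration.ofSeconds(5L))
                .minEvictableIdleDuration(Duration.ofSeconds(10L))
                .build();

        // rpcDeadlineMs bounds every RPC; without it a call against a dead
        // server can block indefinitely.
        ConnectConfig connectConfig = ConnectConfig.builder()
                .uri("http://localhost:19530") // assumption: local standalone server
                .rpcDeadlineMs(3000L)          // assumption: 3-second per-call deadline
                .build();

        MilvusClientV2Pool pool = new MilvusClientV2Pool(poolConfig, connectConfig);

        String key = "default"; // hypothetical pool key for illustration
        MilvusClientV2 client = pool.getClient(key);
        try {
            client.listCollections(); // any RPC; fails if the server is down
        } finally {
            pool.returnClient(key, client); // pool validates via clientIsReady()
        }
    }
}
```

Always return the client in a finally block: the validation on return is what lets the pool detect and destroy broken connections.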
When you call pool.getClient(), if there is an idle client in the pool, it returns that idle client. If the Milvus server is down, the connection is broken, and you will get an error when you call any interface on the client object. When you then call pool.returnClient() to return the client to the pool, the pool validates it by calling client.clientIsReady(). Since the connection is broken, clientIsReady() returns false and the client is destroyed after a while. Once all the invalid clients have been destroyed and you restart Milvus, the next pool.getClient() creates a new client that connects to the server.
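The validate-on-return policy described above can be sketched with a toy, dependency-free pool. FakeClient and ToyPool are hypothetical stand-ins, not the SDK's classes: a returned client that fails clientIsReady() is dropped instead of going back on the idle queue, so the next borrow creates a fresh one.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.atomic.AtomicInteger;

// Toy stand-in for a pooled client: only models the readiness flag
// that the real pool checks via clientIsReady().
class FakeClient {
    final int id;
    boolean ready = true;
    FakeClient(int id) { this.id = id; }
    boolean clientIsReady() { return ready; }
}

// Minimal sketch of the validate-on-return policy: healthy clients are
// kept for reuse, broken clients are discarded on return.
class ToyPool {
    private final Deque<FakeClient> idle = new ArrayDeque<>();
    private final AtomicInteger nextId = new AtomicInteger();

    FakeClient getClient() {
        FakeClient c = idle.pollFirst();
        return (c != null) ? c : new FakeClient(nextId.incrementAndGet());
    }

    void returnClient(FakeClient c) {
        if (c.clientIsReady()) {
            idle.addFirst(c); // healthy: keep for reuse
        }                     // broken: drop it (the real pool destroys it)
    }
}
```

After a simulated crash the broken client is not reused; borrowing again hands out a brand-new, ready client, which is why the pool survives a server restart.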
So, even if the Milvus server crashes and restarts, the MilvusClientV2Pool instance cached on the client side is still usable?
I just tested with the following steps:
- start a milvus server
- pool.getClient() to create a new client and do something
- shutdown the milvus server
- pool.getClient() to get the same client and do something; calls on the client hang forever if you didn't set rpcDeadlineMs, or time out if you did
Another test:
- start a milvus server
- pool.getClient() to create a new client and do something
- restart the milvus server
- pool.getClient() to get the same client and do something; it works fine
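The two tests above suggest a defensive pattern: bound every RPC with rpcDeadlineMs, and on failure return the client (so validation can destroy it) and borrow again. A minimal sketch under those assumptions; the helper name, the retry policy, and the listCollections() call are illustrative, and it needs a live Milvus server to run.

```java
import io.milvus.pool.MilvusClientV2Pool;
import io.milvus.v2.client.MilvusClientV2;

public class RetryOnRestartSketch {
    // Borrow, call, return; on failure retry with a freshly borrowed
    // client, which the pool recreates once the broken one fails
    // validation. Sketch only: error handling is simplified.
    static boolean listWithRetry(MilvusClientV2Pool pool, String key, int attempts) {
        for (int i = 0; i < attempts; i++) {
            MilvusClientV2 client = pool.getClient(key);
            if (client == null) {
                continue; // pool timed out (maxBlockWaitDuration); try again
            }
            try {
                client.listCollections(); // any RPC; fails fast with rpcDeadlineMs set
                return true;
            } catch (Exception e) {
                // connection likely broken; fall through and retry
            } finally {
                pool.returnClient(key, client); // validation destroys broken clients
            }
        }
        return false;
    }
}
```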
So I think the answer is yes: if the Milvus server crashes and is restarted, the MilvusClientV2Pool instance cached on the client side is still usable.
The RPC channel is managed by the gRPC library, so I think this behavior ultimately depends on gRPC's reconnection handling.