aerospike-client-go
About ClientPolicy.MinConnectionsPerNode vs server's proto-fd-idle-ms
Client
- Go 1.15.3
- Aerospike: v4.5.0
Issue
Here is the drop-idle-connections logic.
https://github.com/aerospike/aerospike-client-go/blob/b936c147db0d7ac5f881c3bde9df87e9a6829dfa/connection_heap.go#L237
For example, if we set ClientPolicy.MinConnectionsPerNode to 5, those 5 idle connections may never get a chance to be checked for idleness or timeout. Of course, this only happens when Aerospike access is not busy.
For these idle connections, if the idle time already exceeds the server's proto-fd-idle-ms, the Aerospike server may proactively close the connection without the client realizing it. The client then gets an error such as:
Node BB9353C99FXXX 10.y.y.y:3000: write tcp 10.x.x.x:46554->10.y.y.y:3000: write: broken pipe
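To make the setup concrete, here is a minimal sketch of the policy combination I mean (hostname and values are just for illustration; as is the github.com/aerospike/aerospike-client-go import):
package main

import (
	"time"

	as "github.com/aerospike/aerospike-client-go"
)

func newClient() (*as.Client, error) {
	policy := as.NewClientPolicy()
	// Keep at least 5 connections open per node; the client does not reap these as idle.
	policy.MinConnectionsPerNode = 5
	// Client-side idle timeout. In the scenario above, the server's proto-fd-idle-ms
	// can still expire first, and the server closes the socket underneath the client.
	policy.IdleTimeout = 55 * time.Second
	return as.NewClientWithPolicy(policy, "10.y.y.y", 3000)
}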
Question
Is it possible to fully iterate the connection pool and clean up the idle ones?
You being on v1.x is the bigger news to me :) The assumption has been that those connections are dropped and the command is usually retried. Is that not the case?
Yes, it's retried.
Let me provide more details:
- For the broken pipe log, it's from this line https://github.com/aerospike/aerospike-client-go/blob/b936c147db0d7ac5f881c3bde9df87e9a6829dfa/command.go#L2169
- The command is Get()
func (clnt *Client) Get(policy *BasePolicy, key *Key, binNames ...string) (*Record, error) {
- The command policy uses default values: MaxRetries=2, SleepBetweenRetries=1ms. Because I set the conn-pool to 5, and the initial execution plus retries total 3 attempts, I get 3 error logs, each 1ms apart:
I0520 02:44:42.754393 command.go:2169] [aerospike] Node BB9353C99F5580E 10.y.y.y:3000: write tcp 10.x.x.x:57552->10.y.y.y:3000: write: broken pipe
I0520 02:44:42.755625 command.go:2169] [aerospike] Node BB90D6223B88B0A 10.y2.y2.y2:3000: write tcp 10.x.x.x:54522->10.y2.y2.y2:3000: write: broken pipe
I0520 02:44:42.756857 command.go:2169] [aerospike] Node BB9353C99F5580E 10.y3.y3.y3:3000: write tcp 10.x.x.x:33098->10.y3.y3.y3:3000: write: broken pipe
Then the last write: broken pipe is returned by Client.Get().
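To be concrete, the call looks roughly like this (namespace, set, and bin names are made up; clnt is the client from the signature above, and as/time are the usual imports):
readPolicy := as.NewPolicy()
readPolicy.MaxRetries = 2                             // 1 initial attempt + 2 retries = 3 attempts, matching the 3 logs above
readPolicy.SleepBetweenRetries = 1 * time.Millisecond // matches the ~1ms gap between consecutive log lines
key, _ := as.NewKey("test", "demo", "some-key")       // illustrative key
record, err := clnt.Get(readPolicy, key, "bin1")
// If all 3 attempts hit server-closed connections, err is the last broken pipe error,
// and record is nil.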
@khaf any further comment?
I think I understand the issue, I just need to implement the solution. ETA this week.
@khaf thanks so much. Looking forward to good news.
@khaf any update on this thread?
We recently encountered another issue because of this.
Issue
Our Aerospike cluster has 48 nodes. One client sets ClientPolicy.MinConnectionsPerNode to 100, so the expected number of connections to the cluster should be 4800, and indeed nodeStats.ConnectionsOpen reports 4800.
But netstat -apn | grep ":3000" | wc -l shows only about ~1300.
Analysis
We set min-conns to handle peak traffic. For normal traffic we don't need that many connections, so some of them sit idle past the idle timeout. But because of min-conns, these idle connections never get a chance to be closed. They are only touched again at the next traffic peak, and then they may cause Aerospike access errors or trigger the creation of many new connections.
Proposal
- Does it work to simply iterate all heaps and drop the idle-timed-out connections? Is that a good option?
func (h *connectionHeap) DropIdle() {
	// Walk every sub-heap and keep dropping connections from its tail
	// until the tail connection is no longer past the idle timeout.
	for i := 0; i < len(h.heaps); i++ {
		for h.heaps[i].DropIdleTail() {
		}
	}
}
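And a rough idea of how it could be driven, e.g. from the goroutine that already tends the cluster (names and interval here are only illustrative, not actual client internals):
go func() {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		pool.DropIdle() // pool is a node's *connectionHeap
	}
}()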
Sorry, another high-priority issue popped up that I had to take care of. I'll release the fix for this issue tomorrow.
@khaf thanks for the update. We are using both v4.x and v5.x clients. Looking forward to the release.
I haven't forgotten about this ticket. The solutions that I came up with this week have not been satisfactory. Since this is an important feature of the client and I'll have to port it forward to the newer versions (and quite possibly to other clients as well), I need to take my time. But this is my current active issue, so will solve it before moving to other work. Tell me, are you using Go modules yet? Should I make a v4 branch with module support for the client?
@khaf Yes, we're using Go modules. BTW, it seems the v4 branch currently works well with Go modules.
OK, after writing and testing the code for this in the client, it turned out in an engineering meeting that the correct resolution for this issue is to set the server idle time to 0, so the server does not reap the connections from its side. Please let me know how that goes.
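For reference, the server-side setting would look something like this (please double-check the exact context and syntax against the docs for your server version):
service {
    proto-fd-idle-ms 0    # 0 disables server-side reaping of idle client connections
}

# Or dynamically, without a restart:
asinfo -v "set-config:context=service;proto-fd-idle-ms=0"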
Closing this. Feel free to reopen or file a new ticket in case you have further questions.
@khaf We just re-encountered this issue recently. Previously we had simply disabled the conn-pool in not-so-busy environments.
I understand the idea of setting the server idle timeout to 0. But the challenge is that our Aerospike cluster is shared by multiple services and teams, so it is hard to make sure every service/team closes its idle connections promptly and uses system resources efficiently.
Is it possible to re-evaluate the solution?
BTW, if the current solution is still preferred, is it possible to add a comment to IdleTimeout or MinConnectionsPerNode, to keep others from misunderstanding them?
Thanks
I reopened the issue, so that I can track it. I don't know if the product or engineering will accept any change or addition to the current solution, but I'll bring it up.
For the comments on ClientPolicy, what would you like added for clarification?
PS: What version of the client are you currently on?
@khaf sorry for the late reply.
For the comment, something like: "When MinConnectionsPerNode is enabled, the server side must set the idle time to 0."
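As a rough sketch of the field doc (just a suggestion, final wording up to you):
// MinConnectionsPerNode specifies the minimum number of synchronous connections
// kept open per server node. These connections are never reaped by the client's
// idle-connection logic, so when this is set to a non-zero value the server's
// proto-fd-idle-ms should be set to 0; otherwise the server may close them and
// the next command on such a connection fails (e.g. "write: broken pipe") and
// gets retried.
MinConnectionsPerNode int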
And my app is on client v4.5.0.
@khaf any update for this thread?
Any update here? I'm running client 6.13 and I see the number of connections drop below MinConnectionsPerNode. Server CE 6.3.0.1 with proto-fd-idle-ms set to the default of 0. Connections drop too low, and when traffic spikes my requests time out.