
Node Slice Fast IPAM

Open · ivelichkovich opened this pull request 10 months ago • 9 comments

What this PR does / why we need it:

Improves IP allocation performance by adding a node slice mode; see the design proposal:

https://docs.google.com/document/d/1YlWfg3Omrk3bf6Ujj-s5wXlP6nYo4PZseA0bS6qmvkk/edit#heading=h.ehhncqtntm3t

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

Special notes for your reviewer (optional):

This is a very very rough draft to help guide the design and discussion

ivelichkovich avatar Apr 17 '24 04:04 ivelichkovich

The NodeSlice CR is for the user to define and change, so it's hard to match the varying runtime needs of each node (not until the user is an AI, I guess). If we view the current allocation as using a blockSize of 1, we could expose this blockSize for the user to define (e.g. up to 8 or 16), which would greatly reduce lease collisions, and the node slice size would then be based on actual need.

jingczhang avatar Apr 18 '24 19:04 jingczhang

> The NodeSlice CR is for the user to define and change, so it's hard to match the varying runtime needs of each node (not until the user is an AI, I guess). If we view the current allocation as using a blockSize of 1, we could expose this blockSize for the user to define (e.g. up to 8 or 16), which would greatly reduce lease collisions, and the node slice size would then be based on actual need.

I'm not sure I fully understand. In the current version, the user can define whatever slice size they need.
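For illustration, here is a minimal sketch (in Go, since that is the project language) of what a per-network IPAM config with a configurable slice size could look like; the field names, in particular `node_slice_size`, are taken from this discussion and the proposal rather than from a finalized schema:

```go
// Sketch only: a trimmed-down view of a whereabouts-style IPAM config with the
// node-slice setting discussed in this PR. Field names are assumptions based
// on the proposal, not an authoritative schema.
package config

// IPAMConfig holds the subset of per-network IPAM settings relevant here.
type IPAMConfig struct {
	Type          string `json:"type"`            // "whereabouts"
	Range         string `json:"range"`           // e.g. "192.168.1.0/24"
	NodeSliceSize string `json:"node_slice_size"` // e.g. "/26": per-node block carved out of Range
}
```

With a setting like this, each node works out of its own slice of the range, so routine allocations on different nodes never contend with each other.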

ivelichkovich avatar Apr 19 '24 14:04 ivelichkovich

Hi @ivelichkovich, sorry for not making my point clear. I meant to suggest not limiting a whereabouts node agent to only one network slice. Here are more details for your review: (1) Limiting a node agent to one network slice effectively removes the need for the "lease lock", since locking will always succeed. (2) We can reuse the existing "lease lock" workflow for the node agent to acquire access to another network slice (one that is not full) when its primary slice is full. (3) When a new node is added, it can take a free network slice (one not yet assigned to any node).

jingczhang avatar Apr 22 '24 20:04 jingczhang

> Hi @ivelichkovich, sorry for not making my point clear. I meant to suggest not limiting a whereabouts node agent to only one network slice. Here are more details for your review: (1) Limiting a node agent to one network slice effectively removes the need for the "lease lock", since locking will always succeed. (2) We can reuse the existing "lease lock" workflow for the node agent to acquire access to another network slice (one that is not full) when its primary slice is full. (3) When a new node is added, it can take a free network slice (one not yet assigned to any node).

We discussed this in the maintainers meeting. The lease is still needed because multiple network-attachment-definitions can target the same node, and each node can allocate multiple IPs at the same time, with each allocation launching a new whereabouts process.
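To make the contention concrete: even with per-node slices, two CNI ADD calls on the same node (or for different network-attachment-definitions) each spawn their own whereabouts process and must serialize before writing to the node's slice. The sketch below shows the general create-or-retry Lease pattern; the Lease name, namespace, duration, and retry policy are assumptions for illustration, not the code in this PR:

```go
// Illustrative only: serialize concurrent whereabouts invocations on one node
// with a coordination.k8s.io Lease. Names, namespace, and timings here are
// hypothetical; this is not the PR's implementation.
package nodelock

import (
	"context"
	"fmt"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// AcquireNodeLease blocks until it can create a Lease named after the node;
// the process that creates it holds the lock and must delete it when done.
func AcquireNodeLease(ctx context.Context, c kubernetes.Interface, ns, node, holder string) error {
	duration := int32(30)
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "whereabouts-" + node, Namespace: ns},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &duration,
			AcquireTime:          &metav1.MicroTime{Time: time.Now()},
		},
	}
	for {
		_, err := c.CoordinationV1().Leases(ns).Create(ctx, lease, metav1.CreateOptions{})
		if err == nil {
			return nil // lock held by this process
		}
		if !apierrors.IsAlreadyExists(err) {
			return fmt.Errorf("acquiring node lease: %w", err)
		}
		// Another whereabouts process on this node holds the lock; back off and retry.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```

A real implementation would also have to reclaim a stale Lease left behind by a crashed process, which is one more reason a lease-style lock is hard to drop entirely.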

ivelichkovich avatar May 09 '24 21:05 ivelichkovich

note to self: clean imports

ivelichkovich avatar May 09 '24 21:05 ivelichkovich

Pull Request Test Coverage Report for Build 9844370844

Details

  • 315 of 590 (53.39%) changed or added relevant lines in 4 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-17.3%) to 54.615%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| pkg/config/config.go | 0 | 1 | 0.0% |
| pkg/iphelpers/iphelpers.go | 37 | 45 | 82.22% |
| pkg/storage/kubernetes/ipam.go | 23 | 124 | 18.55% |
| pkg/node-controller/controller.go | 255 | 420 | 60.71% |
| Total | 315 | 590 | 53.39% |

Totals Coverage Status

  • Change from base Build 9746321392: -17.3%
  • Covered Lines: 1438
  • Relevant Lines: 2633

💛 - Coveralls

coveralls avatar May 23 '24 21:05 coveralls

Might be worth marking this feature as experimental in the docs until we've built out more of the phases from the proposal and it has had more bake time and testing.

ivelichkovich avatar May 28 '24 16:05 ivelichkovich

Hi @ivelichkovich, I'm about to start reviewing this PR but I wanted to understand the design first. From the proposal, it is not clear to me how the range is divided.

Could you please elaborate on how a range set in the IPAM config is divided between the nodes, assuming the node_slice_size yields more slices than the current number of nodes? What would be the range in the NodeSlicePool for each node? e.g. range 192.168.1.0/24, node_slice_size /26, 2 nodes.

What happens if the number of nodes increases? e.g. the node count grows to 6.

What does the new controller do when a node is unreachable?

Thanks.

mlguerrero12 avatar Jun 06 '24 12:06 mlguerrero12

> Hi @ivelichkovich, I'm about to start reviewing this PR but I wanted to understand the design first. From the proposal, it is not clear to me how the range is divided.
>
> Could you please elaborate on how a range set in the IPAM config is divided between the nodes, assuming the node_slice_size yields more slices than the current number of nodes? What would be the range in the NodeSlicePool for each node? e.g. range 192.168.1.0/24, node_slice_size /26, 2 nodes.
>
> What happens if the number of nodes increases? e.g. the node count grows to 6.
>
> What does the new controller do when a node is unreachable?
>
> Thanks.

Hey, so this requires running a controller in the cluster. That controller is responsible for creating and managing the NodeSlicePools (the resource representing node allocations). When nodes are added, it assigns each node to an open "slice". If there are too many nodes, it just skips them, though it could fire an event or something similar. If a node becomes unreachable, I don't think its slice is removed, but when the node itself is actually deleted from the cluster, its "slice" opens up again.
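To make the division concrete, here is a minimal sketch of the slicing arithmetic for the example above (range 192.168.1.0/24, node_slice_size /26, 2 nodes). The helper is hypothetical and simplified to IPv4; it is not the NodeSlicePool controller code from this PR:

```go
// Simplified illustration of carving a parent range into fixed-size per-node
// slices and assigning nodes in order. IPv4 only, no overflow handling.
package main

import (
	"fmt"
	"net/netip"
)

// sliceRange splits parent (e.g. 192.168.1.0/24) into consecutive subnets
// with slicePrefixLen bits (e.g. 26), returned in address order.
func sliceRange(parent netip.Prefix, slicePrefixLen int) []netip.Prefix {
	var slices []netip.Prefix
	addr := parent.Masked().Addr()
	step := uint32(1) << (32 - slicePrefixLen) // addresses per slice
	for parent.Contains(addr) {
		slices = append(slices, netip.PrefixFrom(addr, slicePrefixLen))
		a4 := addr.As4()
		cur := uint32(a4[0])<<24 | uint32(a4[1])<<16 | uint32(a4[2])<<8 | uint32(a4[3])
		next := cur + step
		addr = netip.AddrFrom4([4]byte{byte(next >> 24), byte(next >> 16), byte(next >> 8), byte(next)})
	}
	return slices
}

func main() {
	parent := netip.MustParsePrefix("192.168.1.0/24")
	nodes := []string{"node-a", "node-b"}
	for i, s := range sliceRange(parent, 26) { // four /26 slices
		owner := "unassigned" // stays free until another node joins
		if i < len(nodes) {
			owner = nodes[i]
		}
		fmt.Printf("%s -> %s\n", s, owner)
	}
}
```

Under these assumptions, node-a gets 192.168.1.0/26, node-b gets 192.168.1.64/26, and the remaining two /26 slices stay free for nodes added later; with six nodes, the two nodes beyond the four available slices would be skipped (or surfaced via an event) until a slice frees up, e.g. when a node is deleted, as described above.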

ivelichkovich avatar Jun 06 '24 16:06 ivelichkovich

Appreciate all the hard work on this -- it's a huge benefit to the whereabouts community.

dougbtv avatar Jul 23 '24 14:07 dougbtv