Update "Kubenet networking" section to clarify production deployment language
Hey folks,
While working on engineering and architecture for a customer's greenfield AKS deployments, we have been reviewing the Azure CNI vs. kubenet decision points. The environment will be Linux with .NET applications, and there is a requirement to conserve IP address ranges due to legacy VNet and subnet allocations.
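To illustrate the IP pressure behind that requirement, here is a rough sketch (not from the article) of how many subnet IPs each networking model consumes. The max-pods values used are the commonly documented defaults (30 for Azure CNI, 110 for kubenet) and may differ per cluster configuration:

```python
def subnet_ips_needed(nodes: int, max_pods_per_node: int, azure_cni: bool) -> int:
    """Approximate IPs reserved from the VNet subnet for a cluster of this size."""
    if azure_cni:
        # Azure CNI: one IP per node plus one pre-allocated IP per potential pod.
        return nodes * (1 + max_pods_per_node)
    # kubenet: only the node NICs draw from the subnet; pod IPs come from a
    # separate pod CIDR (for example 10.244.0.0/16) and are NAT'd via the node.
    return nodes

print(subnet_ips_needed(nodes=50, max_pods_per_node=30, azure_cni=True))    # 1550
print(subnet_ips_needed(nodes=50, max_pods_per_node=110, azure_cni=False))  # 50
```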
I believe an update to this page is required. The "Kubenet networking" section reads: "_For most production deployments, you should plan for and use Azure CNI networking._" I think that statement needs clarification, as the implication is that kubenet is NOT usable in production. Similarly, the three preceding bullets all imply that kubenet is not suitable for production. We know that kubenet is fully production ready and is in use by Azure Spring Cloud.
I would suggest the following update:
For production deployments, both kubenet and Azure CNI are valid options. For environments that require separation of control and management, Azure CNI may be the preferred option. However, services such as Calico and Azure network security groups can protect resources in kubenet deployments. Additionally, kubenet is well suited to Linux-only environments and to scenarios where conserving IP address ranges is a priority.
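To make the suggestion concrete, here is a minimal sketch (not part of the proposed doc text) of a kubenet-plus-Calico cluster deployment, assuming the azure-mgmt-containerservice Python SDK. The resource group, cluster name, subnet ID, and CIDR ranges are placeholders, and exact model/field names may vary slightly between SDK versions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import (
    ContainerServiceNetworkProfile,
    ManagedCluster,
    ManagedClusterAgentPoolProfile,
    ManagedClusterIdentity,
)

client = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

cluster = ManagedCluster(
    location="eastus",
    dns_prefix="aks-kubenet-demo",
    identity=ManagedClusterIdentity(type="SystemAssigned"),
    agent_pool_profiles=[
        ManagedClusterAgentPoolProfile(
            name="nodepool1",
            count=3,
            vm_size="Standard_DS2_v2",
            mode="System",
            # Only the nodes draw IPs from the (small) legacy subnet...
            vnet_subnet_id="<subnet-resource-id>",
        )
    ],
    network_profile=ContainerServiceNetworkProfile(
        network_plugin="kubenet",   # pod traffic is NAT'd through the node
        network_policy="calico",    # Calico network policies secure pod-to-pod traffic
        pod_cidr="10.244.0.0/16",   # ...while pod IPs come from this non-VNet range
        service_cidr="10.0.0.0/16",
        dns_service_ip="10.0.0.10",
    ),
)

poller = client.managed_clusters.begin_create_or_update(
    "<resource-group>", "aks-kubenet-demo", cluster
)
print(poller.result().provisioning_state)
```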
Thanks so much
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
- ID: 47be2fa0-5152-357d-35a8-74d037281d55
- Version Independent ID: b98a9346-8afa-3a6a-638d-95677e2aa08c
- Content: Best practices for network resources - Azure Kubernetes Service
- Content Source: articles/aks/operator-best-practices-network.md
- Service: container-service
- GitHub Login: @zr-msft
- Microsoft Alias: zarhoads
Created a bug to address this issue. I've targeted it for post-Ignite since the team is focusing on Ignite content over the next couple of months.
@garyciampa,
Thanks for surfacing this feedback! We have assigned the issue to the content author to investigate further and update the document as appropriate.
We have an internal bug for this issue. #please-close