Manage (e.g. apply profiles to) external EKS clusters that are on a private VPC
As a cluster admin, I would like to be able to apply profiles to my existing clusters. This makes Arlon significantly easier for me to adopt, as I don't need to start from scratch. It will also help standardize my environment very quickly.
As I'm using EKS, Arlon will need to work with both my private-VPC and public-VPC clusters. Many of the clusters we run have private API endpoints for security reasons.
Aha! Link: https://pf9.aha.io/features/ARLON-166
Duplicate of #46
@bcle who is working on this?
I think this is totally subsumed by "external cluster management". 0.3.x has support for ECM using the current-gen profiles. Next-gen profiles, assuming they also support external clusters, should work with an existing EKS cluster if the cluster is registered in argocd.
This feature requirement may need investigation and work: "work with my Private VPC and Public VPC clusters". I'm going to rename the issue based on this, if that's OK.
All manifest deployment to workload clusters is done by ArgoCD, so if ArgoCD has access to the external cluster on a private VPC, then it should work. It therefore depends on connectivity from the management cluster hosting ArgoCD to the workload clusters. This may be a documentation/configuration task. ~~I am moving this out of 0.10.0 for now as it may require more analysis~~
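For context, registering an external cluster with ArgoCD can be done declaratively with a cluster Secret in the `argocd` namespace. A rough sketch is below; the cluster name, API endpoint, role ARN, and CA data are all placeholders, and for a private-endpoint EKS cluster the `server` URL is only reachable if ArgoCD has network connectivity into the VPC:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-eks-cluster          # placeholder name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # marks this Secret as a cluster registration
type: Opaque
stringData:
  name: private-eks-cluster
  server: https://EXAMPLE.gr7.us-west-2.eks.amazonaws.com   # placeholder private endpoint
  config: |
    {
      "awsAuthConfig": {
        "clusterName": "private-eks-cluster",
        "roleARN": "arn:aws:iam::111111111111:role/argocd-deployer"
      },
      "tlsClientConfig": {
        "caData": "BASE64-ENCODED-CA-CERT"
      }
    }
```

The same registration can also be done imperatively with `argocd cluster add <kubeconfig-context>`; either way, the connectivity question discussed above is unchanged.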
Moving to v0.11 since other urgent issues needed more bandwidth and this needs more thought.
Hey team! Please add your planning poker estimate with Zenhub @bcle @cruizen @jayanth-tjvrr @Rohitrajak1807 @ShaunakJoshi1407
@bcle I did some experimentation on reachability to a cluster on a private subnet of a VPC; here is what I found:
- Since the cluster is on a private subnet, it's publicly inaccessible.
- To connect to the cluster (say, just to run kubectl commands), we need a jumphost (e.g. an EC2 instance) in the same VPC but attached to a public subnet (so that we can connect to the instance).
- Since argocd also makes some Kubernetes API calls to create a service account and related resources, I believe we can use a similar setup (i.e. our management cluster shares the VPC but is associated with public subnets).

I haven't confirmed the approach in the last point (it is a WIP); I'll post further findings here.
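As a side note, for the "kubectl via jumphost" case there is an alternative to moving the management cluster: kubectl can route API traffic through a SOCKS5 tunnel opened on the jumphost via the kubeconfig `proxy-url` field (supported in recent kubectl versions). A sketch, with all hostnames and addresses as placeholders:

```yaml
# First open a SOCKS5 tunnel through the jumphost (placeholder address):
#   ssh -D 1080 -q -N ec2-user@JUMPHOST-PUBLIC-IP
apiVersion: v1
kind: Config
clusters:
- name: private-eks
  cluster:
    server: https://EXAMPLE.gr7.us-west-2.eks.amazonaws.com   # private endpoint, placeholder
    certificate-authority-data: BASE64-ENCODED-CA-CERT
    proxy-url: socks5://localhost:1080    # route kubectl traffic through the SSH tunnel
```

This only helps interactive kubectl access, though; ArgoCD itself would still need its own network path into the VPC.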
Trying to create a cluster with a pre-configured public subnet on the VPC where the private-subnet-only cluster exists fails via `eksctl`; this will need further examination. The `eksctl` docs do state how to do this, but cluster creation times out: the worker nodes never become ready, and as a result some of the `kube-system` pods are stuck in a perpetual Pending state.
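As a hypothetical sketch (not the exact config used above), an `eksctl` ClusterConfig that reuses an existing VPC and declares both public and private subnets looks roughly like this; all names and IDs are placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test-cluster          # placeholder
  region: us-west-2           # placeholder
vpc:
  id: vpc-0123456789abcdef0   # existing VPC (placeholder ID)
  subnets:
    private:
      us-west-2a: { id: subnet-aaaaaaaa }   # placeholder
      us-west-2b: { id: subnet-bbbbbbbb }   # placeholder
    public:
      us-west-2a: { id: subnet-cccccccc }   # placeholder
      us-west-2b: { id: subnet-dddddddd }   # placeholder
nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 2
    privateNetworking: true   # keep worker nodes on the private subnets
```

With `privateNetworking: true`, nodes on the private subnets need an outbound path (e.g. a NAT gateway) to reach the EKS control plane and pull images; missing that is a plausible cause of nodes never becoming ready.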
I ran the Reachability Analyzer from the AWS console to verify that the public subnets I am using are reachable from the Internet. This issue might need a SPIKE.
After some digging, I found this guide by Rackspace, which could help us define the networking properly: https://docs.rackspace.com/blog/creating-a-kubernetes-cluster-on-aws-eks/