cluster-api-provider-aws
EKS: Bastion seems unable to access nodes created with managedmachinepool
/kind bug
What steps did you take and what happened:
Created an EKS cluster with the bastion enabled and node groups via AWSManagedMachinePool; the bastion seems unable to access the nodes. The relevant manifests:
```yaml
apiVersion: v1
items:
- apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
  kind: AWSManagedControlPlane
  metadata:
    name: undistro-quickstart
    namespace: default
  spec:
    version: 1.18
    bastion:
      allowedCIDRBlocks:
      - 0.0.0.0/0
      enabled: true
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
---
apiVersion: v1
items:
- apiVersion: cluster.x-k8s.io/v1alpha3
  kind: Cluster
  metadata:
    name: undistro-quickstart
    namespace: default
  spec:
    controlPlaneRef:
      apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
      kind: AWSManagedControlPlane
      name: undistro-quickstart
      namespace: default
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
      kind: AWSManagedCluster
      name: undistro-quickstart
      namespace: default
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
---
apiVersion: v1
items:
- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
  kind: AWSManagedMachinePool
  metadata:
    name: undistro-quickstart-mp-0
    namespace: default
  spec:
    additionalTags:
      infra-node: "true"
    amiType: AL2_x86_64
    eksNodegroupName: undistro-quickstart-mp-0
    instanceType: t3.medium
    labels:
      node-role.undistro.io/infra: "true"
    providerIDList:
    - aws://us-east-1a/i-0ea03ba6d7a45ae8f
    remoteAccess:
      sshKeyName: undistro
    roleName: nodes.cluster-api-provider-aws.sigs.k8s.io
    scaling:
      maxSize: 5
      minSize: 1
- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
  kind: AWSManagedMachinePool
  metadata:
    name: undistro-quickstart-mp-1
    namespace: default
  spec:
    amiType: AL2_x86_64
    eksNodegroupName: undistro-quickstart-mp-1
    instanceType: t3.medium
    providerIDList:
    - aws://us-east-1b/i-00c65e95d09cccf74
    - aws://us-east-1a/i-0244c8034e18380e7
    - aws://us-east-1b/i-0ffb2076a1ca004b7
    remoteAccess:
      sshKeyName: undistro
    roleName: nodes.cluster-api-provider-aws.sigs.k8s.io
    scaling:
      maxSize: 5
      minSize: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
What did you expect to happen:
To be able to connect to the bastion and then connect to the nodes using their private IPs.
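For reference, the expected flow would look something like the sketch below; the IP addresses and key path are placeholders, not values from this cluster, and the login users (`ubuntu` on the CAPA bastion, `ec2-user` on the AL2 nodes) are assumptions:

```sh
# Hop through the bastion's public IP to a node's private IP in one step
# (OpenSSH 7.3+). "undistro" is the key pair named in the manifests above,
# assumed to be available locally.
ssh -i ~/.ssh/undistro.pem -J ubuntu@203.0.113.10 ec2-user@10.0.90.12
```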
Environment:
- Cluster-api-provider-aws version: v0.6.4
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
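A plausible culprit, though not confirmed in this thread, is that the security group EKS attaches to managed node group instances has no SSH ingress rule for the bastion's security group. A quick way to check, and a manual workaround, sketched with placeholder security group IDs (the `remoteAccess` API also has a `sourceSecurityGroups` field that may express this declaratively):

```sh
# Placeholder IDs: look these up in the EC2 console or via
# `aws ec2 describe-security-groups`.
NODE_SG=sg-0nodegroupexample
BASTION_SG=sg-0bastionexample

# Inspect the node group security group's current ingress rules.
aws ec2 describe-security-groups --group-ids "$NODE_SG" \
  --query 'SecurityGroups[0].IpPermissions'

# Manually allow SSH from the bastion's security group to the nodes.
aws ec2 authorize-security-group-ingress \
  --group-id "$NODE_SG" \
  --protocol tcp --port 22 \
  --source-group "$BASTION_SG"
```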
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with `/reopen`.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@fejta-bot: Closing this issue.
In response to this:
> Rotten issues close after 30d of inactivity. Reopen the issue with `/reopen`. Mark the issue as fresh with `/remove-lifecycle rotten`. Send feedback to sig-contributor-experience at kubernetes/community. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@richardcase: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/priority backlog
/lifecycle frozen
@richardcase up to you how important this is. I've generally been advising people to use SSM Session Manager instead of a bastion.
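For anyone hitting this before it is resolved, a Session Manager connection sidesteps inbound SSH and the bastion entirely. A minimal sketch, assuming the node's instance role carries the AmazonSSMManagedInstanceCore managed policy and the SSM agent is running on the node (it ships preinstalled on recent Amazon Linux 2 AMIs); the Session Manager plugin must be installed alongside the AWS CLI locally:

```sh
# Open an interactive shell on a node without any inbound SSH access.
# The instance ID below is the one listed in the providerIDList of
# undistro-quickstart-mp-0 above; substitute your own node's ID.
aws ssm start-session --target i-0ea03ba6d7a45ae8f
```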
/remove-lifecycle frozen
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.