cluster-api-provider-nested
The pod is always pending in the nested cluster
What steps did you take and what happened:
Follow the guide: https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/docs/README.md
- Deploy a nested cluster in Kind
- After the nested cluster is ready, deploy the memcached service
The memcached pod is always Pending, but the same deployment works in a plain Kubernetes environment.
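For reference, the workload was applied against the nested cluster's kubeconfig, roughly like this (the manifest is the one included under "Anything else you would like to add" below; memcached.yaml is just an assumed local filename):
# kubectl --kubeconfig ./kubeconfig/kubeconfig.sample apply -f memcached.yaml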
# kubectl --kubeconfig ./kubeconfig/kubeconfig.sample get pod
NAME READY STATUS RESTARTS AGE
memcached-0 0/1 Pending 0 6h43m
# kubectl --kubeconfig ./kubeconfig/kubeconfig.sample describe pod memcached-0
Name:          memcached-0
Namespace:     default
Priority:      0
Node:          <none>
Labels:        app=memcached
               controller-revision-hash=memcached-6b8cf9888
               statefulset.kubernetes.io/pod-name=memcached-0
Annotations:   <none>
Status:        Pending
IP:
IPs:           <none>
Controlled By: StatefulSet/memcached
Containers:
  memcached-ct:
    Image:      memcached:1.5-alpine
    Port:       11211/TCP
    Host Port:  0/TCP
    Args:
      memcached
      -m
      256
    Environment: <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zrvr9 (ro)
Volumes:
  default-token-zrvr9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zrvr9
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
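(Note: Pending with Node: <none> and no events usually means the scheduler has nothing to place the pod on. A quick way to check whether the nested cluster has any schedulable worker nodes, reusing the same kubeconfig, would be:
# kubectl --kubeconfig ./kubeconfig/kubeconfig.sample get nodes )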
What did you expect to happen: The memcached pods are running, just like on a plain Kubernetes cluster:
# kubectl get pod
NAME READY STATUS RESTARTS AGE
memcached-0 1/1 Running 0 12m
memcached-1 1/1 Running 0 12m
Anything else you would like to add: The deployment YAML for memcached:
apiVersion: v1
kind: Service
metadata:
  name: memcached
spec:
  ports:
  - port: 11211
  selector:
    app: memcached
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: memcached
spec:
  selector:
    matchLabels:
      app: memcached
  serviceName: "memcached"
  replicas: 2
  template:
    metadata:
      labels:
        app: memcached
    spec:
      restartPolicy: Always
      hostname: memcached
      containers:
      - name: memcached-ct
        image: memcached:1.5-alpine
        ports:
        - containerPort: 11211
        args: ["memcached", "-m", "256"]
Environment:
- cluster-api-provider-nested version: v0.10
- Minikube/KIND version: kind v0.11.1
- Kubernetes version (use kubectl version): v1.21.1
- OS (e.g. from /etc/os-release): Red Hat Enterprise Linux 8
/kind bug
@Fei-Guo @christopherhein @charleszheng44 @gyliu513
Hey @wangjsty, this is expected right now. We haven't finished the integration updates needed to support CAPN + VC out of the box; the doc updates are mostly called out in #141, which is blocked on updating VC to support the way we're releasing images and manifests with CAPN/Prow.
Yes. At this moment, unless your purpose is to try CAPN specifically, you can try the following VC demo, which uses the old ClusterVersion CR: https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/demo.md
@christopherhein @Fei-Guo Thank you, I will try the VC demo first until the CAPN+VC integration is done.
@christopherhein @Fei-Guo Could you please answer a few questions to help my understanding? Thanks in advance.
- Currently, a nested cluster created by CAPN without VC can't be used to deploy workloads (Deployment/StatefulSet) until the CAPN+VC integration is done, is that right?
- It looks like workloads can be deployed on VC without CAPN, so what is the user scenario for CAPN? Or is it just meant to be a common/unified Cluster API provider for VC? Thanks.
- Yes
- CAPN is the replacement for the original ClusterVersion CR/controller in VC. It follows the CAPI standard and has much better manageability for tenant control plane Pods.
To add a little more color to #1: you could technically use this and run a set of data plane nodes with VMs or anything else, using just the pod-based control planes, but you'd likely need to do some customizing as well; i.e., as of today that isn't supported.
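For readers trying to picture how CAPN fits the CAPI standard mentioned above: the tenant control plane is described declaratively with the usual CAPI Cluster object plus CAPN-specific control plane and infrastructure objects. This is only a rough sketch; the names are hypothetical and the apiVersions are assumptions based on CAPI conventions, so check the templates shipped with the release you use:
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: my-nested-cluster                                  # hypothetical name
spec:
  controlPlaneRef:                                         # the CAPN-managed, pod-based control plane
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha4     # assumed version
    kind: NestedControlPlane
    name: my-nested-cluster
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4   # assumed version
    kind: NestedCluster
    name: my-nested-cluster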
@Fei-Guo @christopherhein Thank you for your explanation !
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen