
Persistent volume and PVC are not created dynamically

Open pyshahid opened this issue 2 years ago • 1 comment

I have created a Kubeflow pipeline, but it does not start because the PVC is never created. When I create the volume and the PVC claims manually, the component starts, but then I get an error like "no container".

I deployed Kubeflow on my own cluster, not on any cloud, so I suspect that when a new pipeline component is created, a new PVC and PV should be provisioned dynamically. Can anybody help with how to do this using Kale?
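
On a self-managed (non-cloud) cluster, the usual culprit is the absence of a default StorageClass backed by a dynamic provisioner: the pipeline Kale generates only submits PVCs, and nothing on the cluster provisions the backing PVs, so the claims never bind. A minimal diagnostic sketch, assuming the kubeflow-user namespace and picking Rancher's local-path provisioner as one example of a bare-metal provisioner (both are assumptions, not something this issue or Kale prescribes):

> kubectl get storageclass                 # is any class marked "(default)"?
> kubectl get pvc -n kubeflow-user         # Pending claims with no default class never bind

# One option for bare-metal clusters: install the local-path provisioner
> kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

# Mark it default so PVCs without an explicit storageClassName get provisioned
> kubectl annotate storageclass local-path storageclass.kubernetes.io/is-default-class=true

With a default class in place, the PVCs created by the pipeline's volume step should bind automatically instead of having to be created by hand.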

pyshahid avatar Dec 20 '21 17:12 pyshahid

Yeah, I got this too. I spent hours on it; I don't know where to look.

> k describe po/titanic-ml-sbyjq-jvhg4-2878468303
...
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  podmetadata:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.annotations -> annotations
  docker-sock:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/docker.sock
    HostPathType:  Socket
  create-volume-1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  titanic-ml-gqmhh-g64qt-titanic-vol
    ReadOnly:   false
  kale-marshal-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  titanic-ml-d7ptn-thkk8-kale-marshal-pvc
    ReadOnly:   false
  mlpipeline-minio-artifact:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mlpipeline-minio-artifact
    Optional:    false
  kube-api-access-zw8tt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason             Age   From                Message
  ----     ------             ----  ----                -------
  Warning  FailedScheduling   65s   default-scheduler   0/3 nodes are available: 3 persistentvolumeclaim "titanic-ml-gqmhh-g64qt-titanic-vol" not found.
  Warning  FailedScheduling   63s   default-scheduler   0/3 nodes are available: 3 persistentvolumeclaim "titanic-ml-gqmhh-g64qt-titanic-vol" not found.
  Normal   NotTriggerScaleUp  63s   cluster-autoscaler  pod didn't trigger scale-up:
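
Two details in this output seem worth noting. First, the scheduler reports the PVC as "not found", not merely unbound, so the claim object itself was never created. Second, the ClaimNames (titanic-ml-gqmhh-g64qt-... and titanic-ml-d7ptn-thkk8-...) carry different workflow prefixes than the pod (titanic-ml-sbyjq-jvhg4-...), which may mean the pod is referencing claims from a different run. A quick check, again assuming the kubeflow-user namespace (an assumption):

> kubectl get pvc -n kubeflow-user | grep titanic          # does the claim exist under any workflow prefix?
> kubectl get pods -n kubeflow-user | grep create-volume   # did the volume-creation step run, and did it fail?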

rhzs avatar Dec 21 '21 02:12 rhzs