Outdated spark image in spark example
Tried to follow the instructions in the spark example on macOS with an M2 chip and a docker-desktop k8s cluster. Got ImagePullBackOff for the spark-master-controller pod.
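For reference, the events below came from describing the failing pod (pod name and namespace as assigned in my cluster, so adjust as needed):
$ kubectl -n spark-cluster describe pod spark-master-controller-98fpm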
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50s default-scheduler Successfully assigned spark-cluster/spark-master-controller-98fpm to docker-desktop
Normal BackOff 21s (x2 over 48s) kubelet Back-off pulling image "registry.k8s.io/spark:1.5.2_v1"
Warning Failed 21s (x2 over 48s) kubelet Error: ImagePullBackOff
Normal Pulling 7s (x3 over 50s) kubelet Pulling image "registry.k8s.io/spark:1.5.2_v1"
Warning Failed 7s (x3 over 49s) kubelet Failed to pull image "registry.k8s.io/spark:1.5.2_v1": [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/spark:1.5.2_v1 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
Warning Failed 7s (x3 over 49s) kubelet Error: ErrImagePull
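Since docker-desktop pulls through the local Docker daemon, the same failure can be reproduced outside the cluster, which suggests the problem is the image itself (Docker image manifest schema 1) rather than anything cluster-specific. A quick check, assuming a recent Docker release:
$ docker pull registry.k8s.io/spark:1.5.2_v1
# fails with the same "[DEPRECATION NOTICE] ... schema 1" message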
I deployed the example and didn't get any errors.
$ kubectl create -f examples/staging/spark/spark-master-controller.yaml
service/spark-master created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
spark-master-controller-dhw68 0/1 ContainerCreating 0 47s
$ kubectl logs spark-master-controller-dhw68
25/02/19 01:06:54 INFO Master: Registered signal handlers for [TERM, HUP, INT]
25/02/19 01:06:55 INFO SecurityManager: Changing view acls to: root
25/02/19 01:06:55 INFO SecurityManager: Changing modify acls to: root
25/02/19 01:06:55 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
25/02/19 01:06:55 INFO Slf4jLogger: Slf4jLogger started
25/02/19 01:06:55 INFO Remoting: Starting remoting
25/02/19 01:06:55 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@spark-master:7077]
25/02/19 01:06:55 INFO Utils: Successfully started service 'sparkMaster' on port 7077.
25/02/19 01:06:55 INFO Master: Starting Spark master at spark://spark-master:7077
25/02/19 01:06:55 INFO Master: Running Spark version 1.5.2
25/02/19 01:06:55 INFO Utils: Successfully started service 'MasterUI' on port 8080.
25/02/19 01:06:55 INFO MasterWebUI: Started MasterWebUI at http://172.31.91.209:8080
25/02/19 01:06:56 INFO Utils: Successfully started service on port 6066.
25/02/19 01:06:56 INFO StandaloneRestServer: Started REST server for submitting applications on port 6066
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
spark-master-controller-dhw68 1/1 Running 0 96s
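If the pod does come up like this, a quick sanity check is to forward the master UI port reported in the log above (a sketch assuming the pod name from my run and the default 8080 port):
$ kubectl port-forward spark-master-controller-dhw68 8080:8080
# then open http://localhost:8080 to see the Spark master UI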
I am encountering the same error.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This is an archived example and will not be updated. See https://github.com/kubernetes/website/pull/52607#discussion_r2402472677 for more context.
/close
@stmcginnis: Closing this issue.
In response to this:
This is an archived example and will not be updated. See https://github.com/kubernetes/website/pull/52607#discussion_r2402472677 for more context.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.