docker-cloudera-quickstart
Spark Worker not starting
Hi, the Spark worker is not starting. In the Spark master web UI there is no Spark worker listed. When I checked the logs, there was an "invalid master URL" error.
Thanks
Hi! Can you attach the logs please?
15/12/23 10:03:18 INFO worker.Worker: Registered signal handlers for [TERM, HUP, INT]
15/12/23 10:03:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/23 10:03:21 INFO spark.SecurityManager: Changing view acls to: spark
15/12/23 10:03:21 INFO spark.SecurityManager: Changing modify acls to: spark
15/12/23 10:03:21 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
15/12/23 10:03:22 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/12/23 10:03:22 INFO Remoting: Starting remoting
15/12/23 10:03:22 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@edc0380e49b1:36922]
15/12/23 10:03:22 INFO util.Utils: Successfully started service 'sparkWorker' on port 36922.
Exception in thread "main" org.apache.spark.SparkException: Invalid master URL: spark://:
	at org.apache.spark.util.Utils$.extractHostPortFromSparkUrl(Utils.scala:1981)
	at org.apache.spark.deploy.master.Master$.toAkkaUrl(Master.scala:879)
	at org.apache.spark.deploy.worker.Worker$$anonfun$12.apply(Worker.scala:551)
	at org.apache.spark.deploy.worker.Worker$$anonfun$12.apply(Worker.scala:551)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
	at org.apache.spark.deploy.worker.Worker$.startSystemAndActor(Worker.scala:551)
	at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:529)
	at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
I fixed the issue by adding the following line to /etc/init.d/spark-worker, so the init script sources the Spark environment variables before starting the worker:
. /etc/profile.d/spark-env.sh
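For context, here is a rough sketch of how that line fits into the init script. The surrounding lines are assumptions about a typical CDH-style spark-worker init script, not the exact contents of the one in this image; only the sourced file path comes from the fix above.

```sh
#!/bin/bash
# /etc/init.d/spark-worker (excerpt) -- illustrative sketch, not the exact script in the image

# Source the Spark environment so SPARK_MASTER_IP / SPARK_MASTER_PORT are defined
# before the worker builds its spark://host:port master URL. Without this, both
# variables are empty and the worker fails with "Invalid master URL: spark://:".
. /etc/profile.d/spark-env.sh

# The worker is then started against the master URL assembled from those variables
# (the exact start command in the real script may differ):
exec "$SPARK_HOME/bin/spark-class" org.apache.spark.deploy.worker.Worker \
  "spark://${SPARK_MASTER_IP}:${SPARK_MASTER_PORT}"
```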
Although this is a very old post, here is an answer that worked for me. Usually the issue is SPARK_MASTER_HOST: set it to the hostname of the node you are using in $SPARK_HOME/conf/spark-env.sh.
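A minimal sketch of what that looks like; the hostname and port below are example values for illustration, so substitute the actual hostname of your master node:

```sh
# $SPARK_HOME/conf/spark-env.sh (excerpt) -- example values, replace with your own
export SPARK_MASTER_HOST=quickstart.cloudera   # hostname of the node running the master
export SPARK_MASTER_PORT=7077                  # default standalone master port
```

After editing the file, restart the master and worker so they pick up the new environment.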