No need to sc.stop with latest spark-cassandra-connector
If you add spark.cassandra.connection.host to spark-defaults.conf, you can skip the sc.stop step when using the spark-shell:
cat > spark-defaults.conf <<EOF
spark.master spark://node0.pc.datastax.com:7077
spark.executor.memory 512m
spark.eventLog.enabled true
spark.serializer org.apache.spark.serializer.KryoSerializer
# may also be a comma-separated list of nodes
spark.cassandra.connection.host node1.pc.datastax.com
EOF
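After restarting spark-shell, you can confirm the property was picked up from spark-defaults.conf (a quick check; it should print the host from the example file above):

scala> sc.getConf.get("spark.cassandra.connection.host")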
Now that spark.cassandra.connection.host is set, spark-shell adds it to the SparkContext automatically, which means the step below can be removed:
scala> sc.stop
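Instead, the default sc works against Cassandra right away. A minimal sketch, assuming a keyspace test with a table kv already exists (both names are placeholders):

scala> import com.datastax.spark.connector._
scala> val rdd = sc.cassandraTable("test", "kv")  // hypothetical keyspace/table
scala> rdd.count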
Finally, the old cassandra.connection.host property is no longer valid; it has been replaced by the spark.cassandra.connection.host property.
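For code that builds its own SparkContext rather than relying on spark-defaults.conf, the rename looks like this (a sketch; the app name and host are placeholders):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("example")  // placeholder app name
  // the old key "cassandra.connection.host" no longer works; use the spark.* form:
  .set("spark.cassandra.connection.host", "node1.pc.datastax.com")
val sc = new SparkContext(conf)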
-Todd
Cool! Will test today. Thanks!