spark-ec2
Submitting to EC2 cluster
I'm surprised that I wasn't able to find `spark-submit` anywhere on the master.
What are other folks doing to submit to Spark when using spark-ec2? Using an external system with its own Spark package to remotely `spark-submit`? How would that work for code deployed and disseminated across the cluster?
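For reference, the workflow I had in mind is sketched below; the key, cluster, jar, and class names are all placeholders, and `get-master` is just spark-ec2's action for printing the master's hostname:

```bash
# Look up the master's hostname for an existing cluster (placeholder names).
./spark-ec2 get-master my-cluster

# Copy the application jar to the master, then submit from there so the
# cluster's own Spark package and config are used.
scp -i ~/mykey.pem target/my-app.jar root@MASTER_HOST:/root/
ssh -i ~/mykey.pem root@MASTER_HOST \
    "/root/spark/bin/spark-submit --master spark://MASTER_HOST:7077 \
     --class com.example.MyApp /root/my-app.jar"
```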
`spark-submit` should be on the master in `/root/spark` if the setup completed successfully.
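A quick sanity check on the master (e.g. after a `./spark-ec2 -k mykey -i ~/mykey.pem login my-cluster`, with placeholder names) would be:

```bash
# Both of these should work after a successful setup.
ls /root/spark/bin/spark-submit
/root/spark/bin/spark-submit --version
```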
@shivaram This is good to hear, but I went through the process multiple times and `/root/spark` only has `/conf`.
I'll dig in some more to see if I come up with something, thanks! Will follow up shortly.
Confirmed a couple more times, and there are seemingly no errors on my end. If this isn't an issue for anyone else, any tips for figuring out what's going on here?
Oh, this wasn't loud enough in the logs:
```
Initializing spark
--2017-03-29 19:05:47-- http://s3.amazonaws.com/spark-related-packages/spark-1.6.2-bin-hadoop1.tgz
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.1.75
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.1.75|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2017-03-29 19:05:47 ERROR 404: Not Found.
ERROR: Unknown Spark version
spark/init.sh: line 137: return: -1: invalid option
return: usage: return [n]
Unpacking Spark
tar (child): spark-*.tgz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
rm: cannot remove `spark-*.tgz': No such file or directory
mv: missing destination file operand after `spark'
```
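If I'm reading spark/init.sh right, it builds the tarball name from the Spark version plus the Hadoop major version, and no hadoop1 build of 1.6.2 exists in that S3 bucket, so the wget 404s and everything downstream falls over. A simplified sketch of that logic (the variable names here are mine, not the script's):

```bash
# Simplified sketch of the download step in spark/init.sh
# (variable names are illustrative, not the script's actual ones).
SPARK_VERSION=1.6.2
HADOOP_MAJOR_VERSION=1   # spark-ec2's default

case "$HADOOP_MAJOR_VERSION" in
  1)    PACKAGE="spark-${SPARK_VERSION}-bin-hadoop1.tgz" ;;
  2)    PACKAGE="spark-${SPARK_VERSION}-bin-cdh4.tgz" ;;
  yarn) PACKAGE="spark-${SPARK_VERSION}-bin-hadoop2.4.tgz" ;;
esac

# No hadoop1 build of 1.6.2 was ever published, so this 404s and the
# later tar/rm/mv steps fail on the missing tarball.
wget "http://s3.amazonaws.com/spark-related-packages/${PACKAGE}"
```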
I read in the docs that we can specify the Spark package. Is it required?
Bumping this. Willing to push an update to make this required, if the above is the expected behavior when not specifying a repo URL or version.
I think this is a specific problem with Hadoop version 1 and Spark 1.6.2. Can you try passing the Hadoop version as 2 or yarn and see if it works?
To be clear, I've been getting past this by specifying a commit hash, which I prefer anyhow. But yes, I will give this a try to provide some feedback. Thanks!
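For reference, the commit-hash route looks roughly like this; the hash, key, and cluster names below are placeholders:

```bash
# --spark-version also accepts a git commit hash when paired with
# --spark-git-repo (placeholder hash and names).
./spark-ec2 -k mykey -i ~/mykey.pem \
    --spark-git-repo=https://github.com/apache/spark \
    --spark-version=0123456789abcdef0123456789abcdef01234567 \
    launch my-cluster
```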
Adding `--hadoop-major-version 2` to `launch` fixed it.
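For anyone else hitting this, the fix amounts to something like the following (key and cluster names are placeholders):

```bash
./spark-ec2 -k mykey -i ~/mykey.pem \
    --spark-version=1.6.2 \
    --hadoop-major-version=2 \
    launch my-cluster
```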
Anything we should do to circumvent this in code and/or document it? Feel free to close if not.
I think it would be great if we could change the default so it isn't the failure case. Can you send a PR changing the default Hadoop version to either `2` or `yarn`?
You got it. Busy next few days but will follow through.
Will also include some documentation on the use of `--hadoop-major-version`, which is seemingly missing from the README.
Thanks again.