
Submitting to EC2 cluster

Open · cantide5ga opened this issue 7 years ago · 10 comments

I'm surprised that I wasn't able to find spark-submit anywhere on the master.

What are other folks doing to submit to Spark when using spark-ec2? Using an external system with its own Spark package to spark-submit remotely? How would that work for code deployed and disseminated across the cluster?

cantide5ga avatar Mar 29 '17 18:03 cantide5ga

spark-submit should be on the master under /root/spark if the setup completed successfully.
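
For example, after SSHing to the master (e.g. via the spark-ec2 login action), something along these lines should work; the application jar and class name here are just placeholders:

cd /root/spark
./bin/spark-submit \
  --master spark://<master-hostname>:7077 \
  --class com.example.MyApp \
  /root/my-app.jar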

shivaram avatar Mar 29 '17 18:03 shivaram

@shivaram This is good to hear, but I've gone through the process multiple times and /root/spark only contains conf/.

I'll dig in some more to see if I come up with something, thanks! Will follow up shortly.

cantide5ga avatar Mar 29 '17 18:03 cantide5ga

I've confirmed a couple more times and there are seemingly no errors on my end. If this isn't an issue for anyone else, any tips for figuring out what is going on here?

cantide5ga avatar Mar 29 '17 19:03 cantide5ga

oh, this wasn't loud enough in the logs:

Initializing spark
--2017-03-29 19:05:47--  http://s3.amazonaws.com/spark-related-packages/spark-1.6.2-bin-hadoop1.tgz
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.1.75
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.1.75|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2017-03-29 19:05:47 ERROR 404: Not Found.

ERROR: Unknown Spark version
spark/init.sh: line 137: return: -1: invalid option
return: usage: return [n]
Unpacking Spark
tar (child): spark-*.tgz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
rm: cannot remove `spark-*.tgz': No such file or directory
mv: missing destination file operand after `spark'

I read in the docs that we can specify the Spark package. Is it required?

cantide5ga avatar Mar 29 '17 19:03 cantide5ga

I read in the docs that we can specify the Spark package. Is it required?

Bumping this. I'm willing to push an update to make this required if the above is the expected behavior when not specifying a repo URL or version.

cantide5ga avatar Apr 11 '17 20:04 cantide5ga

I think this is a problem specific to Hadoop version 1 and Spark 1.6.2. Can you try passing the Hadoop version as 2 or yarn and see if it works?

shivaram avatar Apr 12 '17 07:04 shivaram

To be clear, I've been getting past this by specifying a commit hash, which I prefer anyhow. But yes, I will give this a try and provide some feedback. Thanks!
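
For anyone hitting the same thing, specifying a commit at launch looks roughly like this (the key pair, identity file, repo, commit hash, and cluster name are placeholders):

./spark-ec2 --key-pair=my-keypair --identity-file=my-keypair.pem \
  --spark-version=<git-commit-hash> \
  --spark-git-repo=https://github.com/apache/spark \
  launch my-cluster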

cantide5ga avatar Apr 13 '17 11:04 cantide5ga

adding --hadoop-major-version 2 to launch fixed it.
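
For reference, the full launch invocation looked roughly like this (key pair, identity file, region, and cluster name are placeholders):

./spark-ec2 --key-pair=my-keypair --identity-file=my-keypair.pem \
  --region=us-east-1 \
  --hadoop-major-version=2 \
  launch my-cluster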

Anything we should do to either work around this in code and/or document it? Feel free to close if not.

cantide5ga avatar Apr 19 '17 21:04 cantide5ga

I think it would be great if we could change the default so it isn't the failure case. Can you send a PR changing the default Hadoop version to either 2 or yarn?
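
A sketch of what that change might look like in spark_ec2.py, assuming the option is declared with optparse as it is today (the exact help text and surrounding code here are assumptions, not copied from the source):

# spark_ec2.py, sketch only; the real option declaration may differ slightly
parser.add_option(
    "--hadoop-major-version", default="2",  # previously default="1"
    help="Major version of Hadoop to use: 1, 2, or yarn (default: %default)")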

shivaram avatar Apr 20 '17 00:04 shivaram

You got it. Busy next few days but will follow through.

I'll also include some documentation on the use of --hadoop-major-version, which is seemingly missing from the README.

Thanks again.

cantide5ga avatar Apr 20 '17 03:04 cantide5ga