Shivaram Venkataraman

Results 87 comments of Shivaram Venkataraman

I am not sure the exception is due to a Scala version mismatch (does this happen when you run the client on the master machine?). Supporting scala-2.11 is a necessity...

We can support it. Would you like to open a PR?

I see. Those do require more changes, including changes to the AMI and Hadoop scripts. Unfortunately I don't have time to try out the changes right now.

+1 to what @nchammas said. We unfortunately do not have bandwidth to create new AMIs / update spark-ec2 to match the Spark releases.

I'm not really familiar with what needs to be done to make Spark use Python 3. cc @nchammas who might know more.

Thanks @nchammas - The third option of adding a `run-command` is probably the easiest to do, as it should be just a code change. The question of AMIs...

I see - What I was looking at are the old Packer scripts that are part of the issue you linked above [1,2]. Certainly those have a large number of...

Yes - the contents of the EC2 directory in Spark are now in the root of this repository. Would you be interested in opening a PR updating the documentation?

`spark-submit` should be on the master under `/root/spark` if the setup completed successfully.
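A sketch of how this looks in practice, assuming a cluster launched with spark-ec2; the key name, key path, and cluster name below are placeholders, not values from the thread:

```shell
# Hypothetical example: log in to the cluster master via spark-ec2,
# then run spark-submit from its install location under /root/spark.
./spark-ec2 -k mykey -i ~/.ssh/mykey.pem login my-spark-cluster

# Once on the master:
/root/spark/bin/spark-submit --master spark://$(hostname):7077 my_app.py
```

If `/root/spark` is missing, the setup step likely failed partway and the launch logs are the place to look.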

I think this is a problem specific to Hadoop version 1 with Spark 1.6.2. Can you try passing the Hadoop version as 2 or yarn and see if it works?
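A minimal sketch of passing the Hadoop version at launch time via spark-ec2's `--hadoop-major-version` flag; the key details and cluster name are placeholders:

```shell
# Hypothetical launch command; mykey, the key path, and the cluster
# name are placeholders. --hadoop-major-version accepts 1, 2, or yarn.
./spark-ec2 -k mykey -i ~/.ssh/mykey.pem \
  --spark-version=1.6.2 \
  --hadoop-major-version=2 \
  launch my-spark-cluster
```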