Venkatakrishnan Sowrirajan


Can you try running `aws s3 ls s3://public-qubole/lambda/spark-2.1.0-bin-spark-lambda-2.1.0.tgz`? I tried with a different set of keys and the copy worked.

Alright, that was an issue on our side; the key's owner didn't allow the public to do GET operations. I think it should be fine now; we changed the owner permissions....

Hey @webroboteu, I remember facing this issue during its development. I wanted to get back to this issue and fix it, but if you enable s3a fast upload it should...
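For context, s3a fast upload can be switched on through a Hadoop property forwarded by Spark; a minimal sketch, assuming a Hadoop build that ships the `fs.s3a.fast.upload` option:

```
# Enable s3a fast upload; spark.hadoop.* properties are forwarded
# to the underlying Hadoop configuration.
spark-shell --conf spark.hadoop.fs.s3a.fast.upload=true
```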

Right. But you can just compile with the existing open-source Hadoop 2.6.0 version and copy the hadoop-aws jar into your binary afterwards; that should work as well. This...
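A sketch of that jar swap, assuming a Spark 2.x layout with a `jars/` directory; the aws-java-sdk version paired here with hadoop-aws 2.6.0 is an assumption, so check the jar's actual dependency:

```
# Fetch the hadoop-aws jar built against Hadoop 2.6.0 plus the AWS SDK it
# depends on, then drop both into the Spark distribution's jars/ directory.
wget https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.6.0/hadoop-aws-2.6.0.jar
wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk/1.7.4/aws-java-sdk-1.7.4.jar
cp hadoop-aws-2.6.0.jar aws-java-sdk-1.7.4.jar "$SPARK_HOME/jars/"
```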

spark-shell (the Spark driver) has to be brought up on an AWS EC2 instance or in an ECS container that is in the same VPC as the Lambda function; you also need to create the...

I think the issue is that LambdaSchedulerBackend is not created; you have to pass another config, `--conf spark.master=lambda://` or something like that. This is the code (`spark-on-lambda/core/src/main/scala/org/apache/spark/scheduler/cluster/LambdaSchedulerBackend.scala`) that talks to...
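A minimal sketch of that invocation; the `lambda://` scheme comes from the comment above, but whatever follows the scheme (if anything) is an assumption, so check LambdaSchedulerBackend for the exact master URL format:

```
# Point the driver at the Lambda scheduler backend instead of local/YARN.
spark-shell --conf spark.master=lambda://
```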

Nice. I think the executors still haven't registered with the Spark driver. Please check the CloudWatch logs; those should have some info, I believe.
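One way to pull those executor logs from the CLI, assuming the executors run in a Lambda function with the default log group naming (`<function-name>` is a placeholder):

```
# Fetch recent executor output from the Lambda function's CloudWatch log group.
aws logs filter-log-events \
  --log-group-name /aws/lambda/<function-name> \
  --limit 50
```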

Hey @DimitarKum, thanks for trying this out. It's pending on my side to resolve this issue; I have to update the documentation. Last time I discussed this with @faromero, these...

@habemusne Thanks for trying Spark on Lambda out. I understand that in its current form it's not easy to set up and try out. Some time back @faromero also had...

Quickly checking, I found that these two configs have to be set with the access key and secret key:

```
spark.hadoop.fs.s3n.awsAccessKeyId
spark.hadoop.fs.s3n.awsSecretAccessKey
```

Maybe you can check if it fails in...
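A sketch of passing those two configs on the command line; the placeholder values are, of course, not real credentials:

```
# Supply the s3n credentials as Hadoop properties via Spark.
spark-shell \
  --conf spark.hadoop.fs.s3n.awsAccessKeyId=<your-access-key-id> \
  --conf spark.hadoop.fs.s3n.awsSecretAccessKey=<your-secret-access-key>
```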