keystone
Move SparkContext -> SparkSession
We are using SparkContext throughout loaders and example pipelines. It makes sense to move these to SparkSession, given that we're relying on Spark 2.0.
Thinking through this change today, I'm not so sure it's necessary at the moment. `SparkSession` is part of the SparkSQL namespace and is primarily designed to support `Dataset` access. We need it in the Amazon pipeline because we're using SparkSQL's JSON decoding to load up JSON files, but we then immediately convert the result to an RDD.
To really jump on the Spark 2.0 train, I would recommend the following:
1. Update all loaders to take a `SparkSession` and return a `Dataset`.
2. Modify the pipeline, transformer, and estimator interfaces to take `Dataset[T]` as well as `RDD[T]`, and do so in a way that takes advantage of the codegen features of Spark 2.
3. Benchmark and make sure we're not giving anything up with this approach, particularly when it comes to cache management and dealing with dense numerical data, a common use case for us.
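To make the second item concrete, here is one hypothetical shape such an interface could take (this is a sketch, not the existing KeystoneML API; `DualTransformer` and its method names are invented for illustration):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Dataset, Encoder}

// Hypothetical transformer interface that accepts both RDD[A] and
// Dataset[A]. The Dataset overload stays in Dataset-land so Spark's
// planner and codegen (Tungsten) can optimize the mapped function,
// rather than falling back to opaque RDD operations.
abstract class DualTransformer[A, B : Encoder] extends Serializable {
  // Per-record logic, supplied by concrete transformers.
  def apply(in: A): B

  // Existing RDD path.
  def apply(in: RDD[A]): RDD[B] = in.map(a => apply(a))

  // New Dataset path: map keeps the result as a Dataset[B].
  def apply(in: Dataset[A]): Dataset[B] = in.map(a => apply(a))
}
```

Whether the Dataset path actually benefits from codegen for dense numerical payloads is exactly what item 3's benchmarking would need to establish.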
For the sake of consistency, it would be nice to have the Amazon Loader/Pipeline deal with `SparkContext`s rather than `SparkSession`s. Unfortunately, this can't easily happen internally to the loader, because there is no public interface for creating a `SparkSession` from an existing `SparkContext`.
I'm happy to leave this issue open, but I will probably assign a 0.5.0 milestone to it, since I'd rather see items 2 and 3 get handled along with it.
Let me know what you think @tomerk @shivaram
Yeah I think that sounds reasonable.