codeflare
Support better integration between Ray and Spark in passing ObjectRef without actually moving data
Overview
As a Codeflare user, I want to use Ray and Spark alternately to execute my end-to-end ML jobs: some steps may run more efficiently on Ray, others on Spark. The plasma store in Ray appears to provide an efficient way to share ObjectRefs between Ray and Spark. Currently, the RayDP project supports the Spark-to-Ray direction in a limited way, by running Spark as a Ray actor. However, ObjectRefs cannot easily be shared in both directions, Spark-to-Ray and Ray-to-Spark.
Acceptance Criteria
- From Ray to Spark: a Pandas dataframe created by remote tasks in local Ray plasma stores can be passed via `ObjectRef` to the Spark driver to create a Spark dataframe containing a list of `ObjectRef`s (see the sketch after this list).
- Once that is done, on the Spark side, the Spark executors can access the original Pandas dataframes locally.
- From Spark to Ray: Spark preserves `groupby()` partition semantics and writes these partitions to the plasma store, instead of using `hashPartition()`.
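
A minimal sketch of the Ray side of the first criterion, assuming only that Ray and pandas are installed; the partition count, function name, and column names are illustrative, not part of any existing API. Today the refs must be materialized on the driver with `ray.get()`, which is exactly the data movement this issue wants to avoid:

```python
import pandas as pd
import ray

ray.init()

@ray.remote
def make_partition(i: int) -> pd.DataFrame:
    # Each task builds one Pandas partition; the result is stored in the
    # plasma store of whichever node ran the task.
    return pd.DataFrame({"id": range(i * 10, (i + 1) * 10), "part": i})

# A list of ObjectRefs that point at the partitions without moving any data.
refs = [make_partition.remote(i) for i in range(4)]

# Without the integration proposed here, handing these partitions to Spark
# means materializing them all on the driver first:
pdfs = ray.get(refs)
```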
Questions
- In RayDP, only the driver node knows about and can access Ray; the PySpark executors do not. This prevents the PySpark executors from accessing the Ray plasma store, so it is not possible to seamlessly pass `ObjectRef`s between Ray workers and Spark executors (see the sketch below).
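
A sketch of why this matters, using RayDP's `raydp.init_spark()` entry point; the executor-side lambda is illustrative. Only the driver process below holds a Ray connection, so any `ray.get()` issued inside a Spark task would fail:

```python
import ray
import raydp

ray.init()

# RayDP launches Spark on the Ray cluster; only the *driver* is Ray-aware.
spark = raydp.init_spark(
    app_name="objectref-demo",
    num_executors=2,
    executor_cores=1,
    executor_memory="1GB",
)

# The function passed to map() runs in PySpark worker processes that are
# not connected to Ray: calling ray.get(some_ref) in here would raise,
# because there is no Ray context on the worker.
squares = spark.sparkContext.parallelize(range(4)).map(lambda x: x * x)
print(squares.collect())

raydp.stop_spark()
```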
Assumptions
- Ray and Spark can share data seamlessly by exchanging `ObjectRef`s between Ray workers and Spark executors.
Reference
- I have opened an issue on the RayDP repo: https://github.com/oap-project/raydp/issues/164
@klwuibm Suggest that you fill in the rest of the issue template? :)
Thanks @klwuibm !
This feature can be supported via Ray Datasets (currently in alpha, with some methods still missing, such as `ray.data.from_spark()` and `ds.to_spark()`). For example, to move data from Spark to Pandas, one can do `ds = ray.data.from_spark()` followed by `pdf = ds.to_pandas()`. Similarly, from Pandas to Spark, one can do `ds = ray.data.from_pandas()` followed by `sdf = ds.to_spark()`.
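
A sketch of that round trip once the missing methods land, assuming `from_spark()` takes the Spark dataframe and `to_spark()` takes the SparkSession (as in later Ray releases), and using RayDP to provide a Ray-backed SparkSession:

```python
import pandas as pd
import ray
import raydp

ray.init()
spark = raydp.init_spark(
    app_name="datasets-demo",
    num_executors=1,
    executor_cores=1,
    executor_memory="1GB",
)

# Pandas -> Ray Dataset -> Spark: the resulting Spark dataframe is backed
# by blocks already sitting in the Ray object store.
pdf = pd.DataFrame({"x": [1, 2, 3]})
ds = ray.data.from_pandas(pdf)
sdf = ds.to_spark(spark)

# Spark -> Ray Dataset -> Pandas: the reverse direction.
ds2 = ray.data.from_spark(sdf)
print(ds2.to_pandas())
```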