Xianyang Liu
Hi @tkram01, could you try the following?
```python
import ray
import raydp

ray.init(address='auto')

@ray.remote
class PySparkDriver:
    def __init__(self):
        assert ray.is_initialized()
        self.spark = raydp.init_spark(
            app_name='RayDP example',
            num_executors=10,
            executor_cores=1,
            executor_memory="2GB")
```
...
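For reference, here is a completed version of that sketch; the `run` and `stop` methods and the final driver calls are assumptions added for illustration, not part of the original snippet.
```python
import ray
import raydp

ray.init(address='auto')

@ray.remote
class PySparkDriver:
    def __init__(self):
        # Initialize a Spark session backed by Ray from inside this driver actor.
        assert ray.is_initialized()
        self.spark = raydp.init_spark(
            app_name='RayDP example',
            num_executors=10,
            executor_cores=1,
            executor_memory="2GB")

    def run(self):
        # Hypothetical workload: build a small DataFrame and count its rows.
        df = self.spark.range(0, 1000)
        return df.count()

    def stop(self):
        # Tear down the Spark session from the same process that created it.
        raydp.stop_spark()


driver = PySparkDriver.remote()
print(ray.get(driver.run.remote()))
ray.get(driver.stop.remote())
```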
@tkram01 Do you get the same errors on Ray 1.1.0?
Very strange. I will take a look at this.
Hi @tkram01, I have submitted a PR upstream to Ray (https://github.com/ray-project/ray/pull/14567) to fix this. Could you give it a try?
Do you want the Spark executors to request those custom resources?
What input and output do we want? And for prediction, should that be handled by the model serving part?
Hi @yanivg10, do you mean you have a cluster with 20 workers and each worker has 80 cores? RayDP will occupy `num_executors * executor_cores` CPUs on the cluster after you...
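To make that accounting concrete, here is a minimal sketch; the executor counts, core counts, and app name below are made-up examples, not values from the original thread.
```python
import ray
import raydp

ray.init(address='auto')

# With these settings RayDP reserves num_executors * executor_cores = 10 * 4 = 40
# CPUs from the Ray cluster for Spark executors, regardless of how many CPUs each
# physical worker node has.
spark = raydp.init_spark(
    app_name='cpu-accounting-example',
    num_executors=10,
    executor_cores=4,
    executor_memory='4GB')

# Remaining CPUs visible to Ray after the Spark executors start.
print(ray.available_resources().get('CPU'))

raydp.stop_spark()
ray.shutdown()
```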
You can refer to https://docs.ray.io/en/master/cluster/quickstart.html
Hi @yanivg10, that is just the exception being printed; the actual exception has already been caught, so you can ignore it. The Ray community is working on a fix.
Hi @Hoeze, Spark has its own in-memory data format (called InternalRow). As far as I know, each partition of a Modin DataFrame is represented by a pandas DataFrame. So the from...
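As a rough illustration of the format boundary being described, the sketch below moves a tiny hand-made DataFrame from Spark's internal row representation into pandas and then Modin; the column names and values are arbitrary examples.
```python
import ray
import raydp
import modin.pandas as mpd

ray.init(address='auto')
spark = raydp.init_spark(app_name='format-example', num_executors=2,
                         executor_cores=1, executor_memory='1GB')

# Data lives in Spark's InternalRow format while it is a Spark DataFrame.
spark_df = spark.createDataFrame([(1, 'a'), (2, 'b')], ['id', 'value'])

# Converting to pandas materializes the rows in pandas format on the driver ...
pandas_df = spark_df.toPandas()

# ... and a Modin DataFrame then partitions that pandas data across Ray workers.
modin_df = mpd.DataFrame(pandas_df)
print(modin_df.head())

raydp.stop_spark()
ray.shutdown()
```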