incubator-uniffle
[Improvement] Reading shuffle data fails because reading the index file fails
Every commit call must succeed during sendCommit.
Currently, this means that if one shuffle server dies, the application fails.
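For illustration, here is a minimal sketch of the all-or-nothing behavior described above. It is not the actual Uniffle client code; ShuffleServerClient and commitAll are hypothetical names. The point is that a single failed sendCommit aborts the whole job.

```java
import java.util.List;

public class CommitSketch {
    // Hypothetical RPC wrapper, stands in for the real shuffle server client.
    interface ShuffleServerClient {
        boolean sendCommit(int shuffleId);
    }

    // Every assigned shuffle server must acknowledge the commit, so a single
    // dead server fails the whole application.
    static void commitAll(List<ShuffleServerClient> servers, int shuffleId) {
        for (ShuffleServerClient server : servers) {
            if (!server.sendCommit(shuffleId)) {
                throw new RuntimeException("Commit failed on a shuffle server");
            }
        }
    }

    public static void main(String[] args) {
        // Two healthy servers and one "dead" server whose commit fails:
        List<ShuffleServerClient> servers =
            List.of(id -> true, id -> true, id -> false);
        commitAll(servers, 0); // throws, failing the whole application
    }
}
```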
We recommend users use StorageType MEMORY_LOCALFILE_HDFS or MEMORY_LOCALFILE. With those storage types, the application won't commit data.
So, should we remove LOCALFILE?
It's useful for test code. And if we want to reduce shuffle server state, committing data is a good way to do it, although the LOCALFILE storage type doesn't have a good production practice. So I think we shouldn't remove it for now.
I found that if MEMORY_LOCALFILE is used, finishShuffle will not be called and the buffer on the server side may not be flushed in time, so the reader fails because reading the index file fails, as follows:
Error happened when get shuffle index for appId[application_xxx], shuffleId[3], partitionId[1], Can't find folder /HDATA/2/rssdata/application_xxx/3/1-1
@xianjingfeng With the current implementation, writing shuffle data to N shuffle servers can handle the case where a shuffle server fails, but it costs N times the storage. Users can make that choice.
We had set spark.rss.data.replica.write=2 and spark.rss.data.replica=3, but today we found that none of the shuffle servers for a partition had flushed in time, and we have seen this in two applications. It may be easy to hit when our cluster is not under high load.
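For reference, a minimal sketch of how such replica settings could be applied on the Spark client side. The class and main method are illustrative only; the spark.rss.* keys and values are the ones quoted above.

```java
import org.apache.spark.SparkConf;

public class RssReplicaConfExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .set("spark.rss.data.replica", "3")         // total replicas per partition
            .set("spark.rss.data.replica.write", "2");  // writes that must succeed
        System.out.println(conf.toDebugString());
    }
}
```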
What kind of storage type is used in your case?
MEMORY_LOCALFILE
@frankliee Can you clarify how to configure spark.rss.data.replica.write and spark.rss.data.replica.read?
These configs come from the quorum protocol.
rss.data.replica is the default number of replicas for a partition.
rss.data.replica.write is the minimum number of replicas to which the writer must successfully write metadata and data.
rss.data.replica.read is the minimum number of replicas from which the reader must successfully read metadata (data can be read from just one replica).
So the recommended values are (1,1,1) and (3,2,2).
These are client-side configs and do not change server-side state. Flushing is controlled by server configs, such as memory capacity and watermarks.
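As a rough illustration of the quorum idea above (a simplified sketch, not the Uniffle implementation; writeSucceeded is a hypothetical helper): a write is considered successful once at least rss.data.replica.write of the rss.data.replica servers acknowledge it, so one dead server no longer fails the job.

```java
public class QuorumSketch {
    // Returns true if at least `replicaWrite` of the replicas acknowledged the write.
    static boolean writeSucceeded(boolean[] acks, int replicaWrite) {
        int ok = 0;
        for (boolean ack : acks) {
            if (ack) ok++;
        }
        return ok >= replicaWrite;
    }

    public static void main(String[] args) {
        // replica = 3, replica.write = 2: two acks out of three is enough.
        boolean[] acks = {true, true, false};
        System.out.println(writeSucceeded(acks, 2)); // true
    }
}
```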
So, is there a problem?
Fixed by #213