[Improvement] Reading shuffle data fails because reading the index file fails

Open · xianjingfeng opened this issue 2 years ago · 11 comments

Every commit call must succeed during sendCommit now; this will cause the application to fail if one shuffle server is dead.
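For illustration, here is a minimal sketch (hypothetical names, not the real Uniffle client code) of the behaviour described above: the commit succeeds only if every assigned shuffle server acknowledges it, so one dead server fails the whole application.

```java
import java.util.List;

class CommitAllOrFailSketch {
    // Illustrative stand-in for a shuffle server RPC client; not the real Uniffle API.
    interface ShuffleServerClient {
        boolean sendCommit(String appId, int shuffleId);
    }

    // Every commit call must succeed; there is no fallback to the remaining servers,
    // so a single dead shuffle server aborts the whole application.
    static void commitAll(List<ShuffleServerClient> servers, String appId, int shuffleId) {
        for (ShuffleServerClient server : servers) {
            if (!server.sendCommit(appId, shuffleId)) {
                throw new IllegalStateException(
                    "sendCommit to a shuffle server failed, application will fail");
            }
        }
    }
}
```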

xianjingfeng avatar Jul 29 '22 13:07 xianjingfeng

We recommend that users use StorageType MEMORY_LOCALFILE_HDFS or MEMORY_LOCALFILE. The application won't commit data.
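For example, assuming the client exposes the storage type through spark.rss.storage.type (the exact key may differ between Uniffle versions), a minimal Spark-side sketch:

```java
import org.apache.spark.SparkConf;

public class RssStorageTypeExample {
    public static void main(String[] args) {
        // Assumed config key; check the Uniffle Spark client docs for your release.
        SparkConf conf = new SparkConf()
            .set("spark.rss.storage.type", "MEMORY_LOCALFILE_HDFS");
        // With MEMORY_LOCALFILE_HDFS (or MEMORY_LOCALFILE) the application does not
        // go through the commit step discussed in this issue.
    }
}
```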

jerqi avatar Jul 29 '22 15:07 jerqi

We recommend that users use StorageType MEMORY_LOCALFILE_HDFS or MEMORY_LOCALFILE. The application won't commit data.

So, should we remove LOCALFILE?

xianjingfeng avatar Jul 30 '22 03:07 xianjingfeng

We recommend that users use StorageType MEMORY_LOCALFILE_HDFS or MEMORY_LOCALFILE. The application won't commit data.

So, should we remove LOCALFILE?

It's useful for test code. And if we want to reduce our shuffle server state, committing data is a good way to do it, although the LOCALFILE StorageType doesn't have a well-established practice. So I think we shouldn't remove it currently.

jerqi avatar Jul 30 '22 04:07 jerqi

I found that if MEMORY_LOCALFILE is used, finishShuffle will not be called, and the buffer on the server side may not be flushed in time, so the reader will fail because reading the index file fails, as follows: Error happened when get shuffle index for appId[application_xxx], shuffleId[3], partitionId[1], Can't find folder /HDATA/2/rssdata/application_xxx/3/1-1
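To make the failure mode concrete, a hypothetical sketch (path layout and names are illustrative, not Uniffle internals): the partition folder only appears once the server flushes its buffer, so a reader that arrives before the flush hits exactly this error.

```java
import java.io.File;
import java.io.FileNotFoundException;

class ShuffleIndexReadSketch {
    // The partition folder (e.g. <base>/<appId>/<shuffleId>/<partitionRange>) is only
    // created when the server flushes its in-memory buffer to local disk. Without
    // finishShuffle the flush may happen late, so the folder can be missing when the
    // reader asks for the shuffle index.
    static File locateIndexFolder(String base, String appId, int shuffleId, String partitionRange)
            throws FileNotFoundException {
        File folder = new File(base, appId + "/" + shuffleId + "/" + partitionRange);
        if (!folder.exists()) {
            // Mirrors the reported error: "Can't find folder .../application_xxx/3/1-1"
            throw new FileNotFoundException("Can't find folder " + folder.getPath());
        }
        return folder;
    }
}
```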

xianjingfeng avatar Aug 03 '22 07:08 xianjingfeng

I found that if MEMORY_LOCALFILE is used, finishShuffle will not be called, and the buffer on the server side may not be flushed in time, so the reader will fail because reading the index file fails, as follows: Error happened when get shuffle index for appId[application_xxx], shuffleId[3], partitionId[1], Can't find folder /HDATA/2/rssdata/application_xxx/3/1-1

@xianjingfeng With the current implementation, writing shuffle data to N shuffle servers can handle the case where a shuffle server fails, but it will cost N times the storage. Users can make that choice.

colinmjj avatar Aug 03 '22 07:08 colinmjj

We had set spark.rss.data.replica.write=2 and spark.rss.data.replica=3. But today we found that all the shuffle servers for a partition had not flushed in time, and we saw this in two applications. It may be easy to encounter when our cluster is not under high load.

xianjingfeng avatar Aug 03 '22 08:08 xianjingfeng

We had set spark.rss.data.replica.write=2 and spark.rss.data.replica=3. But today we found that all the shuffle servers for a partition had not flushed in time, and we saw this in two applications. It may be easy to encounter when our cluster is not under high load.

What kind of storage type is used in your case?

colinmjj avatar Aug 03 '22 08:08 colinmjj

What kind of storage type is used in your case?

MEMORY_LOCALFILE

xianjingfeng avatar Aug 03 '22 08:08 xianjingfeng

@frankliee could you clarify how to configure spark.rss.data.replica.write & spark.rss.data.replica.read?

colinmjj avatar Aug 03 '22 08:08 colinmjj

These configs come from the quorum protocol. rss.data.replica is the default number of replicas for a partition. rss.data.replica.write is the minimum number of replicas to which the writer must write metadata and data successfully. rss.data.replica.read is the minimum number of replicas from which the reader must read metadata successfully (the data can be read from only one replica). So the recommended values are (1,1,1) and (3,2,2).

These are client-side configs and will not change server-side state. Flushing is controlled by server-side configs, such as memory capacity and watermarks.
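For the recommended (3, 2, 2) setup, the client-side configuration would look roughly like this (key names taken from this thread; a sketch, not a complete job config):

```java
import org.apache.spark.SparkConf;

public class RssQuorumConfigExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            // Total number of replicas per partition.
            .set("spark.rss.data.replica", "3")
            // The writer must succeed on at least 2 replicas (metadata and data).
            .set("spark.rss.data.replica.write", "2")
            // The reader must read metadata from at least 2 replicas;
            // the data itself can come from a single replica.
            .set("spark.rss.data.replica.read", "2");
        // Since write + read > replica (2 + 2 > 3), the writer and reader quorums
        // always overlap, which is what the quorum protocol relies on.
    }
}
```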

frankliee avatar Aug 03 '22 11:08 frankliee

So, is there a problem?

xianjingfeng avatar Aug 04 '22 06:08 xianjingfeng

Fixed by #213

xianjingfeng avatar Jan 12 '23 07:01 xianjingfeng