incubator-uniffle
[Bug] Blocks read inconsistent: expected xxx blocks, actual xxx blocks
- If we set spark.rss.data.replica.write=2 and spark.rss.data.replica=3, data integrity cannot be guaranteed on any single shuffle server, right?
- But the method org.apache.uniffle.storage.handler.impl.LocalFileQuorumClientReadHandler#readShuffleData reads from only one shuffle server.
Which version did you use?

Did you set spark.rss.data.replica.read=2? It ensures the bitmap metadata of blocks is written to 2 servers. As long as the read client gets the metadata from 2 of the servers, it can check the integrity of data from any one server.
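The guarantee described above is the classic quorum-overlap condition. The sketch below is purely illustrative (the class and method names are hypothetical, not part of Uniffle's API): it shows why replica.write + replica.read > replica means every read quorum must overlap every write quorum on at least one server, so the reader always reaches a server holding the complete bitmap metadata.

```java
// Illustrative quorum arithmetic (hypothetical helper, NOT Uniffle's API).
// N = spark.rss.data.replica, W = spark.rss.data.replica.write,
// R = spark.rss.data.replica.read.
public class QuorumCheck {
    static boolean readSeesCompleteMetadata(int replica, int write, int read) {
        // Pigeonhole argument: the metadata lives on W of N servers and we
        // query R of N; an overlap is guaranteed iff W + R > N.
        return write + read > replica;
    }

    public static void main(String[] args) {
        // The configuration discussed in this issue: N=3, W=2, R=2.
        System.out.println(readSeesCompleteMetadata(3, 2, 2)); // prints "true"
        // With R=1 the reader might query only the server that missed the write.
        System.out.println(readSeesCompleteMetadata(3, 2, 1)); // prints "false"
    }
}
```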
> Do you set spark.rss.data.replica.read=2?

Yes.

> As long as the read client gets the metadata from 2 of the servers, it can check the integrity of data from any one server.

But this step seems to execute before readShuffleData.
> Which version did you use?

Internal version 0.5.0-snapshot.
The metadata is acquired in advance, but the data integrity check is executed only after all blocks have been fetched. In the current implementation, the client fetches from only "the first available" server to avoid extra read cost. When the data on that first server is damaged, the final check reports "read inconsistent".
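The fallback proposed later in this thread could look roughly like the sketch below. All names here (ReplicaReader, readWithFailover, the supplier-based server model) are hypothetical stand-ins, not Uniffle's actual handler: instead of checking integrity once at the end, the client verifies each replica's result and moves on to the next server when a replica is incomplete.

```java
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch of reading from the next replica when the first
// server's data is incomplete; Uniffle's real code path is
// LocalFileQuorumClientReadHandler#readShuffleData.
public class ReplicaReader {
    // Each "server" is modeled as a supplier of the block ids it returns.
    static List<Long> readWithFailover(List<Supplier<List<Long>>> servers,
                                       int expectedBlocks) {
        for (Supplier<List<Long>> server : servers) {
            List<Long> blocks = server.get();
            // Check integrity per replica instead of only once at the end:
            // if this replica is damaged or incomplete, try the next one.
            if (blocks.size() == expectedBlocks) {
                return blocks;
            }
        }
        throw new IllegalStateException(
            "Blocks read inconsistent: expected " + expectedBlocks
                + " blocks on every replica tried");
    }
}
```

As the thread notes, a real implementation would also need to skip blocks Spark has already consumed rather than re-reading whole replicas.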
I know, but the application will fail
This implementation feels a little unreasonable to me. Should we read from the next shuffle server when the data isn't complete?
> Should we read from the next shuffle server when the data isn't complete?

I am trying to do this, and I think it needs to be fixed together with #108.
I would be happy to review this PR. You should avoid fetching redundant blocks from the other server (because Spark has already consumed those blocks). RSS provides some skipping mechanisms for localfile and HDFS, but I'm worried about memory data. @jerqi
In my opinion, memory data should also have data-skipping ability, and our memory read process should be optimized.
Got it.
This will change the server's memory storage to add an "index" like HDFS.
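Such an index could be sketched as a blockId-to-segment map that lets a re-read after failover skip blocks the client already consumed, mirroring the skipping mechanisms the thread mentions for localfile and HDFS. All names below (MemoryBlockIndex, segmentsToRead) are hypothetical, not the server's real storage classes.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of an index over the server's in-memory shuffle data.
public class MemoryBlockIndex {
    // blockId -> {offset, length} into the in-memory buffer, insertion-ordered.
    private final Map<Long, long[]> index = new LinkedHashMap<>();

    void put(long blockId, long offset, long length) {
        index.put(blockId, new long[] {offset, length});
    }

    // Return only the segments for blocks the client has not consumed yet,
    // so a failover read does not fetch redundant data.
    Map<Long, long[]> segmentsToRead(Set<Long> consumedBlockIds) {
        Map<Long, long[]> remaining = new LinkedHashMap<>();
        for (Map.Entry<Long, long[]> e : index.entrySet()) {
            if (!consumedBlockIds.contains(e.getKey())) {
                remaining.put(e.getKey(), e.getValue());
            }
        }
        return remaining;
    }
}
```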
This problem should be discussed in another issue, and we should also have a simple design doc.
Closed by #276.