Support erasure codes in object service
Erasure codes can be implemented in containers with a REP 1 policy. A single replica makes little sense in terms of the netmap placement algorithm, so such a policy can signal to the node or the client that objects in this container are split with an erasure-coding scheme. Details of that scheme may be stored in container attributes.
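As a sketch of the "scheme in container attributes" idea, the attributes could carry the data and parity part counts. The attribute names below are hypothetical, not an actual NeoFS API:

```go
package main

import (
	"fmt"
	"strconv"
)

// ECScheme describes an erasure-coding scheme read from container
// attributes. Field and attribute names are illustrative assumptions.
type ECScheme struct {
	DataParts   int
	ParityParts int
}

// parseECScheme extracts the scheme from a container attribute map.
func parseECScheme(attrs map[string]string) (ECScheme, error) {
	var s ECScheme
	var err error
	if s.DataParts, err = strconv.Atoi(attrs["EC-Data-Count"]); err != nil {
		return s, fmt.Errorf("bad data part count: %w", err)
	}
	if s.ParityParts, err = strconv.Atoi(attrs["EC-Parity-Count"]); err != nil {
		return s, fmt.Errorf("bad parity part count: %w", err)
	}
	return s, nil
}

func main() {
	attrs := map[string]string{"EC-Data-Count": "4", "EC-Parity-Count": "2"}
	s, err := parseECScheme(attrs)
	fmt.Println(s, err)
}
```

A node seeing REP 1 plus such attributes would know to treat the container's objects as erasure-coded rather than simply under-replicated.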
The upload/download scheme will differ. During payload split, we create new objects with the actual payload and the parity data. Those objects may be linked the same way they are linked now, with child links and the zero-object. All of these objects are stored in a single copy, as REP 1 prescribes via the object placement rules.
Doing this per regular object implies splitting it into many smaller parts plus parity, which can be done, but it raises questions:
- one node can have many disks, while we distribute objects per node (can be mitigated by running multiple nodes per machine)
- a perfect part-to-disk match cannot be achieved: multiple parts can end up on the same node/disk
- in this case a disk failure means object loss, and something has to reconstruct the lost parts
- expanding the cluster can't affect old objects, and it's unclear how new ones will be handled
It can also be inefficient for small (e.g. 1K) objects.
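To make the small-object concern concrete, a back-of-the-envelope comparison of raw stored bytes for a 1 KiB object under REP 3 versus an assumed 4+2 erasure scheme (numbers are illustrative and ignore per-object metadata, which dominates at this size):

```go
package main

import "fmt"

func main() {
	const objectSize = 1024.0 // 1 KiB payload

	// REP 3: three full copies of the payload.
	repStored := objectSize * 3

	// EC 4+2: four data parts plus two parity parts,
	// each part being 1/4 of the payload.
	ecStored := objectSize / 4 * 6

	fmt.Printf("REP 3:  %.0f bytes (%.1fx overhead)\n", repStored, repStored/objectSize)
	fmt.Printf("EC 4+2: %.0f bytes (%.1fx overhead)\n", ecStored, ecStored/objectSize)
	// EC wins on raw bytes (1.5x vs 3.0x), but each 256-byte part still
	// carries full object headers and placement cost, so for tiny objects
	// the fixed per-part overhead can erase the advantage.
}
```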