Error out if there's more than 10k parts
When pushing a very large derivation to the cache, it seems cachix may choose a chunk size that produces more than 10,000 parts. This causes a 400 response code from S3 once partNumber reaches 10001.
The S3 API specifies:
partNumber: Part number of part being uploaded. This is a positive integer between 1 and 10,000. Required: Yes
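For illustration, a rough sketch (not cachix code) of the arithmetic behind the failure; the NAR size and chunk size below are hypothetical:

```python
import math

S3_MAX_PARTS = 10_000  # S3 multipart uploads allow part numbers 1..10,000

def parts_needed(object_size_bytes: int, chunk_size_bytes: int) -> int:
    """Number of parts a fixed chunk size produces for an object."""
    return math.ceil(object_size_bytes / chunk_size_bytes)

# Hypothetical example: a ~100 GiB compressed NAR pushed with an 8 MiB chunk size
nar_size = 100 * 1024**3
chunk_size = 8 * 1024**2
print(parts_needed(nar_size, chunk_size))                  # 12800
print(parts_needed(nar_size, chunk_size) > S3_MAX_PARTS)   # True: S3 rejects part 10001 with a 400
```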
@cmoog, how large is the derivation you're uploading?
We bumped the default to 32MiB in 1.7.2. That gets you a max of 320GiB per derivation. You can also increase the --chunk-size up to 5GiB and adjust num-concurrent-chunks to control memory usage.
It'd be nice to have a warning for NARs that would exceed the part limit, but we compress on-the-fly and have no way of knowing the final size of the compressed NAR, unless we just set an arbitrary limit for uncompressed NARs.
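To make the ceiling concrete, a minimal sketch of how chunk size bounds the largest object that fits within 10,000 parts, using the figures quoted above; the 400 GiB NAR is hypothetical:

```python
S3_MAX_PARTS = 10_000
MiB, GiB = 1024**2, 1024**3

def max_object_size(chunk_size_bytes: int) -> int:
    """Largest object that fits within 10,000 parts at a fixed chunk size."""
    return chunk_size_bytes * S3_MAX_PARTS

def min_chunk_size(object_size_bytes: int) -> int:
    """Smallest chunk size (bytes) that keeps an object within 10,000 parts."""
    return -(-object_size_bytes // S3_MAX_PARTS)  # ceiling division

print(max_object_size(32 * MiB) / GiB)   # 312.5 GiB -- the ~320 GiB ceiling mentioned above
print(max_object_size(5 * GiB) / GiB)    # 50000.0 GiB at the 5 GiB maximum chunk size
print(min_chunk_size(400 * GiB) / MiB)   # ~41.0 MiB for a hypothetical 400 GiB compressed NAR
```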
The uncompressed derivation was 215.94 GiB. That's good to know regarding version 1.7.2; I encountered this error with 1.6.1.
Perhaps just a nice error message when partNumber exceeds 10,000 would be helpful; it could include a suggestion to use the flags you mentioned for controlling chunk size.
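Since the compressed size isn't known up front, such a check would have to run as parts are produced during the streaming upload. A minimal sketch of the idea in Python (hypothetical upload_parts/upload_part names, not cachix's actual code, which is Haskell):

```python
S3_MAX_PARTS = 10_000

class TooManyPartsError(RuntimeError):
    """Raised before S3 would reject part 10,001 with an opaque 400."""

def upload_parts(chunks, upload_part):
    """Stream compressed chunks to a multipart upload, failing fast with a
    clear message instead of surfacing S3's 400 at partNumber = 10001."""
    for part_number, chunk in enumerate(chunks, start=1):
        if part_number > S3_MAX_PARTS:
            raise TooManyPartsError(
                "NAR requires more than 10,000 multipart-upload parts; "
                "retry the push with a larger --chunk-size (up to 5GiB)."
            )
        upload_part(part_number, chunk)
```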
If you don't mind me asking, what are you working on that creates such large derivations?
A 3rd party proprietary toolchain included inside a system profile.
@cmoog any success with 1.7?
Yes, got those pushed with 1.7.3! Thank you.