Subfiles of root folder in ZIP archive are ignored
When I extract the zip file, only the root folder appears in the output directory; the subfiles are ignored.
SharpCompress 0.36.0
Archive structure
\setons_reworked.v0010\
\setons_reworked.v0010\setons_reworked_script.lua
\setons_reworked.v0010\setons_reworked_scenario.lua
\setons_reworked.v0010\setons_reworked_save.lua
\setons_reworked.v0010\setons_reworked.scmap
Output folder
\setons_reworked.v0010
using System.IO;
using SharpCompress.Common;
using SharpCompress.Readers;

// archivePath, outputDirectory and ext (the file extension) come from the surrounding method.
var extractOptions = new ExtractionOptions
{
    Overwrite = true,
    ExtractFullPath = true,
};

using var stream = File.OpenRead(archivePath);

// Forward-only reader: streams the local headers instead of using the central directory.
using IReader archive = ext switch
{
    ".zip" => SharpCompress.Readers.Zip.ZipReader.Open(stream),
    _ => ReaderFactory.Open(stream)
};

while (archive.MoveToNextEntry())
{
    // Only the root folder entry comes back; the four files inside it never appear.
    archive.WriteEntryToDirectory(outputDirectory, extractOptions);
}
Attached file setons_reworked.v0010.zip
Note: the .NET ZIP implementation works fine:
System.IO.Compression.ZipFile.ExtractToDirectory(archivePath, outputDirectory, true);
Doing a brief test, I saw that the zip is different in that there's a zero-byte header somewhere.
Using the ArchiveFactory I get all 5 entries because it uses the zip central directory. ZipFile does the same.
ReaderFactory doesn't use the central directory, so the file has to have a certain structure. Right now, I don't know what that zero-byte header is.
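For reference, the archive-based path looks roughly like this (reusing the same archivePath and outputDirectory as in the snippet above); because it goes through the central directory, all 5 entries show up:

using System.Linq;
using SharpCompress.Archives;
using SharpCompress.Common;

// Archive API: random access via the central directory, so every entry is visible.
using var archive = ArchiveFactory.Open(archivePath);
foreach (var entry in archive.Entries.Where(e => !e.IsDirectory))
{
    entry.WriteToDirectory(outputDirectory, new ExtractionOptions
    {
        ExtractFullPath = true,
        Overwrite = true
    });
}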
There are an extra two bytes at the end of the first Local Entry header, 0x03 0x00, that occur before the Post Data Descriptor header. The ZipReader reads the next 4-byte header as 0x4B500003 instead of reading the Post Data Descriptor header of 0x08074B50. Because it doesn't recognize it as a header, it just decides it's done and doesn't return any more entries.
The only question I have at the moment is where the 0x03 0x00 comes from and why 7-Zip isn't mad about it. The extra field length in the local header says it's zero bytes, so there shouldn't be anything after the file name field, which is 22 bytes; instead there are 24 bytes of data, ending with that 0x03 0x00. Because ZipReader only reads forward and doesn't have the Central Directory header to rely on, it gives up when it can't find the next header. I feel like this zip file is malformed, but 7-Zip isn't reporting any errors...
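For anyone who wants to check the attached file themselves, here's a rough standalone sketch (not SharpCompress code) that dumps the first local file header and the bytes that follow the file name; archivePath is assumed to point at setons_reworked.v0010.zip:

using System;
using System.IO;

using var fs = File.OpenRead(archivePath);
using var br = new BinaryReader(fs);

uint sig = br.ReadUInt32();                 // expect 0x04034B50 (local file header)
br.BaseStream.Seek(22, SeekOrigin.Current); // skip version..uncompressed size (22 fixed bytes)
ushort nameLen  = br.ReadUInt16();          // file name length (22 for the folder entry)
ushort extraLen = br.ReadUInt16();          // extra field length (0 here)
string name = new string(br.ReadChars(nameLen));
br.BaseStream.Seek(extraLen, SeekOrigin.Current);

// The next bytes should be compressed data or the data descriptor (0x08074B50),
// but in this archive they start with the stray 0x03 0x00 described above.
byte[] next = br.ReadBytes(8);
Console.WriteLine($"{sig:X8} {name}: next bytes = {BitConverter.ToString(next)}");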
I imagine 7-Zip is also using the central directory... I haven't used it in years, so I don't know if it would report something malformed.
I wonder what made that zip file.
I agree that 7zip is probably just using the CD.
However, if they were streaming it in for some reason, it appears 7-Zip reads the local header and then, if the item has a data descriptor, it "looks around" for it, so it can skip over any errant bytes that might be there. Guessing they've seen something like this before? I don't think there is any way for SharpCompress to handle this unless it did the same.
ReadLocalItem(item);
item.FromLocal = true;
bool isFinished = false;
if (item.HasDescriptor())
{
    RINOK(FindDescriptor(item, items.Size()))
    isFinished = !item.DescriptorWasRead;
}
else
{
    if (item.PackSize >= ((UInt64)1 << 62))
        throw CUnexpectEnd();
    RINOK(IncreaseRealPosition(item.PackSize, isFinished))
}
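If SharpCompress ever wanted to mimic that, the "look around" would basically be scanning forward a bounded number of bytes for the data descriptor signature (0x08074B50) and resyncing there. A rough sketch of the idea in C# (not existing SharpCompress code):

using System.IO;

// Sketch: after an entry's compressed data, scan forward for the data descriptor
// signature 0x08074B50 (stored little-endian as 50 4B 07 08) and resync there,
// skipping stray bytes like the 0x03 0x00 above.
static bool TryResyncToDataDescriptor(Stream stream, int maxSkip = 64)
{
    uint window = 0;
    for (int i = 0; i < maxSkip + 4; i++)
    {
        int b = stream.ReadByte();
        if (b < 0)
            return false;                          // hit end of stream
        window = (window >> 8) | ((uint)b << 24);  // rolling 4-byte window, newest byte in the high bits
        if (window == 0x08074B50)
            return true;                           // stream is now positioned just past the signature
    }
    return false;                                  // gave up within maxSkip bytes
}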
I think, @Eternal-ll , you'll have to avoid a Reader if you want to read these zip files and use one of the other options, @adamhathcock mentioned that uses the Central Directory headers.
A Reader being forward-only has a lot of restrictions that most libs don't have. There's not really a good way to work around this for Readers without losing what's good about a Reader.
> I think, @Eternal-ll , you'll have to avoid a Reader if you want to read these zip files and use one of the other options @adamhathcock mentioned that use the Central Directory headers.
Sure, I used the ReaderFactory only to expose some internal extraction progress.
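For what it's worth, per-entry progress is also available from the archive-based API, since each entry exposes its uncompressed size up front. A rough sketch, reusing the same archivePath and outputDirectory:

using System;
using System.Linq;
using SharpCompress.Archives;
using SharpCompress.Common;

using var archive = ArchiveFactory.Open(archivePath);
var entries = archive.Entries.Where(e => !e.IsDirectory).ToList();
long totalBytes = Math.Max(1, entries.Sum(e => e.Size)); // guard against an all-directory archive
long doneBytes = 0;

foreach (var entry in entries)
{
    entry.WriteToDirectory(outputDirectory, new ExtractionOptions
    {
        ExtractFullPath = true,
        Overwrite = true
    });
    doneBytes += entry.Size;
    Console.WriteLine($"{entry.Key}: {doneBytes * 100 / totalBytes}%");
}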
> I wonder what made that zip file.
It was probably made by the desktop client (Java), which uses the implementation below. https://github.com/FAForever/faf-java-commons/blob/1013d75fc97698a581792aace1e2b5580bc69906/faf-commons-data/src/main/java/com/faforever/commons/io/Zipper.java#L16
I guess that's the Apache implementation putting in something extra that we don't expect or that's against spec. Of course, the zip spec is fluid at best.
Ah, I think I see what it is. For some reason, the Apache library is writing out 2 bytes for a directory. No idea what those represent; still puzzling through where ZipArchiveOutputStream and ZipArchiveEntry are getting that in the Apache library.
The PostDataDescriptor gives it away because it says the entry is 2 bytes compressed and 0 bytes uncompressed. So, the Apache implementation compressed 0 bytes into 2 bytes. :-P I guess SharpCompress could try to handle arbitrary data for a directory and just discard it? What do you think, @adamhathcock ?
EDIT: I think this is actually a bug in the Apache zip library for directories. They flush their deflater to the output stream even if nothing was written. That's why the uncompressed size is 0; they didn't read anything to send to the deflater.
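That would also explain the exact bytes: an empty raw deflate stream ends with a single fixed-Huffman end-of-block marker, which serializes to 0x03 0x00. A quick sanity check with .NET's DeflateStream (assuming it behaves like zlib here and emits the same two bytes for empty input):

using System;
using System.IO;
using System.IO.Compression;

// Deflate zero bytes: the finished stream should be just the final empty
// fixed-Huffman block, i.e. 0x03 0x00 -- the same two bytes found after the
// directory entry's file name in the attached zip.
using var buffer = new MemoryStream();
using (var deflate = new DeflateStream(buffer, CompressionMode.Compress, leaveOpen: true))
{
    // write nothing; disposing finishes the deflate stream
}
Console.WriteLine(BitConverter.ToString(buffer.ToArray())); // expected: 03-00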
I mean, we can add an explicit boolean property to handle this, but I wouldn't want to add it generically.
It does sound like a bug to me.
Thanks for investigating the issue, I think we can leave things as they are. It's not that much of a problem for me, as we have alternative solutions.
I will contact the devs of the Java client; maybe they will look into it.