NanaZip
Feature request: tar.gz in a single step
Hello
It would be nice to have the possibility to create (compress) or decompress a tar.gz in a single step. Only Ashampoo can do this, as far as I know (but it's adware).
Thank you for considering. Graziano
I would also like to request this feature to support (de)compressing tar.zst in one step. Thanks a lot.
Options for archives containing .tar files would be quite useful. I found this issue as I wanted to request decompressing .tar.gz files in one step.
you shouldn't use tar.something in the first place, it is a retarded format. i guess that is the reason why it's so popular in the unix world
@megapro17 why use a slur to insult a file format, of all things, and users of unix? I implore you to act civilly; otherwise you're just breaking the agreed-upon contributor code of conduct.
tar archives continue to be widely used because they fit the purpose well and have high ecosystem compatibility. tar groups files, including metadata such as unix permissions, while the purpose of the second, outer format is compression.
As far as the overhead of double extraction goes, it's entirely possible to match the read/write operations of other archive formats rather than doing two separate passes; tar does this with its -z and -Z flags, which pipe through the compressor.
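For reference, here is what the one-step round trip looks like with command-line tar (GNU or BSD tar; the file names are made up for illustration):

```shell
# Create a tar.gz in one step: tar streams the archive straight into
# gzip, so no intermediate .tar file ever touches the disk.
tar -czf archive.tar.gz somedir/

# Equivalent explicit pipe (roughly what -z does under the hood):
tar -cf - somedir/ | gzip > archive.tar.gz

# Extract in one step: gzip inflates while tar unpacks.
tar -xzf archive.tar.gz
```

The point is that the "two formats" never require two on-disk passes when the tools are piped together.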
None of these options eliminates the two-pass extracting nature, if I'm right. It is not possible to see the file list without workarounds. It is not possible to extract one file from the archive. There are other disadvantages. That's just an excuse to use an ancient, horrible archive format instead of a proper one. The right solution is to extend 7z or another format to support all these features. There is not a single reason to use tar from 1979 in 2022. There are a lot of bad things still used in the unix world just because people are used to them or don't care, like x11.
Yes, you are right that the two-pass extraction is unavoidable, but it is doable, right?
I think it is legitimate for an archive manager to deal with these underlying complications to offer a better user experience, considering that tar and other CLI tools have already done so, though not many GUI apps have.
It is up to you to determine whether this project should support it or not, but really, there is no need to insult the tar.gz format.
I believe it fits the UNIX philosophy well: do one thing and do it well.
And of course, we can have different "One Thing".
PS: I found this project through a GitHub recommendation and was interested to see if this feature is supported. I would love to try it if it is someday.
A few points.
> two-pass extracting nature
Not in CPU time, but certainly on disk. It has always been piped with proper tar programs.
> It is not possible to extract one file from the archive.
Wrong.
- With plain tar you only have to decompress the one file plus whatever comes before it, which on average saves you 50% of the time.
- With an indexed compressor format, such as pixz or GNU lzip, you can know which block to look for in the compressed stream. Blocks reduce the solid-compression gains, yes, but they are needed for parallel compression anyway.
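To illustrate the first point, single-file extraction works out of the box with plain command-line tar (file names here are hypothetical):

```shell
# Extract a single member from a tar.gz: tar still inflates the stream
# up to the entry, but only the named path is written to disk.
tar -xzf archive.tar.gz somedir/f.txt
```

With an indexed format like pixz, the inflation of the preceding data can be skipped as well.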
> not a single reason to use tar from 1979 in 2022
There are plenty. With the PAX format, tar is able to store any key-value property associated with a file. It has also been coping well with all the links, permissions, and xattrs, things important for whole-system archives that 7z keeps losing.
Like, why do you think they used to install systems with GHOST instead of, I don't know, 7z?
> The right solution is to extend 7z or another format to support all these features.
I don't disagree with this one. For all I know, just do it like PAX: perfectly extensible and backward compatible, because it just reuses the directory layout.
NOTE: It's actually unclear whether the PAX style is the worst way to do it; the alternatives have their own problems.
- The obvious approach is to add a `:STREAMNAME` suffix, like NTFS does, as a k-v store, but we'd be breaking Unix filenames containing `:`. That problem can in turn be solved with escaping, but how deep do you want the escapes to go?
- The "elegant" way would be to add a header field with variable length. The problem is that older clients would not know how to skip the field, because there's no facility for it.
> unix world just because they used to or don't care, like x11
You know some distributions have bravely switched to Wayland, right?
Actually, there is some degree of random access even in more common formats. xz -T always generates a concatenation of blocks, although these blocks are not aligned to tar file boundaries. NanaZip can still use this to its advantage to skip some decompression: for a file spanning multiple blocks, it can jump directly to the block where the next file should start, without decompressing what's in between. That could speed up single-file extraction and listing.
No such skipping trick is available in gzip and bzip2: although compressed streams can be concatenated, there is no mandatory field for the decompressed size. iipc/warc-specifications#47 mentions an extension field for gzip, but it's not commonly used.
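The member-concatenation behaviour itself is easy to demonstrate with stock gzip (the file name is made up). Note that nothing in the format lets you locate the second member without first reading through the first:

```shell
# Two gzip members concatenated back-to-back form one valid .gz file...
echo part1 | gzip > both.gz
echo part2 | gzip >> both.gz

# ...and gunzip transparently inflates every member in sequence. But
# there is no index of member offsets, so seeking to the second member
# requires parsing (or inflating) everything before it.
gunzip -c both.gz
```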