
Failing for BGZIP'd streaming files

Open · abetusk opened this issue on May 29, 2018 · 35 comments

Hi all, thanks for the wonderful library!

Unfortunately, I think I've found a bug: files compressed with bgzip (block gzip) fail when using pako for streaming decompression.

The file pako-fail-test-data.txt.gz is an example that triggers what I believe to be the error. The uncompressed file is 65,569 bytes, which is just larger than what I assume to be bgzip's block size (somewhere around 65,280?). Here is a small shell session with some relevant information:

$ wc pako-fail-test-data.txt 
 1858 16831 65569 pako-fail-test-data.txt
$ md5sum pako-fail-test-data.txt 
7eae4c6bc0e68326879728f80a0e002b  pako-fail-test-data.txt
$ zcat pako-fail-test-data.gz | bgzip -c > pako-fail-test-data.txt.gz
$ md5sum pako-fail-test-data.txt.gz 
f4d0b896c191f66ff6962de37d69db45  pako-fail-test-data.txt.gz
$ bgzip -h

Version: 1.4.1
Usage:   bgzip [OPTIONS] [FILE] ...
Options:
   -b, --offset INT        decompress at virtual file pointer (0-based uncompressed offset)
   -c, --stdout            write on standard output, keep original files unchanged
   -d, --decompress        decompress
   -f, --force             overwrite files without asking
   -h, --help              give this help
   -i, --index             compress and create BGZF index
   -I, --index-name FILE   name of BGZF index file [file.gz.gzi]
   -r, --reindex           (re)index compressed file
   -g, --rebgzip           use an index file to bgzip a file
   -s, --size INT          decompress INT bytes (uncompressed size)
   -@, --threads INT       number of compression threads to use [1]

Here is some sample code that should decompress the whole file but doesn't. My apologies for it not being elegant; I'm still learning and threw a few things together to get something that I believe triggers the error:

var pako = require("pako"),
    fs = require("fs");

var CHUNK_SIZE = 1024*1024,
    buffer = new Buffer(CHUNK_SIZE);

function _node_uint8array_to_string(data) {
  var buf = new Buffer(data.length);
  for (var ii=0; ii<data.length; ii++) {
    buf[ii] = data[ii];
  }
  return buf.toString();
}

var inflator = new pako.Inflate();
inflator.onData = function(chunk) {
  var v = _node_uint8array_to_string(chunk);
  process.stdout.write(v);
};

fs.open("./pako-fail-test-data.txt.gz", "r", function(err,fd) {
  if (err) { throw err; }
  function read_chunk() {
    fs.read(fd, buffer, 0, CHUNK_SIZE, null,
      function(err, nread) {
        var data = buffer;
        if (nread<CHUNK_SIZE) { data = buffer.slice(0, nread); }
        inflator.push(data, false);
        if (nread > 0) { read_chunk(); }
      });
  };
  read_chunk();
});

I did not indicate an end block (that is, I never made a final inflator.push(data, true) call), and there may be other problems with how the data blocks are read from fs, but I hope you'll forgive the sloppiness in the interest of keeping the example simple enough to illuminate the relevant issue.

Running this does successfully decompress a portion of the file, but it then stops at what I believe to be the end of the first block. Here are some shell commands that might be enlightening:

$ node pako-error-example.js | wc
   1849   16755   65280
$ node pako-error-example.js | md5sum
a55dd4f2c7619a52fd6bc76e2af631b8  -
$ zcat pako-fail-test-data.txt.gz | md5sum
7eae4c6bc0e68326879728f80a0e002b  -
$ zcat pako-fail-test-data.txt.gz | head -c 65280 | md5sum
a55dd4f2c7619a52fd6bc76e2af631b8  -
$ zcat pako-fail-test-data.txt.gz | wc
   1858   16831   65569

Running another simple example using browserify-zlib triggers an error outright:

var fs = require("fs"),
    zlib = require("browserify-zlib");

var r = fs.createReadStream('pako-fail-test-data.txt.gz');
var z = zlib.createGunzip();

z.on("data", function(chunk) {
  process.stdout.write(chunk.toString());
});
r.pipe(z);

And when run via node stream-example-2.js, the error produced is:

events.js:137
      throw er; // Unhandled 'error' event
      ^

Error: invalid distance too far back
    at Zlib._handle.onerror (/home/abe/play/js/browser-large-file/node_modules/browserify-zlib/lib/index.js:352:17)
    at Zlib._error (/home/abe/play/js/browser-large-file/node_modules/browserify-zlib/lib/binding.js:283:8)
    at Zlib._checkError (/home/abe/play/js/browser-large-file/node_modules/browserify-zlib/lib/binding.js:254:12)
    at Zlib._after (/home/abe/play/js/browser-large-file/node_modules/browserify-zlib/lib/binding.js:262:13)
    at /home/abe/play/js/browser-large-file/node_modules/browserify-zlib/lib/binding.js:126:10
    at process._tickCallback (internal/process/next_tick.js:150:11)

I assume this is a pako error, as browserify-zlib uses pako underneath, so my apologies if this is a browserify-zlib error and has nothing to do with pako.

As a "control", the following code works without issue:

var fs = require("fs"),
    zlib = require("zlib");

var r = fs.createReadStream('pako-fail-test-data.txt.gz');
var z = zlib.createGunzip();

z.on("data", function(chunk) {
  process.stdout.write(chunk.toString());
});
r.pipe(z);

bgzip is used to allow random access to gzipped files. The resulting block-compressed file is a bit bigger than with straight gzip compression, but the small increase in compressed size is often worth it for the ability to efficiently access arbitrary positions in the uncompressed data.

My specific use case is processing a large text file (~115 MB compressed, ~650 MB uncompressed, with other files being even larger). Loading the complete file, either compressed or uncompressed, is not an option because of memory exhaustion or outright memory restrictions in JavaScript. I only need to process the data in a streaming manner (that is, I only need to look at the data once and can then mostly discard it), which is why I was looking into this option. The bioinformatics community uses this method quite a bit (bgzip is itself part of tabix, which is part of a bioinformatics library called htslib), so it would be nice if pako supported this use case.
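
To make the goal concrete, the browser-side pattern I'm after would look roughly like the sketch below (a hypothetical helper; it assumes a File object, Blob.arrayBuffer() support, and that pako can handle the block-compressed input, which is exactly what's failing here):

function streamInflateFile(file, onText, chunkSize) {
  chunkSize = chunkSize || 1024 * 1024;
  var inflator = new pako.Inflate({ to: "string" });
  inflator.onData = onText;                       // called once per decompressed chunk

  var offset = 0;
  (function next() {
    var slice = file.slice(offset, offset + chunkSize);
    slice.arrayBuffer().then(function (buf) {
      offset += buf.byteLength;
      var last = offset >= file.size;
      inflator.push(new Uint8Array(buf), last);   // true on the last slice finalizes
      if (!last) { next(); }
    });
  })();
}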

If there is another library I should be using to allow for stream processing of compressed data in the browser, I would welcome any suggestions.

abetusk avatar May 29 '18 13:05 abetusk

This question is a bit out of project scope, but I recommend splitting this into smaller steps:

  1. Drop all string conversions and make sure that pako.inflate(data) works correctly in one step.
  2. Do the same in chunked mode, and don't forget to finalize with .push(data, true) on the last step, or try adding .push([], true) after the last chunk (see the sketch after this list).
  3. Add { to: 'string' } if you need utf16 output (for JS). Note, the input should still be binary (it can be a binary string, but I don't recommend that; it's for old browsers only).
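
Roughly like this (an untested sketch using your file name, just to show the intended API usage):

var pako = require("pako"),
    fs = require("fs");

var input = fs.readFileSync("./pako-fail-test-data.txt.gz");

// Step 1: one-shot inflate of the whole buffer.
var output = pako.inflate(input);
console.log("one-shot size:", output.length);

// Step 2: the same in chunked mode; only the last push gets `true` to finalize.
var inflator = new pako.Inflate();
var total = 0;
inflator.onData = function (chunk) { total += chunk.length; };
inflator.onEnd = function (status) { console.log("chunked size:", total, "status:", status); };

var CHUNK = 16384;
for (var off = 0; off < input.length; off += CHUNK) {
  inflator.push(input.subarray(off, off + CHUNK), off + CHUNK >= input.length);
}

// Step 3: pass { to: "string" } to new pako.Inflate() if you need string output;
// the input must still be binary.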

Also, if you are on the server side, just use node's zlib instead; pako is really only needed for browsers.

PS. You may wish to consider using JSZip; it's better suited for end users. This library is a bit of a low-level thing.

puzrin avatar May 29 '18 15:05 puzrin

Hi @puzrin, thanks for the response.

I'm sorry, I don't think I've communicated the issue properly. I believe this is a bug in pako: pako does not stream-decompress block-compressed data properly.

I'll try and address each of your concerns below:

  1. I've only used the string conversion for illustrative purposes and this should have no effect on the buggy behavior.
  2. As I've said in the ticket above I haven't finalized the last chunk but this should have no effect on exposing the buggy behavior.
  3. As stated above, the conversion to string in the example is for illustrative purposes and should have no effect on exposing the buggy behavior.

I used the server-side pako example to provide an illustrative example that exposes the buggy behavior. I understand I can use zlib for server-side decompression, but I ultimately want to use pako in the browser. Maybe you missed it, but in the ticket above I gave an example of the correct behavior in zlib that pako does not reproduce.

From what I understand, JSZip does not support gzip (as stated in issue #209), so it's not appropriate for this purpose (but please let me know if I'm misunderstanding that).

Here is a simpler server-side program, purely for illustrative purposes, that exposes the bug in pako. As I've said above, I want this to run in the browser, not server side, so I can't use node's zlib library, which is (afaik) server side only. Here is the example code:

var fs = require("fs"),
    pako = require("pako");

var totalByteCount=0;

var inflator = new pako.Inflate();
inflator.onData = function(chunk) {
  totalByteCount += chunk.length;
  console.log("totalByteCount:", totalByteCount);
};

var readStream = fs.createReadStream("./pako-block-decompress-failure.txt.gz");
readStream.on("data", function(chunk) {
  inflator.push(chunk, false);
}).on("end", function() {
  inflator.push([], true);
});

Running the above code on a file that has been block compressed (pako-block-decompress-failure.txt.gz) produces the following output:

$ node pako-block-decompress-failure.js
totalByteCount: 16384
totalByteCount: 32768
totalByteCount: 49152
totalByteCount: 65280
To summarize:

  • The file, uncompressed, is 214330 bytes long.
  • pako only decompressed the first 65280 bytes.
  • 65280 is far less than the full file size of 214330 bytes.
  • pako is not decompressing the majority of the file.

I understand that this library is most likely a volunteer effort on your part, and I would completely understand if this fell into a "won't fix" category purely for lack of interest or time, but it does seem to be within the scope of a zlib port.

I am unfamiliar with zlib in general, so I'm not sure how much progress I could make on this, but if you're unable or unwilling to look into it further, some general direction on how to go about fixing it would be helpful to me and to others who need this functionality in the future.

abetusk avatar May 29 '18 16:05 abetusk

Got it (with the simple example from the last post). I think the problem is here: https://github.com/nodeca/pako/blob/893381abcafa10fa2081ce60dae7d4d8e873a658/lib/inflate.js#L273

Let me explain. Pako consists of 2 parts:

  1. zlib port - very stable and well tested, but difficult to use directly.
  2. sugar wrappers for simple calls.

When we implemented the wrappers, we could not decide what to do if a stream consists of multiple parts (which probably returns multiple Z_STREAM_ENDs). That's not a widely used mode.

/cc @Kirill89 could you take a look?

puzrin avatar May 29 '18 17:05 puzrin

That's a minimal sample to reproduce:

const pako = require('pako');

let input = require('fs').readFileSync('./pako-block-decompress-failure.txt.gz');
let output = pako.inflate(input);

console.log(`size = ${output.length}`); // => size = 65280 !!!
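
For comparison, a sketch of the same check through Node's built-in zlib, which restarts inflate when another gzip member follows:

const zlib = require('zlib');

let input = require('fs').readFileSync('./pako-block-decompress-failure.txt.gz');
let output = zlib.gunzipSync(input);

console.log(`size = ${output.length}`); // expected 214330, the full uncompressed size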

puzrin avatar May 29 '18 17:05 puzrin

After a quick look, it seems your data really does generate a Z_STREAM_END status before the end. Sure, the wrapper can be fixed for this case, but I don't know how.

puzrin avatar May 29 '18 18:05 puzrin

Googling "zlib inflate multiple streams":

  • https://gist.github.com/benwills/356d8d1e2a2ef2a9203197b846338ec3
  • https://groups.google.com/forum/#!topic/comp.compression/UuGWViGLPOY
  • https://github.com/nodejs/node/issues/4306

puzrin avatar May 30 '18 00:05 puzrin

The relevant file (afaict) in the most current version (as of this writing) of node is node_zlib.cc (line 302):

...
        while (strm_.avail_in > 0 &&
               mode_ == GUNZIP &&
               err_ == Z_STREAM_END &&
               strm_.next_in[0] != 0x00) {
          // Bytes remain in input buffer. Perhaps this is another compressed
          // member in the same archive, or just trailing garbage.
          // Trailing zero bytes are okay, though, since they are frequently
          // used for padding.

          Reset();
          err_ = inflate(&strm_, flush_);
        }
        break;
      default:
        UNREACHABLE();
    }
...
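
Translated into pako terms, I imagine a user-level workaround would look something like the sketch below (a hypothetical helper that leans on the inflator's internal strm.next_in, so purely for illustration):

var pako = require("pako");

// Sketch of the node_zlib.cc idea above: when a gzip member ends
// (Z_STREAM_END) but input bytes remain, start a fresh inflate on the
// leftover bytes instead of stopping. Trailing zero bytes are padding.
function inflateAllMembers(input) {
  var parts = [];
  var offset = 0;
  while (offset < input.length && input[offset] !== 0x00) {
    var inflator = new pako.Inflate();
    inflator.push(input.subarray(offset), true);
    if (inflator.err) { throw new Error(inflator.msg || "inflate failed"); }
    parts.push(inflator.result);
    offset += inflator.strm.next_in;   // how much of this member was consumed
  }
  var total = 0, pos = 0, i;
  for (i = 0; i < parts.length; i++) { total += parts[i].length; }
  var out = new Uint8Array(total);
  for (i = 0; i < parts.length; i++) { out.set(parts[i], pos); pos += parts[i].length; }
  return out;
}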

abetusk avatar May 30 '18 11:05 abetusk

Yeah, I've seen it. I could not get a quick hack to work. It seems better to wait for the weekend, when Kirill can take a look.

puzrin avatar May 30 '18 12:05 puzrin

I checked the same file against the original zlib code and found the same behavior (inflate returns Z_STREAM_END too early).

Also, I found a very interesting implementation of a wrapper for the inflate function. According to that implementation, we must do inflateReset on every Z_STREAM_END instead of terminating.

This is a possible fix: https://github.com/nodeca/pako/commit/c60b97e22239c02c0b5a112abbd6c6a9b5d86b45

After that fix one test breaks, but I don't understand why (I need your help to solve it).

Code to reproduce the same behavior:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

/* Minimal stand-in for the CHECK_ERR macro used in zlib's test code. */
#define CHECK_ERR(err, msg) { \
    if (err != Z_OK) { fprintf(stderr, "%s error: %d\n", msg, err); exit(1); } \
}

int main(void) {
    // READ FILE
    size_t file_size;
    Byte *file_buf = NULL;
    uLong buf_size;

    FILE *fp = fopen("/home/Kirill/Downloads/pako-fail-test-data.txt.gz", "rb");
    fseek(fp, 0, SEEK_END);
    file_size = ftell(fp);
    rewind(fp);
    buf_size = file_size * sizeof(*file_buf);
    file_buf = malloc(buf_size);
    fread(file_buf, file_size, 1, fp);

    // INIT ZLIB
    z_stream d_stream;
    d_stream.zalloc = Z_NULL;
    d_stream.zfree = Z_NULL;
    d_stream.opaque = (voidpf)0;

    d_stream.next_in  = file_buf;
    d_stream.avail_in = (uInt)buf_size;

    int err = inflateInit2(&d_stream, 47);
    CHECK_ERR(err, "inflateInit");

    // Inflate
    uLong chunk_size = 5000;
    Byte* chunk = malloc(chunk_size * sizeof(Byte));

    do {
        memset(chunk, 0, chunk_size);
        d_stream.next_out = chunk;
        d_stream.avail_out = (uInt)chunk_size;
        err = inflate(&d_stream, Z_NO_FLUSH);
        printf("inflate(): %s\n", (char *)chunk);
        if (err == Z_STREAM_END) {
//            inflateReset(&d_stream);
            break;
        }
    } while (d_stream.avail_in);

    err = inflateEnd(&d_stream);
    CHECK_ERR(err, "inflateEnd");

    return 0;
}

Kirill89 avatar Jun 03 '18 12:06 Kirill89

@Kirill89 what about this code? It should not force an end in the middle (when multiple .push() calls are made and some of them emit Z_STREAM_END before the data has ended).

puzrin avatar Jun 03 '18 12:06 puzrin

Any update on this issue? I'm running into this problem as well.

rbuels avatar Aug 30 '18 19:08 rbuels

https://github.com/nodeca/pako/commit/c60b97e22239c02c0b5a112abbd6c6a9b5d86b45 needs an additional condition update after the loop, but one more test fails after that.

puzrin avatar Aug 31 '18 00:08 puzrin

My #145 pushes this a bit further, fixing (I think) the SYNC test that @puzrin saw failing, and adding 2 failing tests with @abetusk's case and another case of my own. What do you guys think of that approach?

rbuels avatar Sep 01 '18 18:09 rbuels

The failing tests in that PR reproduce the "too far back" error @abetusk was seeing.

rbuels avatar Sep 01 '18 18:09 rbuels

@rbuels look at these lines:

https://github.com/nodeca/pako/blob/c60b97e22239c02c0b5a112abbd6c6a9b5d86b45/lib/inflate.js#L279-L281

It seems these lines should be removed, because Z_STREAM_END is processed inside the loop and should not finalize the inflate. But after removing them, one test fails, and that's the main reason why @Kirill89's commit was postponed.

puzrin avatar Sep 01 '18 19:09 puzrin

After discussion with @puzrin, I was able in #146 to write a couple of tests that decompress the bgzip files with pako as-is. The code for doing it can be seen at https://github.com/nodeca/pako/pull/146/files#diff-04f4959c7d84f7da8f54fbf6b0f50553R23

rbuels avatar Sep 01 '18 22:09 rbuels

Thanks for all the work on this. I just ran into this bug, and would appreciate a new release when the pull request is merged.

ewimberley avatar Sep 04 '18 18:09 ewimberley

We're seeing this issue quite a bit (I wonder if bioinformaticians just really like gzipping in blocks!)

Changing the inflateReset into an inflateResetKeep call fixes the "invalid distance too far back" error, but it results in the chunk remainder being written into the out buffer (which is bad). This can be fixed by moving the status === c.Z_STREAM_END condition for the strm.next_out write branch as in #146 (and I think that's the right thing to do?). The "Read stream with SYNC marks" test still fails, though, and I don't quite understand why either.

I put these changes up at https://github.com/onecodex/pako and I'm happy to restructure or make a PR if that's helpful (thanks to Kirill89's issue-139 branch and rbuels' #145 PR for providing 99% of the work here).

bovee avatar Feb 28 '19 18:02 bovee

@bovee I'll be happy to accept a correct PR.

As far as I remember, #145 was rejected because it touched the original zlib content (see my comment). @Kirill89's fix was correct in general, but it broke one strange test. That required investigating things in a debugger, and he had no free time.

I have absolutely no idea (I've forgotten everything) what that test does, but if it exists, it cannot be "just skipped". As soon as anyone can resolve this loose end, the PR will be accepted.

puzrin avatar Feb 28 '19 19:02 puzrin

@bovee, bgzip allows for random access to large gzip files. In bioinformatics, there's often a need to access large files efficiently and at random (from 100 MB to 5 GB or more, compressed, representing a whole genome in some format, for example). Vanilla gzip requires decompressing everything before a given position before you can read the data there.

By splitting the gzip file into blocks, you can create an index which can then be used to allow for efficient random access. The resulting bgzip'd files are a bit bigger than compressing without blocks (i.e., just vanilla gzip), but most of the benefits of compression are retained while still allowing efficient random access to the file. There's the added benefit that a bgzip'd file looks like a regular gzip file, so all the "standard" tools still work to decompress it.
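
For reference, each BGZF block also records its own compressed length in the gzip header's extra field (the "BC" subfield described in the BGZF/SAM spec), which is what makes indexing and block-skipping possible. A rough sketch of reading it (a hypothetical helper, unrelated to pako itself):

// Hypothetical sketch: read the BSIZE field from a BGZF block header.
// Per the BGZF spec, the gzip FEXTRA subfield with SI1='B', SI2='C' holds
// BSIZE, the total block length minus one.
function bgzfBlockLength(buf, offset) {
  if (buf[offset] !== 0x1f || buf[offset + 1] !== 0x8b) { throw new Error("not a gzip block"); }
  if ((buf[offset + 3] & 0x04) === 0) { throw new Error("no FEXTRA field, not BGZF"); }
  var xlen = buf[offset + 10] | (buf[offset + 11] << 8);
  var p = offset + 12, end = p + xlen;
  while (p < end) {
    var si1 = buf[p], si2 = buf[p + 1];
    var slen = buf[p + 2] | (buf[p + 3] << 8);
    if (si1 === 66 && si2 === 67) {                 // 'B', 'C'
      return (buf[p + 4] | (buf[p + 5] << 8)) + 1;  // BSIZE + 1 = whole block length
    }
    p += 4 + slen;
  }
  throw new Error("BGZF BSIZE subfield not found");
}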

Here is what I believe to be the original paper by Heng Li on Tabix: https://academic.oup.com/bioinformatics/article/27/5/718/262743 (Tabix has now been subsumed into htslib if I'm not mistaken).

abetusk avatar Mar 01 '19 20:03 abetusk

For the bioinformaticians in the thread, just going to say that I ended up coding around this issue and eventually releasing https://github.com/GMOD/bgzf-filehandle and https://github.com/GMOD/tabix-js for accessing BGZIP and tabix files, respectively.

rbuels avatar Mar 01 '19 20:03 rbuels

@rbuels I'm working on reading local fastq.gz files in the browser and stumbled upon this issue. Haven't been able to get pako to work so far. Is there currently a working solution for streaming bgzf files in the browser?

EDIT: I need streaming because the files are large. I don't need (and can't afford) to store the entire file in memory; I just need to stream through all the lines to gather some statistics.

anderspitman avatar Jan 31 '20 23:01 anderspitman

We implemented @gmod/bgzf-filehandle, which wraps pako; we use it in JBrowse. https://www.npmjs.com/package/@gmod/bgzf-filehandle

rbuels avatar Feb 01 '20 00:02 rbuels

pako seems to be working on my files. Not sure what I'm doing differently to not trigger this bug.

anderspitman avatar Feb 04 '20 18:02 anderspitman

https://github.com/nodeca/pako/commit/e61498ca597e885308cbe8c3c0eafcd873e78b63

I've rewritten the wrappers. Those work with our old multistream fixture and with data generated with Z_SYNC_FLUSH, but they still fail with the provided bgzip file, now with an invalid distance too far back error :(.

UPD: Hmm... it works if I vary chunkSize.

puzrin avatar Nov 09 '20 18:11 puzrin

In case it helps, I have a Streams API wrapper, which I modified to support multiple streams in a file. It supports both regular gzipped files and bgzip'd ones.

Basically, before pushing more data into the inflator, we check whether it has hit the end of a stream and, if so, rebuffer the remaining input into a fresh inflator.

// Z_SYNC_FLUSH isn't exported by pako, so define it here.
const Z_SYNC_FLUSH = 2;

class PakoInflateTransformer {
  constructor() {
    this.controller = null;
    this.decoder = new pako.Inflate();
    let self = this;
    this.decoder.onData = (chunk) => {
      self.propagateChunk(chunk);
    }
  }

  propagateChunk(chunk) {
    if (!this.controller) {
      throw "cannot propagate output chunks with no controller";
    }
    //console.log('Inflated chunk with %d bytes.', chunk.byteLength);
    this.controller.enqueue(chunk);
  }

  start(controller) {
    this.controller = controller;
  }

  transform(chunk, controller) {
    //console.log('Pako received chunk with %d bytes.', chunk.byteLength);
    this.resetIfAtEndOfStream();
    this.decoder.push(chunk);
  }

  flush() {
    //console.log('Pako flushing,');
    this.resetIfAtEndOfStream();
    this.decoder.push([], true);
  }

  resetIfAtEndOfStream() {
    // The default behaviour doesn't handle multiple streams
    // such as those produced by bgzip. If the decoder thinks
    // it has ended, but there's available input, save the
    // unused input, reset the decoder, and re-push the unused input.
    //
    while (this.decoder.ended && this.decoder.strm.avail_in > 0) {
      let strm = this.decoder.strm;
      let unused = strm.input.slice(strm.next_in);
      //console.log(`renewing the decoder with ${unused.length} bytes.`);
      this.decoder = new pako.Inflate();
      let self = this;
      this.decoder.onData = (chunk) => {
        self.propagateChunk(chunk);
      }
      this.decoder.push(unused, Z_SYNC_FLUSH);
    }
  }
}

drtconway avatar Oct 20 '21 00:10 drtconway

@drtconway, could you provide a working, self-contained example?

@puzrin, any progress on this?

abetusk avatar Oct 20 '21 07:10 abetusk

Here's a stand-alone HTML document. Select a file - it uses suffix matching to guess if it's compressed or uncompressed, and it reads the chunks.

<html>
 <head>
  <title>Uncompress Streams</title>
  <script src="https://cdn.jsdelivr.net/pako/1.0.3/pako.min.js"></script>
  <script src="https://unpkg.com/web-streams-polyfill/dist/polyfill.js"></script>
  <script>
    // Define Z_SYNC_FLUSH since it's not exported from pako.
    //
    const Z_SYNC_FLUSH = 2;

    class PakoInflateTransformer {
      constructor() {
        this.controller = null;
        this.decoder = new pako.Inflate();
        let self = this;
        this.decoder.onData = (chunk) => {
          self.propagateChunk(chunk);
        }
      }

      propagateChunk(chunk) {
        if (!this.controller) {
          throw "cannot propagate output chunks with no controller";
        }
        //console.log('Inflated chunk with %d bytes.', chunk.byteLength);
        this.controller.enqueue(chunk);
      }

      start(controller) {
        this.controller = controller;
      }

      transform(chunk, controller) {
        //console.log('Pako received chunk with %d bytes.', chunk.byteLength);
        this.resetIfAtEndOfStream();
        this.decoder.push(chunk);
      }

      flush() {
        //console.log('Pako flushing,');
        this.resetIfAtEndOfStream();
        this.decoder.push([], true);
      }

      resetIfAtEndOfStream() {
        // The default behaviour doesn't handle multiple streams
        // such as those produced by bgzip. If the decoder thinks
        // it has ended, but there's available input, save the
        // unused input, reset the decoder, and re-push the unused input.
        //
        while (this.decoder.ended && this.decoder.strm.avail_in > 0) {
          let strm = this.decoder.strm;
          let unused = strm.input.slice(strm.next_in);
          //console.log(`renewing the decoder with ${unused.length} bytes.`);
          this.decoder = new pako.Inflate();
          let self = this;
          this.decoder.onData = (chunk) => {
            self.propagateChunk(chunk);
          }
          this.decoder.push(unused, Z_SYNC_FLUSH);
        }
      }
    }

    function blobToReadableStream(blob) {
      let reader = blob.stream().getReader();
      return new ReadableStream({
        start(controller) {
          function push() {
            reader.read().then(({done, value}) => {
              if (done) {
                controller.close();
                return;
              }
              controller.enqueue(value);
              push();
            })
          }
          push();
        }
      });
    }

    function getReader(source) {
      var fileStream = blobToReadableStream(source);
      if (source.name.endsWith(".gz") || source.name.endsWith(".bgz")) {
        fileStream = fileStream.pipeThrough(new TransformStream(new PakoInflateTransformer));
      }
      return fileStream.getReader();
    }

    var readTheFile = async function(event) {
      var inp = event.target;
      let reader = getReader(inp.files[0]);

      let n = 0;
      let s = 0;
      while (true) {
        const { done, value } = await reader.read();
        if (done) {
          break;
        }
        let l = value.byteLength;
        n += 1;
        s += l;
      }
      let resElem = document.getElementById('results');
      let add = function(txt) {
        let para = document.createElement('p');
        resElem.appendChild(para);
        para.appendChild(document.createTextNode(txt));
      }
      let m = s / n;
      add(`number of chunks: ${n}`);
      add(`mean size: ${m}`);
    }
  </script>
 </head>
 <body>
   <div>
     <input type='file' onchange='readTheFile(event)' />
   </div>
   <div id="results">
   </div>
 </body>
</html>

drtconway avatar Oct 21 '21 00:10 drtconway

Hmm. I changed the version of pako that I was using from 1.0.3 to 2.0.4 and it fails now. Is there an obvious thing that changed that would break the code? From my initial investigation it looks like it hits the condition to reset at the end of the first stream, but the recovery doesn't work correctly any more.

drtconway avatar Oct 25 '21 21:10 drtconway

@drtconway the wrapper changed significantly, but a multistream test exists: https://github.com/nodeca/pako/blob/0398fad238edc29df44f78e338cbcfd5ee2657d3/test/gzip_specials.js#L60-L77
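
A minimal multi-member check in the spirit of that fixture looks something like this sketch (assuming the current pako API; the expected output only holds once multi-member input is handled):

const pako = require('pako');

// Two independent gzip members concatenated, like bgzip output.
const a = pako.gzip(Buffer.from('hello '));
const b = pako.gzip(Buffer.from('world'));
const joined = new Uint8Array(a.length + b.length);
joined.set(a, 0);
joined.set(b, a.length);

const inflator = new pako.Inflate({ to: 'string' });
let out = '';
inflator.onData = (chunk) => { out += chunk; };
inflator.push(joined, true);

console.log(out); // expected 'hello world' once multi-member streams work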

puzrin avatar Oct 25 '21 21:10 puzrin