StreamSaver.js

download cancelled event?

Spongman opened this issue 8 years ago • 74 comments

is there a way to know if the download has been cancelled by the user?

Spongman avatar Aug 17 '16 00:08 Spongman

There should be right about here https://github.com/jimmywarting/StreamSaver.js/blob/2f6c53eb63adbed503651341be2987dd1cab53b1/sw.js#L67

But it doesn't get triggered... I think it's a bug or a missing feature. Someone should report this to Chromium.

jimmywarting avatar Aug 17 '16 10:08 jimmywarting

yeah, thanks. that's what i figured...

i reported it here: https://github.com/slightlyoff/ServiceWorker/issues/957#issuecomment-240302130 (cc'd, here: https://bugs.chromium.org/p/chromium/issues/detail?id=638494)

Spongman avatar Aug 17 '16 20:08 Spongman

Thanks for that 👍

jimmywarting avatar Aug 17 '16 20:08 jimmywarting

Now with transferable streams there is a way to detect when the bucket (queuing strategy) is full (meaning the client paused the stream), and you can also detect if the user aborted the request.
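
What that detection looks like can be sketched roughly as follows (illustrative only, not StreamSaver's actual API; assumes a runtime with WHATWG streams, e.g. a modern browser or Node 18+):

```javascript
let cancelReason = null;

const rs = new ReadableStream(
  {
    pull(ctrl) {
      // only called while the queue ("bucket") has room; a paused consumer
      // stops pulling, which is how backpressure becomes observable
      ctrl.enqueue(new TextEncoder().encode('chunk\n'));
    },
    cancel(reason) {
      // runs when the consumer side aborts
      cancelReason = reason;
    },
  },
  new CountQueuingStrategy({ highWaterMark: 4 }) // the "bucket" size
);

// in a page, the stream itself would be transferred to the service worker:
//   navigator.serviceWorker.controller.postMessage({ rs }, [rs])
// here the consumer aborting is simulated directly:
const done = rs.cancel('user aborted').then(() => cancelReason);
```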

jimmywarting avatar Feb 17 '19 14:02 jimmywarting

@jimmywarting

Just a heads up: Firefox already does notify the Stream passed to respondWith when the download is cancelled... This line is executed: https://github.com/jimmywarting/StreamSaver.js/blob/da6218e2a58bcc4b0ba997c53d605dd096ba02c3/sw.js#L65

TexKiller avatar Mar 03 '19 22:03 TexKiller

I consider the abort event a minor issue, and it would automatically be resolved once all browsers start supporting transferable streams.

However, it would be nice to solve this abort event in Firefox. Will push this missing abort event to a later release.

jimmywarting avatar May 28 '19 21:05 jimmywarting

Hey all, this was a "must" for me in Firefox, so here's my solution: https://github.com/jimmywarting/StreamSaver.js/pull/105

Would love feedback.

eschaefer avatar Jun 22 '19 12:06 eschaefer

Is it possible to access abort() to do more than just console.log('user aborted')?

allengordon011 avatar Jun 01 '20 22:06 allengordon011

Now with transferable streams there is a way to detect when the bucket (queuing strategy) is full (meaning the client paused the stream), and you can also detect if the user aborted the request.

Hello guys, thanks a lot jimmywarting for the amazing work you did on this project, it's very appreciated. Unfortunately, the user events (Pause & Cancel) seem like critical features, needed to stop the write operations according to the user's intent.

Is there any news on this point?

Thanks

M0aB0z avatar Oct 28 '20 17:10 M0aB0z

sry, got some bad news.

transferable streams are still only supported in Blink behind an experimental flag. https://chromestatus.com/feature/5298733486964736

The 2nd issue is about cancelation... Chrome never emits the cancel event (here), but it can fill up the bucket to the point where it stops calling pull(ctrl) {...} (asking for more data). Here is the (now old) Chromium bug about cancelation: https://bugs.chromium.org/p/chromium/issues/detail?id=638494 - please star it to raise its priority. Only Firefox emits this cancel event.

The 3rd issue is that StreamSaver lacks the concept of buckets when talking to the service worker over a MessageChannel: it doesn't use the pull system and just eagerly enqueues more data without any respect for the bucket or the pull request - which can lead to memory issues if you enqueue data faster than you are able to write it to disk.
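
The missing backpressure described here can be sketched like this (a minimal illustration using the same WHATWG streams APIs, not StreamSaver code): awaiting writer.ready before each write keeps the producer from outrunning the sink.

```javascript
const received = [];

// a deliberately slow sink standing in for "writing to the disk"
const ws = new WritableStream(
  {
    async write(chunk) {
      await new Promise((r) => setTimeout(r, 10));
      received.push(chunk[0]);
    },
  },
  new CountQueuingStrategy({ highWaterMark: 2 }) // the "bucket" size
);

const producing = (async () => {
  const writer = ws.getWriter();
  for (let i = 0; i < 5; i++) {
    await writer.ready; // resolves only while the bucket has room
    writer.write(new Uint8Array([i]));
  }
  await writer.close(); // waits for every queued write to finish
  return received;
})();
```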


I have written a 2nd stream-saving library based on native file system access that acts like an adapter for different storages (such as writing to a sandboxed fs, IndexedDB, cache storage and memory). It also comes with an adapter for writing data to disk using the same service worker technique as StreamSaver. However, my adapter does it slightly differently and behaves more like a .pipe should, with respect to cancel, and only asks for more data when it needs it. It also properly reports back with a promise when data has been sent from the main thread over to the service worker streams (which StreamSaver totally lacks - it just resolves writer.write(data) directly). Using a service worker is optional too, in which case it will build up a Blob in memory and later download it using a[download] instead. I have made it optional since a few people want to host the service worker themselves, so there is more manual work to set it up properly.

I think that in the future native file system access will supersede FileSaver and my own StreamSaver lib when it gets more adoption, at which point I will maybe deprecate StreamSaver in favor of my 2nd file system adapter - but not yet.

Maybe you would like to try it out instead?

One thing that native file system access does differently is that it can enqueue other types of data, such as strings, Blobs and any typed array or ArrayBuffer - so saving a large blob/file is more beneficial since the browser doesn't need to read the blob.

Oh, and give this issue a 👍 as well ;)

jimmywarting avatar Oct 28 '20 20:10 jimmywarting

Thanks for your detailed answer, I'll have a look on your file system lib, looks very interesting and may solve my problem. Thanks again for all your quality work.

M0aB0z avatar Oct 28 '20 22:10 M0aB0z

@jimmywarting Is there a minimal, verifiable, complete example of this issue?

guest271314 avatar Nov 03 '20 15:11 guest271314

Hmm, I tried to create a minimal plnkr example here: https://plnkr.co/edit/I27Dl0chuMCuaoHD?open=lib%2Fscript.js&preview

Basically, wait 2 s until the iframe pops up and save the never-ending file download. Then cancel the download from the browser UI and expect the cancel event to be called, but it never happens.

I'm 100% sure that this used to work in Firefox, but I can't get the cancel event to fire anymore. 😕 I also tried my own examples, but I didn't get the "user aborted" console message there either.

jimmywarting avatar Nov 03 '20 18:11 jimmywarting

cancel is not an event. The cancel() method is called after cancel(reason) is executed, if the stream is not locked. The stream becomes locked momentarily after respondWith() is executed. You can step through this with the placement of rs.cancel():

self.addEventListener('activate', (event) => {
  event.waitUntil(clients.claim());
});

var _;
onfetch = async (evt) => {
  console.log(evt.request.url);
  if (evt.request.url.endsWith('ping')) {
    try {
      var rs = new ReadableStream({
        async start(ctrl) {
          return (_ = ctrl);
        },
        async pull() {
          _.enqueue(new Uint8Array([97]));
          await new Promise((r) => setTimeout(r, 250));
        },
        cancel(reason) {
          console.log('user aborted the download', reason);
        },
      });

      const headers = {
        'content-disposition': 'attachment; filename="filename.txt"',
      };
      var res = new Response(rs, { headers });
      // rs.cancel(0);
      evt.respondWith(res);
      // rs.cancel(0);
      setTimeout(() => {
        // rs.cancel(0);
        console.log(rs, res, _);
      }, 3000);
    } catch (e) {
      console.error(e);
    }
  }
};

console.log('que?');

sw.js:11 Uncaught (in promise) TypeError: Failed to execute 'cancel' on 'ReadableStream': Cannot cancel a locked stream at sw.js:11

sw.js:30 Uncaught (in promise) DOMException: Failed to execute 'fetch' on 'WorkerGlobalScope': The user aborted a request.
    at onfetch (https://run.plnkr.co/preview/ckh2vkij700082z6y9i3qqrz3/sw.js:30:21)

The FetchEvent for "https://run.plnkr.co/preview/ckh2vkij700082z6y9i3qqrz3/ping" resulted in a network error response: the promise was rejected.
Promise.then (async)
onfetch @ VM4 sw.js:27
VM4 sw.js:1 Uncaught (in promise) DOMException: The user aborted a request.

user aborted the download 0
run.plnkr.co/preview/ckh2y3vu2000a2z6ym2r13z5o/sw.js:35 TypeError: Failed to construct 'Response': Response body object should not be disturbed or locked
    at onfetch (run.plnkr.co/preview/ckh2y3vu2000a2z6ym2r13z5o/sw.js:27)
onfetch @ run.plnkr.co/preview/ckh2y3vu2000a2z6ym2r13z5o/sw.js:35
VM2582 script.js:6 GET https://run.plnkr.co/preview/ckh2y3vu2000a2z6ym2r13z5o/ping 404
(anonymous) @ VM2582 script.js:6
setTimeout (async)

user aborted the download 0
The FetchEvent for "https://run.plnkr.co/preview/ckh2y9o8r000d2z6yucmc7bz4/ping" resulted in a network error response: a Response whose "bodyUsed" is "true" cannot be used to respond to a request.
Promise.then (async)
onfetch @ sw.js:28
script.js:6 GET https://run.plnkr.co/preview/ckh2y9o8r000d2z6yucmc7bz4/ping net::ERR_FAILED
(anonymous) @ script.js:6
setTimeout (async)
(anonymous) @ script.js:3
Promise.then (async)
(anonymous) @ script.js:2
TypeError: Failed to fetch
(anonymous) @ VM2582 script.js:3


sw.js:31 Uncaught (in promise) TypeError: Failed to execute 'cancel' on 'ReadableStream': Cannot cancel a locked stream
    at sw.js:31
Promise.then (async)
(anonymous) @ VM2582 script.js:2
VM2582 script.js:14 <link rel="stylesheet" href="//fonts.googleapis.com/css?family=Roboto:300,300italic,700,700italic">
<link rel="stylesheet" href="//unpkg.com/normalize.css/normalize.css">
<link rel="stylesheet" href="//unpkg.com/milligram/dist/milligram.min.css">
<h1>Oh dear, something didn't go quite right</h1>
<h2>Not Found</h2>

On the client side, AbortController can be used (see the logged messages above):

var controller, signal;
navigator.serviceWorker.register('sw.js', {scope: './'}).then(reg => {
  setTimeout(() => {
    controller = new AbortController();
    signal = controller.signal;
    fetch('./ping', {signal})
    .then(r => {
      var reader = r.body.getReader();
      reader.read().then(function process({value, done}) {
          if (done) {
            console.log(done);
            return reader.closed;
          }
          console.log(new TextDecoder().decode(value));
          return reader.read().then(process)
      })
    })
    .catch(console.error)

    document.querySelector('h1')
    .onclick = e => controller.abort();

  }, 2000)
})

See also the code at "Is it possible to write to WebAssembly.Memory in PHP that is exported to and read in JavaScript in parallel?" at background.js in an extension, where we stream raw PCM audio (without a definitive end) via fetch() from PHP passthru() and stop the stream using abort(); this can also be achieved at Chromium using QuicTransport https://github.com/guest271314/quictransport without Native Messaging.

const id = 'native_messaging_stream';
let externalPort, controller, signal;

chrome.runtime.onConnectExternal.addListener(port => {
  console.log(port);
  externalPort = port;
  externalPort.onMessage.addListener(message => {
    if (message === 'start') {
      chrome.runtime.sendNativeMessage(id, {}, async _ => {
        console.log(_);
        if (chrome.runtime.lastError) {
          console.warn(chrome.runtime.lastError.message);
        }
        controller = new AbortController();
        signal = controller.signal;
        // wait until bash script completes, server starts
        for await (const _ of (async function* stream() {
          while (true) {
            try {
              if ((await fetch('http://localhost:8000', { method: 'HEAD' })).ok)
                break;
            } catch (e) {
              console.warn(e.message);
              yield;
            }
          }
        })());
        try {
          const response = await fetch('http://localhost:8000?start=true', {
            cache: 'no-store',
            mode: 'cors',
            method: 'get',
            signal
          });
          console.log(...response.headers);
          const readable = response.body;
          readable
            .pipeTo(
              new WritableStream({
                write: async value => {
                  // value is a Uint8Array, postMessage() here only supports cloning, not transfer
                  externalPort.postMessage(JSON.stringify(value));
                },
              })
            )
            .catch(err => {
              console.warn(err);
              externalPort.postMessage('done');
            });
        } catch (err) {
          console.error(err);
        }
      });
    }
    if (message === 'stop') {
      controller.abort();
      chrome.runtime.sendNativeMessage(id, {}, _ => {
        if (chrome.runtime.lastError) {
          console.warn(chrome.runtime.lastError.message);
        }
        console.log('everything should be done');
      });
    }
  });
});

ServiceWorker does not appear to be well-suited for the task. We can stream the file without a ServiceWorker using fetch(); see this answer at "How to solve Uncaught RangeError when download large size json", where a 189 MB file was successfully streamed and downloaded, and, as you indicated, use File System Access, something like

(async () => {
  const dir = await showDirectoryPicker();
  const status = await dir.requestPermission({ mode: 'readwrite' });
  const url = 'https://fetch-stream-audio.anthum.com/72kbps/opus/house--64kbs.opus?cacheBust=1';
  // getFileHandle() is the current spec name (older drafts used getFile())
  const handle = await dir.getFileHandle('house--64kbs.opus', { create: true });
  const wfs = await handle.createWritable();
  const response = await fetch(url);
  const body = response.body;
  console.log('starting write');
  await body.pipeTo(wfs, { preventCancel: true });
  const file = await (await dir.getFileHandle('house--64kbs.opus')).getFile();
  console.log(file);
})();

(BTW, created several screenshot workarounds, two of which are published at the linked repository https://gist.github.com/guest271314/13739f7b0343d6403058c3dbca4f5580)

guest271314 avatar Nov 04 '20 05:11 guest271314

cancel is not an event.

I didn't know what to call it; it is kind of like an event that happens when it gets aborted by the user... but whatever.

ServiceWorker does not appear to be well-suited for the task. We can stream the file without ServiceWorker using fetch() see this answer at How to solve Uncaught RangeError when download large size json where successfully streamed and downloaded a 189MB file, and, as you indicated, use File System Access, something like

I know a service worker isn't the best solution, but it's currently the only/best client-side solution, at least until native file system access becomes more widely adopted in more browsers without an experimental flag. It too comes with its drawbacks:

  • it lacks support for a suggested filename, so you are required to ask for a directory and write the file yourself - or you can let the user choose the name.
  • it isn't associated with any native browser UI element where you can see the progress and cancel the download.

I'm using a service worker to mimic a normal download from a server so that I don't have to build a Blob in memory and later download the whole file at once, which is a wasteful use of memory when downloading large files. Plus, there are better ways to solve that 4-year-old issue: he could just call response.blob() and hope that the browser offloads large blobs to disk (see Chrome's Blob Storage System Design), or, if he really needed a JSON response, call response.json(). It also seems much more performant to do new Response(str).json().then(...) instead of JSON.parse(str).
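
The new Response(str).json() trick can be tried directly (assumes a runtime where Response is global, e.g. modern browsers or Node 18+); whether it actually beats JSON.parse depends on the engine and payload, so benchmark before relying on it:

```javascript
// parse a JSON string by routing it through a Response body
const str = JSON.stringify({ type: 'FeatureCollection', features: [] });

const parsed = new Response(str).json(); // resolves with the parsed object
parsed.then((obj) => console.log(obj.type)); // logs "FeatureCollection"
```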

And, as always: let the server handle the download if the file comes from the cloud, or download the file directly, without blob and fetch:

var a = document.createElement("a")
a.download = "citylots.json"
// mostly only work for same origin
a.href = "/citylots.json"
document.body.appendChild(a)
a.click()

jimmywarting avatar Nov 04 '20 11:11 jimmywarting

For this specific issue you can rearrange the placement of cancel(reason) to get the reason 0 at the cancel(reason) {} method.

On the client side with AbortController, a MessagePort can be utilized to send the message to the ServiceWorker to cancel the stream - before the stream is locked. I'm not seeing releaseLock() defined at Chromium 88. Either way the data is read into memory. If you fetch() client-side you can precisely count progress and abort the request - then just download the file. Using Native Messaging, which is available at Chromium and Firefox, you can write the file or files directly to disk at a shell, or a combination of browser and shell.
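
The per-download MessagePort idea might look roughly like this (an illustrative sketch, not code from this thread; MessageChannel is available in browsers and recent Node): the port flips a flag that pull() checks, and ctrl.close() is legal even while the stream is locked by respondWith(), sidestepping the "cannot cancel a locked stream" error:

```javascript
const { port1, port2 } = new MessageChannel();

let abort = false;
port2.onmessage = (e) => {
  // service-worker side: a message on this download's own port sets the flag
  if (e.data === 'abort') abort = true;
};

const chunks = [];
const rs = new ReadableStream({
  async pull(ctrl) {
    if (abort) {
      // close() works even while respondWith() has the stream locked
      ctrl.close();
      return;
    }
    chunks.push('data');
    ctrl.enqueue(new TextEncoder().encode('data\n'));
    await new Promise((r) => setTimeout(r, 25));
  },
});

// page side: abort only this download (others keep their own ports)
setTimeout(() => port1.postMessage('abort'), 60);

// stand-in consumer for the download; resolves once pull() closes the stream
const drained = rs
  .pipeTo(new WritableStream({ write() {} }))
  .then(() => {
    port1.close();
    return abort && chunks.length > 0;
  });
```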

I'm sure we could build a custom HTML element and implement progress events, either by estimation https://stackoverflow.com/a/41215449 or by counting every byte ("How to read and echo file size of uploaded file being written at server in real time without blocking at both server and client?").

guest271314 avatar Nov 04 '20 12:11 guest271314

The OS froze at plnkr during tests, registering and un-registering ServiceWorkers. We should be able to achieve something similar to what is described here. Will continue testing.

guest271314 avatar Nov 04 '20 14:11 guest271314

The plnkr throws an error at Nightly 84:

Failed to register/update a ServiceWorker for scope ‘https://run.plnkr.co/preview/ckh4cy0qq00071w6pnxnhpr3v/’: 
Storage access is restricted in this context due to user settings or private browsing mode. 
script.js:1:24
Uncaught (in promise) DOMException: The operation is insecure.

Some observations running the below code https://plnkr.co/edit/P2op0uo5YBA5eEEm?open=lib%2Fscript.js at Chromium 88, which might not be the exact requirement, though is a start and extensible.

  • AbortController does not cancel the ReadableStream passed to Response
  • The download-cancel UI does not communicate messages to the ServiceWorker, or vice versa
  • Once the Response is passed to respondWith(), the ReadableStream is locked - AFAICT there does not appear to be any way to unlock the stream for the purpose of calling cancel() without an error being thrown

Utilizing client-side code we can get the progress of bytes enqueued, post messages containing download status to the main thread using MessageChannel or BroadcastChannel, and call ReadableStreamDefaultController.close() and AbortController.abort() when the appropriate message is received from the client document.

index.html

<!DOCTYPE html>

<html>
  <head>
    <script src="lib/script.js"></script>
  </head>

  <body>
    <button id="start">Start download</button>

    <button id="abort">Abort download</button>
  </body>
</html>

lib/script.js

const unregisterServiceWorkers = async (_) => {
  const registrations = await navigator.serviceWorker.getRegistrations();
  for (const registration of registrations) {
    console.log(registration);
    try {
      await registration.unregister();
    } catch (e) {
      throw e;
    }
  }
  return `ServiceWorker's unregistered`;
};

const bc = new BroadcastChannel('downloads');

bc.onmessage = (e) => {
  console.log(e.data);
  if (e.data.aborted) {
    unregisterServiceWorkers()
      .then((_) => {
        console.log(_);
        bc.close();
      })
      .catch(console.error);
  }
};

onload = (_) => {
  document.querySelector('#abort').onclick = (_) =>
    bc.postMessage({ abort: true });

  document.querySelector('#start').onclick = (_) => {
    const iframe = document.createElement('iframe');
    iframe.src = './ping';
    document.body.append(iframe);
  };
};

navigator.serviceWorker.register('sw.js', { scope: './' }).then((reg) => {});

sw.js

self.addEventListener('activate', (event) => {
  event.waitUntil(clients.claim());
});

let rs;

let bytes = 0;

let n = 0;

let abort = false;

let aborted = false;

const controller = new AbortController();

const signal = controller.signal;

signal.onabort = (e) => {
  try {
    console.log(e);
    console.log(source, controller, rs);
    ({ aborted } = e.currentTarget);
    bc.postMessage({ aborted });
  } catch (e) {
    console.error(e);
  }
};

const bc = new BroadcastChannel('downloads');

bc.onmessage = (e) => {
  if (e.data.abort) {
    abort = true;
  }
};

const source = {
  controller: new AbortController(),
  start: async (ctrl) => {
    console.log('starting download');
    return;
  },
  pull: async (ctrl) => {
    ++n;
    if (abort) {
      ctrl.close();
      controller.abort();
    } else {
      const data = new TextEncoder().encode(n + '\n');
      bytes += data.buffer.byteLength;
      ctrl.enqueue(data);
      bc.postMessage({ bytes, aborted });
      await new Promise((r) => setTimeout(r, 50));
    }
  },
  cancel: (reason) => {
    console.log('user aborted the download', reason);
  },
};

onfetch = (evt) => {
  console.log(evt.request);

  if (evt.request.url.endsWith('ping')) {
    rs = new ReadableStream(source);
    const headers = {
      'content-disposition': 'attachment; filename="filename.txt"',
    };

    const res = new Response(rs, { headers, signal });
    console.log(controller, res);

    evt.respondWith(res);
  }
};

console.log('que?');

guest271314 avatar Nov 05 '20 08:11 guest271314

Once the ReadableStream is passed to Response the stream is locked and AFAICT cannot be cancelled; thus await cancel(reason) will throw an error and cancel(reason) {console.log(reason)} will not be executed.
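
That locking behaviour is easy to reproduce in isolation (browsers or Node 18+): acquiring a reader locks the stream, cancel() on the locked stream rejects with a TypeError, and after releaseLock() it succeeds:

```javascript
const rs = new ReadableStream({
  cancel(reason) {
    // only reachable once the stream is unlocked again
    console.log('cancel(reason):', reason);
  },
});

const reader = rs.getReader(); // locks the stream (rs.locked === true)

const result = rs
  .cancel('too early') // rejects: cannot cancel a locked stream
  .then(() => 'unexpectedly cancelled while locked')
  .catch((e) => {
    reader.releaseLock(); // unlock, then cancelling succeeds
    return rs.cancel('user aborted').then(() => e.name);
  });
```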

Response actually does not expect a signal property, per https://bugs.chromium.org/p/chromium/issues/detail?id=823697#c14:

You may have included a signal attribute in your Response constructor options dictionary, but its not read. The spec only supports adding a signal to the Request.

Also, its not clear what the signal on a Response would accomplish. If you want to abort the Response body you can just error the body stream, no?

pipeTo() and pipeThrough() do expect optional signal properties https://streams.spec.whatwg.org/#ref-for-rs-pipe-to%E2%91%A1.
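
A minimal sketch of that signal option (requires WritableStream support, e.g. Chromium or Node 18+): aborting the controller rejects the pipe with an AbortError and, since preventCancel defaults to false, also cancels the source stream:

```javascript
const controller = new AbortController();
let sourceCancelled = false;

const rs = new ReadableStream({
  async pull(ctrl) {
    ctrl.enqueue(new TextEncoder().encode('x'));
    await new Promise((r) => setTimeout(r, 20));
  },
  cancel() {
    // the producer does hear about the abort via pipeTo's shutdown
    sourceCancelled = true;
  },
});

const piped = rs
  .pipeTo(new WritableStream({ write() {} }), { signal: controller.signal })
  .catch((err) => ({ name: err.name, sourceCancelled }));

setTimeout(() => controller.abort(), 60);
```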

Firefox does not support pipeTo() and pipeThrough(). We need to adjust the code to branch on a condition, e.g., 'pipeTo' in readable, and then utilize only getReader() and read() instead of AbortController with WritableStream, which is still behind a flag at Nightly 84.

We tee() a ReadableStream to read bytes and wait for an abort signal or message to cancel the download, by cancelling or closing all streams, initial and derived tee'd pairs. If the paired stream is not aborted, the unlocked pair is passed to Response.

Tested several hundred runs at Chromium 88 to derive the current working example, which still requires independent verification. The main issues encountered when testing were the ServiceWorker "life-cycle" - ServiceWorkers that remain after page reload and re-run code that has changed; determining exactly when all service workers are unregistered; storage messages; and inconsistent behaviour between reloads of the tab.

Running the code at Firefox or Nightly at localhost logs an exception:

Failed to get service worker registration(s): 
Storage access is restricted in this context 
due to user settings or private browsing mode. script.js:2:54
Uncaught (in promise) DOMException: The operation is insecure. script.js:2

Have not yet successfully run the code at Mozilla browsers. The working Chromium version provides a template for how the code can work at Firefox, given similar implementations and support.

From what can be gathered from the entirety of the issue, this is the resulting interpretation of a potential solution to handle both aborting the download and notifying the client of the state of the download. Kindly verify the code produces the expected output and handles the use cases described, based on your own interpretation of the issue above.

index.html

<!DOCTYPE html>

<html>
  <head>
    <script src="lib/script.js"></script>
  </head>

  <body>
    <button id="start">Start download</button>

    <button id="abort">Abort download</button>
  </body>
</html>

lib/script.js

const unregisterServiceWorkers = async (_) => {
  const registrations = await navigator.serviceWorker.getRegistrations();
  for (const registration of registrations) {
    console.log(registration);
    try {
      await registration.unregister();
    } catch (e) {
      throw e;
    }
  }
  return `ServiceWorker's unregistered`;
};

const bc = new BroadcastChannel('downloads');

bc.onmessage = (e) => {
  console.log(e.data);
};

onload = async (_) => {
  console.log(await unregisterServiceWorkers());

  document.querySelector('#abort').onclick = (_) =>
    bc.postMessage({ abort: true });

  document.querySelector('#start').onclick = async (_) => {
    console.log(await unregisterServiceWorkers());
    console.log(
      await navigator.serviceWorker.register('sw.js', { scope: './' })
    );
    let node = document.querySelector('iframe');
    if (node) document.body.removeChild(node);
    const iframe = document.createElement('iframe');
    iframe.onload = async (e) => {
      console.log(e);
    };
    document.body.append(iframe);
    iframe.src = './ping';
  };
};

sw.js

// https://stackoverflow.com/a/34046299
self.addEventListener('install', (event) => {
  // Bypass the waiting lifecycle stage,
  // just in case there's an older version of this SW registration.
  event.waitUntil(self.skipWaiting());
});

self.addEventListener('activate', (event) => {
  // Take control of all pages under this SW's scope immediately,
  // instead of waiting for reload/navigation.
  event.waitUntil(self.clients.claim());
});

self.addEventListener('fetch', (event) => {
  console.log(event.request);

  if (event.request.url.endsWith('ping')) {
    var encoder = new TextEncoder();

    var bytes = 0;

    var n = 0;

    var abort = false;

    let aborted = false;

    var res;

    const bc = new BroadcastChannel('downloads');

    bc.onmessage = (e) => {
      console.log(e.data);
      if (e.data.abort) {
        abort = true;
      }
    };

    var controller = new AbortController();
    var signal = controller.signal;
    console.log(controller, signal);
    signal.onabort = (e) => {
      console.log(
        `Event type:${e.type}\nEvent target:${e.target.constructor.name}`
      );
    };
    var readable = new ReadableStream({
      async pull(c) {
        if (n === 10 && !abort) {
          c.close();
          return;
        }
        const data = encoder.encode(n + '\n');
        bytes += data.buffer.byteLength;
        c.enqueue(data);
        bc.postMessage({ bytes, aborted });
        await new Promise((r) => setTimeout(r, 1000));
        ++n;
      },
      cancel(reason) {
        console.log(
          `readable cancel(reason):${reason.join(
            '\n'
          )}\nreadable ReadableStream.locked:${readable.locked}\na locked:${
            a.locked
          }\nb.locked:${b.locked}`
        );
      },
    });

    var [a, b] = readable.tee();
    console.log({ readable, a, b });

    async function cancelable() {
      if ('pipeTo' in b) {
        var writeable = new WritableStream({
          async write(v, c) {
            console.log(v);
            if (abort) {
              controller.abort();
              try {
                console.log(await a.cancel('Download aborted!'));
              } catch (e) {
                console.error(e);
              }
            }
          },
          abort(reason) {
            console.log(
              `abort(reason):${reason}\nWritableStream.locked:${writeable.locked}`
            );
          },
        });
        return b
          .pipeTo(writeable, { preventCancel: false, signal })
          .catch((e) => {
            console.log(
              `catch(e):${e}\nReadableStream.locked:${readable.locked}\nWritableStream.locked:${writeable.locked}`
            );
            bc.postMessage({ aborted: true });
            return 'Download aborted.';
          });
      } else {
        var reader = b.getReader();
        return reader.read().then(async function process({ value, done }) {
          if (done) {
            if (abort) {
              reader.releaseLock();
              reader.cancel();
              console.log(await a.cancel('Download aborted!'));
              bc.postMessage({ aborted: true });
            }
            return reader.closed.then((_) => 'Download aborted.');
          }

          return reader.read().then(process).catch(console.error);
        });
      }
    }

    var downloadable = cancelable().then((result) => {
      console.log({ result });
      const headers = {
        'content-disposition': 'attachment; filename="filename.txt"',
      };
      try {
        bc.postMessage({ done: true });
        bc.close();
        res = new Response(a, { headers, cache: 'no-store' });
        console.log(res);
        return res;
      } catch (e) {
        console.error(e);
      } finally {
        console.assert(res, { res });
      }
    });

    event.respondWith(downloadable);
  }
});

console.log('que?');

Updated plnkr https://plnkr.co/edit/P2op0uo5YBA5eEEm

guest271314 avatar Nov 08 '20 18:11 guest271314

I have another, older (original) idea in mind that didn't work earlier. Blink (v85) has recently gotten support for streaming upload. Example:

await fetch('https://httpbin.org/post', {
  method: 'POST',
  body: new ReadableStream({
    start(ctrl) {
      ctrl.enqueue(new Uint8Array([97])) // a
      ctrl.close()
    }
  })
}).then(r => r.json()).then(j=>j.data) // "a"

None of the other browsers support it yet, but it can simplify things quite a bit. You can just echo everything back and pipe the download iframe and the ajax request to each other.

// oversimplified (you need two fetch events for this to work)
// canceling the ajax with an AbortSignal or interrupting the readableStream from the main thread can abort the download (aka: iframeEvent).
// canceling the downloading iframe body (from the browser UI) can abort the ajax (aka: ajaxEvent)

iframeEvent.respondWith(new Response(ajaxEvent.request.body, { headers: ajaxEvent.request.headers }))
ajaxEvent.respondWith(new Response(iframeEvent.request.body))
  • You don't need any MessageChannel to transfer chunks (means less overhead)
  • It's tighter coupled as a writable stream pipeline should be (with the bucket) (StreamSaver currently lacks any backpressure algorithm) writable.write(chunk) just resolves directly
  • you don't need to ping the service worker to keep it alive since it don't have to do any more work.

jimmywarting avatar Nov 08 '20 20:11 jimmywarting

Checking `if ('pipeTo' in b)` is insufficient; it should check for both `pipeTo` and `WritableStream` support (Safari has `pipeTo` but no `WritableStream`).

The other thing I don't like is that you use BroadcastChannel. Each download should have its own dedicated MessageChannel - think of downloading two files at the same time... aborting one of them should not abort both or the other (but I get it if it's just for testing (PoC) purposes).
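To make the dedicated-channel idea concrete, here is a minimal hedged sketch (names like `createDownloadChannel` are invented for illustration; this is not StreamSaver's API). Each download gets its own MessageChannel, so aborting one cannot signal another:

```javascript
// Hypothetical helper: one dedicated MessageChannel per download.
// The page keeps port1; port2 would be transferred to the service worker,
// e.g. navigator.serviceWorker.controller.postMessage({ filename }, [port2]).
function createDownloadChannel(onAbort) {
  const { port1, port2 } = new MessageChannel();
  port1.onmessage = (evt) => {
    if (evt.data && evt.data.aborted) onAbort();
  };
  return { page: port1, worker: port2 };
}
```

Two concurrent downloads would call `createDownloadChannel` twice and close only their own ports on abort.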

jimmywarting avatar Nov 08 '20 20:11 jimmywarting

Yes, the code is proof-of-concept. Does the code embody what you are trying to achieve? I first tried messaging with `navigator.serviceWorker.controller` yet could not get the messages to signal back and forth, so I used BroadcastChannel as a stand-in for MessageChannel usage.

Firefox has some form of bug that might have to do with the ServiceWorker not recognizing the iframe as a destination, so the fetch event is not caught for `iframe.src = 'destination_that_does_not_exist'`; instead a 404 response is returned, which could lead to the BroadcastChannel surviving page reload. I just narrowed that down and have a prospective fix for the page-reloaded BroadcastChannel messages surviving https://bugzilla.mozilla.org/show_bug.cgi?id=1676043: `if (n === 10 || abort)` at the reader that is the writer for the download content. I just started reading https://github.com/whatwg/fetch/pull/948, which might provide some useful information, though I do not have a solution for Firefox loading the same page at localhost or the 404 at plnkr for the request `./ping`.

Will check how useful streaming upload is for this case. That has been a limitation for expansion of use cases.

From my perspective the Chromium version achieves what is described here, using both WritableStream with pipeTo() and AbortController, and alternatively only ReadableStreams with tee().

Does the example code achieve the expected result for you at Chromium regarding notifications of an aborted request and download, per this issue?
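For reference, the tee()/pipeTo()/AbortController combination mentioned above can be reduced to a sketch like this (an assumed minimal example, not the plnkr code): one tee() branch would feed the Response, the other is piped with an abort signal so cancellation becomes observable:

```javascript
// Assumed sketch: observe cancellation on one tee() branch.
const source = new ReadableStream({
  start(ctrl) {
    ctrl.enqueue(new TextEncoder().encode('chunk'));
    // left open on purpose; it ends when the pipe is aborted
  },
});

const [forResponse, forMonitor] = source.tee();
const aborter = new AbortController();

const monitoring = forMonitor
  .pipeTo(new WritableStream({ write() { /* discard */ } }), { signal: aborter.signal })
  .catch((err) => err.name); // rejects with an AbortError once aborted

// e.g. triggered from a "cancel" button or a message from the page:
aborter.abort();
```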

guest271314 avatar Nov 08 '20 21:11 guest271314

Question: Why not just write to a Blob or File on the main thread, instead of using a ServiceWorker at all, then download?
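For context, the main-thread alternative the question refers to would look roughly like this (a sketch under the assumption that the whole payload fits in memory; `saveBlob` and `makeDownloadBlob` are invented names):

```javascript
// Buffer all chunks into a Blob; no service worker involved.
function makeDownloadBlob(chunks) {
  return new Blob(chunks, { type: 'application/octet-stream' });
}

// Browser-only: trigger the download via a temporary anchor element.
function saveBlob(blob, filename) {
  if (typeof document === 'undefined') return; // skip outside a browser
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url);
}
```

The trade-off, raised later in this thread, is that the entire content must be held in RAM before the download starts.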

guest271314 avatar Nov 08 '20 21:11 guest271314

Just tried the plnkr again at Firefox 82. The download was successful initially; I then aborted a download and it began throwing errors. The issue could be related to storage.

guest271314 avatar Nov 09 '20 03:11 guest271314

I’m not sure if this is still the case but this issue’s use-case originally had the ability to navigate away from the page while the download continued. Also the downloads I was doing were significantly larger than RAM.

Spongman avatar Nov 09 '20 04:11 Spongman

> I’m not sure if this is still the case but this issue’s use-case originally had the ability to navigate away from the page while the download continued. Also the downloads I was doing were significantly larger than RAM.

Navigating away from the page while download continues appears to be different from knowing if the user cancels a download.

If the download is larger than RAM what is the expected result once RAM capacity is reached by either direct or peripheral correlation with the download process?

Drag and drop the completed zipped folder?

guest271314 avatar Nov 09 '20 15:11 guest271314

@jimmywarting The fetch() with ReadableStream example requires a HTTP/3 server. I tested downloading using QuicTransport briefly yesterday; it works to an appreciable degree, yet because we are sending streams there was an issue where the Python code, used with this bash script

```sh
#!/bin/bash
echo "$1" | cat > written
echo 'ok'
```

only wrote the last part of a larger input stream to the file (possibly because `>` truncates the file on each invocation, where `>>` would append). Still need to test more with that API.

guest271314 avatar Nov 09 '20 15:11 guest271314

I think you misunderstand. The use case worked - navigating away AND huge downloads. The only thing that didn’t work was that the worker wasn’t notified if the download was cancelled.

Spongman avatar Nov 09 '20 17:11 Spongman

Is cancellation of a download at the browser UI specified? Specifications tend to stay away from mandating exact browser UI.

If the code is in control of the input ReadableStream, then when close() is called from inside start() or pull() you can notify the user at the next step. Once the stream is passed to Response the stream is locked: `await cancel(reason)` will throw an error and `cancel(reason) {}` will not be called. If you supply buttons or some programmatic means for the user to communicate with the ServiceWorker and tee() the download stream, you can cancel the paired stream and return the unlocked stream, which will be either the content to download or a disturbed or closed stream if you also close that paired stream. That works at Chromium due to pipeTo() with AbortController signal support. At Firefox the download is still canceled; you just need to keep the BroadcastChannel or other means of communication active long enough to signal the main thread before unregistering the ServiceWorker, which is the most fragile part of the process, as we do not want ServiceWorkers remaining active after the process is complete.
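The locking behavior described above can be illustrated with a minimal assumed example (not StreamSaver code): once something is consuming the stream, the stream reports itself as locked and `cancel()` on it rejects, while the lock holder's own cancel still works:

```javascript
const stream = new ReadableStream({
  start(ctrl) {
    ctrl.enqueue('data');
  },
});

const reader = stream.getReader(); // locks the stream, as Response consumption does
console.log(stream.locked); // true

// cancel() on a locked stream rejects with a TypeError:
stream.cancel('user clicked abort').catch((err) => console.log(err.constructor.name)); // TypeError

// the lock holder can still cancel:
reader.cancel('aborting via the reader');
```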

guest271314 avatar Nov 10 '20 03:11 guest271314

One question I have: what is the expected result of canceling the download with respect to the downloaded content? Download the partial content, or do not download anything?

guest271314 avatar Nov 10 '20 03:11 guest271314