
Zipping 1 GB+ and splitting into chunks is slow - is there a way to speed it up?

Open · alexandis opened this issue 1 year ago • 0 comments

I see no way to create a multi-volume archive, so I ended up creating a bunch of chunks to upload to the server (and then combining and unzipping them there).
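
For context, reassembly on the server only requires concatenating the chunks byte-wise in index order to get a valid zip back before extraction. A minimal Node sketch of that step (the function and path names are hypothetical, purely for illustration):

  // Hypothetical server-side reassembly (Node/TypeScript): chunks uploaded as
  // `${chunkId}_${index}` are concatenated, in index order, into one valid zip.
  import { createReadStream, createWriteStream } from 'fs';

  async function reassembleChunks(sortedChunkPaths: string[], zipPath: string): Promise<void> {
    const out = createWriteStream(zipPath);
    for (const chunkPath of sortedChunkPaths) {
      await new Promise<void>((resolve, reject) => {
        const src = createReadStream(chunkPath);
        src.on('error', reject);
        src.on('end', resolve);
        src.pipe(out, { end: false }); // keep the output file open across chunks
      });
    }
    out.end();
  }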

Here is the client code (I'm uploading files whose names contain subfolders, e.g. '/subfolder/subsubfolder/filename.ext', and I need to recreate the folder structure on the server):

  // Assumed imports (not shown in the snippet):
  //   import JsZip from 'jszip';
  //   import { Observable, concat } from 'rxjs';
  //   import { map } from 'rxjs/operators';
  //   import { HttpEvent, HttpEventType, HttpProgressEvent } from '@angular/common/http';
  // `maxChunkBytes` is a constant defined elsewhere.
  uploadCases(files: File[], equipmentCode: string, chunkId: string): Observable<ChunkDto> {
    return new Observable<ChunkDto>(observer => {
      this.zip = new JsZip();
  
      files.forEach(file => {
        // Strip any leading slash so jszip stores a relative path and
        // createFolders can rebuild the directory tree.
        this.zip.file(file.name.replace(/^\/+/, ''), file, { dir: false, createFolders: true });
      });
  
      const chunks: Blob[] = [];
  
      // STORE skips compression entirely; streamFiles lowers peak memory while
      // generating, but the result is still one ArrayBuffer for the whole archive.
      this.zip.generateAsync({ type: 'arraybuffer', streamFiles: true, compression: 'STORE' }).then((arrayBuffer: ArrayBuffer) => {
        const totalSize = arrayBuffer.byteLength;
        let offset = 0;
        // Slice the finished archive into blobs of at most maxChunkBytes each.
        while (offset < totalSize) {
          const chunkSize = Math.min(totalSize - offset, maxChunkBytes);
          const chunk = new Blob([arrayBuffer.slice(offset, offset + chunkSize)], { type: 'application/zip' });
          chunks.push(chunk);
          offset += chunkSize;
        }
  
        const uploadChunk = (chunk: Blob, equipmentCode: string, chunkIndex: number): Observable<ChunkDto> => {
          const formData = new FormData();
          formData.append('chunk', chunk);
          formData.append('equipmentCode', equipmentCode);
          formData.append('chunkIdWithIndex', `${chunkId}_${chunkIndex}`);
          return this.restService.request<FormData, HttpEvent<ChunkDto>>({
            method: 'POST',
            url: '/cases/upload',
            body: formData,
            reportProgress: true
          },
          { apiName: this.apiName, skipHandleError: true, observe: Rest.Observe.Events })
          .pipe(
              map(event => {
                if (event.type === HttpEventType.UploadProgress) {
                  return { uploadedBytes: (event as HttpProgressEvent).loaded, totalBytes: chunk.size, complete: false, id: chunkId } as ChunkDto;
                }
                else if (event.type === HttpEventType.Response) {
                  return { uploadedBytes: chunk.size, totalBytes: chunk.size, complete: true, id: chunkId } as ChunkDto;
                }
                else {
                  return { uploadedBytes: 0, totalBytes: 0, complete: false, id: '' } as ChunkDto;
                }
              })
          );
        };
  
        // Sequentially upload each chunk
        concat(...chunks.map((chunk, index) => uploadChunk(chunk, equipmentCode, index))).subscribe(observer);
      }).catch(err => observer.error(err)); // surface generateAsync failures to the subscriber
    });
  }

However, it is pretty slow on large archives. Am I doing something wrong, and is there a way to speed it up?
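
One idea I'm looking at: jszip also has generateInternalStream, which emits the archive in pieces while it is being generated instead of buffering the whole 1 GB+ result first, so earlier chunks could upload while later ones are still being produced. A rough, untested sketch (it assumes uploadChunk is factored out as a method and reuses maxChunkBytes; chunk sizes become approximate rather than exact):

  // Rough sketch (untested): stream archive generation with generateInternalStream
  // so uploading can overlap with zipping instead of waiting for one huge buffer.
  streamAndUpload(equipmentCode: string, chunkId: string): void {
    let pending: Uint8Array[] = [];
    let pendingBytes = 0;
    let index = 0;

    const flush = () => {
      const blob = new Blob(pending, { type: 'application/zip' });
      pending = [];
      pendingBytes = 0;
      // Fired concurrently here for brevity; real code should sequence the
      // uploads (e.g. with concat) as in the snippet above.
      this.uploadChunk(blob, equipmentCode, index++).subscribe();
    };

    this.zip.generateInternalStream({ type: 'uint8array', streamFiles: true, compression: 'STORE' })
      .on('data', (piece: Uint8Array) => {
        pending.push(piece);
        pendingBytes += piece.length;
        if (pendingBytes >= maxChunkBytes) flush(); // chunks are >= threshold, not exact
      })
      .on('error', err => console.error(err))
      .on('end', () => { if (pendingBytes > 0) flush(); })
      .resume(); // the stream starts paused; resume() begins emitting
  }

Would something along these lines actually be faster, or is the bottleneck elsewhere?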

alexandis · Jan 27 '24