
Chunked File Upload with Proxy Control and Web Worker in Frontend Development

This article explains how to split large files into chunks, control concurrent uploads with a Proxy‑based queue, and offload the upload process to a Web Worker to improve reliability and keep the main thread responsive, including practical code examples and bundling tips.

Beike Product & Technology

When uploading large files in the browser, sending the whole file in a single XHR request often leads to long transfer times and a high failure probability. The recommended solution is to slice the file into smaller chunks and upload them sequentially or in controlled parallel batches.

Using an input element with the multiple attribute provides a FileList. Each File inherits from Blob, so its slice method can be used to create equal-sized pieces (the last piece may be smaller). The following function splits a file into chunks:

const FILE_PER_PIECE_SIZE = 1024 * 1024 * 5; // minimum piece size: 5 MB

const splitFile = (file, pieceSize) => {
  let start = 0;
  let end;
  let index = 0;
  const { size = 0 } = file || {};
  if (pieceSize < FILE_PER_PIECE_SIZE) {
    pieceSize = FILE_PER_PIECE_SIZE;
  }
  // ceil, not floor: a trailing partial piece still counts as a piece
  const totalPieces = Math.ceil(size / pieceSize);
  const chunks = [];
  while (start < size) {
    end = Math.min(start + pieceSize, size);
    if (index === totalPieces - 1) {
      // final piece: take everything that remains
      chunks.push({ chunk: file.slice(start), index: index + 1 });
      break;
    }
    chunks.push({ chunk: file.slice(start, end), index: index + 1 });
    start = end;
    index++;
  }
  return {
    total: chunks.length,
    chunks,
  };
};
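As a quick sanity check of the slicing behavior above, the following sketch (using the Blob global, available in browsers and Node 18+) shows that slice clamps the end offset, so only the last piece comes out smaller:

```javascript
// A 12-byte blob cut into 5-byte pieces yields sizes 5, 5, 2.
const blob = new Blob([new Uint8Array(12)]);
const pieceSize = 5;
const sizes = [];
for (let start = 0; start < blob.size; start += pieceSize) {
  // slice() clamps `start + pieceSize` to blob.size on the last iteration
  sizes.push(blob.slice(start, start + pieceSize).size);
}
console.log(sizes); // [ 5, 5, 2 ]
```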

After splitting, each chunk must be uploaded successfully. A straightforward way is to use Promise.all to fire all chunk requests in parallel, but browsers limit the number of simultaneous connections per domain (e.g., Chrome allows only six). Sending all chunks at once can cause many requests to stay in a pending state, leading to time‑outs and blocking other resources.
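For comparison, the concurrency cap can also be expressed as a small promise pool. This is an illustrative sketch, not part of the original implementation; runLimited is a hypothetical helper that runs at most `limit` task factories at once:

```javascript
// Run at most `limit` of the `tasks` factories concurrently;
// resolves with results in the original task order.
const runLimited = async (tasks, limit) => {
  const results = [];
  let next = 0;
  const worker = async () => {
    while (next < tasks.length) {
      const i = next++; // claim the next index before awaiting
      results[i] = await tasks[i]();
    }
  };
  // spawn `limit` workers that drain the shared queue
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
};

// e.g. ten simulated chunk uploads, at most three in flight
const demo = runLimited(
  Array.from({ length: 10 }, (_, i) => () => Promise.resolve(i)),
  3
);
```

The Proxy-based queue described next achieves the same cap while also tracking per-piece success and failure.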

To limit concurrency, a Proxy‑based uploader is introduced. The Proxy tracks completed, failed, and pending pieces, sending a new piece only when a slot becomes free. The core implementation looks like this:

// upload() and checkUploadStatus() are the project's request helpers
// (their implementations are omitted in the original article).
const createUploaderProxy = (cb) => {
  const tmp = Object.create(null);
  tmp.failed = [];     // pieces that failed to upload
  tmp.done = [];       // pieces uploaded successfully
  tmp.pieceList = [];  // pieces waiting for a free slot
  tmp.multiConfig = Object.create(null);
  tmp.total = 0;
  return new Proxy({ ...tmp }, {
    set(target, prop, value, receiver) {
      if (prop === "done" || prop === "failed") {
        // assigning an empty array resets the list
        if (Array.isArray(value) && !value.length) {
          target[prop] = value;
          return true;
        }
        // otherwise the assignment records one finished piece
        target[prop].push(value);
        // a slot is now free: dispatch the next queued piece
        if (target.pieceList.length) {
          const next = target.pieceList.shift();
          uploader.singlePiece(next, target.multiConfig, receiver);
          return true;
        }
        if (target.failed.length + target.done.length === target.total) {
          const fName = target.name;
          if (target.done.length === target.total) {
            checkUploadStatus(target.done, target.multiConfig)
              .then((url) => {
                cb(url, true);
                uploader.files[fName] = url;
              })
              .catch(() => {
                cb(fName, false);
              });
          } else {
            cb(fName, false);
          }
        }
        return true;
      }
      target[prop] = value;
      return true;
    },
  });
};
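The dispatch trick in the set trap can be seen in isolation with a stripped-down queue. This is a sketch: makeQueue and startTask are hypothetical stand-ins for createUploaderProxy and uploader.singlePiece, completing synchronously instead of performing a real upload:

```javascript
// Assigning to `done` records a finished piece and, while work remains,
// starts the next queued piece through the same proxy.
const log = [];
const startTask = (piece, proxy) => {
  log.push(`start:${piece}`);
  proxy.done = piece; // completes synchronously for the demo
};
const makeQueue = (pieces, onAllDone) => {
  const state = { done: [], pieceList: [...pieces], total: pieces.length };
  return new Proxy(state, {
    set(target, prop, value, receiver) {
      if (prop === "done") {
        target.done.push(value);
        if (target.pieceList.length) {
          // a slot freed up: dispatch the next piece
          startTask(target.pieceList.shift(), receiver);
        } else if (target.done.length === target.total) {
          onAllDone(target.done);
        }
        return true;
      }
      target[prop] = value;
      return true;
    },
  });
};

const queue = makeQueue(["a", "b", "c"], (done) => log.push(`all:${done.join("")}`));
startTask(queue.pieceList.shift(), queue); // prime the first slot
console.log(log); // [ 'start:a', 'start:b', 'start:c', 'all:abc' ]
```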

const uploader = Object.create(null);
uploader.files = Object.create(null);

uploader.load = (
  file,
  config = {
    accepts: ["video/mp4", "video/ogg", "video/webm", "video/quicktime"],
    pieceSize: 1024 * 1024 * 5,
    waterFlow: 6,
  },
  cb = () => {}
) => {
  const { name } = file;
  const fileUploader = createUploaderProxy(cb);
  // populate the proxy, then kick off the first `waterFlow` pieces;
  // the set trap dispatches the remainder as slots free up
  const { total, chunks } = splitFile(file, config.pieceSize);
  fileUploader.name = name;
  fileUploader.total = total;
  fileUploader.multiConfig = config;
  fileUploader.pieceList = chunks;
  uploader.files[name] = { config, cb, uploader: fileUploader };
  fileUploader.pieceList
    .splice(0, config.waterFlow)
    .forEach((piece) => uploader.singlePiece(piece, config, fileUploader));
};

uploader.singlePiece = (piece, fConfig, fileUploader) => {
  upload(piece, fConfig)
    .then((ret) => {
      // assignment routes through the Proxy set trap above
      fileUploader.done = ret;
    })
    .catch(() => {
      fileUploader.failed = piece;
    });
};

To further isolate the heavy upload work from the UI thread, the same logic is moved into a Web Worker. Workers cannot access the DOM or window, but they can perform network requests. The worker version mirrors the Proxy uploader and communicates results back to the main thread via postMessage:

/**
 * worker.js
 */
const createUploaderProxy = () => {
  const tmp = Object.create(null);
  tmp.failed = [];
  tmp.done = [];
  tmp.pieceList = [];
  tmp.multiConfig = Object.create(null);
  tmp.total = 0;
  return new Proxy({ ...tmp }, {
    set(target, prop, value, receiver) {
      if (prop === "done" || prop === "failed") {
        if (Array.isArray(value) && !value.length) {
          target[prop] = value;
          return true;
        }
        target[prop].push(value);
        if (target.pieceList.length) {
          const next = target.pieceList.shift();
          uploader.singlePiece(next, target.multiConfig, receiver);
          return true;
        }
        if (target.failed.length + target.done.length === target.total) {
          const fName = target.name;
          if (target.done.length === target.total) {
            checkUploadStatus(target.done, target.multiConfig)
              .then((url) => {
                postMessage({ url, success: true });
                uploader.files[fName] = url;
              })
              .catch(() => {
                postMessage({ name: fName, success: false });
              });
          } else {
            postMessage({ name: fName, success: false });
          }
        }
        return true;
      }
      target[prop] = value;
      return true;
    },
  });
};

// splitFile, upload, and checkUploadStatus must also be defined
// (or imported) inside the worker's scope.
const uploader = Object.create(null);
uploader.files = Object.create(null);

uploader.load = (
  file,
  config = {
    accepts: ["video/mp4", "video/ogg", "video/webm", "video/quicktime"],
    pieceSize: 1024 * 1024 * 5,
    waterFlow: 6,
  }
) => {
  const { name } = file;
  const fileUploader = createUploaderProxy();
  // populate the proxy, then kick off the first `waterFlow` pieces;
  // the set trap dispatches the remainder as slots free up
  const { total, chunks } = splitFile(file, config.pieceSize);
  fileUploader.name = name;
  fileUploader.total = total;
  fileUploader.multiConfig = config;
  fileUploader.pieceList = chunks;
  uploader.files[name] = { config, uploader: fileUploader };
  fileUploader.pieceList
    .splice(0, config.waterFlow)
    .forEach((piece) => uploader.singlePiece(piece, config, fileUploader));
};

uploader.singlePiece = (piece, fConfig, fileUploader) => {
  upload(piece, fConfig)
    .then((ret) => {
      fileUploader.done = ret;
    })
    .catch(() => {
      fileUploader.failed = piece;
    });
};

onmessage = (event) => {
  const { data: { isFile, file } = {} } = event;
  if (isFile && file) {
    uploader.load(file);
  }
};

In the main thread, the worker is instantiated and its messages are handled. File objects are structured-cloneable, so the selected file can be passed to the worker directly:

if (window.Worker) {
  // create worker
  window.uploadWorker = new Worker("./worker.js");
  // listen for results
  window.uploadWorker.onmessage = (event) => {
    console.log(event.data);
  };
}

// input change handler
const onChange = (event) => {
  // the selected files live on event.target.files (a FileList)
  const [file] = event.target.files;
  window.uploadWorker.postMessage({ file, isFile: true });
};
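The worker's result messages follow the shape of the postMessage calls in worker.js: { url, success: true } on success and { name, success: false } on failure. A small handler (handleResult is a hypothetical name, and the paths below are illustrative values) could replace the console.log above:

```javascript
// Route worker results by the `success` flag set in worker.js.
const uploadedUrls = [];
const failedFiles = [];
const handleResult = (data) => {
  if (data.success) {
    uploadedUrls.push(data.url);   // merged-file URL returned by the server
  } else {
    failedFiles.push(data.name);   // original file name, for retry or a toast
  }
};

// simulated messages, matching the two postMessage shapes in worker.js
handleResult({ url: "/static/demo.mp4", success: true });
handleResult({ name: "broken.mp4", success: false });
```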

Because a worker script must be served from the same origin as the page, a bundler plugin such as worker-plugin can emit the worker as a separate file during the build. The worker is then referenced through the plugin's inline loader:

import "worker-plugin/loader?name=upload!./uploadWorker.js";

When the project also contains a Node.js backend, the compiled worker can be linked into the server’s static directory for easier debugging:

ln client/src/pages/videoUpload/uploadWorker.js /server/src/static/worker.js

Summary : By slicing large files, limiting concurrent uploads with a Proxy‑based queue, and moving the heavy upload logic into a Web Worker, the solution avoids long‑lasting pending requests, prevents main‑thread blockage, and improves overall upload reliability and user experience.

Written by Beike Product & Technology

As Beike's official product and technology account, we are committed to building a platform for sharing Beike's product and technology insights, targeting internet/O2O developers and product professionals. We share high-quality original articles, tech salon events, and recruitment information weekly. Welcome to follow us.
