
Best Practices for Large File Upload with Chunking, Hashing, and Concurrent Requests

This article explains how to efficiently upload large files (e.g., 1.5 GB) from the browser by calculating a file hash with spark‑md5, checking server status for instant or resumable uploads, slicing the file into chunks, uploading them concurrently, and finally merging the chunks, while also offering optimization tips such as using Web Workers and adjusting chunk size.


Uploading large files has become a common requirement on the modern web; this article shares practical experience for efficiently handling large file uploads from a front-end page to an OSS (object storage) backend.

The main goals are to upload a 1.5 GB file quickly, support breakpoint resume (pausing and continuing a transfer) when the network is unreliable, and enable instant upload, i.e. skipping the transfer entirely, when an identical file already exists on the server.

First, the file’s content hash is calculated using spark-md5 to uniquely identify the file. The hash is sent to the server to query the file’s storage status, which can be “not uploaded”, “partially uploaded”, or “already uploaded”. Based on the response, different upload strategies are chosen.
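As a sketch, the branch on the server's response might look like this. The status values and the function name are illustrative assumptions, not the article's actual API contract:

```javascript
// Illustrative only: map a hypothetical server status to an upload strategy.
// The 'state' values ('uploaded', 'partial', 'none') and 'missingChunks'
// field are assumed response shapes, not the article's real protocol.
function chooseUploadStrategy(status) {
  switch (status.state) {
    case 'uploaded': // file already on the server: instant upload, send nothing
      return { action: 'skip' }
    case 'partial':  // resume: only send the chunk indexes the server lacks
      return { action: 'resume', missing: status.missingChunks }
    case 'none':     // fresh upload: send every chunk
    default:
      return { action: 'full' }
  }
}
```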

Hash calculation and file slicing are implemented as follows:

```javascript
import SparkMD5 from 'spark-md5'

const CHUNK_SIZE = 1024 * 1024 * 5 // 5 MB

function sliceFile2Chunk(file) { /* ... */ }
function getFileHash(file) { /* ... */ }
```
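Filling in those skeletons, a minimal sketch might look like the following. The bodies are a reconstruction, not the article's exact code; `getFileHash` assumes spark-md5's documented incremental `SparkMD5.ArrayBuffer` API (`append`/`end`):

```javascript
// Sketch: slice a File/Blob into fixed-size chunks (the last may be smaller).
const CHUNK_SIZE = 1024 * 1024 * 5 // 5 MB per chunk

function sliceFile2Chunk(file) {
  const chunks = []
  for (let start = 0; start < file.size; start += CHUNK_SIZE) {
    chunks.push(file.slice(start, start + CHUNK_SIZE))
  }
  return chunks
}

// Hash the file chunk by chunk so memory use stays bounded even for 1.5 GB.
// Requires: import SparkMD5 from 'spark-md5'
async function getFileHash(file) {
  const spark = new SparkMD5.ArrayBuffer()
  for (const chunk of sliceFile2Chunk(file)) {
    spark.append(await chunk.arrayBuffer()) // feed each chunk incrementally
  }
  return spark.end() // hex digest uniquely identifying the file content
}
```

Hashing incrementally matters here: reading the whole 1.5 GB file into one ArrayBuffer before hashing would risk exhausting browser memory.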

After receiving the server status, a list of chunks that still need to be uploaded is built:

```javascript
function createWait2UploadChunks(res) { /* ... */ }
function formateChunk(item, index) { /* ... */ }
```
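A pure-function reconstruction of that step is sketched below. It takes the chunk list as an explicit argument (the original may close over it), and the `uploadedIndexes` field is an assumption about the response shape:

```javascript
// Illustrative: pair each chunk with its index so the server can reassemble
// the file in order during the merge step.
function formateChunk(chunk, index) {
  return { index, chunk }
}

// Keep only the chunks the server does not already have.
function createWait2UploadChunks(chunks, res) {
  const uploaded = new Set(res.uploadedIndexes || []) // assumed field name
  return chunks
    .map(formateChunk)
    .filter(({ index }) => !uploaded.has(index))
}
```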

Chunks are uploaded concurrently with a maximum of five HTTP requests in flight. Progress is tracked and, once all chunks are uploaded, a merge request is sent to the server to assemble the file:

```javascript
async function uploadFileChunk(chunkFormData) { /* ... */ }
async function mergeChunks() { /* ... */ }
```
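The five-request cap can be implemented with a small worker-pool pattern. This is a generic sketch, not the article's code; `uploadFileChunk` is assumed to return a Promise (e.g. a `fetch` posting one chunk's FormData):

```javascript
// Sketch of a concurrency limiter: at most `limit` uploads in flight at once.
// Each "worker" repeatedly claims the next unclaimed index and uploads it;
// claiming is race-free because JavaScript runs this code single-threaded.
async function uploadWithConcurrency(items, uploadFileChunk, limit = 5) {
  const results = []
  let next = 0
  async function worker() {
    while (next < items.length) {
      const i = next++ // claim the next chunk index
      results[i] = await uploadFileChunk(items[i])
    }
  }
  const workerCount = Math.min(limit, items.length)
  await Promise.all(Array.from({ length: workerCount }, worker))
  return results // in original chunk order, regardless of completion order
}
```

Compared with firing all requests at once, the pool keeps memory and socket usage bounded; compared with uploading serially, it overlaps network latency across five requests.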

Optimizations include using a Web Worker to compute the hash without blocking the UI, adjusting the chunk size (e.g., 5 MB) according to network bandwidth and server resources, and exploring multi‑client simultaneous uploads to further reduce total upload time.
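For the chunk-size adjustment, one possible heuristic (not from the article) is to size each chunk so a single request takes a few seconds at the measured bandwidth, clamped to a sane range:

```javascript
// Illustrative heuristic only: pick a chunk size from a rough bandwidth
// estimate so each chunk upload takes ~TARGET_SECONDS, clamped between
// 1 MB and 20 MB. All constants here are assumptions to tune in practice.
function pickChunkSize(bytesPerSecond) {
  const TARGET_SECONDS = 3
  const MIN = 1024 * 1024        // 1 MB floor: avoid per-request overhead
  const MAX = 1024 * 1024 * 20   // 20 MB ceiling: cap retry cost on failure
  const ideal = bytesPerSecond * TARGET_SECONDS
  return Math.min(MAX, Math.max(MIN, ideal))
}
```

Smaller chunks make resume cheaper (less lost work per failed request) at the cost of more HTTP overhead; larger chunks do the opposite.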

The article concludes that by splitting a large file into smaller chunks, uploading them concurrently, and leveraging hash‑based status checks, developers can achieve fast, reliable, and resumable large file uploads.

Tags: frontend, concurrency, Web Worker, hashing, chunked upload, large file upload, resume upload
Written by JD Tech

Official JD technology sharing platform. All the cutting‑edge JD tech, innovative insights, and open‑source solutions you’re looking for, all in one place.
