
Comprehensive Guide to Large File Upload: Principles, Implementation, and Optimizations

This article explains the challenges of uploading large files over 100 MB, compares them with regular uploads, details the core principles and common problems, and presents complete front‑end chunking and back‑end merging implementations with resumable and instant upload techniques, plus mature solution recommendations.

JD Tech

In web applications, uploading large files (typically over 100 MB) poses significant technical challenges compared to ordinary uploads. This article first defines what constitutes a large file, highlights the differences in size limits and transmission speed, and outlines why standard approaches such as email attachments are insufficient for such data.

Key problems when uploading large files include request timeout limits, single‑transfer size restrictions, network instability, HTTP/1.1 head‑of‑line blocking, and poor user experience due to lack of progress indication.

Front‑end solution: The file is read as binary data, split into fixed‑size chunks (e.g., 1 MB), and each chunk is uploaded individually. The process can be summarized as:

Get file ➡️ Slice ➡️ Upload

Important optimization points are:

Resumable upload (break‑point continuation)

Instant upload (skip already uploaded files)

Display upload progress

Back‑end solution: The server stores each chunk in a directory named after the file's unique identifier. After all chunks are received, a mkfile endpoint merges them into the final file.

Below are the core code snippets.

// Build the identifier; the same file always yields the same value
function createIdentifier(file) {
    return file.name + file.size
}

let file = document.querySelector("[name=file]").files[0];
const LENGTH = 1024 * 1024 * 1; // 1MB
let chunks = slice(file, LENGTH);

// Get the identifier for this file (stable across upload attempts)
let identifier = createIdentifier(file);

let tasks = [];
chunks.forEach((chunk, index) => {
    let fd = new FormData();
    fd.append("file", chunk);
    fd.append("identifier", identifier);
    fd.append("chunkNumber", index + 1);
    fd.append("totalChunks", chunks.length);
    tasks.push(post("/mkblk.php", fd));
});

// After all chunks have uploaded, call the mkfile endpoint
Promise.all(tasks).then(res => {
    let fd = new FormData();
    fd.append("identifier", identifier);
    fd.append("totalChunks", chunks.length);
    post("/mkfile.php", fd).then(res => {
        console.log(res);
    })
});
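The snippet above relies on two helpers, `slice` and `post`, whose bodies the article does not show. One possible sketch, assuming `slice` wraps `Blob.prototype.slice` and `post` wraps `fetch`:

```javascript
// Split a File/Blob into fixed-size chunks using Blob.prototype.slice.
// The last chunk may be smaller than chunkSize.
function slice(file, chunkSize) {
  const chunks = [];
  for (let start = 0; start < file.size; start += chunkSize) {
    chunks.push(file.slice(start, start + chunkSize));
  }
  return chunks;
}

// Minimal POST helper around fetch; assumes the server answers with JSON.
function post(url, formData) {
  return fetch(url, { method: "POST", body: formData }).then(res => res.json());
}
```

`Blob.prototype.slice` only creates a view into the underlying data, so chunking is cheap even for multi‑gigabyte files.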
// mkblk.php endpoint
$identifier = $_POST['identifier'];
$path = './upload/' . $identifier;
if(!is_dir($path)){
    mkdir($path, 0777, true);
}
// Chunks of the same file go into the same directory
$filename = $path . '/' . $_POST['chunkNumber'];
$res = move_uploaded_file($_FILES['file']['tmp_name'], $filename);

// mkfile.php endpoint
$identifier = $_POST['identifier'];
$totalChunks = (int)$_POST['totalChunks'];
$filename = './upload/' . $identifier . '/file.jpg';
for($i = 1; $i <= $totalChunks; ++$i){
    $file = './upload/' . $identifier . '/' . $i;
    $content = file_get_contents($file);
    // Truncate on the first chunk, append afterwards
    $fd = fopen($filename, $i === 1 ? "w" : "a");
    fwrite($fd, $content);
    fclose($fd);
}

Resumable upload is achieved by recording the indices of successfully uploaded chunks (e.g., in localStorage) and skipping them on subsequent attempts. The front‑end functions below get and save these records; the server can use matching strategies to track which chunks it has already received.

// Get the record of already-uploaded chunks
function getUploadSliceRecord(context){
    let record = localStorage.getItem(context)
    if(!record){
        return []
    }else {
        return JSON.parse(record)
    }
}

// Save an uploaded chunk index
function saveUploadSliceRecord(context, sliceIndex){
    let list = getUploadSliceRecord(context)
    list.push(sliceIndex)
    localStorage.setItem(context, JSON.stringify(list))
}
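Wiring these records into the upload loop can be sketched as follows: before building tasks, filter out any chunk whose (1‑based) number is already recorded. The helper name is an assumption, not the article's code:

```javascript
// Return only the chunks that still need uploading, given the list of
// chunk numbers already confirmed by the server or localStorage record.
function pendingChunks(chunks, uploadedNumbers) {
  return chunks
    .map((chunk, index) => ({ chunk, chunkNumber: index + 1 }))
    .filter(({ chunkNumber }) => !uploadedNumbers.includes(chunkNumber));
}
```

On a retry, the upload loop posts only the entries returned here and calls `saveUploadSliceRecord` after each success, so an interrupted transfer resumes where it stopped.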

Instant upload ("秒传") relies on the server recognizing that a file with the same identifier already exists, allowing it to skip the upload and directly return the assembled file information.
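A minimal front‑end sketch of this pre-check, assuming a hypothetical /upcheck.php endpoint that answers with a JSON body like `{ uploaded: true, url: "..." }` (both the endpoint and the response shape are assumptions):

```javascript
// Ask the server whether a file with this identifier already exists.
// If it does, the caller can skip chunking entirely ("instant upload").
async function checkInstantUpload(file, checkUrl, fetchImpl = fetch) {
  const fd = new FormData();
  fd.append("identifier", file.name + file.size); // same identifier as above
  const res = await fetchImpl(checkUrl, { method: "POST", body: fd });
  return res.json();
}
```

In the upload flow, a truthy `uploaded` field short-circuits the slice-and-upload loop and the stored file URL is returned to the user immediately.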

Progress monitoring can be implemented via xhr.upload.onprogress, and upload pause/resume via xhr.abort() combined with the resumable logic.
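A minimal sketch of the progress side (the percentage helper is an assumption; the XHR wiring itself only runs in a browser):

```javascript
// Convert ProgressEvent fields into a whole-number percentage.
function uploadPercent(loaded, total) {
  return total > 0 ? Math.round((loaded / total) * 100) : 0;
}

// Browser-only wiring:
// const xhr = new XMLHttpRequest();
// xhr.upload.onprogress = e => render(uploadPercent(e.loaded, e.total) + "%");
// Pausing is xhr.abort(); resuming re-runs the loop, which skips recorded chunks.
```

With chunked uploads, overall progress is usually the sum of completed chunk sizes plus the in-flight chunk's `loaded`, divided by the total file size.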

While many mature SDKs (e.g., Qiniu, Tencent Cloud) already provide these capabilities, understanding the underlying principles remains valuable. The article recommends the Vue component vue-simple-uploader (compatible with Vue 2 and Vue 3) as a ready‑made solution.

Conclusion: The article introduced the definition of large files, compared them with regular uploads, analyzed core principles and challenges, presented a simple chunked upload implementation with resumable and instant upload features, and suggested a mature Vue uploader component for production use.

Tags: backend, frontend, JavaScript, PHP, chunked upload, resumable upload, large file upload