Large File Upload, Chunked Upload, Resume and Instant Upload with Spring Boot and JavaScript
This article demonstrates how to implement small file uploads, large file chunked uploads, breakpoint resume, and instant upload using Spring Boot 3.1.2 on the backend and native JavaScript with spark‑md5 on the frontend, covering configuration, code examples, and practical considerations.
File upload is a common requirement in web projects; while small files can be handled with a simple form, large files (e.g., >1 GB) or slow networks need more robust solutions such as chunked upload, breakpoint resume, and instant upload.
Small File Upload
The backend uses Spring Boot 3.1.2 with JDK 17, and the frontend relies on plain JavaScript and spark-md5.min.js. The Maven pom.xml configuration is shown below:
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.1.2</version>
</parent>
<properties>
<java.version>17</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>

The Java controller for handling a single file upload:
@RestController
public class UploadController {
public static final String UPLOAD_PATH = "D:\\upload\\";
@RequestMapping("/upload")
public ResponseEntity<Map<String, String>> upload(@RequestParam MultipartFile file) throws IOException {
File dstFile = new File(UPLOAD_PATH, String.format("%s.%s", UUID.randomUUID(), StringUtils.getFilename(file.getOriginalFilename())));
file.transferTo(dstFile);
return ResponseEntity.ok(Map.of("path", dstFile.getAbsolutePath()));
}
}

The corresponding HTML page uses a simple form and XMLHttpRequest to show upload progress:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>upload</title>
</head>
<body>
upload
<form enctype="multipart/form-data">
<input type="file" name="fileInput" id="fileInput">
<input type="button" value="Upload" onclick="uploadFile()">
</form>
Upload result: <span id="uploadResult"></span>
<script>
var uploadResult = document.getElementById("uploadResult");
function uploadFile() {
var fileInput = document.getElementById('fileInput');
var file = fileInput.files[0];
if (!file) return;
var xhr = new XMLHttpRequest();
xhr.upload.onprogress = function(event) {
var percent = Math.round(100 * event.loaded / event.total);
uploadResult.innerHTML = 'Upload progress: ' + percent + '%';
};
xhr.onload = function() {
if (xhr.status === 200) {
uploadResult.innerHTML = 'Upload succeeded: ' + xhr.responseText;
}
};
xhr.onerror = function() { uploadResult.innerHTML = 'Upload failed'; };
xhr.open('POST', '/upload', true);
var formData = new FormData();
formData.append('file', file);
xhr.send(formData);
}
</script>
</body>
</html>

Large File Chunked Upload
The frontend splits a file into 1 MB chunks, calculates the whole-file MD5 using spark-md5, and uploads each chunk with additional metadata (total chunks, chunk size, chunk index, MD5). The key JavaScript functions are:
var chunkSize = 1 * 1024 * 1024; // 1 MB
function calculateFileMD5() { /* uses SparkMD5 to compute MD5 */ }
function sliceFile(file) { /* returns an array of Blob chunks */ }
function upload(data) { /* sends a chunk via XMLHttpRequest to /uploadBig */ }
function uploadFile() { /* orchestrates slicing, MD5 check, and per‑chunk upload */ }
function checkFile() { /* calls /checkFile to verify completeness */ }

The backend provides two endpoints:
/uploadBig – receives each chunk along with chunkSize, totalNumber, chunkNumber, and md5. It writes the chunk to the correct offset using RandomAccessFile and records the upload status in a .conf file.
/checkFile – reads the .conf file; if all bits are 1, it verifies the assembled file’s MD5 and returns the final path; otherwise it returns a string of bits indicating which chunks are missing.
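Both endpoints rely on the same chunk arithmetic: chunk i begins at byte offset i * chunkSize, and only the last chunk may be shorter. A minimal standalone sketch of that arithmetic (plain Java, no Spring; the class and method names are illustrative, not part of the controller above):

```java
// Chunk arithmetic behind /uploadBig: how many chunks a file needs and
// where each one lands when written with RandomAccessFile.seek().
public class ChunkMath {
    public static int totalNumber(long fileSize, long chunkSize) {
        // ceiling division: a 2.5 MB file with 1 MB chunks needs 3 chunks
        return (int) ((fileSize + chunkSize - 1) / chunkSize);
    }

    public static long offset(long chunkNumber, long chunkSize) {
        // byte position the server seeks to before writing this chunk
        return chunkNumber * chunkSize;
    }

    public static long length(long chunkNumber, long fileSize, long chunkSize) {
        // every chunk is chunkSize bytes except possibly the last one
        return Math.min(chunkSize, fileSize - chunkNumber * chunkSize);
    }

    public static void main(String[] args) {
        long fileSize = 2_621_952L;       // ~2.5 MB
        long chunkSize = 1024 * 1024;     // 1 MB, matching the frontend
        System.out.println(totalNumber(fileSize, chunkSize)); // 3
        System.out.println(offset(2, chunkSize));             // 2097152
        System.out.println(length(2, fileSize, chunkSize));   // 524800
    }
}
```

The same formulas drive the frontend's sliceFile(): it calls file.slice(offset, offset + length) for each index.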
Key backend code for /uploadBig:
@RestController
public class UploadController {
public static final String UPLOAD_PATH = "D:\\upload\\";
@RequestMapping("/uploadBig")
public ResponseEntity<Map<String, String>> uploadBig(@RequestParam Long chunkSize,
@RequestParam Integer totalNumber,
@RequestParam Long chunkNumber,
@RequestParam String md5,
@RequestParam MultipartFile file) throws IOException {
String dstFile = String.format("%s\\%s\\%s.%s", UPLOAD_PATH, md5, md5, StringUtils.getFilenameExtension(file.getOriginalFilename()));
String confFile = String.format("%s\\%s\\%s.conf", UPLOAD_PATH, md5, md5);
File dir = new File(dstFile).getParentFile();
if (!dir.exists()) {
dir.mkdirs(); // create parent directories too; mkdir() fails if D:\upload does not exist yet
byte[] bytes = new byte[totalNumber];
Files.write(Path.of(confFile), bytes);
}
try (RandomAccessFile raf = new RandomAccessFile(dstFile, "rw");
RandomAccessFile rafConf = new RandomAccessFile(confFile, "rw");
InputStream is = file.getInputStream()) {
raf.seek(chunkNumber * chunkSize);
raf.write(is.readAllBytes());
rafConf.seek(chunkNumber);
rafConf.write(1);
}
return ResponseEntity.ok(Map.of("path", dstFile));
}
@RequestMapping("/checkFile")
public ResponseEntity<Map<String, String>> checkFile(@RequestParam String md5) throws Exception {
String confPath = String.format("%s\\%s\\%s.conf", UPLOAD_PATH, md5, md5);
Path path = Path.of(confPath);
if (!Files.exists(path)) {
// check the .conf file itself, not just its directory, or readAllBytes below would throw
return ResponseEntity.ok(Map.of("msg", "File not uploaded yet"));
}
byte[] status = Files.readAllBytes(path);
StringBuilder sb = new StringBuilder();
for (byte b : status) sb.append(b);
if (!sb.toString().contains("0")) {
// all chunks uploaded – verify MD5 of assembled file
File dir = new File(String.format("%s\\%s", UPLOAD_PATH, md5));
for (File f : dir.listFiles()) {
if (!f.getName().contains("conf")) {
try (InputStream is = new FileInputStream(f)) {
String fileMd5 = DigestUtils.md5DigestAsHex(is);
if (!fileMd5.equalsIgnoreCase(md5)) {
return ResponseEntity.ok(Map.of("msg", "File upload failed"));
}
}
return ResponseEntity.ok(Map.of("path", f.getAbsolutePath()));
}
}
}
// some chunks missing – return bit string
return ResponseEntity.ok(Map.of("chunks", sb.toString()));
}
}

Breakpoint Resume and Instant Upload
When /checkFile returns a bit string with zeros, the frontend re‑uploads only the missing chunks, achieving breakpoint resume. If the MD5 already exists on the server, /checkFile returns the existing file path, enabling an instant ("秒传") upload without transmitting data again.
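The resume step can be driven entirely by that bit string: each character position is a chunk number, and a '0' marks a chunk to send again. A small sketch of that selection logic (an illustrative helper, not code from the article; shown in Java, though the article's frontend would do the same in JavaScript):

```java
import java.util.ArrayList;
import java.util.List;

// Given the status string from /checkFile (one character per chunk,
// '1' = uploaded, '0' = missing), list the chunk numbers to re-upload.
public class ResumePlanner {
    public static List<Integer> missingChunks(String status) {
        List<Integer> missing = new ArrayList<>();
        for (int i = 0; i < status.length(); i++) {
            if (status.charAt(i) == '0') {
                missing.add(i); // chunkNumber to POST to /uploadBig again
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        // chunks 1 and 4 were lost, e.g. after a dropped connection
        System.out.println(missingChunks("101101")); // [1, 4]
    }
}
```

An empty result means every chunk arrived, which is exactly the case where /checkFile verifies the MD5 and returns the final path instead.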
Configuration Tips
Increase Spring Boot limits in application.properties:
spring.servlet.multipart.max-file-size=1024MB
spring.servlet.multipart.max-request-size=1024MB
If using Nginx, set client_max_body_size 1024m; in the http{} block to avoid 413 errors.
Conclusion
The article presents a complete working example of small and large file uploads, covering front-end chunking, MD5 verification, backend random-access writing, status tracking, breakpoint resume, and instant upload; the code can be adapted directly, though the hardcoded upload path should be replaced for real deployments.