
Implementing Fast File Upload: Instant Transfer, Chunked Upload, and Resume Support with Java

This article explains three advanced file‑upload techniques—instant transfer, chunked upload, and resumable upload—and provides complete Java backend implementations using Redis, RandomAccessFile, and MappedByteBuffer to achieve efficient large‑file handling.


File upload is a common requirement, and for large files the traditional whole-file approach is inefficient. This article introduces three advanced upload methods, namely instant transfer (秒传), chunked upload (分片上传), and resumable upload (断点续传), and walks through a complete Java backend implementation.

Instant Transfer (秒传)

Before uploading, the client computes the file's MD5 and sends it to the server. If a file with the same MD5 already exists on the server, the server returns the URL of the existing file immediately, without transferring any data. Any change to the file content changes its MD5 and therefore disables instant transfer.

Core Logic

Upload status is stored in Redis using the file MD5 as the key. A flag indicates whether the upload is complete; if true, subsequent uploads trigger the instant‑transfer path.
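The check itself can be sketched as follows. This is a minimal, self-contained illustration: a plain `HashMap` stands in for the Redis hash (the real implementation later in this article uses `RedisUtil` and the `FILE_UPLOAD_STATUS` key), and the class and method names here are hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class InstantTransferCheck {

    // In-memory stand-in for the Redis hash FILE_UPLOAD_STATUS (md5 -> "true"/"false")
    private final Map<String, String> uploadStatus = new HashMap<>();

    /** Returns true if a file with this MD5 has already been fully uploaded. */
    public boolean isInstantTransfer(String md5) {
        return "true".equals(uploadStatus.get(md5));
    }

    /** Called once the last chunk of a file has been merged successfully. */
    public void markComplete(String md5) {
        uploadStatus.put(md5, "true");
    }

    /** Hex MD5 of the given bytes, as the client would compute before uploading. */
    public static String md5Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        InstantTransferCheck check = new InstantTransferCheck();
        String md5 = md5Hex("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(check.isInstantTransfer(md5)); // first upload: false
        check.markComplete(md5);
        System.out.println(check.isInstantTransfer(md5)); // same content again: true
    }
}
```

The second upload of identical content never touches the network beyond the MD5 lookup, which is the whole point of 秒传.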

Chunked Upload (分片上传)

The file is split into equal‑size parts (Parts) on the client side and each part is uploaded separately. After all parts are received, the server merges them into the original file.

Typical Scenarios

Large file uploads

Unstable network environments where retransmission is likely
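The split itself is simple arithmetic that the client and server must agree on. A minimal sketch (the helper names are illustrative; the offset formula matches `chunkSize * param.getChunk()` used by the server code below):

```java
public class ChunkMath {

    /** Number of chunks needed to cover fileSize bytes (the last chunk may be smaller). */
    static long chunkCount(long fileSize, long chunkSize) {
        return (fileSize + chunkSize - 1) / chunkSize;
    }

    /** Byte offset where the chunk with the given index starts. */
    static long chunkOffset(long chunkSize, long chunk) {
        return chunkSize * chunk;
    }

    /** Actual length of the given chunk; only the last one may be short. */
    static long chunkLength(long fileSize, long chunkSize, long chunk) {
        return Math.min(chunkSize, fileSize - chunkOffset(chunkSize, chunk));
    }

    public static void main(String[] args) {
        long fileSize = 12L * 1024 * 1024 + 1;  // 12 MB + 1 byte
        long chunkSize = 5L * 1024 * 1024;      // 5 MB chunks (illustrative)
        System.out.println(chunkCount(fileSize, chunkSize));     // 3
        System.out.println(chunkLength(fileSize, chunkSize, 2)); // 2097153 (2 MB + 1 byte)
    }
}
```

If the client and server disagree on `chunkSize`, the offsets no longer line up and the merged file is corrupted, which is why the summary below stresses strict coordination.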

Resumable Upload (断点续传)

Resumable upload builds on chunked upload: the file is divided into multiple parts, each of which can be uploaded by a separate thread. If a network failure occurs, the client continues from the last successfully uploaded part instead of restarting from scratch.

Implementation Steps

1. Split the file into fixed‑size chunks on the client and send the chunk index and size with each request.

2. On the server, create a .conf file to record the upload status of each chunk; a byte value of 127 (Byte.MAX_VALUE) marks a completed chunk.

3. When a chunk arrives, write it to the correct offset in the temporary file using either RandomAccessFile or MappedByteBuffer.
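Step 2 can be demonstrated in isolation: the `.conf` file keeps one status byte per chunk, and the upload is complete only when every byte equals `Byte.MAX_VALUE`. A self-contained sketch with a temporary file (independent of the framework classes below; like the article's code, it relies on `setLength` zero-filling the extended region, which holds on common filesystems):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;

public class ConfFileDemo {

    /** Mark chunk `chunk` of `chunks` total as done in the given .conf file. */
    static void markChunkDone(File confFile, int chunks, int chunk) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(confFile, "rw")) {
            raf.setLength(chunks);     // one status byte per chunk
            raf.seek(chunk);
            raf.write(Byte.MAX_VALUE); // 127 marks a completed chunk
        }
    }

    /** True only if every status byte is Byte.MAX_VALUE, i.e. all chunks arrived. */
    static boolean isComplete(File confFile) throws IOException {
        for (byte b : Files.readAllBytes(confFile.toPath())) {
            if (b != Byte.MAX_VALUE) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        File conf = File.createTempFile("upload", ".conf");
        conf.deleteOnExit();
        markChunkDone(conf, 3, 0);
        markChunkDone(conf, 3, 2);
        System.out.println(isComplete(conf)); // false: chunk 1 is still missing
        markChunkDone(conf, 3, 1);
        System.out.println(isComplete(conf)); // true
    }
}
```

On resume, the client asks which bytes are not yet `Byte.MAX_VALUE` and re-sends only those chunks.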

Backend Code

RandomAccessFile implementation

@UploadMode(mode = UploadModeEnum.RANDOM_ACCESS)
@Slf4j
public class RandomAccessUploadStrategy extends SliceUploadTemplate {

    @Autowired
    private FilePathUtil filePathUtil;

    @Value("${upload.chunkSize}")
    private long defaultChunkSize;

    @Override
    public boolean upload(FileUploadRequestDTO param) {
        RandomAccessFile accessTmpFile = null;
        try {
            String uploadDirPath = filePathUtil.getPath(param);
            File tmpFile = super.createTmpFile(param);
            accessTmpFile = new RandomAccessFile(tmpFile, "rw");
            // upload.chunkSize is configured in MB; use it only when the request carries no chunk size
            long chunkSize = Objects.isNull(param.getChunkSize()) ? defaultChunkSize * 1024 * 1024 : param.getChunkSize();
            // each chunk is written at its own offset, so chunks may arrive in any order
            long offset = chunkSize * param.getChunk();
            accessTmpFile.seek(offset);
            accessTmpFile.write(param.getFile().getBytes());
            boolean isOk = super.checkAndSetUploadProgress(param, uploadDirPath);
            return isOk;
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.close(accessTmpFile);
        }
        return false;
    }
}

MappedByteBuffer implementation

@UploadMode(mode = UploadModeEnum.MAPPED_BYTEBUFFER)
@Slf4j
public class MappedByteBufferUploadStrategy extends SliceUploadTemplate {

    @Autowired
    private FilePathUtil filePathUtil;

    @Value("${upload.chunkSize}")
    private long defaultChunkSize;

    @Override
    public boolean upload(FileUploadRequestDTO param) {
        RandomAccessFile tempRaf = null;
        FileChannel fileChannel = null;
        MappedByteBuffer mappedByteBuffer = null;
        try {
            String uploadDirPath = filePathUtil.getPath(param);
            File tmpFile = super.createTmpFile(param);
            tempRaf = new RandomAccessFile(tmpFile, "rw");
            fileChannel = tempRaf.getChannel();
            long chunkSize = Objects.isNull(param.getChunkSize()) ? defaultChunkSize * 1024 * 1024 : param.getChunkSize();
            long offset = chunkSize * param.getChunk();
            byte[] fileData = param.getFile().getBytes();
            // map only this chunk's region of the temp file and write the bytes through the mapping
            mappedByteBuffer = fileChannel.map(FileChannel.MapMode.READ_WRITE, offset, fileData.length);
            mappedByteBuffer.put(fileData);
            boolean isOk = super.checkAndSetUploadProgress(param, uploadDirPath);
            return isOk;
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.freedMappedByteBuffer(mappedByteBuffer);
            FileUtil.close(fileChannel);
            FileUtil.close(tempRaf);
        }
        return false;
    }
}

Core template class

@Slf4j
public abstract class SliceUploadTemplate implements SliceUploadStrategy {

    public abstract boolean upload(FileUploadRequestDTO param);

    protected File createTmpFile(FileUploadRequestDTO param) {
        FilePathUtil filePathUtil = SpringContextHolder.getBean(FilePathUtil.class);
        param.setPath(FileUtil.withoutHeadAndTailDiagonal(param.getPath()));
        String fileName = param.getFile().getOriginalFilename();
        String uploadDirPath = filePathUtil.getPath(param);
        String tempFileName = fileName + "_tmp";
        File tmpDir = new File(uploadDirPath);
        File tmpFile = new File(uploadDirPath, tempFileName);
        if (!tmpDir.exists()) {
            tmpDir.mkdirs();
        }
        return tmpFile;
    }

    @Override
    public FileUploadDTO sliceUpload(FileUploadRequestDTO param) {
        boolean isOk = this.upload(param);
        if (isOk) {
            File tmpFile = this.createTmpFile(param);
            FileUploadDTO fileUploadDTO = this.saveAndFileUploadDTO(param.getFile().getOriginalFilename(), tmpFile);
            return fileUploadDTO;
        }
        String md5 = FileMD5Util.getFileMD5(param.getFile());
        Map<Integer, String> map = new HashMap<>();
        map.put(param.getChunk(), md5);
        return FileUploadDTO.builder().chunkMd5Info(map).build();
    }

    public boolean checkAndSetUploadProgress(FileUploadRequestDTO param, String uploadDirPath) {
        String fileName = param.getFile().getOriginalFilename();
        File confFile = new File(uploadDirPath, fileName + ".conf");
        byte isComplete = 0;
        RandomAccessFile accessConfFile = null;
        try {
            accessConfFile = new RandomAccessFile(confFile, "rw");
            accessConfFile.setLength(param.getChunks());
            accessConfFile.seek(param.getChunk());
            accessConfFile.write(Byte.MAX_VALUE);
            byte[] completeList = FileUtils.readFileToByteArray(confFile);
            isComplete = Byte.MAX_VALUE;
            for (int i = 0; i < completeList.length && isComplete == Byte.MAX_VALUE; i++) {
                isComplete = (byte) (isComplete & completeList[i]);
            }
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.close(accessConfFile);
        }
        return setUploadProgress2Redis(param, uploadDirPath, fileName, confFile, isComplete);
    }

    private boolean setUploadProgress2Redis(FileUploadRequestDTO param, String uploadDirPath, String fileName, File confFile, byte isComplete) {
        RedisUtil redisUtil = SpringContextHolder.getBean(RedisUtil.class);
        if (isComplete == Byte.MAX_VALUE) {
            redisUtil.hset(FileConstant.FILE_UPLOAD_STATUS, param.getMd5(), "true");
            redisUtil.del(FileConstant.FILE_MD5_KEY + param.getMd5());
            confFile.delete();
            return true;
        } else {
            if (!redisUtil.hHasKey(FileConstant.FILE_UPLOAD_STATUS, param.getMd5())) {
                redisUtil.hset(FileConstant.FILE_UPLOAD_STATUS, param.getMd5(), "false");
                redisUtil.set(FileConstant.FILE_MD5_KEY + param.getMd5(), uploadDirPath + FileConstant.FILE_SEPARATORCHAR + fileName + ".conf");
            }
            return false;
        }
    }

    public FileUploadDTO saveAndFileUploadDTO(String fileName, File tmpFile) {
        FileUploadDTO fileUploadDTO = null;
        try {
            fileUploadDTO = renameFile(tmpFile, fileName);
            if (fileUploadDTO.isUploadComplete()) {
                // TODO: persist file metadata
            }
        } catch (Exception e) {
            log.error(e.getMessage(), e);
        }
        return fileUploadDTO;
    }

    private FileUploadDTO renameFile(File toBeRenamed, String toFileNewName) {
        FileUploadDTO fileUploadDTO = new FileUploadDTO();
        if (!toBeRenamed.exists() || toBeRenamed.isDirectory()) {
            log.info("File does not exist: {}", toBeRenamed.getName());
            fileUploadDTO.setUploadComplete(false);
            return fileUploadDTO;
        }
        String ext = FileUtil.getExtension(toFileNewName);
        String p = toBeRenamed.getParent();
        String filePath = p + FileConstant.FILE_SEPARATORCHAR + toFileNewName;
        File newFile = new File(filePath);
        boolean uploadFlag = toBeRenamed.renameTo(newFile);
        fileUploadDTO.setMtime(DateUtil.getCurrentTimeStamp());
        fileUploadDTO.setUploadComplete(uploadFlag);
        fileUploadDTO.setPath(filePath);
        fileUploadDTO.setSize(newFile.length());
        fileUploadDTO.setFileExt(ext);
        fileUploadDTO.setFileId(toFileNewName);
        return fileUploadDTO;
    }
}

Summary

Successful chunked upload requires strict coordination of chunk size between client and server; a file server (e.g., FastDFS, HDFS) is typically needed. In a 4‑core, 8 GB environment, uploading a 24 GB file takes about 30 minutes, with most time spent on client‑side MD5 calculation. For projects that only need simple upload/download, using a cloud object storage service such as Alibaba OSS is recommended, though it may not suit heavy delete/modify scenarios.
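Since client-side MD5 calculation dominates the 30 minutes mentioned above, it is worth computing the digest in streaming fashion with a fixed buffer rather than loading the file into memory. A sketch using the standard `DigestInputStream`:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class StreamingMd5 {

    /** Hex MD5 of a stream; memory use stays constant regardless of file size. */
    static String md5Hex(InputStream in) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (DigestInputStream dis = new DigestInputStream(in, md)) {
            byte[] buffer = new byte[8192];
            while (dis.read(buffer) != -1) {
                // reading through DigestInputStream updates the digest as a side effect
            }
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        InputStream in = new ByteArrayInputStream("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(md5Hex(in)); // 5d41402abc4b2a76b9719d911017c592
    }
}
```

For a 24 GB file the digest still has to read every byte, so the remaining levers are a faster hash (or hashing each chunk separately) rather than a different I/O pattern.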

Links to OSS form‑upload demos and further resources are provided at the end of the original article.

Tags: backend, Redis, file upload, chunked upload, resume upload, RandomAccessFile
Written by

Java Architect Essentials

Committed to sharing quality articles and tutorials to help Java programmers progress from junior to mid-level to senior architect. We curate high-quality learning resources, interview questions, videos, and projects from across the internet to help you systematically improve your Java architecture skills. Follow and reply '1024' to get Java programming resources. Learn together, grow together.
