
Optimizing File Upload Performance with HTML5: Comparison with Flash, Concurrency, Chunking, and Resumable Uploads

Using HTML5 instead of Flash, the article explains how to boost file‑upload speed by compressing or merging files before transfer, employing optimal concurrency levels, splitting files into chunks for fault‑tolerant, resumable and instant uploads, and choosing appropriate chunk sizes to balance overhead and performance.

Baidu Tech Salon

Earlier, at a w3ctech technical exchange, I presented an HTML5-based file upload component. Because the original slides contained very limited information, I have organized the material into this article for interested readers.

HTML5 vs. Flash

Many developers still rely on Flash for file uploads, but HTML5 can not only replace Flash but also provide superior functionality.

The following results are from a functional comparison and performance tests between HTML5 and Flash.

One capability worth highlighting is paste-upload (Ctrl+V), which works when the clipboard contains image data. Images can be copied from screenshot tools, via right-click → copy in browsers, or from chat applications.

Overall, HTML5 offers more features and a clear performance advantage over Flash.

However, if the market share of HTML5‑compatible browsers is low, adopting HTML5 alone may not bring sufficient benefit.

Below is the global browser market share for March 2014 provided by TNW.

From the data, about 64.5% of browsers support HTML5, which, together with its advantages, makes a strong case for using HTML5 for file uploads.

Nevertheless, roughly 35% of browsers still lack HTML5 support. To maintain compatibility, WebUploader implements both HTML5 and Flash runtimes, automatically falling back to Flash when HTML5 is unavailable.

How to Optimize File Upload Performance?

Performance optimization can be approached from two angles: pre‑upload optimization and in‑upload optimization.

Pre‑upload Optimization

Two main ideas:

Reduce file size to lower upload traffic.

Merge small files to reduce the number of requests.

Based on these ideas, we tried several solutions.

Image Compression

High-resolution photos (e.g., 5184×3456, 5 MB) were resized via JavaScript to 1600×1600, reducing the size to 407 KB and saving roughly 4.5 MB of traffic.
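The resampling itself would happen on a canvas via drawImage, then export with canvas.toBlob before upload. The dimension math can be sketched as a pure function (a minimal sketch; the function name is mine, and I assume the resize preserves aspect ratio within a 1600×1600 box):

```javascript
// fitWithin computes target dimensions that preserve aspect ratio
// while fitting inside a maxW x maxH box. In the browser, the result
// would size a <canvas>, drawImage would resample the photo onto it,
// and canvas.toBlob(cb, 'image/jpeg', quality) would produce the
// smaller file to upload.
function fitWithin(width, height, maxW, maxH) {
  const scale = Math.min(maxW / width, maxH / height, 1); // never upscale
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}
```

For the article's example, a 5184×3456 photo constrained to 1600×1600 comes out at 1600×1067.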

ZIP Merging of Small Files

Similar to zipping a folder before copying it to a USB drive, merging small files into a ZIP archive reduces the request count. However, ZIP processing is slow; only on 2G networks does it improve speed, while on 3G or Wi-Fi it actually slows uploads down.

Sprite Merging of Small Images

Using a canvas-based sprite to combine many small images into one large image proved about 10× faster than ZIP packing, yielding roughly a 20% speed boost. This method is limited to image files and requires server-side logic to reconstruct the originals.

Direct Concatenation of File Contents

After reading files into ArrayBuffers, their contents can be concatenated directly and sent as a single Blob or ArrayBuffer, which should be more reliable than the ZIP or sprite approaches.
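The concatenation step can be sketched as follows (a minimal sketch; the function name is mine). In the browser each buffer would come from file.arrayBuffer() or FileReader.readAsArrayBuffer, and the server would need the recorded offsets to split the payload back into the original files:

```javascript
// Concatenate several file buffers into one payload, recording where
// each file starts so the server can reconstruct the originals.
function concatBuffers(buffers) {
  const total = buffers.reduce((n, b) => n + b.byteLength, 0);
  const out = new Uint8Array(total);
  const offsets = [];
  let pos = 0;
  for (const b of buffers) {
    offsets.push(pos);               // start offset of this file
    out.set(new Uint8Array(b), pos); // copy file bytes into place
    pos += b.byteLength;
  }
  return { payload: out, offsets };
}
```

The offsets (plus filenames and lengths) would travel in a header or a separate form field alongside the payload.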

In‑upload Optimization

The main techniques are concurrency and chunking.

Concurrent Upload

Single requests often cannot saturate network bandwidth, so we tested uploading 20 × 1 MB files with varying concurrency levels.

Higher concurrency yields faster uploads, but also increases server load. The optimal concurrency appears to be 3, as gains diminish beyond that point.
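A concurrency limit like this is usually enforced with a small worker pool draining a shared queue. A minimal sketch (uploadOne is a placeholder for the real per-file XMLHttpRequest or fetch call; the function names are mine):

```javascript
// Upload a list of files with at most `concurrency` requests in flight.
async function uploadAll(files, uploadOne, concurrency = 3) {
  const results = new Array(files.length);
  let next = 0;
  async function worker() {
    while (next < files.length) {
      const i = next++;                    // claim the next file index
      results[i] = await uploadOne(files[i]);
    }
  }
  // Start `concurrency` workers; each picks up a new file as soon as
  // its previous upload finishes, keeping the pipe full.
  await Promise.all(
    Array.from({ length: Math.min(concurrency, files.length) }, worker)
  );
  return results;
}
```

Because JavaScript is single-threaded, the `next++` claim is race-free; the workers only interleave at the `await`.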

Typical browser maximum concurrent connections:

Browser        HTTP 1.1    HTTP 1.0
IE 6, 7        2           4
IE 8           6           6
Firefox 2      2           8
Firefox 3      6           6
Safari 3, 4    4           4
Chrome 1, 2    6           ?
Chrome 3       4           4
Chrome 4+      6           ?

Why is concurrency faster?

Possible reasons:

Multiple requests can utilize more bandwidth.

Servers may throttle single connections.

For cross‑origin requests, concurrent uploads can share a single OPTIONS preflight request.

Without concurrency, each file upload triggers its own OPTIONS request; with concurrency, several uploads share one OPTIONS request.

Chunked Upload

Why use chunked upload?

Large files take a long time to upload and are vulnerable to network interruptions. Chunking allows only the failed chunk to be retransmitted, reducing overhead.

What is chunked upload?

Chunked upload splits a large file into smaller pieces and transfers them individually. If a transmission is interrupted, only the affected chunk needs to be resent.
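In the browser the split is done with File.slice (files inherit it from Blob). The byte ranges can be computed ahead of time, so each chunk request knows its index and offsets (a minimal sketch; the function name is mine):

```javascript
// Compute the {start, end} byte range of every chunk of a file.
// Each range would be fed to file.slice(start, end) and uploaded as
// its own request, tagged with `index` so the server can reassemble.
function chunkRanges(fileSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push({
      index: ranges.length,
      start,
      end: Math.min(start + chunkSize, fileSize), // last chunk may be short
    });
  }
  return ranges;
}
```

If any single request fails, only that range is sliced and sent again.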

Advantages of chunked upload

Stronger fault tolerance – only the erroneous chunk is retransmitted.

Pause and resume capability – after a chunk finishes, the client can pause before sending the next chunk.

Leverage concurrency for speedup – multiple chunks can be uploaded in parallel.

More precise progress tracking – each chunk’s successful receipt can be confirmed, improving UI feedback.

Chunking introduces extra network overhead because a single logical request becomes many smaller requests.

Testing three 30 MB files with three concurrent uploads and varying chunk sizes produced the following overall time consumption chart:

Smaller chunks increase total time, especially when chunk size drops to 256 KB.

How to choose an appropriate chunk size?

Consider three factors:

Too small chunks generate many requests and high overhead.

Too large chunks reduce the benefits of chunking.

Server buffer size – chunk size should be a multiple of the server's body buffer (e.g., nginx's client_body_buffer_size).

Recommended chunk sizes are 2 MB–5 MB, depending on the typical file size distribution.
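Putting the three factors together, a chunk-size picker might clamp a desired size into the 2–5 MB band and round it down to a whole number of server buffers. A sketch under those assumptions (the function name and the 128 KB buffer in the example are mine; the 2–5 MB band is the article's):

```javascript
const MB = 1024 * 1024;

// Clamp `desired` into [min, max], then round down to a multiple of
// the server's body buffer so each chunk fills whole buffers.
function pickChunkSize(desired, bufferSize, min = 2 * MB, max = 5 * MB) {
  const clamped = Math.min(Math.max(desired, min), max);
  return Math.max(Math.floor(clamped / bufferSize) * bufferSize, min);
}
```

For example, with a 128 KB server buffer, a desired size of ~3 MB snaps down to exactly 3 MB (24 buffers), and anything above the band caps at 5 MB.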

Resumable Upload

With chunked upload, the server can record successfully received chunks. The client can then skip already uploaded chunks, enabling resumable uploads.

Benefits of resumable upload:

Save bandwidth by avoiding duplicate chunk transmission.

Reduce user waiting time.

Allow recovery after interruptions, even across browser refreshes or device changes.

Identifying a chunk uniquely is essential. Simple schemes like filename + chunk index are insufficient. Using an MD5 hash of the chunk content provides a reliable unique identifier.

Before uploading a chunk, compute its MD5; if the server already has a chunk with that MD5, skip the upload.

Instant Upload (秒传)

The same verification applies to whole files: if the server already has a file whose MD5 matches, the upload can complete instantly without transferring any data.

Files smaller than 10 MB can be processed in under a second; larger files (e.g., 200 MB) take about 13 seconds for MD5 calculation, which is negligible compared to actual transfer time.

Further optimizations:

Overlap verification with current file transfer – while the first file uploads, compute MD5 for the next file.

Prioritize smaller files – processing the smallest file first reduces overall waiting time.

Use partial MD5 – for certain binary formats (e.g., JPEG), hashing only the header segment (<10 MB) can uniquely identify the file, speeding up the process.

References

WebUploader

How to Implement Resumable and Instant Upload with WebUploader

ZIP Solution Research

SPRITE Solution Research

Upload Research

You can follow Baidu Tech Salon on WeChat by searching the ID bdtechsalon.

Tags: performance optimization, concurrency, file upload, HTML5, chunked upload, resumable upload, WebUploader
Written by

Baidu Tech Salon

Baidu Tech Salon, organized by Baidu's Technology Management Department, is a monthly offline event that shares cutting‑edge tech trends from Baidu and the industry, providing a free platform for mid‑to‑senior engineers to exchange ideas.
