
Web Implementation of Transparent Video Gift Animations Using Canvas and WebGL

The article describes how a live‑room video‑gift feature originally built for mobile was ported to a web client: separate color and alpha video streams are extracted and composited on a canvas, and the per‑pixel blending is then migrated to WebGL shaders, which cuts CPU usage dramatically and raises frame rates to about 60 FPS. It closes with further optimisations such as pre‑loading, mobile support, and possible MSE or WebAssembly approaches.

Tencent Music Tech Team

In 2019 the K‑Song mobile client introduced a video‑gift animation capability in live rooms, using specially prepared video resources with a separately exported alpha channel, based on the Penguin eSports VAPX solution. This enabled fine‑grained video animation playback and solved the problem of transparent video backgrounds.

When the new PC broadcaster client was built, its main UI was rebuilt with web technologies. Because no web SDK for gift animations existed at the time, the web side reused the existing animation resources by combining video playback with canvas/WebGL rendering.

0. Business Process

Upload multiple video samples to the configuration platform and fill in metadata (type, orientation, size, etc.).

The backend generates a gift ID, stores the video on a CDN, and records the entry in the database.

The front‑end fetches the video URL and configuration parameters via an API.

The front‑end triggers the playback of the gift animation.
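The front‑end side of this flow can be sketched as follows. The endpoint path and field names are hypothetical — the article only says the front‑end fetches the video URL and configuration parameters via an API:

```javascript
// Hypothetical endpoint path -- the article does not specify the API shape.
function giftConfigUrl(giftId) {
  return `/api/gifts/${encodeURIComponent(giftId)}`;
}

// Fetch the gift's configuration (video URL, type, orientation, size, ...).
async function fetchGiftConfig(giftId) {
  const res = await fetch(giftConfigUrl(giftId));
  if (!res.ok) throw new Error(`gift config request failed: ${res.status}`);
  return res.json();
}
```

The returned configuration supplies the CDN video URL that the playback step below consumes.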

1. Implementation Logic

Each video file contains two parts: the original animation and a separately exported alpha channel (with a different size/orientation). Frame by frame, the RGB values of the animation and the grayscale values of the alpha region are combined into rgba(R, G, B, A) pixels, yielding a transparent background.

2. Minimal Solution

The simplest approach uses a hidden <video> element. While the video plays, each frame is drawn onto a canvas with drawImage, the ImageData is read, the RGB and alpha channels are mixed, and the result is written back with putImageData. Two canvases are used: an off‑screen canvas for pixel manipulation and an on‑screen canvas for display. This quick prototype works for a single demo.
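The per‑pixel mixing step can be written as a pure function over the two pixel buffers. This is a minimal sketch, assuming the color and alpha frames have already been drawn to the off‑screen canvas and read out with getImageData:

```javascript
// Blend one frame: take RGB from the color buffer and alpha from the
// grayscale buffer's red channel (R == G == B for gray pixels).
// colorData / alphaData are the `data` arrays of two same-sized ImageData
// objects obtained with ctx.getImageData(...).
function mixFrame(colorData, alphaData) {
  const out = new Uint8ClampedArray(colorData.length);
  for (let i = 0; i < colorData.length; i += 4) {
    out[i] = colorData[i];         // R
    out[i + 1] = colorData[i + 1]; // G
    out[i + 2] = colorData[i + 2]; // B
    out[i + 3] = alphaData[i];     // A taken from the gray value
  }
  return out;
}
```

The result is wrapped in a new ImageData of the frame's dimensions and written to the visible canvas with putImageData.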

3. Loading Issues

When multiple animations are rendered simultaneously under limited bandwidth, buffering causes stutter. To avoid in‑play waiting, the video is fully downloaded via XHR2, converted to a Blob, and then fed to the video element. This eliminates runtime buffering but adds an initial preparation delay. A Service Worker with a Cache‑First strategy (using Workbox) caches the large video assets (2‑5 MB each) and pre‑loads frequently used gifts during idle time.
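The download‑then‑play step can be sketched like this, using fetch in place of the raw XHR2 call described above (helper names are assumed, not from the article):

```javascript
// Wrap the downloaded bytes in an object URL that a <video> element can play.
function toObjectURL(blob) {
  return URL.createObjectURL(blob);
}

// Download the whole gift video up front so playback never buffers.
// The trade-off: an initial delay while the full 2-5 MB file transfers.
async function preloadGiftVideo(url) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`video download failed: ${res.status}`);
  return toObjectURL(await res.blob()); // assign this to video.src
}
```

With the Service Worker in place, the fetch is served from the Cache‑First cache on repeat plays, so the preparation delay only applies to the first request for a given gift.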

4. CPU Consumption

Scaling the test to eight concurrent animations pushes CPU usage near 100 %. Profiling reveals the bottleneck in getImageData / drawImage and the per‑pixel RGBA computation: a 1440×1152 frame holds about 1.66 million pixels, i.e. more than 6.6 million channel values to process per frame. The heavy CPU load caps the frame rate.

5. Switching to WebGL

To offload the mixing to the GPU, the video is used as a texture. Two textures are created: one for the color video and one for the alpha video. Vertex and fragment shaders blend the two textures per fragment. The vertex shader defines position and texture coordinate attributes; the fragment shader samples both textures, combines the RGB from the color texture with the alpha from the alpha texture, and outputs the final color.
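A fragment shader along these lines performs the blend on the GPU; the uniform and varying names here are illustrative, not taken from the article:

```glsl
precision mediump float;
varying vec2 v_texCoord;            // passed through from the vertex shader
uniform sampler2D u_colorTexture;   // frame of the color video
uniform sampler2D u_alphaTexture;   // frame of the grayscale alpha video

void main() {
  vec3 rgb = texture2D(u_colorTexture, v_texCoord).rgb;
  float a  = texture2D(u_alphaTexture, v_texCoord).r; // gray value as alpha
  gl_FragColor = vec4(rgb, a);
}
```

Each video frame is uploaded as a texture directly from the video element (WebGL's texImage2D accepts an HTMLVideoElement as its pixel source), so the CPU never touches individual pixels.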

After integrating the shaders and rendering each frame as a texture, CPU usage drops from 60‑100 % to 20‑30 %, GPU usage rises modestly, and FPS improves from ~4‑5 to around 20, a 3‑4× gain. With 4‑5 simultaneous animations the system stabilises at ~60 FPS, which meets the business requirements.

6. Summary

Adopting WebGL opened a new optimisation path. Future work includes reducing cold‑start latency, adapting the solution for mobile browsers, implementing stutter detection, and exploring alternatives such as Media Source Extensions (MSE) or WebAssembly to bypass the video‑to‑WebGL step.

Tags: front-end, performance optimization, canvas, video, WebGL, alpha-blending
Written by Tencent Music Tech Team

Public account of Tencent Music's development team, focusing on technology sharing and communication.
