
Real-Time Super-Resolution Algorithm for League of Legends S12 Live Streaming

A lightweight real‑time super‑resolution network was created for the 2022 League of Legends S12 World Championship, using pixel‑unshuffle/shuffle, structural re‑parameterization, and a multi‑loss (L1, perceptual, Sobel‑based texture, GAN) training pipeline that upscales 1080p streams to 4K at 75 fps on a V100 GPU, delivering clearer textures and reduced noise while remaining computationally efficient.

Bilibili Tech

The 2022 League of Legends S12 World Championship attracted a massive audience, prompting the development of a real‑time super‑resolution (SR) algorithm tailored for live game streaming. The algorithm enhances visual quality by upscaling 1080p video to 4K while maintaining a processing speed of 75 fps.

Lightweight Game Live Streaming – Real‑Time SR Model Design

The proposed SR network compresses parameters as much as possible and employs pixel‑unshuffle to reduce feature‑map size, followed by pixel‑shuffle for image reconstruction. These operations are computationally cheap, lossless, and reversible.
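The lossless, reversible nature of this pair is easy to verify with a small NumPy sketch (the production model presumably uses the equivalent framework ops; this layout is an illustrative assumption). Unshuffle folds each r×r spatial block into channels, shrinking the feature map by r² while keeping every value; shuffle inverts it exactly:

```python
import numpy as np

def pixel_unshuffle(x: np.ndarray, r: int) -> np.ndarray:
    """(C, H, W) -> (C*r*r, H//r, W//r): fold r x r spatial blocks into channels."""
    c, h, w = x.shape
    x = x.reshape(c, h // r, r, w // r, r)
    return x.transpose(0, 2, 4, 1, 3).reshape(c * r * r, h // r, w // r)

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """(C, H, W) with C divisible by r*r -> (C//(r*r), H*r, W*r): exact inverse."""
    c, h, w = x.shape
    x = x.reshape(c // (r * r), r, r, h, w)
    return x.transpose(0, 3, 1, 4, 2).reshape(c // (r * r), h * r, w * r)
```

Because both are pure reshape/transpose operations, no information is discarded: the network can run its convolutions on a quarter-size feature map and still reconstruct the full-resolution output.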

The model also adopts structural re‑parameterization: during training, a multi‑branch architecture (conv3×3, conv1×1 residual, identity residual) improves convergence and fitting ability; during inference, an Op‑fusion strategy merges the branches into a single conv3×3 layer for faster deployment.
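The Op‑fusion step can be sketched numerically (a minimal sketch, assuming a conv3×3 + conv1×1 + identity block with equal input/output channels; all names here are illustrative). The 1×1 kernel is zero‑padded into the centre of a 3×3 kernel, the identity branch becomes a Dirac 3×3 kernel, and the three kernels and biases are summed into one conv3×3:

```python
import numpy as np

def conv2d_3x3(x, k, b):
    """Naive stride-1 convolution with zero padding 1. x: (C_in,H,W), k: (C_out,C_in,3,3)."""
    c_out = k.shape[0]
    _, h, w = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.empty((c_out, h, w))
    for o in range(c_out):
        for i in range(h):
            for j in range(w):
                out[o, i, j] = np.sum(xp[:, i:i + 3, j:j + 3] * k[o]) + b[o]
    return out

def fuse(k3, b3, k1, b1):
    """Merge conv3x3 + conv1x1 + identity branches into a single 3x3 kernel/bias."""
    c_out, c_in = k3.shape[:2]
    k1_pad = np.zeros_like(k3)
    k1_pad[:, :, 1, 1] = k1[:, :, 0, 0]   # embed the 1x1 kernel at the centre
    k_id = np.zeros_like(k3)              # identity branch as a Dirac kernel
    for c in range(min(c_out, c_in)):
        k_id[c, c, 1, 1] = 1.0
    return k3 + k1_pad + k_id, b3 + b1
```

The fused convolution reproduces the sum of the three branch outputs exactly (up to float rounding), so inference pays for a single conv3×3 while training enjoyed the richer multi‑branch gradient flow.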

Loss Function Design for Texture Detail Preservation

Training uses a multi‑loss strategy: L1 loss, perceptual loss (based on a pre‑trained network), Sobel‑based texture loss (edge‑weighted L1), and GAN loss. Ablation studies show that adding perceptual and Sobel‑based losses significantly improves high‑frequency texture reconstruction compared to using L1 alone.
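One plausible form of the edge‑weighted L1 texture loss (a sketch; the exact weighting used in the paper may differ) scales the per‑pixel L1 error by the Sobel gradient magnitude of the ground truth, so errors on edges and textures are penalised more than errors in flat regions:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def _filter3x3(img, k):
    """Naive 3x3 correlation with edge-replicate padding; keeps the input size."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def sobel_texture_loss(pred, target):
    """Edge-weighted L1: pixels on strong edges of the target count more."""
    gx = _filter3x3(target, SOBEL_X)
    gy = _filter3x3(target, SOBEL_Y)
    weight = 1.0 + np.hypot(gx, gy)   # >= 1 everywhere, large near edges
    return float(np.mean(weight * np.abs(pred - target)))
```

Under this weighting, the same absolute error costs more when it falls on a health bar outline or terrain edge than on a flat background, which is what pushes the network toward sharper high‑frequency reconstruction.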

Multi‑Stage Warm‑Start Training Strategy

Two datasets are constructed: Dataset I (high‑quality game scenes paired with synthetically degraded low‑quality versions) and Dataset II (real low‑quality live streams paired with high‑quality outputs from a non‑real‑time SR model). Training proceeds in three stages: model pre‑heat (L1 only on Dataset I), fine‑tuning (adding perceptual and texture losses on Dataset II), and artifact adjustment (GAN‑based fine‑tuning on Dataset I). Configuration details are summarized in Table 1.
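The three‑stage schedule can be captured as a small configuration sketch (the per‑stage loss mix is an illustrative assumption inferred from the text, not the values in Table 1):

```python
# Each stage names the dataset it trains on and the losses switched on at that point.
STAGES = [
    {"name": "pre-heat",        "dataset": "I",  "losses": ("l1",)},
    {"name": "fine-tune",       "dataset": "II", "losses": ("l1", "perceptual", "sobel_texture")},
    {"name": "artifact-adjust", "dataset": "I",  "losses": ("l1", "perceptual", "sobel_texture", "gan")},
]

def run_schedule(train_stage):
    """Warm start: each stage resumes from the previous stage's checkpoint."""
    checkpoint = None
    for stage in STAGES:
        checkpoint = train_stage(stage, init=checkpoint)
    return checkpoint
```

Warm‑starting each stage from the previous stage's weights is what keeps the harder later objectives (perceptual and especially GAN losses) stable: the adversarial fine‑tuning only adjusts a model that already reconstructs plausible textures.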

Results Demonstration

Deployed on a V100 GPU, the model processes each 1080p frame in 13 ms, achieving 75 fps on a single card. Visual comparisons on League of Legends streams show clearer textures, reduced noise around health bars, and enhanced detail on terrain.

Conclusion and Outlook

The algorithm provides an efficient, real‑time solution for high‑quality game live streaming. Future work includes extending the method to other real‑time streaming scenarios and integrating 3‑D super‑resolution techniques.

Tags: deep learning, model compression, game streaming, loss functions, pixel shuffle, real-time super-resolution
Written by Bilibili Tech

Provides introductions and tutorials on Bilibili-related technologies.