
Intelligent Restoration System for Legacy Video Quality

Bilibili’s Multimedia Lab created an end‑to‑end intelligent restoration system that assesses a video’s resolution, frame rate, and quality, then automatically selects and applies the appropriate modules: image‑level enhancement, frame‑rate up‑sampling, background and face restoration, and optical‑flow interpolation. The system transforms blurry, jittery, artifact‑laden legacy videos into clear, smooth, high‑quality streams; it is deployed for on‑demand content today and slated for expansion to live streaming.


Legacy video content on Bilibili suffers from various quality issues such as blur, stutter, and compression artifacts due to old recording equipment, low resolution, and multiple compression cycles.

The article first categorizes these problems into three subjective impressions: blur (caused by low resolution and digitization loss), stutter (low frame rate, typically 24‑25 fps), and artifacts (compression‑induced spikes and speckles).

To address these issues, the Bilibili Multimedia Lab developed an end‑to‑end intelligent restoration system. The system takes raw video streams as input and outputs enhanced streams after applying a series of targeted modules.

The pipeline consists of a quality‑assessment module that extracts resolution, frame‑rate, and VQA scores, and a strategy‑decision module that selects appropriate restoration paths (frame‑rate up‑sampling, image‑level enhancement, or both).
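A strategy decision like this can be sketched as a simple rule over the extracted metrics. This is a minimal illustration, not Bilibili's actual logic; the threshold values and metric names (`min_height`, `min_fps`, `min_vqa`) are hypothetical:

```python
def choose_restoration_paths(width, height, fps, vqa_score,
                             min_height=720, min_fps=30, min_vqa=0.6):
    """Return the set of restoration paths to apply, based on the
    quality-assessment module's outputs. Thresholds are illustrative."""
    paths = set()
    # Low resolution or a poor VQA score triggers image-level enhancement.
    if height < min_height or vqa_score < min_vqa:
        paths.add("image_enhancement")
    # A low frame rate triggers frame-rate up-sampling (interpolation).
    if fps < min_fps:
        paths.add("frame_rate_upsampling")
    return paths
```

A 720p drama at 24 fps with a good VQA score would get only frame‑rate up‑sampling, while a low‑resolution, low‑frame‑rate clip would get both paths.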

When both frame rate and image quality are insufficient, the video is decoded into N frames, processed by the image‑restoration module, and then passed to the interpolation module, which, for a rate factor R, produces R × (N−1) + 1 frames before the result is re‑encoded.
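The frame count follows directly from the structure of interpolation: there are N−1 gaps between the N decoded frames, and each gap contributes R frames (the original frame plus R−1 synthesized ones), with the final frame closing the sequence. A one‑line check:

```python
def interpolated_frame_count(n_frames, rate_factor):
    """Frames produced by interpolating between n_frames originals at
    rate factor R: each of the N-1 gaps yields R frames, plus the
    final original frame, giving R * (N - 1) + 1 in total."""
    return rate_factor * (n_frames - 1) + 1
```

For example, doubling a 25 fps second of video (N = 25, R = 2) yields 49 frames, i.e. effectively 2× smoothness minus the one missing trailing gap.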

The image‑restoration module contains two sub‑modules: background restoration (denoising, line repair, detail enhancement) and face restoration (texture recovery, facial feature sharpening). Both are trained by synthesizing low‑quality images from high‑quality references and learning a reverse mapping.

Low‑quality image synthesis follows a carefully designed degradation pipeline that mimics the characteristics of old media, as described in the referenced “Video Super‑Resolution” article.
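The train‑by‑synthesis idea can be illustrated with a toy degradation pipeline: start from a high‑quality reference and apply downscaling, noise, and coarse quantization as a stand‑in for compression loss. The specific operations and parameter values here are illustrative assumptions, not the production pipeline described in the article:

```python
import random

def degrade(image, scale=2, noise_sigma=5.0, quant_step=16, seed=0):
    """Synthesize a low-quality training input from a high-quality
    reference (a 2D list of grayscale values). Steps mimic old-media
    characteristics: resolution loss, sensor noise, compression banding.
    All parameters are illustrative."""
    rng = random.Random(seed)  # fixed seed keeps training pairs reproducible
    # 1) nearest-neighbour downscale to simulate low source resolution
    small = [row[::scale] for row in image[::scale]]
    # 2) additive Gaussian noise to simulate analog/sensor noise
    noisy = [[px + rng.gauss(0, noise_sigma) for px in row] for row in small]
    # 3) coarse quantization as a crude proxy for compression artifacts
    return [[max(0, min(255, round(px / quant_step) * quant_step))
             for px in row] for row in noisy]
```

The restoration network is then trained on (degraded, reference) pairs to learn the reverse mapping; in practice the degradations are applied in randomized order and strength so the model generalizes across real legacy footage.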

The interpolation module is based on optical‑flow estimation, generating intermediate frames by weighted sampling of neighboring frames. Specific defects such as large‑motion errors, repetitive textures, and text distortion have been mitigated.
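The weighted-sampling idea can be sketched in miniature: warp each neighboring frame toward the intermediate time t along the estimated flow, then blend with weights (1−t) and t. This toy version assumes a single integer displacement for the whole frame instead of a dense per‑pixel flow field, and nearest‑neighbor shifting instead of bilinear warping:

```python
def warp(frame, dx, dy):
    """Shift a frame (2D list) by an integer displacement, clamping at
    the borders -- a stand-in for bilinear flow-based warping."""
    h, w = len(frame), len(frame[0])
    return [[frame[max(0, min(h - 1, y - dy))][max(0, min(w - 1, x - dx))]
             for x in range(w)] for y in range(h)]

def interpolate(f0, f1, flow, t=0.5):
    """Intermediate frame at time t in (0, 1): blend f0 warped forward
    by t * flow with f1 warped backward by (1 - t) * flow. `flow` is a
    single (dx, dy) pair here -- a toy simplification of a dense field."""
    dx, dy = flow
    w0 = warp(f0, round(t * dx), round(t * dy))
    w1 = warp(f1, -round((1 - t) * dx), -round((1 - t) * dy))
    h, w = len(f0), len(f0[0])
    return [[(1 - t) * w0[y][x] + t * w1[y][x]
             for x in range(w)] for y in range(h)]
```

When the flow is accurate, the two warped frames agree at time t and the blend is sharp; when it is wrong (large motion, repetitive textures, on‑screen text), they disagree and ghosting appears, which is exactly the defect class the article says the module was tuned to mitigate.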

Extensive visual comparisons (Figures 7‑12) demonstrate that the system can turn blurry, jittery, and artifact‑laden footage into clear, smooth, and clean video, covering both live‑action dramas and animated series.

Currently deployed in Bilibili’s on‑demand service, the system supports intelligent clarity enhancement and will be further expanded to live streaming, broader coverage, and additional quality dimensions such as HDR and color correction.

Tags: AI · multimedia · image enhancement · video restoration · frame interpolation
Written by

Bilibili Tech

Provides introductions and tutorials on Bilibili-related technologies.
