iQIYI Technical Product Team
Jan 17, 2020 · Artificial Intelligence
Ultrafast Video Attention Prediction with Coupled Knowledge Distillation
The paper presents UVA-Net, a lightweight video attention prediction network trained via coupled knowledge distillation. It matches the accuracy of eleven state-of-the-art models while requiring only 0.68 MB of storage and running at up to 10,106 FPS on GPU (404 FPS on CPU). The efficiency comes from a MobileNetV2-based CA-Res block and a teacher-student framework that exploits low-resolution inputs to drastically reduce parameter count and computational cost.
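As background, the teacher-student framework mentioned above rests on the standard knowledge-distillation loss: the student is trained to match the teacher's temperature-softened output distribution. The sketch below is a generic NumPy illustration of that loss, not the paper's coupled formulation; the function names and the temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence from teacher's soft targets to the student's
    predictions, scaled by T^2 as in standard distillation."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's soft predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl) * T * T)
```

When the student's logits equal the teacher's, the loss is zero; any mismatch yields a positive value, driving the student toward the teacher's behavior even at much lower input resolution.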
Mobile Video Processing · UVA-Net · Knowledge Distillation