Artificial Intelligence · 5 min read

Video Stutter Detection Using Frame Difference and Dynamic Factor Analysis

This article describes a video stutter detection method that converts a video into a frame sequence, computes pixel differences between consecutive frames, aggregates motion energy, applies scene‑cut compensation, and uses a dynamic threshold to output a binary indicator of stutter presence.

360 Quality & Efficiency

Background: Video stutter is a key indicator in video quality assessment, and identifying it reliably is essential when building a video detection platform.

The proposed solution converts the uploaded video into an image sequence using ffmpeg, extracts pixel motion information from each frame, computes average motion levels, derives a dynamic factor, and finally outputs a binary result where 0 indicates no stutter and 1 indicates stutter.
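The frame extraction step can be done with a single ffmpeg invocation; `input.mp4` and the `frames/` directory below are placeholder names, not paths from the article:

```shell
# extract every frame of the video as a numbered PNG sequence
mkdir -p frames
ffmpeg -i input.mp4 frames/%05d.png
```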

Overall scheme (six parts): 1) Image processing, 2) Adjacent‑frame pixel difference calculation, 3) Aggregation of motion amounts into a motion set, 4) Removal of scene‑cut influence and calculation of average motion, 5) Computation of a dynamic factor, 6) Return of the detection result.

Technical advantages: • No large training dataset required; computation is performed directly on the target video. • Robust against both dynamic and static scenes, preventing scene changes from affecting detection. • Accurate and efficient with relatively low computational load.

Implementation details:

1. Image processing: Convert frames to grayscale, resize to 360×640, and define a region of interest to reduce noise.
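A minimal NumPy sketch of this preprocessing step. The article does not specify the resize method or the region of interest, so nearest‑neighbor sampling and a central 90% crop are assumptions:

```python
import numpy as np

def preprocess(frame_rgb, out_h=360, out_w=640):
    """Grayscale, resize to out_h x out_w, and crop a central ROI."""
    # ITU-R BT.601 luminance weights for grayscale conversion
    gray = (0.299 * frame_rgb[..., 0]
            + 0.587 * frame_rgb[..., 1]
            + 0.114 * frame_rgb[..., 2])
    h, w = gray.shape
    # nearest-neighbor resize by index sampling
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    resized = gray[rows[:, None], cols]
    # hypothetical central ROI: drop a 5% border on each side to reduce edge noise
    mh, mw = out_h // 20, out_w // 20
    return resized[mh:out_h - mh, mw:out_w - mw]
```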

2. Adjacent‑frame calculation: Subtract pixel values of frame t+1 from frame t; if the absolute difference exceeds a constant threshold (30), the pixel is considered motion.
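Step 2 as a sketch over grayscale frame arrays, using the constant threshold of 30 from the text:

```python
import numpy as np

PIXEL_THRESHOLD = 30  # constant difference threshold from the text

def motion_mask(frame_t, frame_t1, thresh=PIXEL_THRESHOLD):
    # per-pixel absolute difference between consecutive grayscale frames
    diff = np.abs(frame_t1.astype(np.int16) - frame_t.astype(np.int16))
    # a pixel whose change exceeds the threshold is treated as motion
    return diff > thresh
```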

3. Motion amount calculation: Square the differences from step 2 to obtain energy, then compute the average energy per frame (TI2).
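Step 3 can be sketched as follows; squaring all pixel differences (rather than only the thresholded pixels from step 2) is one reading of the text and is an assumption:

```python
import numpy as np

def ti2(frame_t, frame_t1):
    # squared pixel difference = per-pixel motion energy;
    # the frame's TI2 value is the mean energy over all pixels
    d = frame_t1.astype(np.float64) - frame_t.astype(np.float64)
    return float(np.mean(d ** 2))
```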

4. Scene‑cut compensation and average motion: Sort the per‑frame TI2 values and discard the extreme low and high values (about 2% of frames, which typically correspond to scene cuts), then average the remaining TI2 values to obtain TI2_AVG.
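A sketch of the trimmed average in step 4; splitting the 2% evenly between the low and high tails is an assumption:

```python
def ti2_avg(ti2_values, trim_frac=0.02):
    # sort per-frame motion energies; the extreme tails (scene cuts and
    # near-duplicate frames) are discarded before averaging
    s = sorted(ti2_values)
    k = int(len(s) * trim_frac)
    trimmed = s[k:len(s) - k] if k > 0 else s
    return sum(trimmed) / len(trimmed)
```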

5. Dynamic factor: Compute Dfact = a + b × log(TI2_AVG) with constants a = 2.5 and b = 1.25, then clamp Dfact to the range [0, c] with c = 0.1.
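The dynamic factor in step 5 as a sketch; the natural logarithm is assumed, since the text does not state the log base:

```python
import math

A, B, C = 2.5, 1.25, 0.1  # constants from the text; C is the clamp ceiling

def dynamic_factor(ti2_avg):
    # log base is assumed to be natural log
    d = A + B * math.log(ti2_avg)
    # clamp to [0, C]
    return max(0.0, min(C, d))
```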

6. Result generation: For each frame, if TI2 ≤ Dfact × Mdrop (Mdrop = 0.015) the frame is marked as stutter (1); otherwise it is marked as normal (0). The ordered list of frame results constitutes the final stutter detection output.
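The per‑frame decision rule of step 6, using the Mdrop constant from the text:

```python
M_DROP = 0.015  # Mdrop constant from the text

def stutter_flags(ti2_values, dfact):
    # a frame whose motion energy falls at or below the dynamic cutoff
    # is flagged as stutter (1); otherwise it is normal (0)
    cutoff = dfact * M_DROP
    return [1 if v <= cutoff else 0 for v in ti2_values]
```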

Effect demonstration: Sample screenshots show selected consecutive frames and the corresponding detection results.

Tags: algorithm, image processing, video, frame analysis, stutter detection
Written by

360 Quality & Efficiency

360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.
