
No-Reference Image Sharpness Assessment Based on Strong Edge Validity Statistics

The paper proposes a no-reference image sharpness metric that computes strong-edge validity statistics (the ratio of the maximum directional gradient sum to the squared strong-edge count) over image blocks, classifies blocks into validity grades, and handles both defocus and motion blur, with applications such as video thumbnail selection.

Xianyu Technology

Abstract

Image sharpness evaluation is a crucial component of image quality assessment, with applications in autofocus, image compression, video thumbnail extraction, and more. Methods are either reference-based, comparing against an undistorted original, or no-reference, scoring the image on its own.

Blur Types

Blur may arise from defocus, motion, noise, or distortion. Existing works [2] and [3] address defocus and motion blur respectively.

Defocus Blur Evaluation

The method in [2] applies Sobel edge detection, builds a histogram of edge widths, and derives a sharpness score from it: narrow edges indicate a sharp image, wide edges a blurred one. It works well for high-SNR images but fails on low-SNR images or severe motion blur.
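As a rough illustration, the edge-width idea can be sketched as follows. This is a simplified 1-D version, not the paper's implementation: the function names, the gradient threshold, and the use of plain finite differences instead of a full Sobel filter are all assumptions here.

```python
import numpy as np

def row_edge_widths(row, grad_thresh=20.0):
    """Widths of intensity transitions along one image row.

    Each strong gradient sample is grown left and right while the signal
    keeps changing in the same direction; the run length is the edge width.
    """
    d = np.diff(np.asarray(row, dtype=np.float64))
    widths = []
    for c in np.where(np.abs(d) > grad_thresh)[0]:
        sign = np.sign(d[c])
        left = c
        while left > 0 and np.sign(d[left - 1]) == sign:
            left -= 1
        right = c
        while right < len(d) - 1 and np.sign(d[right + 1]) == sign:
            right += 1
        widths.append(right - left + 1)
    return widths

def edge_width_score(gray, grad_thresh=20.0):
    """Mean edge width over all rows; smaller means sharper."""
    widths = []
    for row in np.asarray(gray):
        widths.extend(row_edge_widths(row, grad_thresh))
    return float(np.mean(widths)) if widths else 0.0
```

A sharp step edge yields a width of 1, while the same transition spread over several pixels yields a proportionally larger width, which is what the histogram in [2] captures.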

Motion Blur Detection

The method in [3] finds the direction of minimum gradient energy within a block and takes it as the dominant motion direction. On non-blurred images, however, that direction is essentially arbitrary, so the method can produce misleading results.
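The minimum-gradient-direction idea might be sketched like this; the mechanics are an educated guess, with NumPy's `np.gradient` standing in for whatever derivative filter [3] actually uses.

```python
import numpy as np

def dominant_motion_direction(block):
    """Angle (radians, in [0, pi)) along which gradient energy is minimal.

    Motion blur smooths the image along the motion path, so directional
    derivatives along that path are weakest; we read the direction off
    the 2x2 structure tensor of the block.
    """
    b = np.asarray(block, dtype=np.float64)
    iy, ix = np.gradient(b)  # derivatives along rows and columns
    d11 = (ix * ix).sum()
    d22 = (iy * iy).sum()
    d12 = (ix * iy).sum()
    # orientation of the maximum-energy eigenvector of the structure tensor
    theta_max = 0.5 * np.arctan2(2.0 * d12, d11 - d22)
    # the minimum-energy direction is perpendicular to it
    return (theta_max + np.pi / 2.0) % np.pi
```

Note the failure mode the article mentions: on a sharp, isotropic block the two eigenvalues are nearly equal, so the returned angle is meaningless noise rather than a motion direction.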

Proposed “Strong Edge Validity Statistics”

We define edge validity as the ratio of the maximum directional gradient sum to the square of the number of strong edge points within a region. Higher validity indicates sharper content: a sharp region concentrates its gradient energy in a few strong edge points, while a blurred region spreads it across many weak ones.

Using the gradient computation from [3], we obtain the maximum directional gradient sum and compute validity for each block.
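Putting the two pieces together, the per-block validity might look like the sketch below. The edge threshold and the use of `np.gradient` are assumptions; the paper's exact strong-edge criterion is not reproduced in this article.

```python
import numpy as np

def block_validity(block, edge_thresh=10.0):
    """Validity = max directional gradient sum / (strong edge count)^2."""
    b = np.asarray(block, dtype=np.float64)
    iy, ix = np.gradient(b)
    d11 = (ix * ix).sum()
    d22 = (iy * iy).sum()
    d12 = (ix * iy).sum()
    # largest eigenvalue of the structure tensor
    # = maximum directional gradient sum over all directions
    lam_max = 0.5 * (d11 + d22 + np.sqrt((d11 - d22) ** 2 + 4.0 * d12 ** 2))
    # strong edge points: gradient magnitude above the threshold
    n_strong = int((np.hypot(ix, iy) > edge_thresh).sum())
    return lam_max / (n_strong ** 2) if n_strong else 0.0
```

A sharp step edge concentrates its gradient energy in few points and scores high; the same transition blurred into a ramp spreads the energy over many weaker points, and the squared count in the denominator pushes the score down.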

Validity Computation

The key quantities are the block-wise sums d11 = Σ gx², d22 = Σ gy², and d12 = Σ gx·gy (squared gradients and their cross terms), i.e., the entries of the 2×2 structure tensor [[d11, d12], [d12, d22]]. Its larger eigenvalue, λmax = (d11 + d22 + sqrt((d11 − d22)² + 4·d12²)) / 2, is the maximum directional gradient sum, and the corresponding eigenvector gives the dominant gradient direction.
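A quick numerical check makes the eigenvalue interpretation concrete: summing the squared directional derivative cos(θ)·gx + sin(θ)·gy over a block and maximizing over θ by brute force should match the closed-form eigenvalue. This is purely illustrative; both function names are invented here.

```python
import numpy as np

def lam_max(ix, iy):
    """Closed-form largest eigenvalue of the 2x2 structure tensor."""
    d11 = (ix * ix).sum()
    d22 = (iy * iy).sum()
    d12 = (ix * iy).sum()
    return 0.5 * (d11 + d22 + np.sqrt((d11 - d22) ** 2 + 4.0 * d12 ** 2))

def lam_max_brute(ix, iy, n_angles=3600):
    """Directly maximise the squared directional-derivative sum over angles."""
    best = 0.0
    for t in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        best = max(best, ((np.cos(t) * ix + np.sin(t) * iy) ** 2).sum())
    return best
```

The agreement follows because Σ (cosθ·gx + sinθ·gy)² = cos²θ·d11 + 2 sinθ cosθ·d12 + sin²θ·d22, a quadratic form whose maximum over unit directions is exactly the largest eigenvalue.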

Block‑Level Statistics

Block validities are binned into intervals (0-100, 100-250, …, 1000-2000), and the proportion of high-validity blocks correlates with perceived sharpness.
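The binning step can be sketched as follows. The article elides the middle bin edges with "…", so the edges below are placeholders, as are the threshold and function names.

```python
import numpy as np

# Placeholder bin edges: the article lists "0-100, 100-250, ..., 1000-2000"
# and elides the middle intervals, so these edges are illustrative only.
BIN_EDGES = [0, 100, 250, 1000, 2000]

def validity_histogram(validities, edges=BIN_EDGES):
    """Fraction of blocks falling into each validity interval."""
    v = np.asarray(validities, dtype=np.float64)
    counts, _ = np.histogram(v, bins=edges)
    return counts / max(v.size, 1)

def high_validity_ratio(validities, high_thresh=250.0):
    """Proportion of blocks whose validity clears a 'high' threshold."""
    v = np.asarray(validities, dtype=np.float64)
    return float((v >= high_thresh).mean()) if v.size else 0.0
```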

Quality Grading

Based on the validity distribution, images are assigned a coarse grade of HIGH, MEDIUM, or LOW. HIGH-grade images are then scored more finely with the edge-width method of [2].
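The grading step then reduces to thresholding the high-validity ratio. All cut-offs below are illustrative, since the article does not reproduce the paper's actual values.

```python
def grade_image(validities, high_ratio=0.3, med_ratio=0.1, high_thresh=250.0):
    """Map a block-validity distribution to a coarse grade.

    Thresholds are illustrative placeholders, not the paper's values.
    """
    v = list(validities)
    if not v:
        return "LOW"
    ratio = sum(1 for x in v if x >= high_thresh) / len(v)
    if ratio >= high_ratio:
        return "HIGH"    # refine further with the edge-width score of [2]
    if ratio >= med_ratio:
        return "MEDIUM"
    return "LOW"
```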

Conclusion

The proposed no‑reference metric combines strong‑edge width and validity statistics, handling both defocus and motion blur. It can be applied to video thumbnail selection and complements deep‑learning approaches such as Google’s NIMA.

Tags: computer vision, image quality, no-reference, edge statistics, sharpness assessment
Written by Xianyu Technology, the official account of the Xianyu technology team.