
Face Swapping in Python Using dlib, OpenCV, and Procrustes Alignment

This article demonstrates how to create a Python script that automatically detects facial landmarks with dlib, aligns two images using Procrustes analysis, corrects color differences, and blends the faces together with OpenCV, providing complete code and step‑by‑step explanations.


This tutorial explains how to build a concise (~200‑line) Python script that swaps faces between two images by automatically detecting facial landmarks, aligning the faces, correcting color mismatches, and blending the result.

1. Detect facial landmarks with dlib – The script loads the dlib shape predictor and uses detector = dlib.get_frontal_face_detector() and predictor = dlib.shape_predictor(PREDICTOR_PATH). The get_landmarks(im) function returns a 68×2 matrix of (x, y) coordinates for the detected face, raising a TooManyFaces or NoFaces exception when the detector finds more than one face or none at all.

2. Align the faces using Procrustes analysis – The transformation_from_points(points1, points2) function implements orthogonal Procrustes: it recenters the point sets, scales them by their standard deviations, and computes the optimal rotation via singular value decomposition. The resulting affine matrix M is used with OpenCV’s warpAffine to map the second image onto the first.
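The alignment step can be sketched in plain NumPy. This follows the recipe described above (recenter, rescale, rotation via SVD); the exact matrix assembly is a sketch rather than the article's verbatim code:

```python
import numpy as np


def transformation_from_points(points1, points2):
    """Return a 3x3 affine matrix mapping points1's frame onto points2's
    via orthogonal Procrustes: translation + uniform scale + rotation."""
    points1 = points1.astype(np.float64)
    points2 = points2.astype(np.float64)

    # Recenter both point sets on their centroids.
    c1 = points1.mean(axis=0)
    c2 = points2.mean(axis=0)
    points1 -= c1
    points2 -= c2

    # Normalise overall scale by each set's standard deviation.
    s1 = points1.std()
    s2 = points2.std()
    points1 /= s1
    points2 /= s2

    # Optimal rotation from the SVD of the correlation matrix.
    U, S, Vt = np.linalg.svd(points1.T @ points2)
    R = (U @ Vt).T

    # Assemble a homogeneous 3x3 matrix; pass M[:2] to cv2.warpAffine.
    scale = s2 / s1
    M = np.eye(3)
    M[:2, :2] = scale * R
    M[:2, 2] = c2 - scale * R @ c1
    return M
```

The top two rows, M[:2], can then be handed to cv2.warpAffine (typically with the WARP_INVERSE_MAP flag) to pull the second image into the first image's coordinate frame.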

3. Color correction – Because lighting and skin tone differ between the two photos, the correct_colours(im1, im2, landmarks1) function applies a Gaussian blur to both images (with a kernel size controlled by COLOUR_CORRECT_BLUR_FRAC = 0.6, scaled by the distance between the eyes), then multiplies the second image, per pixel, by the ratio of the blurred versions so that its low-frequency colors match the first image's.

4. Create face masks and blend – The script defines landmark groups (eyes, eyebrows, nose, mouth) and builds convex hull masks with draw_convex_hull and get_face_mask. Masks are feathered (FEATHER_AMOUNT = 11) and combined using numpy.max. The final output is computed as output_im = im1 * (1 - combined_mask) + warped_corrected_im2 * combined_mask.

5. Full implementation – The article provides the complete source code, including imports (cv2, dlib, numpy), constant definitions, helper classes, and the main execution block that reads two images, computes the transformation, applies masking and color correction, and writes the result to output.jpg.

By following the step‑by‑step guide and using the provided code, readers can understand and reproduce a practical face‑swap pipeline that combines computer‑vision techniques with linear‑algebraic alignment.

Tags: computer vision, Python, face swapping, OpenCV, dlib, Procrustes
Written by

Python Programming Learning Circle

A global community of Chinese Python developers offering technical articles, columns, original video tutorials, and problem sets. Topics include web full‑stack development, web scraping, data analysis, natural language processing, image processing, machine learning, automated testing, DevOps automation, and big data.
