Running AI on the Frontend: Pose Estimation with TensorFlow Lite MoveNet
This article shows how front‑end developers can run AI locally on web, Android, iOS, and Raspberry Pi devices using TensorFlow Lite MoveNet for real‑time pose estimation. It walks through setup, the model variants, Python code examples, and practical applications such as yoga‑pose classification.
1. The Front End Can Run AI
TensorFlow Lite provides a lightweight Lite runtime that supports Android, iOS, and Raspberry Pi, allowing AI models to execute directly on user devices. Web developers can also use TensorFlow.js to run the same models in browsers.
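On every platform the flow is the same: load a .tflite file into an interpreter, feed it a tensor, and read the output. A minimal sketch of the generic TF Lite Interpreter invocation pattern (the helper name is mine, and the Interpreter object would come from tflite_runtime or full TensorFlow, neither of which is imported here so the helper stays framework‑agnostic):

```python
import numpy as np

def run_tflite(interpreter, input_array: np.ndarray) -> np.ndarray:
    """Run one inference pass on a preprocessed input array."""
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    # Feed the input tensor, run the graph, read the first output
    interpreter.set_tensor(input_details[0]['index'], input_array)
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]['index'])
```

With tflite_runtime this would be driven by something like `Interpreter(model_path='movenet_thunder.tflite')`; for MoveNet the input is a single image tensor of shape [1, height, width, 3].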
2. Pose Estimation Demo
The article demonstrates a pose‑estimation example using TensorFlow's MoveNet models. MoveNet comes in three variants: Thunder (higher accuracy, slower), Lightning (lightweight, faster), and MultiPose (detects multiple people at once).
2.1 Getting Started with the Android Example
After downloading the official Android example from the TensorFlow Lite pose‑estimation page, open README.md for the environment requirements (Android Studio 4.2+, SDK API level 21) and the build steps.
2.2 Working Principle
The same MoveNet model runs on Android, iOS, Raspberry Pi, and browsers. It accepts either live camera frames or static images and outputs 17 body‑keypoints (nose, eyes, ears, shoulders, elbows, wrists, hips, knees, ankles) with confidence scores.
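MoveNet's raw single‑pose output is a [1, 1, 17, 3] array of normalized (y, x, score) triples in the standard COCO keypoint order. A sketch of mapping it to named pixel coordinates (the helper name is mine; the repository's movenet.py wraps this same conversion in its own data classes):

```python
import numpy as np

# COCO keypoint order used by MoveNet's 17 outputs
KEYPOINT_NAMES = [
    'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',
    'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',
    'left_wrist', 'right_wrist', 'left_hip', 'right_hip',
    'left_knee', 'right_knee', 'left_ankle', 'right_ankle',
]

def parse_keypoints(raw: np.ndarray, width: int, height: int):
    """Turn a [1, 1, 17, 3] MoveNet output of normalized (y, x, score)
    triples into (name, x_px, y_px, score) tuples in pixel coordinates."""
    points = raw.reshape(17, 3)
    return [
        (name, int(x * width), int(y * height), float(score))
        for name, (y, x, score) in zip(KEYPOINT_NAMES, points)
    ]
```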
Keypoint output example (Python representation):
Person(
  bounding_box=Rectangle(start_point=Point(x=140, y=72), end_point=Point(x=262, y=574)),
  score=0.61832184,
  keypoints=[
    KeyPoint(body_part=BodyPart.NOSE, coordinate=Point(x=193, y=85), score=0.5537756),
    ...
    KeyPoint(body_part=BodyPart.RIGHT_ANKLE, coordinate=Point(x=150, y=574), score=0.83522)
  ]
)

2.3 Code Walkthrough
A minimal Python project is provided (GitHub: hlwgy/juejin_movenet) with the following structure:
tflite                         # model files
├─ movenet_lightning.tflite    # fast, lightweight
├─ movenet_thunder.tflite      # accurate, slower
├─ classifier.tflite           # yoga‑pose classifier
└─ labels.txt                  # class names
data.py                        # data definitions
classifier.py                  # classification utilities
movenet.py                     # pose‑estimation wrapper
main.py                        # demo entry point
test.jpeg                      # sample image

Required environment: Python 3.8 with numpy and opencv‑python (install via pip install numpy opencv-python).
Example code to extract pose data from an image:
import cv2
from movenet import Movenet

# Load the Thunder variant (more accurate, slower than Lightning)
pose_detector = Movenet('tflite/movenet_thunder')
input_image = cv2.imread('test.jpeg')
person = pose_detector.detect(input_image)
print(person)

Drawing the results on the image:
bounding_box = person.bounding_box
keypoints = person.keypoints
# Draw the bounding box in blue, then each keypoint in red
cv2.rectangle(input_image, bounding_box.start_point, bounding_box.end_point, (255, 0, 0), 2)
for kp in keypoints:
    cv2.circle(input_image, kp.coordinate, 2, (0, 0, 255), 4)
cv2.imwrite('output.png', input_image)

Real‑time camera loop (simplified):
cap = cv2.VideoCapture(0)
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    frame = cv2.flip(frame, 1)  # mirror the frame for a selfie view
    person = pose_detector.detect(frame)
    # ...process and display...
cap.release()

2.3.2 Applying Pose Data
The 17 keypoints can be fed into a classifier model to recognize actions such as yoga poses. Using the provided classifier.tflite, the sample image test.jpeg is classified as the "tree" pose.
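The classification step itself is just a second model pass: flatten the keypoints into a feature vector, run the classifier, and take the argmax over the class scores. A sketch under the assumption that the classifier consumes the 17 (x, y, score) triples as one flat 51‑value vector (the exact input layout and label list are defined by the repository's classifier.py and labels.txt; the helper names below are mine):

```python
import numpy as np

def keypoints_to_features(keypoints) -> np.ndarray:
    """Flatten 17 (x, y, score) keypoints into one [1, 51] vector,
    the assumed input layout of the pose classifier."""
    return np.array(keypoints, dtype=np.float32).reshape(1, 51)

def top_class(scores: np.ndarray, labels: list) -> tuple:
    """Return the best label and its score from the classifier output."""
    best = int(np.argmax(scores))
    return labels[best], float(scores[best])
```

For example, with illustrative scores [0.1, 0.8, 0.1] over labels ['chair', 'tree', 'cobra'], top_class would pick 'tree'.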
Developers can train their own TensorFlow models for custom actions and integrate the results into interactive applications (e.g., animating avatars based on user movements).
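For instance, driving an avatar's elbow from the user's pose only needs the angle at the joint, which follows from the dot product of the two limb vectors. A small self‑contained sketch (the function name is mine):

```python
import math

def joint_angle(a, b, c) -> float:
    """Angle at joint b in degrees, given three (x, y) points,
    e.g. shoulder-elbow-wrist for the elbow angle."""
    v1 = (a[0] - b[0], a[1] - b[1])  # limb vector b -> a
    v2 = (c[0] - b[0], c[1] - b[1])  # limb vector b -> c
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    # Clamp to avoid domain errors from floating-point rounding
    cos_t = max(-1.0, min(1.0, dot / (math.hypot(*v1) * math.hypot(*v2))))
    return math.degrees(math.acos(cos_t))
```

A fully extended arm gives roughly 180°, a right‑angle bend 90°; thresholds on such angles are enough to trigger simple avatar animations.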
3. Source Code, Models and Resources
The repository includes Python code, TensorFlow Lite model files for Lightning, Thunder, and Multipose, as well as Android and iOS source projects and a pre‑built tf_app-release.apk for quick testing.
4. References
Official TensorFlow Lite pose‑estimation guide, TensorFlow.js pose‑detection blog, and the GitHub example repository are listed for further reading.