
Server‑Side Video Rendering for a Mini‑Program Using After Effects, Aerender, and FFmpeg

This article describes how a PHP‑based backend service integrates After Effects' aerender command‑line tool and FFmpeg to automatically generate, split, compress, and deliver video templates for a mini‑program, covering requirement analysis, architecture, rendering workflow, code snippets, and performance optimizations.

Huajiao Technology

For a camera app, eye-catching video effects are a core feature. This article documents the team's entire process, from requirement gathering to implementation, for the "XiaoBai" camera mini‑program.

Requirement Overview

The product aims to attract new users and retain existing ones by offering three main capabilities:

Diverse effects created with Adobe After Effects (AE) templates, ensuring output matches AE rendering quality.

Rapid rollout of new templates to keep users engaged.

Compatibility with mini‑programs, whose limited runtime means rendering cannot happen in a native client and must instead be handled in the web front‑end or on the back‑end.

Solution Discussion and Decision

A PHP engineer, unfamiliar with video processing, explored two initial options:

Implement effects with CSS/JS on the front‑end – rejected because the result is not a downloadable video and each new template would require front‑end development.

Use open‑source tools such as MLT Multimedia Framework for server‑side rendering – viable but required converting AE effects to MLT XML, adding significant workload for designers.

Both conventional solutions were unsatisfactory, leading the team to discover AE's command‑line program aerender.exe. By allowing users to upload assets and invoking aerender.exe for automated rendering, the approach met all product requirements with reduced manpower, and the team proceeded with this design.

AE Overview

Adobe After Effects is a non‑linear editing tool for creating motion graphics and visual effects. It uses a layer‑based system with keyframes, supporting both 2D and 3D compositions, and can be extended with third‑party plugins.

FFmpeg Overview

FFmpeg is an open‑source suite for recording, converting, and streaming audio/video. It includes the libavcodec library for high‑quality encoding/decoding and is licensed under LGPL/GPL.

Workflow

The end‑to‑end pipeline consists of four steps:

User side (mini‑program/APP): select a template, upload required media, and call the backend API.

Backend (nginx + PHP): receive the request, store data, and push a rendering task to a message queue.

Rendering service (Windows + Go): consume tasks, load the AE project, replace placeholder assets, invoke aerender.exe, and push the result back to the queue.

Backend (PHP): consume the result, upload the video to CDN, and return the video URL to the user, with timeout and retry mechanisms.
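The rendering-service side of the pipeline above can be sketched in Go. This is a minimal illustration that uses in-process channels as a stand-in for the real message queue; the struct fields and the stubbed render step are assumptions for illustration, not the production code.

```go
package main

import "fmt"

// RenderTask mirrors the payload the PHP backend pushes to the queue
// (field names are illustrative).
type RenderTask struct {
	TemplateID string
	AssetURLs  []string
	UserID     string
}

// RenderResult is what the worker pushes back for the PHP consumer.
type RenderResult struct {
	UserID   string
	VideoURL string
}

// worker consumes tasks, "renders" them, and pushes results back.
// In production the render step would download the AE project, swap in
// the user's assets, shell out to aerender.exe, and upload to CDN.
func worker(tasks <-chan RenderTask, results chan<- RenderResult) {
	for t := range tasks {
		// Stub: pretend rendering produced a CDN URL.
		url := fmt.Sprintf("https://cdn.example.com/%s/%s.mp4", t.UserID, t.TemplateID)
		results <- RenderResult{UserID: t.UserID, VideoURL: url}
	}
	close(results)
}

func main() {
	tasks := make(chan RenderTask, 1)
	results := make(chan RenderResult, 1)
	go worker(tasks, results)

	tasks <- RenderTask{TemplateID: "tpl_1", UserID: "u42"}
	close(tasks)

	for r := range results {
		fmt.Println(r.VideoURL)
	}
}
```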

Video Generation Details

Each rendering task includes the template ID, user‑uploaded assets (varying per template), and user metadata for result handling. The Go service processes the task by:

Downloading the AE project file.

Copying the project to a new workspace and replacing placeholder assets with the user's media.

Calling aerender.exe to render the composition.

The basic aerender command looks like:

aerender -project c:\projects\project_1.aep -comp "Composition_1" -output c:\output\project_1\project_1.avi

After rendering, the video is uploaded to CDN and the link is returned to the PHP layer.
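Shelling out to aerender from the Go service can look like the following sketch. The helper name and the aerender install path are assumptions; error handling is reduced to the essentials.

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildAerenderArgs assembles the flags for a basic aerender invocation,
// matching the command shown above.
func buildAerenderArgs(project, comp, output string) []string {
	return []string{
		"-project", project,
		"-comp", comp,
		"-output", output,
	}
}

// renderProject runs aerender.exe and waits for it to finish.
// The binary path is a typical default install location, not a given.
func renderProject(project, comp, output string) error {
	cmd := exec.Command(
		`C:\Program Files\Adobe\Adobe After Effects\Support Files\aerender.exe`,
		buildAerenderArgs(project, comp, output)...)
	out, err := cmd.CombinedOutput() // aerender logs render progress to stdout
	if err != nil {
		return fmt.Errorf("aerender failed: %v: %s", err, out)
	}
	return nil
}
```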

Optimizing Rendering Speed

Rendering a 10‑second, 180‑frame video takes about 30 seconds. To accelerate, the video is split into five 36‑frame segments using the -s (start) and -e (end) parameters:

aerender.exe -project c:\projects\project_1.aep -comp "Composition_1" -s 1 -e 36 -RStemplate "Multi-Machine Settings" -OMtemplate "Multi-Machine Sequence" -output c:\output\project_1\frames_1.mov

These segments are then concatenated with FFmpeg:

ffmpeg -f concat -safe 0 -i "c:\output\project_1\file.txt" -c copy "c:\output\project_1\outmerge.mov"

(Recent FFmpeg builds reject absolute paths in the concat list unless -safe 0 is passed.)

All fragments must share the same format, resolution, and bitrate to avoid additional transcoding overhead.
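Computing the five 36-frame ranges and writing the file.txt that FFmpeg's concat demuxer reads can be sketched like this (function names are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// segment holds an inclusive frame range for one aerender call (-s / -e).
type segment struct {
	Start, End int
}

// splitFrames divides total frames into n equal inclusive ranges,
// e.g. 180 frames into five 36-frame segments: 1-36, 37-72, ..., 145-180.
func splitFrames(total, n int) []segment {
	size := total / n
	segs := make([]segment, 0, n)
	for i := 0; i < n; i++ {
		segs = append(segs, segment{Start: i*size + 1, End: (i + 1) * size})
	}
	return segs
}

// concatList renders the file.txt body the concat demuxer expects:
// one "file '<path>'" line per fragment, in playback order.
func concatList(paths []string) string {
	var b strings.Builder
	for _, p := range paths {
		fmt.Fprintf(&b, "file '%s'\n", p)
	}
	return b.String()
}

func main() {
	for _, s := range splitFrames(180, 5) {
		fmt.Printf("frames %d-%d\n", s.Start, s.End)
	}
}
```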

Compressing and Transcoding with FFmpeg

For content where visual changes are minimal, the team reduces frame rate and audio bitrate, converting the merged file to MP4 using H.264 and AAC codecs (e.g., 18 fps video, 32 kbit/s audio):

ffmpeg -i c:\output\project_1\outmerge.mov -c:v libx264 -pix_fmt yuv420p -r 18 -c:a aac -b:a 32k c:\output\project_1\out.mp4
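The transcode flags can also be assembled in the Go service; this sketch mirrors the command above, with frame rate and audio bitrate as parameters (the article's values: 18 fps, 32 kbit/s). The function name is an assumption.

```go
package main

import "strconv"

// transcodeArgs builds the ffmpeg argument list for the MP4 compression
// step: H.264 video at the given frame rate, AAC audio at the given bitrate.
func transcodeArgs(in, out string, fps, audioKbps int) []string {
	return []string{
		"-i", in,
		"-c:v", "libx264",
		"-pix_fmt", "yuv420p",
		"-r", strconv.Itoa(fps),
		"-c:a", "aac",
		"-b:a", strconv.Itoa(audioKbps) + "k",
		out,
	}
}
```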

Future Work

Cluster management for Windows rendering nodes.

Further performance improvements.

Automated template generation.

Evaluation of more mature alternative solutions.

Tags: Backend, Go, PHP, ffmpeg, video rendering, After Effects
Written by Huajiao Technology

The Huajiao Technology channel shares the latest Huajiao app tech on an irregular basis, offering a learning and exchange platform for tech enthusiasts.