
Evolution of an LNMP Architecture: From a Single Server to a Scalable Multi‑Node Deployment

This article chronicles the step-by-step evolution of an LNMP-based website, detailing how growing traffic drove architectural changes: separating the web and database servers, adding memcached, implementing MySQL master-slave replication, load balancing, NFS sharing, and finally integrating NoSQL caching to handle millions of daily visits.


One day, on impulse, I built a site on the LNMP stack. At first I was its only visitor, but as traffic grew the architecture had to evolve.

1. Single Machine

With fewer than 100 daily unique visitors, a single 1-core, 1 GB CentOS server running nginx, php-fpm, and MySQL was sufficient.
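On a single box, nginx simply hands PHP requests to a local php-fpm process. A minimal sketch of such a server block (the domain, document root, and socket address are illustrative, not from the original article):

```nginx
server {
    listen 80;
    server_name example.com;          # hypothetical domain
    root /var/www/html;
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;
        # php-fpm listening on a local TCP port; a unix socket also works
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```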

2. One Machine Becomes Two

When daily UV exceeded 5,000, a separate DB server was added to run MySQL, while the original server handled only nginx and php-fpm. Because PHP is compiled against the MySQL client libraries, those libraries must remain on the web server even though mysqld no longer runs there.
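Splitting the database onto its own host mainly means granting the web node remote access and pointing the application at the new address instead of localhost. A hedged sketch (the IP, user, password, and schema name are placeholders):

```sql
-- On the new DB server: allow connections from the web node
-- (10.0.0.10, 'webapp', and 'site' are hypothetical)
CREATE USER 'webapp'@'10.0.0.10' IDENTIFIED BY 'change-me';
GRANT SELECT, INSERT, UPDATE, DELETE ON site.* TO 'webapp'@'10.0.0.10';
FLUSH PRIVILEGES;
```

The application's DB host setting then changes from `localhost` to the DB server's address.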

3. Adding Memcached

As traffic continued to rise, the web server was upgraded (more CPU and memory), a memcached service was deployed on it, and the php-memcache extension was installed to relieve pressure on the database.
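The usual way to use php-memcache here is cache-aside: check memcached first, fall back to MySQL on a miss, then populate the cache. An illustrative PHP sketch (the key scheme, TTL, and the `fetch_user_from_db()` helper are hypothetical):

```php
<?php
$mc = new Memcache();
$mc->connect('127.0.0.1', 11211);          // memcached running on the web server

function get_user($mc, $id) {
    $key  = "user:$id";                    // hypothetical key scheme
    $user = $mc->get($key);
    if ($user === false) {                 // cache miss: query MySQL
        $user = fetch_user_from_db($id);   // hypothetical DB helper
        $mc->set($key, $user, 0, 300);     // cache for 5 minutes
    }
    return $user;
}
```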

4. Adding a Web Server and MySQL Master‑Slave

When UV reached 50,000, a second web server was added and a MySQL slave was introduced for redundancy and backups. DNS round-robin provided simple load distribution, while NFS shared a directory between the two web nodes so uploaded files stayed consistent.
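Classic MySQL master-slave replication needs binary logging on the master, a distinct server-id on each node, and a `CHANGE MASTER TO` on the slave. A sketch with illustrative values (server-ids, host, user, and log coordinates are placeholders):

```ini
# Master my.cnf
[mysqld]
server-id = 1
log-bin   = mysql-bin

# Slave my.cnf
[mysqld]
server-id = 2
relay-log = relay-bin
read_only = 1
```

```sql
-- On the slave (coordinates come from SHOW MASTER STATUS on the master)
CHANGE MASTER TO
  MASTER_HOST='10.0.0.20', MASTER_USER='repl', MASTER_PASSWORD='change-me',
  MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
START SLAVE;
```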

5. MySQL Read‑Write Splitting

With daily UV climbing into the hundreds of thousands, slow queries persisted despite tuning, so read-write splitting was implemented, either by modifying the application code or by inserting a mysql-proxy layer.
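At the application layer, read-write splitting amounts to routing SELECTs to the slave and everything else to the master. An illustrative PHP sketch (host names and credentials are placeholders; a real implementation must also keep transactions and replication-lag-sensitive reads on the master):

```php
<?php
// Two connections: writes go to the master, reads to the slave
$write = new mysqli('db-master.internal', 'webapp', 'change-me', 'site');
$read  = new mysqli('db-slave.internal',  'webapp', 'change-me', 'site');

function run_query($write, $read, $sql) {
    // Naive routing: statements starting with SELECT hit the slave
    $is_read = stripos(ltrim($sql), 'SELECT') === 0;
    $conn = $is_read ? $read : $write;
    return $conn->query($sql);
}
```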

6. Avoiding Single Points with Load Balancing

After a web node failed under peak load, a dedicated nginx load balancer was placed in front of the web servers to provide high availability. The NFS server, however, remained a single point of failure, so moving NFS to a separate dedicated host was the next step toward resilience.
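With nginx as the balancer, the web nodes go into an upstream pool and the front-end server proxies to it. A minimal sketch (addresses are illustrative):

```nginx
# Load balancer in front of the web nodes
upstream web_nodes {
    server 10.0.0.11:80;
    server 10.0.0.12:80;
    # ip_hash;   # uncomment for session stickiness if sessions are node-local
}

server {
    listen 80;
    location / {
        proxy_pass http://web_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Round-robin is the default; weights or `ip_hash` can be added per node as traffic patterns demand.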

7. Further Expansion

When UV hit 1 million, additional web servers were added and the NFS service was split into a primary and a standby instance.
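Sharing the upload directory over NFS comes down to an export on the NFS host and a mount on each web node. A sketch with illustrative paths and subnet:

```
# /etc/exports on the NFS host (path and subnet are placeholders)
/data/uploads  10.0.0.0/24(rw,sync,no_root_squash)

# On each web node, mount the shared directory:
#   mount -t nfs nfs-primary.internal:/data/uploads /var/www/html/uploads
```

With a primary and a standby NFS instance, failover means re-pointing that mount (manually or via a tool such as a heartbeat/keepalived setup) at the standby host.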

8. Introducing NoSQL

At roughly 10 million UV, five web servers were required and MySQL began to show bottlenecks on specific heavy queries; high‑traffic data was moved to Redis (with master‑slave replication) to offload the database.
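Redis master-slave replication is a one-line directive in the replica's configuration. A sketch (the master address is illustrative):

```conf
# Replica's redis.conf
replicaof 10.0.0.30 6379
# Note: in Redis versions before 5.0 (the era of this article),
# the directive was: slaveof 10.0.0.30 6379
```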

9. MySQL Architecture Evolution

The article outlines five stages of MySQL scaling for a site like Discuz: from a single-node setup for under 10,000 daily PV, through multi-master-slave clusters and hierarchical replication, to sharding and database partitioning that supports tens of millions of daily PV.

Tags: architecture, load balancing, caching, MySQL, NoSQL, scaling, LNMP
Written by Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
