
Evolution of Large-Scale Website Architecture: From Single Server to Distributed Systems

The article traces website architecture evolution from a single‑server LAMP setup to distributed clusters, highlighting stages such as service separation, caching, load‑balanced application servers, read‑write database splitting, CDN/reverse proxy use, distributed storage, NoSQL/search integration, and finally SOA‑based business and service segmentation.

Tencent Cloud Developer

This article is part of a series of notes on "Core Principles and Practices of Large-Scale Website Technical Architecture." The author emphasizes the core value of website architecture: "business creates technology, not the other way around." Good architecture evolves with the business; it is not designed up front.

1. Initial Stage: For small-scale usage with rapid development needs, a single server deployment using LAMP stack (Linux, Apache, MySQL, PHP) is sufficient, with files, database, and application all on one server.

2. Separation of Application and Data Services: As traffic grows and storage becomes insufficient, separate file servers and database servers are deployed. File servers require more disk space, database servers need more disk and memory for caching, while application servers require better CPU for intensive business logic computations.

3. Caching for Performance: Implement local and distributed caching for frequently accessed data. Local caching is suitable for small amounts of high-usage data like blacklist checks. Distributed caching commonly uses Memcached or Redis, both with excellent scalability.
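
As a concrete illustration, here is a minimal cache-aside sketch in Python. `LocalCache`, `get_user`, and the dict standing in for the database are all hypothetical; a real deployment would put Memcached or Redis behind the same pattern.

```python
import time

class LocalCache:
    """Tiny in-process TTL cache, suitable for small amounts of hot,
    rarely-changing data (e.g. a blacklist), as described above."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:
            del self.store[key]  # lazily evict expired entries
            return None
        return value

    def set(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)

def get_user(user_id, cache, db):
    """Cache-aside read: try the cache first, fall back to the database,
    then populate the cache so the next read is served from memory."""
    key = f"user:{user_id}"
    user = cache.get(key)
    if user is None:
        user = db[user_id]  # stand-in for a real database query
        cache.set(key, user)
    return user
```

The same read path works unchanged whether `cache` is this local object or a distributed Redis/Memcached client, which is what makes the pattern easy to scale.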

4. Application Server Clustering: Solve single server limitations (concurrency, peak load, single point of failure) through homogeneous cluster deployment with load balancing.
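
The load-balancing idea can be sketched as a simple round-robin picker. `RoundRobinBalancer` and the server names are illustrative only; production clusters use dedicated balancers such as Nginx, LVS, or HAProxy.

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin dispatcher over a homogeneous cluster: each
    request goes to the next server in rotation, spreading load evenly
    and surviving the loss of any single application server."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)
```

Because the servers are homogeneous (stateless and interchangeable), capacity grows by simply appending another entry to the server list.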

5. Database Read-Write Separation: Most mainstream databases support master-slave hot backup. After read-write separation, applications must consider the impact of replication lag on user experience. Middleware like Cobar can encapsulate data access transparently to applications.
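
A rough sketch of what such middleware does internally, assuming a naive SQL classifier. `ReadWriteRouter` is a hypothetical stand-in for Cobar-style routing, not its actual implementation.

```python
class ReadWriteRouter:
    """Routes writes to the master and spreads reads across replicas.
    Caveat (replication lag): a read issued right after a write may hit
    a replica that has not applied the change yet; real middleware often
    pins such reads to the master for a short window."""
    def __init__(self, master, replicas):
        self.master = master
        self.replicas = replicas
        self._i = 0

    def route(self, sql):
        # Very rough classification: anything that is not a SELECT
        # is treated as a write and sent to the master.
        if sql.lstrip().upper().startswith("SELECT"):
            replica = self.replicas[self._i % len(self.replicas)]
            self._i += 1
            return replica
        return self.master
```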

6. CDN and Reverse Proxy: A CDN caches static resources in data centers closest to users, while a reverse proxy deployed at the website's own data center can cache static content and terminate SSL, offloading that work from the application servers.
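
The reverse proxy's caching decision can be sketched as follows. `ReverseProxyCache` and the extension list are illustrative; real proxies such as Nginx or Varnish decide cacheability from response headers, not just file extensions.

```python
STATIC_EXTENSIONS = (".css", ".js", ".png", ".jpg", ".html")

class ReverseProxyCache:
    """Sketch of edge caching at a reverse proxy: static resources are
    served from the proxy's cache after the first fetch, while dynamic
    requests always pass through to the upstream application server."""
    def __init__(self, upstream):
        self.upstream = upstream  # callable: path -> response body
        self.cache = {}

    def handle(self, path):
        if path.endswith(STATIC_EXTENSIONS):
            if path not in self.cache:
                self.cache[path] = self.upstream(path)
            return self.cache[path]
        return self.upstream(path)  # dynamic content: not cached here
```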

7. Distributed Databases and File Systems: As the website scales, single database and file servers become insufficient, requiring cluster deployment.
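
One common way to spread a single logical database across a cluster is hash-based sharding; a minimal sketch is below. `shard_for` is a hypothetical helper, and real systems prefer consistent hashing so that adding shards relocates less data.

```python
import hashlib

def shard_for(key, num_shards):
    """Stable hash-based shard routing: the same key always maps to the
    same shard, so both writers and readers agree on data placement.
    MD5 is used here only for a uniform, deterministic hash."""
    digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards
```

A query for `user:42` is then sent only to shard `shard_for("user:42", num_shards)` instead of to every database server.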

8. NoSQL and Search Engines: For complex data requirements like log storage, analysis, and search, introduce NoSQL databases (MongoDB, HBase) and search engines (Lucene). A unified Data Access Layer (DAL) is needed to manage multiple data sources including relational DBs, NoSQL, caches, file systems, and message queues.
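
A DAL can be sketched as a facade that hides which backend serves which request. All class names here are hypothetical, and `InMemoryStore` merely stands in for MySQL, MongoDB/HBase, or Lucene.

```python
class InMemoryStore:
    """In-memory stand-in for any real backend; illustration only."""
    def __init__(self):
        self.data = {}
        self.log = []

    def get(self, key):
        return self.data.get(key)

    def append(self, entry):
        self.log.append(entry)

    def query(self, term):
        return [e for e in self.log if term in str(e)]


class DataAccessLayer:
    """Unified DAL facade: callers ask for data by meaning, and the DAL
    decides which backend (relational, NoSQL, search) actually serves it."""
    def __init__(self, relational, nosql, search):
        self.relational = relational
        self.nosql = nosql
        self.search = search

    def get_order(self, order_id):
        # Transactional data stays in the relational database.
        return self.relational.get(order_id)

    def append_log(self, entry):
        # High-volume, schema-light data goes to the NoSQL store and is
        # also indexed in the search engine (a common dual-write pattern).
        self.nosql.append(entry)
        self.search.append(entry)

    def search_logs(self, term):
        # Full-text queries are delegated to the search engine.
        return self.search.query(term)
```

Swapping a backend (say, HBase for MongoDB) then touches only the DAL, not every application that reads or writes the data.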

9. Business Splitting and Distribution: Vertically split business services and horizontally split basic services to achieve true Service-Oriented Architecture (SOA). Simply adding machines yields near-linear performance gains at first, but eventually hits bottlenecks, because different sub-businesses have different architectural needs and traffic patterns.
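
One building block of such a split is asynchronous messaging between the separated services; a toy in-memory queue illustrates the decoupling. `MessageQueue` is illustrative only; real systems use brokers such as Kafka or RocketMQ.

```python
from collections import deque

class MessageQueue:
    """Minimal in-memory queue: a producer service (e.g. orders) publishes
    events, and consumer services (e.g. inventory, notifications) process
    them later, so split services need not call each other synchronously."""
    def __init__(self):
        self._q = deque()

    def publish(self, msg):
        self._q.append(msg)

    def consume(self):
        return self._q.popleft() if self._q else None
```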

Tags: Distributed Systems · System Architecture · Microservices · Scalability · Load Balancing · Caching · CDN · Database Optimization
Written by

Tencent Cloud Developer

Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.
