
Leveraging OpenResty, Nginx, and Lua for High‑Performance Caching and Dynamic Content Delivery

This article explains how to use OpenResty with Lua to integrate Redis directly into Nginx for cache‑driven request forwarding, data compression, timed updates, single‑process scheduling, and configurable URL caching, thereby improving concurrency, reducing latency, and enhancing backend resilience.

Top Architect

1. OpenResty

OpenResty is a high‑performance web platform built on Nginx and Lua, bundling many Lua libraries, third‑party modules, and dependencies, enabling developers to create highly concurrent, extensible dynamic web applications, services, and gateways.

The access‑layer cache is implemented by extending OpenResty with Lua code.

2. Nginx + Redis

The typical architecture routes HTTP requests through Nginx to Tomcat, which then reads data from Redis; this chain is serial and blocks if Tomcat fails or its threads are exhausted.

By using OpenResty’s lua‑resty‑redis module, Nginx can access Redis directly without consuming Tomcat threads, allowing continued service when Tomcat is down and reducing response time.
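A minimal sketch of this direct path, intended for a `content_by_lua_block`. The Redis address, the `page:<uri>` key scheme, and the `@tomcat` named fallback location are illustrative assumptions, not details from the article:

```lua
-- content_by_lua_block: serve the page from Redis, bypassing Tomcat.
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(100)  -- 100 ms connect/read timeout

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "redis connect failed: ", err)
    return ngx.exec("@tomcat")  -- Redis unreachable: fall back to Tomcat
end

local html = red:get("page:" .. ngx.var.uri)
if html and html ~= ngx.null then
    red:set_keepalive(10000, 100)  -- return connection to the keepalive pool
    ngx.header["Content-Type"] = "text/html; charset=utf-8"
    return ngx.say(html)
end

red:set_keepalive(10000, 100)
return ngx.exec("@tomcat")  -- cache miss: let Tomcat render the page
```

The `@tomcat` location is assumed to be a plain `proxy_pass` to the Tomcat upstream, so a Redis outage or miss degrades to the original serial path rather than an error.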

3. Compression to Reduce Bandwidth

Data larger than 1 KB is compressed by Nginx before being stored in Redis. This has two benefits:

Improves Redis read speed.

Reduces bandwidth consumption.

Compression adds CPU overhead, however; for data under 1 KB, skipping compression yields higher TPS.

lua‑resty‑redis does not implement its own connection pool; connection reuse relies on its set_keepalive API, which parks sockets in ngx_lua's built‑in cosocket keepalive pool. A worked example of wrapping this in a reusable pooled client is available at http://wiki.jikexueyuan.com/project/openresty/redis/out_package.html .

Redis values are stored as JSON envelopes of the form {"length": <original size>, "content": <page body, compressed when over 1 KB>}, where length records the uncompressed size so the reader knows whether content must be inflated.

Compression can be performed with the lua‑zlib library.
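The envelope and threshold logic might look like the following sketch, using lua‑zlib and cjson. The base64 wrapping is an assumption added here because raw deflate output is not JSON‑safe; verify the library names against your OpenResty bundle:

```lua
-- Encode/decode the {length, content} envelope described above.
local zlib = require "zlib"          -- lua-zlib
local cjson = require "cjson.safe"

local THRESHOLD = 1024  -- 1 KB: compress only above this size

local function encode_page(html)
    local body = html
    if #html > THRESHOLD then
        body = zlib.deflate()(html, "finish")  -- one-shot deflate
    end
    -- base64 so the (possibly binary) body survives JSON encoding
    return cjson.encode({ length = #html, content = ngx.encode_base64(body) })
end

local function decode_page(json)
    local obj = cjson.decode(json)
    if not obj then return nil end
    local body = ngx.decode_base64(obj.content)
    if obj.length > THRESHOLD then
        body = zlib.inflate()(body)  -- length says it was compressed
    end
    return body
end
```

Because length stores the original size, the decoder needs no extra flag: any value above the threshold must have been compressed on the way in.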

4. Timed Updates

A periodic Nginx Lua timer (steps 1 and 2 in the diagram) requests a page URL from Tomcat and stores the returned HTML in Redis.

Cache TTL can be set long (e.g., 1 hour) to tolerate Tomcat failures, while the update interval can be short (e.g., 1 minute) to keep the cache fresh.
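A sketch of this refresh job, for an `init_worker_by_lua_block`. The URL list, addresses, and the use of lua‑resty‑http (a common third‑party client, not named in the article) are assumptions; the interval and TTL follow the article's 1‑minute/1‑hour example:

```lua
-- init_worker_by_lua_block: keep cached pages warm from Tomcat.
local http = require "resty.http"    -- lua-resty-http (assumed client)
local redis = require "resty.redis"

local URLS = { "/index" }            -- pages to refresh (illustrative)

local function refresh(premature)
    if premature then return end     -- worker shutting down
    for _, uri in ipairs(URLS) do
        local httpc = http.new()
        local res, err = httpc:request_uri("http://127.0.0.1:8080" .. uri)
        if res and res.status == 200 then
            local red = redis:new()
            if red:connect("127.0.0.1", 6379) then
                red:set("page:" .. uri, res.body)
                red:expire("page:" .. uri, 3600)  -- 1-hour TTL
                red:set_keepalive(10000, 100)
            end
        else
            ngx.log(ngx.ERR, "refresh failed for ", uri, ": ", err)
        end
    end
end

ngx.timer.every(60, refresh)  -- re-fetch every minute
```

The gap between TTL and interval is the resilience margin: Tomcat can be down for up to an hour before cached pages expire, yet pages are normally never more than a minute stale.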

5. Request Forwarding

When a browser requests a page, Nginx first tries to fetch the HTML from Redis.

If Redis contains the page, it is returned directly.

If Redis misses, Nginx fetches the page from Tomcat, updates Redis, and returns the HTML to the client.
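The whole hit/miss flow can be sketched as one `content_by_lua_block`; `/tomcat/` stands for an assumed internal `proxy_pass` location in front of Tomcat, and the key scheme and TTL are illustrative:

```lua
-- content_by_lua_block: try Redis first, fill from Tomcat on a miss.
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(100)
local key = "page:" .. ngx.var.uri
local connected = red:connect("127.0.0.1", 6379)

if connected then
    local html = red:get(key)
    if html and html ~= ngx.null then
        red:set_keepalive(10000, 100)
        ngx.header["Content-Type"] = "text/html; charset=utf-8"
        return ngx.say(html)              -- hit: serve straight from Redis
    end
end

-- Miss (or Redis down): render through Tomcat, then fill the cache.
local res = ngx.location.capture("/tomcat" .. ngx.var.uri)
if connected and res.status == 200 then
    red:set(key, res.body)
    red:expire(key, 3600)                 -- 1-hour TTL
    red:set_keepalive(10000, 100)
end
ngx.status = res.status
ngx.print(res.body)
```

Note that the write‑back is skipped when Redis is unreachable, so a Redis outage quietly degrades to plain proxying instead of failing requests.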

6. Single‑Process Timed Update

All Nginx worker processes handle request forwarding, but only worker 0 runs the periodic task that updates Redis. The worker ID is obtained via ngx.worker.id() .
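Confining the timer to one worker is a one‑line guard in `init_worker_by_lua_block`; `refresh_pages` below is a hypothetical name for the update job:

```lua
-- init_worker_by_lua_block: only worker 0 schedules the refresh timer,
-- so the Redis update runs once per server, not once per worker.
if ngx.worker.id() == 0 then
    local ok, err = ngx.timer.every(60, refresh_pages)
    if not ok then
        ngx.log(ngx.ERR, "failed to start refresh timer: ", err)
    end
end
```

All workers still execute the request‑forwarding code; only the background update is deduplicated.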

7. Configurability

The backend management UI allows configuring cacheable URLs, TTL, and update intervals, e.g., modify?url=index&expire=3600000&intervaltime=300000&sign=xxxx (expire and intervaltime are in milliseconds). The sign is a signature generated from the same parameters using a secret key; Nginx validates the signature to authorize configuration changes.
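Signature validation might be sketched as follows. The article does not specify the algorithm or parameter canonicalization, so HMAC‑SHA1 over the parameters in a fixed order (via ngx.hmac_sha1) and the secret value are assumptions:

```lua
-- access_by_lua_block on the modify endpoint: reject unsigned changes.
local SECRET = "change-me"  -- shared secret (assumption)

local function valid_signature(args)
    -- Canonical payload: the signed parameters in a fixed order.
    local payload = table.concat({
        "url=" .. (args.url or ""),
        "expire=" .. (args.expire or ""),
        "intervaltime=" .. (args.intervaltime or ""),
    }, "&")
    local digest = ngx.encode_base64(ngx.hmac_sha1(SECRET, payload))
    return args.sign == digest
end

local args = ngx.req.get_uri_args()
if not valid_signature(args) then
    return ngx.exit(ngx.HTTP_FORBIDDEN)  -- bad or missing signature
end
-- signature OK: apply the new TTL/interval settings for args.url here
```

Since only holders of the secret can produce a valid sign, the configuration endpoint can be exposed without separate session authentication.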

Tags: backend development, redis, caching, web performance, nginx, Lua, OpenResty
Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
