APISIX API Gateway: Architecture, Features, Performance Comparison, and Future Outlook
This article introduces the APISIX API gateway, explaining its cloud‑native architecture built on OpenResty and Etcd, the advantages over traditional monolithic service frameworks, detailed feature breakdowns, performance benchmark comparisons with OpenResty, multi‑cluster management practices, usage scenarios, monitoring, logging, and future development directions.
Traditional monolithic service frameworks embed generic functions such as authentication, authorization, rate‑limiting, and circuit‑breaking directly into business services via AOP, leading to version management difficulties, forced service recompilation, and deployment downtime.
Introducing a gateway isolates these cross‑cutting concerns into an independent service, allowing independent iteration, language‑agnostic configuration, and reduced impact on business development.
APISIX is a high‑performance, cloud‑native API gateway built on OpenResty (Lua‑extended Nginx) and Etcd. It provides rich traffic‑management capabilities including load balancing, dynamic routing, upstream management, A/B testing, canary releases, rate limiting, circuit breaking, security, monitoring, and service observability.
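Most of these capabilities are enabled per route. As an illustrative sketch (the URI, upstream address, and limit values below are assumptions, not taken from the article), a route that rate-limits clients with the built-in limit-count plugin could be declared through the Admin API as:

```json
{
    "uri": "/hello",
    "plugins": {
        "limit-count": {
            "count": 100,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        }
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": { "127.0.0.1:1980": 1 }
    }
}
```

Posting such an object to the Admin API stores it in Etcd, from which every gateway node picks it up without a restart.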
APISIX Architecture: The gateway runs as an OpenResty instance in which Lua scripts hook into Nginx's request-processing phases. Of the many hooks Nginx exposes, APISIX uses only eight: init_by_lua, init_worker_by_lua, ssl_certificate_by_lua, access_by_lua, balancer_by_lua, header_filter_by_lua, body_filter_by_lua, and log_by_lua. The nginx.conf file is generated from a Lua template when APISIX starts:
local file = require("apisix.cli.file")
local util = require("apisix.cli.util")
local template = require("resty.template")
local ngx_tpl = require("apisix.cli.ngx_tpl")

local function init(env)
    -- read conf/config.yaml; in the full code, sys_conf is assembled
    -- from yaml_conf before the template is rendered
    local yaml_conf, err = file.read_yaml_conf(env.apisix_home)
    local conf_render = template.compile(ngx_tpl)
    local ngxconf = conf_render(sys_conf)
    -- write the rendered result out as the live nginx.conf
    local ok, err = util.write_file(env.apisix_home .. "/conf/nginx.conf", ngxconf)
end

APISIX watches Etcd for configuration changes using a timer-based watch mechanism. For each of the eleven Etcd prefixes (e.g., /apisix/routes/, /apisix/services/, /apisix/plugins/), a dedicated config_etcd object registers an automatic fetch timer that repeatedly calls sync_data, keeping every Nginx worker up to date without a separate agent process.
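The per-prefix fetch timer can be sketched roughly as follows. This is a simplified sketch, not the actual implementation (which lives in APISIX's config_etcd module); the names watcher and the one-second resync interval are illustrative:

```lua
-- Sketch of the timer-based watch loop; `self` stands in for one
-- config_etcd object bound to a single etcd prefix.
local function automatic_fetching(premature, self)
    if premature then
        return  -- the Nginx worker is shutting down; let the timer die
    end
    local ok, err = self:sync_data()  -- pull changes for this prefix
    if not ok then
        ngx.log(ngx.ERR, "sync_data failed: ", err)
    end
    -- re-arm the timer so this worker keeps itself up to date
    ngx.timer.at(1, automatic_fetching, self)
end

ngx.timer.at(0, automatic_fetching, watcher)
```

Because the timer runs inside each worker process, no sidecar or agent is needed to push configuration into the gateway.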
Performance Comparison: Benchmarks across different thread counts and concurrency levels show APISIX achieving higher requests per second, lower average latency, and lower CPU load than vanilla OpenResty. For example, with 48 threads and 200 concurrent connections, APISIX handled 234,306 req/s versus 146,805 req/s for OpenResty, roughly 59.6% higher throughput (the quoted 37.34% expresses the same gap as a fraction of the APISIX figure), while maintaining comparable latency.
Multi‑Cluster Management: APISIX clusters are isolated per business line to avoid cross‑impact, with team‑based access control and budget‑based quota management. Future plans include a unified control plane for centralized certificate distribution across clusters.
Practical Usage: The gateway is employed for short‑link generation via a custom Redis‑backed plugin, gray‑release strategies using the traffic‑split plugin, dynamic service discovery through registration centers, and observability via the built‑in Prometheus plugin plus kafka-logger for log aggregation.
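Custom plugins such as the Redis-backed short-link plugin follow APISIX's standard plugin layout. A minimal skeleton might look like the following; the plugin name "short-url", the schema fields, and the redis_lookup helper are hypothetical, since the article does not show the team's actual code:

```lua
local core = require("apisix.core")

-- JSON-schema describing the plugin's per-route configuration
local schema = {
    type = "object",
    properties = {
        redis_host = { type = "string",  default = "127.0.0.1" },
        redis_port = { type = "integer", default = 6379 },
    },
}

local _M = {
    version  = 0.1,
    priority = 1000,      -- relative execution order among plugins
    name     = "short-url",
    schema   = schema,
}

function _M.check_schema(conf)
    return core.schema.check(schema, conf)
end

-- access phase: resolve the short code and redirect the client
function _M.access(conf, ctx)
    local code = ctx.var.uri:sub(2)          -- strip the leading "/"
    local target = redis_lookup(conf, code)  -- hypothetical helper,
                                             -- e.g. built on lua-resty-redis
    if target then
        core.response.set_header("Location", target)
        return 302
    end
    return 404
end

return _M
```

Dropping such a file into the plugins directory and listing its name in the configuration makes it available to any route, which is exactly the self-service workflow the Future Outlook section aims to streamline.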
Future Outlook: Enhancements target self‑service plugin management (upload via UI, automatic loading) and streamlined multi‑cluster certificate handling, aiming to reduce operational overhead as the number of clusters grows.
Yiche Technology
Official account of Yiche Technology, regularly sharing the team's technical practices and insights.