Implementing Sentinel for Traffic Protection and Rate Limiting in a Large-Scale Restaurant Digital Platform
This article describes how a large restaurant chain used the open‑source Sentinel framework to implement traffic protection, rate limiting, and circuit breaking across a platform handling millions of daily orders. It covers the selection rationale, design choices, high‑availability rule distribution, monitoring, and user‑experience considerations, and includes Java code examples for integration.
The digital transformation of the restaurant industry has created massive traffic spikes and stability challenges for online ordering systems. To protect the application layer, the team selected the open‑source Sentinel framework for its active community, lightweight performance overhead, rich flow‑control strategies (QPS limiting, adaptive system protection based on CPU load, hotspot‑parameter limiting, and circuit breaking), real‑time monitoring, multi‑language compatibility, and high customizability.
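Sentinel's QPS flow control can be pictured as a per‑resource counter over a one‑second window. The sketch below is a deliberately simplified fixed‑window limiter in plain Java to illustrate the idea; Sentinel itself uses a sliding window inside its slot chain, and the threshold and class name here are illustrative only:

```java
/**
 * Minimal fixed-window QPS limiter, sketched only to illustrate what a
 * QPS flow rule enforces. Sentinel's real implementation uses a sliding
 * window; names and thresholds here are hypothetical.
 */
public class FixedWindowQpsLimiter {
    private final long threshold;      // max permits per one-second window
    private long windowStartMillis;    // start of the current window
    private long counter;              // permits handed out in this window

    public FixedWindowQpsLimiter(long threshold) {
        this.threshold = threshold;
    }

    /** Returns true if this request fits within the current one-second window. */
    public synchronized boolean tryAcquire(long nowMillis) {
        if (nowMillis - windowStartMillis >= 1000) {
            windowStartMillis = nowMillis;  // roll over to a new window
            counter = 0;
        }
        return ++counter <= threshold;
    }
}
```

A caller would invoke `tryAcquire(System.currentTimeMillis())` before the guarded operation and serve a throttled response when it returns false.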
Key requirements included high‑availability rule maintenance, persistent observability data, low operational cost, a friendly experience for throttled users, and the ability to rapidly enable or disable limits during incidents. The solution architecture integrates Sentinel with a configuration center for hot‑updating rules, packages the integration as non‑intrusive low‑code starters, and wraps custom interceptors so limits can be enforced without changes to business code.
public class SentinelDataSourceListener implements InitializingBean {

    @Override
    public void afterPropertiesSet() throws Exception {
        initFlowRules(namespaceName);
    }

    private void initFlowRules(String namespaceName) {
        // Register configuration-center-backed data sources so rule changes hot-update at runtime
        ReadableDataSource<String, List<FlowRule>> flowRuleDataSource = new DataSource<>(
                namespaceName, ConfigUtil.FLOW_DATA_ID_POSTFIX, "[]",
                source -> JSON.parseObject(source, new TypeReference<List<FlowRule>>() {}));
        FlowRuleManager.register2Property(flowRuleDataSource.getProperty());

        ReadableDataSource<String, List<DegradeRule>> degradeRuleDataSource = new DataSource<>(
                namespaceName, ConfigUtil.DEGRADE_DATA_ID_POSTFIX, "[]",
                source -> JSON.parseObject(source, new TypeReference<List<DegradeRule>>() {}));
        DegradeRuleManager.register2Property(degradeRuleDataSource.getProperty());

        ReadableDataSource<String, List<SystemRule>> systemRuleDataSource = new DataSource<>(
                namespaceName, ConfigUtil.SYSTEM_DATA_ID_POSTFIX, "[]",
                source -> JSON.parseObject(source, new TypeReference<List<SystemRule>>() {}));
        SystemRuleManager.register2Property(systemRuleDataSource.getProperty());
    }
}
public class SentinelInterceptor extends AbstractSentinelInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
                             Object handler) throws Exception {
        Entry entry = null;
        List<Object> params = new ArrayList<>();
        List<Object> hotKeyValue = new ArrayList<>();
        try {
            String resourceName = getResourceName(request);
            if (StringUtil.isEmpty(resourceName)) {
                return true;
            }
            // Guard against re-entrant handling of the same request
            if (increaseReferece(request, this.config.getRequestRefName(), 1) != 1) {
                return true;
            }
            String origin = parseOrigin(request);
            String contextName = getContextName(request);
            ContextUtil.enter(contextName, origin);

            RequestWrapper requestWrapper = null;
            if (request instanceof RequestWrapper) {
                requestWrapper = (RequestWrapper) request;
            }
            if (hotKeyConverter == null) {
                hotKeyConverter = new DefaultHotKeyConverter();
            }
            // Extract hotspot-parameter values for param-flow rules
            hotKeyValue = hotKeyConverter.handleHotKey(request, requestWrapper, request, params, environment);
            SentinelMetrics.requestCount(resourceName);
            entry = SphU.entry(resourceName, EntryType.IN, 1, hotKeyValue.toArray());
            request.setAttribute(config.getRequestAttributeName(), entry);
            return true;
        } catch (BlockException e) {
            try {
                handleBlockException(request, response, e);
            } finally {
                ContextUtil.exit();
            }
            return false;
        }
    }
}
public class PrometheusMetricExtension implements MetricExtension {

    @Override
    public void addPass(String resource, int n, Object... args) {
        SentinelMetrics.passRequests(resource, n);
    }

    @Override
    public void addBlock(String resource, int n, String origin, BlockException ex, Object... args) {
        SentinelMetrics.blockRequests(resource, ex.getClass().getSimpleName(),
                ex.getRuleLimitApp(), origin, n);
    }

    @Override
    public void addSuccess(String resource, int n, Object... args) {
        SentinelMetrics.successRequests(resource, n);
    }
}
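The `SentinelMetrics` class above is custom to the platform and its implementation is not shown. As a rough, hypothetical sketch of what the counting side of such a collector could look like (the real system presumably uses a Prometheus client library and a scrape endpoint):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/**
 * Hypothetical sketch of a SentinelMetrics-style collector: lock-free
 * per-resource counters that a metrics endpoint would later expose.
 * Class and method names are illustrative, not the article's actual code.
 */
public class SimpleSentinelMetrics {
    private static final Map<String, LongAdder> PASS = new ConcurrentHashMap<>();
    private static final Map<String, LongAdder> BLOCK = new ConcurrentHashMap<>();

    public static void passRequests(String resource, int n) {
        PASS.computeIfAbsent(resource, r -> new LongAdder()).add(n);
    }

    public static void blockRequests(String resource, int n) {
        BLOCK.computeIfAbsent(resource, r -> new LongAdder()).add(n);
    }

    public static long passed(String resource) {
        return PASS.getOrDefault(resource, new LongAdder()).sum();
    }

    public static long blocked(String resource) {
        return BLOCK.getOrDefault(resource, new LongAdder()).sum();
    }
}
```

`LongAdder` is chosen over `AtomicLong` because metric counters are write-heavy and read rarely (only at scrape time).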
High‑availability rule distribution is achieved through multi‑instance deployment, real‑time synchronization via the configuration center, and local caching to survive configuration outages. Monitoring and alerting are handled by collecting Sentinel logs (e.g., via ELK) and persisting key metrics to a time‑series database (e.g., VictoriaMetrics) for dashboards and alerts.
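The local-caching behavior described above can be sketched as a thin wrapper around the configuration-center client: serve fresh rules when the fetch succeeds, and fall back to the last good copy during an outage. All names here are illustrative assumptions; a production version would also persist the cache to disk to survive restarts:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

/**
 * Sketch of local-cache fallback for rule distribution. The remoteFetcher
 * stands in for a configuration-center client; names are hypothetical.
 */
public class CachedRuleSource {
    private final Supplier<String> remoteFetcher;
    // Last successfully fetched rule JSON; starts as an empty rule list
    private final AtomicReference<String> lastGood = new AtomicReference<>("[]");

    public CachedRuleSource(Supplier<String> remoteFetcher) {
        this.remoteFetcher = remoteFetcher;
    }

    /** Returns fresh rules when possible, otherwise the cached copy. */
    public String loadRules() {
        try {
            String fresh = remoteFetcher.get();
            if (fresh != null) {
                lastGood.set(fresh);
            }
        } catch (RuntimeException outage) {
            // Configuration center unavailable: keep serving the cached rules
        }
        return lastGood.get();
    }
}
```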
User‑experience design includes unified status codes for throttled responses, graceful degradation of non‑core APIs, and a one‑click toggle for enabling or disabling limits during emergencies, supported by clear operational procedures.
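The unified status codes mentioned above could take a shape like the following sketch, which maps a block type to a stable, client-friendly JSON body. The block-type names mirror Sentinel's `BlockException` subclasses, but the code value, messages, and class name are assumptions for illustration:

```java
/**
 * Hypothetical mapping from a Sentinel block type to a unified throttled
 * response body. One stable code for all throttled responses keeps
 * client-side handling simple; messages here are illustrative.
 */
public class ThrottledResponses {

    public static String toJson(String blockType) {
        String message;
        switch (blockType) {
            case "FlowException":        message = "Too many requests, please retry shortly"; break;
            case "DegradeException":     message = "Service temporarily degraded";            break;
            case "SystemBlockException": message = "System busy, please retry later";         break;
            default:                     message = "Request throttled";                       break;
        }
        // A single unified code lets every client render a friendly retry prompt
        return "{\"code\":429,\"message\":\"" + message + "\"}";
    }
}
```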
After deployment across dozens of core services, the solution flattened traffic peaks during promotional events, reduced latency spikes, and provided actionable metrics for capacity planning, demonstrating significant stability improvements with minimal impact on end‑user experience.
Future work will explore proactive traffic shaping and automatic capacity matching based on the collected observability data to further reduce the need for manual rate‑limiting.
Yum! Tech Team
How we support the digital platform of China's largest restaurant group—technology behind hundreds of millions of consumers and over 12,000 stores.