
Why Do Random 502 Errors Appear After PaaS Migration? Nginx‑Ingress‑uwsgi Insights

After an application moves to a PaaS platform, intermittent 502 errors can arise from Nginx’s retry behavior, HTTP/1.1 connection reuse, and a protocol mismatch between Ingress and uwsgi; the problem can be diagnosed through error statistics and packet capture.


Specific Phenomenon

After migrating the application to our PaaS platform, occasional 502 errors appear, as shown in the image below.

Why Only POST Requests Are Seen

Readers might assume only POST requests are affected because the ELK filter is set to POST, but GET requests also hit 502 errors; Nginx retries them against the upstream, producing log entries like the following:

The retry mechanism is Nginx’s default <code>proxy_next_upstream</code> behavior (documented at nginx.org).

Because GET is considered idempotent, Nginx retries it when an upstream returns 502; POST, being non‑idempotent, is not retried. The root cause of the 502 is the same for both methods.
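For reference, this retry behavior can also be tuned explicitly. A minimal sketch of the relevant upstream settings (the upstream name and directive values here are illustrative, not our production config):

<code># Retry the next upstream on a 502 only for idempotent methods (the default);
# adding "non_idempotent" would extend retries to POST as well -- usually unsafe.
location / {
    proxy_pass http://app_upstream;
    proxy_next_upstream error timeout http_502;
    proxy_next_upstream_tries 2;
}</code>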

Network Topology

When a request enters the cluster, the flow is:

<code>user request => Nginx => Ingress => uwsgi</code>

We keep Nginx in the chain for historical reasons, even though Ingress is also present.

Statistical Investigation

Error statistics for Nginx and Ingress show 502 errors occurring at the same rate, which indicates the problem lies on the Ingress &lt;=&gt; uwsgi hop rather than between Nginx and Ingress.
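The comparison above can be reproduced with a short script. This is a sketch assuming the common access‑log format, where the status code is the ninth whitespace‑separated field; the sample log lines are fabricated for illustration, and in practice you would read the real Nginx and Ingress log files:

```python
# Count 502 responses in access-log lines; in practice, read the actual
# Nginx and Ingress controller logs instead of these fabricated samples.
def count_502(lines):
    total = 0
    for line in lines:
        fields = line.split()
        # Common log format: the status code is the 9th field
        if len(fields) > 8 and fields[8] == "502":
            total += 1
    return total

nginx_log = [
    '10.0.0.1 - - [01/Jan/2024:00:00:00 +0000] "POST /api HTTP/1.1" 502 0',
    '10.0.0.1 - - [01/Jan/2024:00:00:01 +0000] "GET /api HTTP/1.1" 200 42',
]
ingress_log = [
    '10.0.0.2 - - [01/Jan/2024:00:00:00 +0000] "POST /api HTTP/1.1" 502 0',
]

# Equal counts point at the Ingress <=> uwsgi hop
print(count_502(nginx_log), count_502(ingress_log))  # prints: 1 1
```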

Packet Capture

After other methods failed, we captured the traffic.

Capture result:

The TCP connection is reused: Ingress, speaking HTTP/1.1, sends a second HTTP request on the same connection, but uwsgi’s listener does not support HTTP/1.1 keep‑alive and has already closed the connection, so the second request fails with a 502. Failed GETs are retried and succeed on another attempt, while POSTs are not retried, which explains why POST 502s appear in the statistics.
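The failure mode can be reproduced locally. Below is a minimal sketch using Python’s <code>http.server</code> as a stand‑in for uwsgi’s HTTP/1.0‑style <code>http-socket</code>; the server and request details are illustrative, not our actual stack:

```python
# Toy HTTP/1.0 server standing in for uwsgi's http-socket: it closes the
# connection after every response, so a proxy that reuses the connection
# (as Ingress does with HTTP/1.1 keep-alive) fails on the second request.
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.0"  # no keep-alive, like uwsgi's http-socket

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = b"GET / HTTP/1.1\r\nHost: x\r\nConnection: keep-alive\r\n\r\n"
sock = socket.create_connection(server.server_address)
sock.sendall(req)

first = b""
while True:                 # read until the server closes the connection
    chunk = sock.recv(4096)
    if not chunk:
        break
    first += chunk

try:                        # reuse the same TCP connection, Ingress-style
    sock.sendall(req)
    second = sock.recv(4096)
except OSError:             # connection already reset by the closed server
    second = b""

print(b"200" in first, second == b"")  # first request works, reuse fails
server.shutdown()
```

The first request succeeds, but the reused connection yields nothing, which is exactly what the packet capture showed between Ingress and uwsgi.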

Ingress Configuration Learning

Ingress defaults to HTTP/1.1 for upstream connections, while our uwsgi listens on an <code>http-socket</code>, not an <code>http11-socket</code>. This protocol mismatch leads to the unexpected 502 errors.

Solution: force the HTTP version used by Ingress.

<code>{% if keepalive_enable is sameas true %}
nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
{% else %}
nginx.ingress.kubernetes.io/proxy-http-version: "1.0"
{% endif %}</code>

Summary

Packet capture quickly identified the root cause. For multi‑hop request chains, first compare error counts of Nginx and Ingress; if they match, focus on Ingress. This approach speeds up fault isolation across complex routing layers.

kubernetes · Troubleshooting · nginx · ingress · 502 · uwsgi
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
