
Unlocking Container Success: 5 Ways Application Design Drives Performance

This article, derived from the Efficient Operations Community Talk, explains how thoughtful application design—covering startup time, node placement, data structures, middleware choices, and architecture—determines the true performance and stability benefits when deploying containers.

Efficient Ops

This article is based on the Efficient Operations Community Talk online sharing.

Guest Introduction

Da Hao, Co‑founder and CTO of YouRong Cloud. He started programming in 1984 with BASIC, has built high‑performance trading systems, and has worked on communications, databases, clusters, high‑performance and grid computing, and mobile terminals. He has over seven years of research experience at the Chinese Academy of Sciences, focusing on high‑performance, distributed, and grid computing.

Introduction

Today we discuss the close relationship between containers and applications.

Treat the application well and it will treat you well. Saying “Hi, Container!” and expecting containers to solve every problem will only let the application mislead you.

Containers are closer to applications than VMs, acting like a "virtual machine" for the app, just as a VM is a virtual machine for the OS.

Sharing Main Text

We use containers for applications, so the root problem is the application, not the container. A well‑designed application yields better results when supported by containers.

We discuss five key points from the perspective of application performance and stability:

Container startup time and application module initialization together determine the app’s readiness time.

The relationship between containers and nodes affects performance and stability.

The application’s data structure influences container‑based performance.

The middleware stack chosen by the application shapes container scaling.

The architectural design of the application dictates how containers are used.

1. Container Startup Time and Application Initialization

Container startup is measured in seconds, but this only reflects the OS process launch, not the full application start time, which also includes initialization of services such as data buffering.

If a data‑buffering service hasn’t finished loading, the application cannot serve requests, so the real readiness time spans from container start to service availability.

Note: Pre‑starting containers can mitigate this issue.
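The distinction between "process started" and "service available" can be sketched as follows. This is a minimal illustration, not real platform code; the `Service` class, its `warm_up` step, and the cache contents are all hypothetical stand-ins for an application's initialization phase.

```python
import time

class Service:
    """Toy service: the process exists immediately, but it can only
    serve requests after its data buffer has been warmed."""

    def __init__(self):
        self.started_at = time.monotonic()
        self.cache = None  # data-buffering service not yet loaded

    def warm_up(self):
        # Stand-in for loading hot data from a database into memory;
        # in a real system this is where most of the readiness time goes.
        self.cache = {"hot_row": 42}

    def is_ready(self):
        # A readiness check should test real dependencies,
        # not merely whether the process is running.
        return self.cache is not None

svc = Service()
assert not svc.is_ready()   # container "started", app not usable yet
svc.warm_up()
assert svc.is_ready()       # readiness = container start + initialization
```

The point of `is_ready` is that orchestrators should route traffic based on a check like this, not on the container's process state alone.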

2. Container‑Node Relationship Choices

(1) In the first diagram, container A runs on node 1 and container B on node 2, communicating over the network.

(2) In the second diagram, both containers run on the same node, communicating via memory rather than a physical network.

When both containers are compute‑intensive, the first setup may offer more raw CPU power, but for typical workloads the difference is negligible.

The second setup reduces network overhead, which can improve efficiency.
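The cost difference between the two arrangements can be approximated with a toy measurement. This sketch compares a TCP loopback round trip (a stand-in for cross-node traffic, which understates real network latency) with a plain in-process call (a stand-in for same-node communication); the echo server and message are illustrative only.

```python
import socket
import threading
import time

def serve_once(srv):
    # Accept one connection and echo back whatever arrives.
    conn, _ = srv.accept()
    conn.sendall(conn.recv(16))
    conn.close()

# TCP round trip: stands in for container A -> network -> container B.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve_once, args=(srv,)).start()

t0 = time.perf_counter()
cli = socket.create_connection(srv.getsockname())
cli.sendall(b"ping")
reply = cli.recv(16)
cli.close()
srv.close()
tcp_cost = time.perf_counter() - t0

# In-memory call: stands in for two containers on the same node.
t0 = time.perf_counter()
local_reply = (lambda msg: msg)(b"ping")
mem_cost = time.perf_counter() - t0

print(f"tcp {tcp_cost:.6f}s vs memory {mem_cost:.6f}s")
```

Even on loopback, the socket round trip costs orders of magnitude more than the in-memory call; across a real network the gap is larger still.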

In scenarios where one container runs the application and the other runs a database, high‑availability considerations may favor the first arrangement.

Data HA should be managed separately from other HA concerns.

3. Application Data Structure Impacts Performance

Micro‑services are often discussed alongside containers, but they are independent concepts; containers simply help micro‑services operate more effectively.

Think of a band where each musician has a dedicated instrument; assigning two people to one flute creates coordination problems.

Guidelines:

(1) For few columns from a single table, avoid over‑splitting into micro‑services.

(2) For large records, finer‑grained services per business domain are acceptable.

(3) For heavy analytical queries, column‑oriented storage can reduce read latency.
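Guideline (3) can be illustrated with plain Python lists. This is a schematic comparison, not a real storage engine: the table of `id`/`name`/`score` records is invented, and the "row store" and "column store" are just two layouts of the same data.

```python
N = 1000

# Row store: each record is a dict; reading one column for an
# analytical query still touches every full record.
rows = [{"id": i, "name": f"user{i}", "score": i * 2} for i in range(N)]
scores_from_rows = [r["score"] for r in rows]

# Column store: each column is a contiguous list; the same query
# reads only the one column it needs.
columns = {
    "id":    list(range(N)),
    "name":  [f"user{i}" for i in range(N)],
    "score": [i * 2 for i in range(N)],
}
scores_from_columns = columns["score"]

# Both layouts answer the aggregation identically; the column layout
# simply reads far less unrelated data to do it.
assert sum(scores_from_rows) == sum(scores_from_columns)
```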

4. Middleware Choices Shape Container Scaling

Scaling containers often involves launching them on idle nodes, but middleware type influences the scaling strategy.

Common middleware categories:

(1) Scalable

(2) Highly available

(3) Application fail‑over

(4) Database connection pooling

Example: WebLogic’s clustering and load‑balancing mechanisms sit closer to the application and database layers.

In a container platform, each middleware component can be packaged into its own image and deployed where needed, while the platform handles external load distribution.
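Category (4), connection pooling, is the easiest to sketch. The following is a minimal fixed-size pool, assuming a hypothetical `FakeConn` in place of a real database driver; production middleware adds health checks, timeouts, and reconnection on top of this idea.

```python
import queue

class ConnectionPool:
    """Minimal fixed-size pool: connections are created once up front
    and reused, instead of being opened per request."""

    def __init__(self, factory, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=None):
        # Blocks until a connection is free (bounds database load).
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

class FakeConn:
    """Hypothetical connection object standing in for a DB driver."""
    created = 0
    def __init__(self):
        FakeConn.created += 1
        self.uses = 0

pool = ConnectionPool(FakeConn, size=2)
for _ in range(10):
    c = pool.acquire()
    c.uses += 1
    pool.release(c)

print(FakeConn.created)  # only 2 connections served all 10 requests
```

Packaging a component like this into its own image means each scaled-out application container gets a bounded, reusable set of database connections rather than opening its own.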

Suggested architecture (diagram not reproduced here).

5. Application Architecture Determines Container Usage

Different business scenarios prioritize different aspects: some need simple request routing, others require complex backend logic and data handling.

Data is central; two approaches exist:

(1) Perform heavy calculations in stored procedures, with containers only invoking the results.

(2) Offload intensive computation to dedicated containers that sit beside the database, keeping the database focused on storage and simple queries.

When write traffic is high, stored procedures can become a bottleneck, making approach (2) preferable for scaling compute workloads.
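The two approaches can be contrasted in a few lines. This is a schematic, with plain functions standing in for the database and the compute container, and an invented `amount` column; the key point is that approach (2) separates storage from computation so the compute side can scale independently.

```python
# Approach (1): computation lives inside the database, stored-procedure
# style; the database both stores data and does the heavy aggregation.
def stored_procedure(db_rows):
    return sum(r["amount"] for r in db_rows)

# Approach (2): the database only serves rows via a simple query; a
# separate compute container (here, a plain function) does the heavy
# work, so compute capacity can be scaled out without touching storage.
def fetch_rows(db_rows):
    return list(db_rows)          # simple query only

def compute_container(rows):
    return sum(r["amount"] for r in rows)

db = [{"amount": i} for i in range(100)]

# Both approaches give the same answer; they differ in where the
# CPU cost lands when write traffic and query load grow.
assert stored_procedure(db) == compute_container(fetch_rows(db))
```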

Comparison table (not reproduced here).

Using containers can improve performance and simplify deployment of database‑related computation.

Illustration of approach (2) (diagram not reproduced here).

Containers should be short‑lived, started on demand, but pre‑starting a pool can improve service readiness.

Two example pools:

(1) Buffer pools – containers that cache database reads, improving read‑heavy workloads.

(2) Job pools – containers that run pre‑prepared computational tasks, simplifying task submission.
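A buffer-pool container from category (1) amounts to a read-through cache in front of the database. The sketch below is a minimal, assumed implementation: a dict stands in for both the backing database and the cache, and write-invalidation is the simple mechanism that keeps cached reads from going stale.

```python
class BufferPool:
    """Read-through cache in front of a slow 'database' lookup.
    Effective for read-heavy data; every write must invalidate the
    cached entry, which is why this helps little for write-intensive
    workloads."""

    def __init__(self, backend):
        self.backend = backend    # stand-in for the real database
        self.cache = {}
        self.db_reads = 0

    def get(self, key):
        if key not in self.cache:
            self.db_reads += 1            # only misses hit the database
            self.cache[key] = self.backend[key]
        return self.cache[key]

    def put(self, key, value):
        self.backend[key] = value
        self.cache.pop(key, None)         # invalidate to avoid stale reads

db = {"k": 1}
pool = BufferPool(db)
for _ in range(5):
    pool.get("k")
print(pool.db_reads)  # 1: four of the five reads were served from cache
```

The invalidation in `put` is also the answer to the stale-read concern: a write evicts the cached copy, so the next read goes back to the database.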

Thank you for participating; further details on container design can be discussed separately.

Q&A

Question 1: Regarding buffer pools, how can we prevent dirty reads?

Answer: Buffer pools have limited benefit for write‑intensive data; they are most effective for read‑heavy scenarios, so usage should be case‑by‑case.

Question 2: For compute‑intensive, middleware‑based applications, does containerization offer significant advantages?

Answer: Containers mainly improve testing, distribution, and deployment; middleware still handles its own management responsibilities.

Question 3: Your suggestions focus on relational databases; what about NoSQL databases that are already highly available?

Answer: High availability and sharding address different goals; sharding targets performance when data volume affects latency, and should be designed around the characteristics of the business data.

Question 4: Are mature containerized applications typically run on physical machines or virtual machines? Is container‑on‑VM necessary?

Answer: Containers and VMs serve different purposes; you can run containers inside VMs to partition servers, or run them directly on bare metal with CoreOS or RancherOS.

Question 5: Pre‑starting containers can shorten service start time, but rapid application updates may cause version mismatches. Any best practices?

Answer: Pre‑start containers only after a new version has been released; keeping pre‑started containers on an older version is useful only as a transition measure.
Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
