
Performance Benchmark of Popular RPC Frameworks: Dubbo, Motan, rpcx, gRPC, and Thrift

This article benchmarks five widely used RPC frameworks—Dubbo, Motan, rpcx, gRPC, and Thrift—measuring throughput, average latency, median latency, and maximum latency at varying client concurrency levels. The results show rpcx's superior overall performance and the impact of serialization choices.

Architecture Digest

The frameworks under test—Dubbo (Alibaba), Motan (Weibo), rpcx (Go), gRPC (Google), and Thrift (Apache)—differ in origin, language support, and integration characteristics.

Performance is a critical factor for RPC frameworks because they are often deployed in high‑concurrency service environments, directly affecting service quality and hardware costs.

The benchmark uses a unified service that implements both server and client sides for each framework. The message payload is defined with a Protocol Buffers file:

syntax = "proto2";
package main;
option optimize_for = SPEED;
message BenchmarkMessage {
  required string field1 = 1;
  optional string field9 = 9;
  optional string field18 = 18;
  optional bool   field80 = 80 [default=false];
  optional bool   field81 = 81 [default=true];
  required int32  field2 = 2;
  required int32  field3 = 3;
  optional int32  field280 = 280;
  optional int32  field6 = 6 [default=0];
  optional int64  field22 = 22;
  optional string field4 = 4;
  repeated fixed64 field5 = 5;
  optional bool   field59 = 59 [default=false];
  optional string field7 = 7;
  optional int32  field16 = 16;
  optional int32  field130 = 130 [default=0];
  optional bool   field12 = 12 [default=true];
  optional bool   field17 = 17 [default=true];
  optional bool   field13 = 13 [default=true];
  optional bool   field14 = 14 [default=true];
  optional int32  field104 = 104 [default=0];
  optional int32  field100 = 100 [default=0];
  optional int32  field101 = 101 [default=0];
  optional string field102 = 102;
  optional string field103 = 103;
  optional int32  field29 = 29 [default=0];
  optional bool   field30 = 30 [default=false];
  optional int32  field60 = 60 [default=-1];
  optional int32  field271 = 271 [default=-1];
  optional int32  field272 = 272 [default=-1];
  optional int32  field150 = 150;
  optional int32  field23 = 23 [default=0];
  optional bool   field24 = 24 [default=false];
  optional int32  field25 = 25 [default=0];
  optional bool   field78 = 78;
  optional int32  field67 = 67 [default=0];
  optional int32  field68 = 68;
  optional int32  field128 = 128 [default=0];
  optional string field129 = 129 [default="xxxxxxxxxxxxxxxxxxxxx"];
  optional int32  field131 = 131 [default=0];
}

The equivalent Thrift definition is:

namespace java com.colobu.thrift
struct BenchmarkMessage {
  1: string field1,
  2: i32    field2,
  3: i32    field3,
  4: string field4,
  5: i64    field5,
  6: i32    field6,
  7: string field7,
  9: string field9,
 12: bool   field12,
 13: bool   field13,
 14: bool   field14,
 16: i32    field16,
 17: bool   field17,
 18: string field18,
 22: i64    field22,
 23: i32    field23,
 24: bool   field24,
 25: i32    field25,
 29: i32    field29,
 30: bool   field30,
 59: bool   field59,
 60: i32    field60,
 67: i32    field67,
 68: i32    field68,
 78: bool   field78,
 80: bool   field80,
 81: bool   field81,
100: i32    field100,
101: i32    field101,
102: string field102,
103: string field103,
104: i32    field104,
128: i32    field128,
129: string field129,
130: i32    field130,
131: i32    field131,
150: i32    field150,
271: i32    field271,
272: i32    field272,
280: i32    field280,
}
service Greeter {
  BenchmarkMessage say(1:BenchmarkMessage name);
}

The service under test is a single echo-style method, declared in the same Protocol Buffers file:

service Hello {
  // Sends a greeting
  rpc Say (BenchmarkMessage) returns (BenchmarkMessage) {}
}
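The Say RPC follows the same pattern in every framework: the client sends a fully populated BenchmarkMessage and the server returns it after touching a couple of fields, so the benchmark measures transport and serialization rather than business logic. A minimal in-process sketch of that contract (a plain Go struct with only a few fields stands in for the generated protobuf type; the mutated values are illustrative):

```go
package main

import "fmt"

// BenchmarkMessage stands in for the generated protobuf type;
// only a handful of its ~40 fields are shown here.
type BenchmarkMessage struct {
	Field1 string
	Field2 int32
	Field3 int32
}

// Say echoes the request back after setting two fields --
// the same trivial work each framework's server performs,
// so the comparison isolates RPC overhead.
func Say(req *BenchmarkMessage) *BenchmarkMessage {
	req.Field1 = "OK"
	req.Field2 = 100
	return req
}

func main() {
	resp := Say(&BenchmarkMessage{Field1: "hello", Field3: 7})
	fmt.Println(resp.Field1, resp.Field2, resp.Field3) // OK 100 7
}
```

Because the handler does almost no work, differences in the measured numbers come from each framework's network stack and codec.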

The test environment consists of two identical machines (one server, one client) with the following specifications:

CPU: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz, 24 cores
Memory: 16 GB
OS: Linux 2.6.32-358.el6.x86_64, CentOS 6.4
Go: 1.7
Java: 1.8
Dubbo: 2.5.4‑SNAPSHOT (2016‑09‑05)
Motan: 0.2.2‑SNAPSHOT (2016‑09‑05)
gRPC: 1.0.0
rpcx: 2016‑09‑05
Thrift: 0.9.3 (java)

Benchmarks were run with client concurrency levels of 100, 500, 1000, 2000, and 5000, recording throughput (calls per second), average latency, median latency, and maximum latency. Success rates were 100% for all tests.
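The measurement loop can be sketched as follows: a fixed pool of workers matching the concurrency level drains a shared queue of calls, each call's latency is recorded, and throughput plus average/median/maximum latency are derived at the end — the four metrics reported below. This is an illustrative harness in Go, not the original benchmark code; the function names and the 1 ms stand-in call are assumptions:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
	"time"
)

// benchmark fires `total` calls from `concurrency` workers against call()
// and reports throughput (calls/sec) plus average, median, and max latency.
func benchmark(concurrency, total int, call func() error) (tps float64, avg, median, max time.Duration) {
	latencies := make([]time.Duration, total)
	jobs := make(chan int)
	var wg sync.WaitGroup

	start := time.Now()
	for w := 0; w < concurrency; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				t := time.Now()
				_ = call() // a real harness would also track failures
				latencies[i] = time.Since(t)
			}
		}()
	}
	for i := 0; i < total; i++ {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
	elapsed := time.Since(start)

	// Sort once to read off the median and the worst case.
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	var sum time.Duration
	for _, l := range latencies {
		sum += l
	}
	tps = float64(total) / elapsed.Seconds()
	return tps, sum / time.Duration(total), latencies[total/2], latencies[total-1]
}

func main() {
	tps, avg, med, max := benchmark(100, 10000, func() error {
		time.Sleep(time.Millisecond) // stand-in for the RPC round trip
		return nil
	})
	fmt.Printf("throughput=%.0f/s avg=%v median=%v max=%v\n", tps, avg, med, max)
}
```

Note that median and maximum latency expose tail behavior that averages hide, which is why the results below report all three latency figures separately.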

Throughput results (see image) show rpcx far ahead, while Dubbo and Motan degrade sharply under high concurrency. Thrift performs better than gRPC, Dubbo, and Motan at low concurrency but also drops with more clients.

Average latency results (see image) are consistent with throughput: rpcx achieves the lowest average latency (<30 ms), while Dubbo’s latency grows significantly with many clients.

Median latency (see image) highlights gRPC as the best performer in this metric.

Maximum latency (see image) shows rpcx’s worst‑case response time stays below 1 second, Motan stays under 2 seconds, while the other frameworks exhibit longer tails.

In conclusion, the benchmark demonstrates that rpcx delivers the best overall performance among the tested RPC frameworks, with gRPC excelling in median latency and Thrift offering a reasonable trade‑off. The article invites readers to provide feedback and suggests extending the study to additional frameworks.

Tags: performance, RPC, benchmark, Dubbo, gRPC, Thrift, rpcx, Motan
Written by Architecture Digest

Focusing on Java backend development: application architecture at top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
