
Monolithizing tRPC-Go Microservices: Architecture, Implementation, and Performance Gains

The article shows how to monolithize selected tRPC‑Go microservices by defining protobuf‑generated Go interfaces and swapping RPC proxies for in‑process implementations via a proxy API, cutting CPU usage by 61% while keeping microservice flexibility and offering best‑practice guidelines for Go service design.

Tencent Cloud Developer

Microservices are a fundamental building block for service governance, especially in cloud‑native environments where each Kubernetes pod typically hosts an independent service. While microservices bring clear benefits—lower coupling, independent deployment, clear input/output contracts, and easier troubleshooting—they also introduce significant overhead, such as increased network traffic, higher CPU consumption for serialization/deserialization, and complex service governance.

This article presents a practical approach to achieve a "both‑and" solution: retaining the advantages of microservices while reducing their overhead by monolithizing certain services using the open‑source tRPC‑Go framework. The method is based on treating RPC calls as ordinary Go interfaces, allowing client and server sides to share the same interface implementation and replace RPC calls with in‑process function calls when the services are co‑located.

Key steps of the solution:

Define service interfaces in protobuf and generate Go code with tRPC‑Go. Example service definition:

```protobuf
service FeedsRerank {
  rpc GetFeedList (GetFeedRequest) returns (GetFeedReply) {}
}
```

tRPC‑Go generates a Go interface for the service and a client proxy interface. The server implements the service interface, while the client normally uses the generated proxy.

Introduce a proxy API that holds the client proxy instances. By default it creates real RPC proxies via `pb.NewFeedsRerankClientProxy()`, but it can be overridden to provide a mock implementation.

Implement a mock proxy that forwards calls to the local service implementation, e.g.:

```go
type rerankProxy struct {
	impl *rerankImpl
}

func (r *rerankProxy) GetFeedList(ctx context.Context, req *pb.GetFeedRequest, opts ...client.Option) (*pb.GetFeedReply, error) {
	rsp := &pb.GetFeedReply{}
	err := r.impl.GetFeedList(req, rsp)
	return rsp, err
}
```

Register the mock proxy with the proxy API so that client code transparently calls the in‑process implementation instead of performing network RPC.

The article also shows the skeleton of the proxyapi package, which centralizes all client proxies and provides getter/setter methods for each service. Code generation via go generate and a shell script reduces boilerplate.

Performance evaluation:

Before monolithization, the recommendation system (five microservices) required roughly 18,000 CPU cores at the target capacity. After applying the in‑process proxy approach without changing business logic, the required cores dropped to about 7,000—a 61% reduction, demonstrating the heavy cost of RPC overhead. Further algorithmic and caching optimizations brought the requirement down to around 1,000 cores.

Even with a monolithized core, other tenants can still run their services as independent microservices, preserving flexibility. The article concludes with best‑practice recommendations for building Go services that can switch between microservice and monolith deployments:

Expose functionality through Go interfaces to hide implementation details.

Prefer dependency injection over heavy init logic for component initialization.

Keep packages small, focused, and loosely coupled.

Overall, the method is applicable to other Go frameworks (e.g., Gin, gRPC) and provides a low‑impact path to reduce RPC overhead while maintaining a microservice‑friendly codebase.

Tags: performance, cloud-native, Go, microservices, monolith, tRPC
Written by Tencent Cloud Developer

Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.