Evolution of 58BaaS: From Hyperledger Fabric Network Construction to Client API, Chaincode Management, Debugging, and Monitoring
This article details the end‑to‑end evolution of 58BaaS, covering blockchain fundamentals, the three‑stage Hyperledger Fabric network deployment, Kubernetes‑based chaincode container management, a simplified client API with configuration files, performance and robustness enhancements, debugging tools, and a Prometheus‑Grafana monitoring solution.
Blockchain combines distributed storage, peer‑to‑peer transmission, consensus mechanisms, cryptographic algorithms, and smart contracts, representing a disruptive computing model after mainframes, PCs, and the Internet.
To align blockchain technology with 58 Group’s business, the 58 Blockchain Lab built the 58BaaS platform, offering a full suite of services such as deployment, contract development, real‑time monitoring, and elastic scaling, enabling rapid, low‑cost blockchain infrastructure for business innovation.
The underlying network was built in three stages:
Stage 1: A single YAML file described the whole network and was deployed with Docker Compose on one server, running endorsement, ordering, and other nodes together.
Stage 2: The network was split across multiple servers; the YAML was manually divided into separate files for each organization and node type (peers, orderers, Kafka, Zookeeper) and deployed via Docker's native overlay network.
As the number of networks grew, manual deployment became error‑prone, prompting the creation of standardized YAML templates (peer.yaml, orderer.yaml, ca.yaml, kafka.yaml, zookeeper.yaml). Example of peer.yaml:
services:
  ${NODE_NAME}:
    container_name: ${NODE_NAME}
    environment:
      - CORE_PEER_ID=${NODE_NAME}
      - CORE_PEER_LOCALMSPID=${ORG_MSPID}
      ...
    ports:
      - ${PORT}:7051
      - ${CC_PORT}:7052
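Filling in the `${VAR}` placeholders of such a template can be sketched with a small renderer. The helper below uses Go's `os.Expand` and is an illustrative assumption, not the platform's actual deployment tooling:

```go
package main

import (
	"fmt"
	"os"
)

// renderTemplate substitutes ${VAR} placeholders in a YAML template
// with values from the given map; unknown variables become empty strings.
func renderTemplate(tmpl string, vars map[string]string) string {
	return os.Expand(tmpl, func(key string) string {
		return vars[key]
	})
}

func main() {
	tmpl := "services:\n  ${NODE_NAME}:\n    ports:\n      - ${PORT}:7051\n"
	out := renderTemplate(tmpl, map[string]string{
		"NODE_NAME": "peer0-org1",
		"PORT":      "30051",
	})
	fmt.Print(out) // rendered YAML with the node name and host port filled in
}
```

One rendered file per node can then be handed to Docker Compose (or, later, converted into Kubernetes objects), which is what makes the per‑node templates reusable across networks.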
To avoid resource waste, a shared Kafka and Zookeeper cluster was introduced, with a global system channel name (testchainid) and channel‑specific Kafka topics; conflicts are resolved by specifying the channel ID during genesis block creation.
Stage 3: Kubernetes was adopted to manage all nodes uniformly, using Service and Deployment objects; peer and orderer data are persisted on disk so they survive restarts. Host port mapping is needed only for external access, and exposing just the API gateway eliminates port‑conflict issues.
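A minimal sketch of what a peer's Kubernetes objects might look like; the object names, image tag, and volume details below are illustrative assumptions, not the platform's actual manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: peer0-org1              # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: peer0-org1
  template:
    metadata:
      labels:
        app: peer0-org1
    spec:
      containers:
        - name: peer
          image: hyperledger/fabric-peer:1.2.0
          ports:
            - containerPort: 7051
          volumeMounts:
            - name: peer-data
              mountPath: /var/hyperledger/production
      volumes:
        - name: peer-data
          persistentVolumeClaim:
            claimName: peer0-org1-pvc   # assumed PVC backing the ledger data
---
apiVersion: v1
kind: Service
metadata:
  name: peer0-org1
spec:
  selector:
    app: peer0-org1
  ports:
    - name: peer
      port: 7051
```

Backing the ledger path with a PersistentVolumeClaim is what lets a restarted pod pick up its existing state, and the ClusterIP Service gives other nodes a stable in‑cluster address without any host port mapping.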
Client API Design
Simplicity: Most parameters are placed in a configuration file, allowing the client to invoke chaincode with minimal code. Example:
BlockChainConfig.initConfig("blockchain_client.yaml");
EasyBlockchainClient client = new EasyBlockchainClient("orgName");
client.invoke("funcName", args).thenApply(blockchainResponse -> {
    blockchainResponse.getTxId();
});

Configuration file (blockchain_client.yaml) snippet:
clients:
  orgName:
    orgkey: xxx
    channel: xxx
chaincodes:
  chaincode1: &xxx
    name: xxx
    version: xxx
functions:
  invoke:
    name: xxx
    orgs: xxx;xxx
    chaincode:
      <<: *xxx

Performance: The Fabric SDK was optimized by wrapping its synchronous APIs into asynchronous ones and by limiting access to a single node per organization, significantly reducing latency.
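The client itself is Java, but the sync‑to‑async wrapping idea is language‑agnostic. The Go sketch below (with a stand‑in `invokeSync`, an assumption for illustration only) shows the pattern: off‑load the blocking call to a goroutine and deliver the result through a one‑shot, future‑like channel, mirroring the `thenApply` style of the Java example above:

```go
package main

import (
	"fmt"
	"time"
)

// invokeSync stands in for the SDK's blocking chaincode invocation
// (hypothetical; the real client wraps the Fabric Java SDK).
func invokeSync(fn string, args []string) (string, error) {
	time.Sleep(10 * time.Millisecond) // simulate a network round trip
	return "tx-" + fn, nil
}

type result struct {
	TxID string
	Err  error
}

// invokeAsync wraps the synchronous call in a goroutine and returns
// a buffered channel that acts as a one-shot future.
func invokeAsync(fn string, args []string) <-chan result {
	ch := make(chan result, 1)
	go func() {
		txID, err := invokeSync(fn, args)
		ch <- result{TxID: txID, Err: err}
	}()
	return ch
}

func main() {
	fut := invokeAsync("transfer", []string{"a", "b", "10"})
	r := <-fut // consume the result when it is ready
	fmt.Println(r.TxID)
}
```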
Robustness: Service Discovery (Fabric 1.2) now dynamically discovers orderer, peer, chaincode, and endorsement policy information, simplifying client logic compared with the static topology of Fabric 1.1.
Transaction handling notes: MVCC validation failures in high‑concurrency scenarios should trigger retries, and on a client‑side timeout the client should first query the transaction by its ID to confirm whether it committed before resubmitting, so a slow‑but‑successful transaction is not duplicated.
Innovative Design
K8s Management of Chaincode Containers
Peers and orderers are already managed by Kubernetes, but chaincode containers are created via Docker API from the peer node, outside of K8s. Introducing a K8sVM implementation of the Fabric VM interface allows chaincode containers to be launched and managed by Kubernetes.
Docker multi‑stage builds are used to keep chaincode images lightweight. The first stage builds the Go chaincode binary in a temporary container; the second stage packages the binary into the final image.
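A multi‑stage Dockerfile in that spirit might look like this; the base images, paths, and Go version are illustrative assumptions rather than the platform's actual build file:

```dockerfile
# Stage 1: build the Go chaincode binary in a full build image
FROM golang:1.10 AS builder
COPY . /go/src/chaincode
RUN go build -o /chaincode /go/src/chaincode

# Stage 2: copy only the binary into a minimal runtime image
FROM hyperledger/fabric-baseos:0.4.10
COPY --from=builder /chaincode /usr/local/bin/chaincode
CMD ["chaincode"]
```

Because the final image contains only the compiled binary and a minimal base, the Go toolchain and source never ship in the chaincode image.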
Fabric’s VM interface:
type VM interface {
	Deploy(ctxt context.Context, ccid ccintf.CCID, args []string, env []string, reader io.Reader) error
	Start(ctxt context.Context, ccid ccintf.CCID, args []string, env []string, filesToUpload map[string][]byte, builder BuildSpecFactory, preLaunchFunc PrelaunchFunc) error
	Stop(ctxt context.Context, ccid ccintf.CCID, timeout uint, dontkill bool, dontremove bool) error
	Destroy(ctxt context.Context, ccid ccintf.CCID, force bool, noprune bool) error
	GetVMName(ccID ccintf.CCID, format func(string) (string, error)) (string, error)
}

Two implementations exist (DockerVM and InprocVM); adding a K8sVM enables unified lifecycle management via the Kubernetes API.
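A K8sVM skeleton might look like the sketch below. The `CCID` type and the name‑sanitizing rule are simplified stand‑ins, and the actual Kubernetes API calls (via client‑go) are indicated only in comments so the shape of the implementation stays visible:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// CCID is a simplified stand-in for Fabric's ccintf.CCID.
type CCID struct {
	Name    string
	Version string
}

// K8sVM would hold a Kubernetes client (e.g. client-go's
// kubernetes.Interface) plus the namespace for chaincode Deployments.
type K8sVM struct {
	Namespace string
}

var invalid = regexp.MustCompile(`[^a-z0-9-]`)

// deploymentName derives a DNS-1123-compatible object name from the
// chaincode ID, since Kubernetes rejects uppercase letters and dots.
func (vm *K8sVM) deploymentName(id CCID) string {
	name := strings.ToLower(id.Name + "-" + id.Version)
	return invalid.ReplaceAllString(name, "-")
}

// Start would create a Deployment (and Service) for the chaincode
// container instead of calling the Docker API from the peer.
func (vm *K8sVM) Start(id CCID, env []string) error {
	name := vm.deploymentName(id)
	// clientset.AppsV1().Deployments(vm.Namespace).Create(...) goes here.
	fmt.Printf("would create Deployment %s/%s\n", vm.Namespace, name)
	return nil
}

// Stop would delete the Deployment and let Kubernetes tear the pod down.
func (vm *K8sVM) Stop(id CCID) error {
	// clientset.AppsV1().Deployments(vm.Namespace).Delete(...) goes here.
	return nil
}

func main() {
	vm := &K8sVM{Namespace: "baas"}
	vm.Start(CCID{Name: "MyCC", Version: "1.0"}, nil)
}
```

Mapping each chaincode container to its own Deployment is what gives it the same restart, scheduling, and monitoring treatment as the peers and orderers.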
Chaincode Debugging
Traditional deployment creates many temporary images and containers, hindering rapid development. Fabric’s development mode runs chaincode directly as an executable, allowing IDE debugging without building images. 58BaaS also provides a visual online editor that abstracts these details.
Debugging runs inside an isolated Docker environment for security.
Multi‑Dimensional Monitoring
Prometheus collects metrics from exporters deployed on every Peer, Orderer, and CA node in the Kubernetes cluster; Grafana visualizes the data. Alertmanager aggregates and suppresses duplicate alerts before notifying users.
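A scrape configuration in that spirit is sketched below; the job name, pod label, and Alertmanager address are illustrative assumptions, not the platform's actual config:

```yaml
# prometheus.yml (sketch): discover and scrape exporters on Fabric pods
scrape_configs:
  - job_name: fabric-peers            # assumed job name
    kubernetes_sd_configs:
      - role: pod                     # discover pods in the cluster
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: peer.*                 # keep only peer pods (assumed label)
        action: keep

alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager:9093"]
```

With pod-based service discovery, newly scheduled peer or orderer pods are picked up automatically, so the monitoring setup needs no changes when the network scales.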
Summary: The article presents the complete evolution of 58BaaS built on Hyperledger Fabric, covering the three‑stage network construction, client API design, Kubernetes‑based chaincode management, debugging facilities, and a Prometheus‑Grafana monitoring system. The platform also supports Ethereum networks and continues to evolve.
Interested candidates can send their resumes to [email protected] and try the demo “Magic Jianghu” mini‑program via the QR code below.
58 Tech
Official tech channel of 58, a platform for tech innovation, sharing, and communication.