
Model Iteration and Architecture of the BangBang Intelligent Customer Service QABot

This article details the BangBang intelligent customer service system's overall architecture, core capabilities, knowledge‑base construction, and successive model upgrades—from FastText to TextCNN, Bi‑LSTM, and model fusion—showing how each iteration improved accuracy, recall, and F1 scores toward a stable 95% performance level.

58 Tech

BangBang, the intelligent customer-service system developed by 58.com for real-estate agents, aims to deliver accurate, fast, and efficient Q&A, reducing manual customer-service costs and improving operational efficiency.

The system’s overall technical architecture comprises a main service, ABTest service, annotation system, knowledge discovery module, and intent‑recognition component, all configurable for rapid scenario integration.

Core capabilities center on intent recognition, which determines whether to return a unique answer, a list of candidate answers, a self-service action (e.g., displaying violating posts or creating tickets), or a rejection of the query as unrelated chatter.
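The four-way decision above can be sketched as a simple router. The thresholds and intent labels here are illustrative assumptions, not values from the article:

```python
from dataclasses import dataclass

# Hypothetical routing thresholds; production values are not given in the article.
UNIQUE_ANSWER_THRESHOLD = 0.90
ANSWER_LIST_THRESHOLD = 0.50

@dataclass
class Intent:
    label: str        # e.g. "faq", "self_service", "chitchat" (assumed label set)
    confidence: float

def route(intent: Intent) -> str:
    """Map a recognized intent to one of the QABot's four response modes."""
    if intent.label == "chitchat":
        return "reject"                    # unrelated chatter
    if intent.label == "self_service":
        return "self_service_action"       # e.g. show violating posts, create ticket
    if intent.confidence >= UNIQUE_ANSWER_THRESHOLD:
        return "unique_answer"
    if intent.confidence >= ANSWER_LIST_THRESHOLD:
        return "answer_list"
    return "reject"

print(route(Intent("faq", 0.95)))   # unique_answer
```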

The QABot relies on a knowledge base built from standard and expanded question pairs; matching is performed via keyword matching and a multi‑class classification model where each knowledge‑base entry represents a class.

Model iteration began with FastText (dim=256, wordNgrams=3, bucket=300000, lr=1e‑3, epoch=200), achieving 81.45% accuracy, 95.34% recall, and an F1 score of 0.8785.
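FastText's `wordNgrams=3` and `bucket=300000` settings mean the input is represented as a bag of word n-grams hashed into a fixed number of buckets. A minimal sketch of that feature extraction, using Python's built-in `hash` rather than fastText's own hashing:

```python
# fastText-style feature extraction: word n-grams up to length 3 are hashed
# into a fixed bucket table (the article's settings: wordNgrams=3, bucket=300000).
BUCKET = 300_000

def ngram_features(tokens: list[str], n_max: int = 3) -> list[int]:
    """Return bucket ids for all word n-grams up to length n_max."""
    ids = []
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            gram = " ".join(tokens[i:i + n])
            ids.append(hash(gram) % BUCKET)
    return ids

feats = ngram_features("how to delete a post".split())
print(len(feats))  # 5 unigrams + 4 bigrams + 3 trigrams = 12
```

The model then averages the embeddings of these bucket ids and feeds the result to a linear classifier, which is why fastText trains quickly but is less expressive than the deeper models that follow.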

Switching to TextCNN (embedding=256, filters=[2,3,3,4,3]×64, batch=128, epoch=80, lr=1e‑4) raised accuracy to 90.13%, recall to 92.89%, and F1 to 0.9149.
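The TextCNN forward pass — one convolution per filter width followed by max-over-time pooling — can be sketched in NumPy with random (untrained) weights; the shapes match the article's settings but the code is an illustration, not the production model:

```python
import numpy as np

rng = np.random.default_rng(0)

def textcnn_features(emb: np.ndarray, widths=(2, 3, 3, 4, 3), n_filters=64) -> np.ndarray:
    """Convolve the embedded sentence with each filter width, apply ReLU,
    then max-over-time pooling. emb: (seq_len, embed_dim).
    Returns a (len(widths) * n_filters,) feature vector."""
    seq_len, dim = emb.shape
    pooled = []
    for w in widths:
        # One weight tensor per width: (n_filters, w, dim); random here, learned in training.
        W = rng.standard_normal((n_filters, w, dim)) * 0.01
        # Valid convolution: slide a width-w window over time positions.
        conv = np.stack([np.tensordot(emb[i:i + w], W, axes=([0, 1], [1, 2]))
                         for i in range(seq_len - w + 1)])   # (positions, n_filters)
        pooled.append(np.maximum(conv, 0).max(axis=0))       # ReLU + max-over-time
    return np.concatenate(pooled)

feats = textcnn_features(rng.standard_normal((20, 256)))
print(feats.shape)  # (320,) = 5 filter widths x 64 filters each
```

The pooled vector then goes through a softmax layer over the knowledge-base classes.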

A two‑layer bidirectional LSTM model (hidden_layers=64, rnn_dim=128, max length=80, lr=1e‑4, batch=128, 60 epochs) further improved performance to 93.94% accuracy, 92.51% recall, and an F1 of 0.9322.
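The bidirectional encoding — reading the sequence forward and backward and concatenating the final hidden states — can be sketched with a single-layer NumPy LSTM. For brevity this sketch shares weights across both directions and uses one layer rather than two, so it is a shape illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(seq, W, b, rnn_dim):
    """One-direction LSTM over seq (seq_len, in_dim); returns the final hidden state."""
    h = np.zeros(rnn_dim)
    c = np.zeros(rnn_dim)
    for x in seq:
        z = W @ np.concatenate([x, h]) + b      # all four gates computed at once
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)              # cell-state update
        h = o * np.tanh(c)                      # hidden-state output
    return h

def bilstm_encode(seq, rnn_dim=128):
    """Bidirectional read: forward and reversed passes, hidden states concatenated."""
    in_dim = seq.shape[1]
    W = rng.standard_normal((4 * rnn_dim, in_dim + rnn_dim)) * 0.01
    b = np.zeros(4 * rnn_dim)
    return np.concatenate([lstm_pass(seq, W, b, rnn_dim),
                           lstm_pass(seq[::-1], W, b, rnn_dim)])

enc = bilstm_encode(rng.standard_normal((80, 256)))  # max length 80, as in the article
print(enc.shape)  # (256,) = 2 directions x rnn_dim of 128
```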

Finally, model fusion combined a Bi‑LSTM for answer‑type prediction (Model A) and another Bi‑LSTM for intent prediction (Model B), using threshold‑based strength levels; this approach boosted metrics to 94.89% accuracy, 93.33% recall, and an F1 of 0.9410.
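A possible shape for the threshold-based fusion rule is sketched below. The strength levels, thresholds, and the combination logic are all assumptions — the article describes the mechanism but not the exact values:

```python
# Hypothetical fusion of Model A (answer-type confidence) and Model B (intent
# confidence) via threshold-based "strength levels"; thresholds are assumed.
STRONG, WEAK = 0.90, 0.60

def strength(score: float) -> str:
    """Bucket a model confidence into a discrete strength level."""
    return "strong" if score >= STRONG else "weak" if score >= WEAK else "none"

def fuse(answer_type_score: float, intent_score: float) -> str:
    """Combine both models' strength levels into a final response mode."""
    a, b = strength(answer_type_score), strength(intent_score)
    if a == "strong" and b == "strong":
        return "unique_answer"
    if "none" in (a, b):
        return "reject"
    return "answer_list"

print(fuse(0.95, 0.92))  # unique_answer
```

The appeal of fusing two specialized models is that disagreement between them becomes a usable signal: only queries both models are confident about get a single direct answer.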

In summary, the QABot’s metrics have stabilized around 95%; future work will explore stronger pre‑trained models such as BERT and further knowledge‑base enhancements to continue improving answer quality and self‑service coverage.

Tags: AI, customer service, Model Fusion, LSTM, Text Classification, TextCNN, fastText
Written by 58 Tech

Official tech channel of 58, a platform for tech innovation, sharing, and communication.