
Building a ChatGPT‑Based Intelligent Customer Service System with BERT Classification and Knowledge Filtering

This article describes how to construct an intelligent customer-service assistant that uses ChatGPT for natural-language understanding, BERT for user-question classification, and Sentence-BERT for knowledge selection, covering system architecture, prompt design, model training, performance results, and practical cost reductions.

Zhuanzhuan Tech
1. Background

Traditional e-commerce customer service is labor-intensive and involves dense product data, leading to long response times and a poor user experience. Large Language Models (LLMs) such as ChatGPT can provide fast, accurate assistance when private product and quality-inspection data are injected into the prompt.

The system architecture consists of three modules: a user‑question classification module, a knowledge‑filtering module, and a ChatGPT‑based response module. Phase 1 implements only the ChatGPT module; Phases 2 and 3 add classification and knowledge filtering respectively.

2. Building a ChatGPT‑based assistant

ChatGPT, a model in the generative pre-trained transformer (GPT) series, is trained with prompt learning and reinforcement learning from human feedback (RLHF) to follow instructions. A well-designed prompt includes a task description, injected background knowledge, an output format, examples, and the input data. An example prompt for the e-commerce scenario is:

Now you are an e-commerce customer service agent. Answer the user based on the provided product_info {$product_info} and quality_info {$quality_info}. Keep answers concise.

Sample interaction shows the model correctly answering battery‑related queries using the injected knowledge.
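The prompt template above can be assembled programmatically before each API call. The sketch below is illustrative only: the function name, exact wording, and the serialized knowledge strings are assumptions, not the production template.

```python
def build_prompt(product_info: str, quality_info: str, question: str) -> str:
    """Fill the customer-service prompt template with injected knowledge.

    Mirrors the example prompt above; wording is an illustrative assumption.
    """
    return (
        "Now you are an e-commerce customer service agent. "
        f"Answer the user based on the provided product_info {product_info} "
        f"and quality_info {quality_info}. Keep answers concise.\n"
        f"User question: {question}"
    )

# Toy knowledge strings; in production these come from the product database
# and the quality-inspection report.
prompt = build_prompt(
    product_info="battery capacity: 4500 mAh; color: black",
    quality_info="battery health: 92%; charge cycles: 210",
    question="What is the battery capacity?",
)
```

The assembled string is then sent as the system/user message of a chat-completion request.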

3. Phase‑2 optimization: User‑question classification

A BERT model fine-tuned on ChatGPT-generated labels classifies user queries into six categories: product parameters, quality report, after-sale, platform policy, promotion, and chit-chat. The training labels are produced by prompting ChatGPT to classify sample questions, which greatly reduces manual annotation effort.
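The labeling step can be sketched as a prompt builder plus a parser that maps ChatGPT's free-text reply to a class index. The prompt wording and parse logic below are illustrative assumptions; the actual chat-completion API call is omitted.

```python
# Category list from the article; order determines the class index.
CATEGORIES = [
    "product parameters", "quality report", "after-sale",
    "platform policy", "promotion", "chit-chat",
]

def build_labeling_prompt(question: str) -> str:
    """Ask the model to pick exactly one category (assumed wording)."""
    cats = ", ".join(CATEGORIES)
    return (
        f"Classify the user question into exactly one of: {cats}.\n"
        f"Question: {question}\n"
        "Answer with the category name only."
    )

def parse_label(response: str) -> int:
    """Map the model's reply to a class index; -1 if no category matches."""
    reply = response.strip().lower()
    for idx, cat in enumerate(CATEGORIES):
        if cat in reply:
            return idx
    return -1
```

Unparseable replies (index -1) would be dropped or re-queried before fine-tuning.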

The classifier is implemented as:

import torch.nn as nn
from transformers import BertModel

class Classifier(nn.Module):
    def __init__(self, bert_model_path, num_classes):
        super(Classifier, self).__init__()
        # Pre-trained BERT encoder; the pooled output summarizes the sequence.
        self.bert = BertModel.from_pretrained(bert_model_path)
        self.dropout = nn.Dropout(0.1)
        # 768 is the hidden size of BERT-base.
        self.fc = nn.Linear(768, num_classes)

    def forward(self, tokens, masks):
        # return_dict=False yields (sequence_output, pooled_output).
        _, pooled_output = self.bert(tokens, attention_mask=masks, return_dict=False)
        x = self.dropout(pooled_output)
        x = self.fc(x)  # logits over the six question categories
        return x

After fine‑tuning, the model achieves 76.6% accuracy (macro‑F1 0.754), with higher F1 for product‑parameter (0.791) and quality‑report (0.841) classes. Deploying this classifier reduces the cost per conversation to ¥0.033, a 72.5% saving.
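One way such a classifier cuts cost is by routing: only knowledge-heavy categories trigger a ChatGPT call with injected data, while chit-chat can be answered with a canned reply and no LLM call at all. The routing below is a sketch under assumptions (the class indices, handler names, and the decision to answer chit-chat locally are hypothetical, not the production logic):

```python
# Hypothetical class indices following the category order above.
KNOWLEDGE_CLASSES = {0, 1}  # product parameters, quality report
CANNED_CLASSES = {5}        # chit-chat

def route(class_idx: int) -> str:
    """Return which pipeline handles a classified question."""
    if class_idx in KNOWLEDGE_CLASSES:
        return "chatgpt_with_knowledge"  # inject product/quality info
    if class_idx in CANNED_CLASSES:
        return "canned_reply"            # no LLM call at all
    return "chatgpt_plain"               # LLM call without knowledge injection
```

Skipping knowledge injection (or the LLM entirely) for most categories is what drives the per-conversation cost down.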

4. Phase‑3 optimization: Knowledge filtering

Instead of injecting the entire product description and quality report, the system uses Sentence‑BERT to compute cosine similarity between the user question and each knowledge item, selecting the top‑5 most relevant product details and quality items. This reduces token usage from 1,228 to 132 (≈89% reduction) and speeds up response generation.
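The top-5 selection can be sketched as cosine similarity over embeddings. In the real system the vectors would come from Sentence-BERT's encode(); here the embedding step is stubbed out with toy float lists so the ranking logic stands on its own:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def top_k_items(question_vec, item_vecs, items, k=5):
    """Return the k knowledge items most similar to the question vector."""
    scored = sorted(
        zip(items, item_vecs),
        key=lambda pair: cosine(question_vec, pair[1]),
        reverse=True,
    )
    return [item for item, _ in scored[:k]]
```

Only the selected items are serialized into the prompt, which is where the roughly 89% token reduction comes from.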

Example: for the question “What is the battery capacity?”, the filtered knowledge includes battery capacity, health, charge cycles, etc., allowing ChatGPT to answer succinctly.

5. Conclusion

LLMs combined with lightweight classification and semantic search can dramatically improve efficiency and cost‑effectiveness of customer‑service operations. The described ChatGPT‑based system is live in production, enhancing service quality and throughput.

