Seat Copilot: Design, Large‑Model Architecture, and Business Impact in Financial Services
This article introduces the Seat Copilot developed by Qifu Technology, explains its composition, design, and core large‑model architecture, details data engineering, training and evaluation processes, and presents quantitative results showing improvements in operator efficiency, conversion rates, and management productivity.
The Seat Copilot is an AI‑driven assistant that helps call‑center operators (seats) understand tasks, make decisions, and improve workflow efficiency, ultimately boosting business performance.
Its composition includes business‑scenario signals, a technical framework, compliance guardrails, domain knowledge, and an inference engine, components the talk likens to a vehicle’s fuel, engine, and steering wheel.
In design, the Copilot is embedded into sales and customer‑service systems, providing role‑specific guidance, performance summaries, and real‑time suggestions to both operators and managers, thereby reducing information gaps during multi‑turn interactions and handovers.
The core large model consists of five modules—data, data engineering, the base model, application integration, and evaluation. The data combines generic and financial‑domain sources, with internal knowledge graphs and behavior data forming the proprietary portion (≈10%). Data engineering handles cleaning, de‑duplication, removal of private information, and tokenization.
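The cleaning, de‑duplication, and privacy‑removal steps can be sketched as a minimal pipeline. This is an illustrative reconstruction, not Qifu’s actual code: the regex PII pattern and MD5 exact‑match de‑duplication are assumptions standing in for whatever production tooling is used.

```python
import hashlib
import re

def clean(text: str) -> str:
    """Normalize whitespace and strip surrounding spaces."""
    return re.sub(r"\s+", " ", text).strip()

def mask_pii(text: str) -> str:
    """Redact long digit runs (phone/ID numbers); pattern is illustrative only."""
    return re.sub(r"\d{7,}", "[REDACTED]", text)

def dedupe(docs):
    """Exact de-duplication via content hashing."""
    seen, out = set(), []
    for d in docs:
        h = hashlib.md5(d.encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            out.append(d)
    return out

corpus = [
    "Call  transcript: customer 13800138000 asked about loan terms.",
    "Call transcript: customer 13800138000 asked about loan terms.",
    "FAQ: repayment schedule options.",
]
# Normalize first so near-duplicates differing only in whitespace collapse,
# then mask private information before tokenization.
processed = [mask_pii(d) for d in dedupe(clean(d) for d in corpus)]
```

In practice, fuzzy de‑duplication (e.g., MinHash) and NER‑based PII detection would replace the exact‑hash and regex steps shown here.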
Model construction follows three stages: domain knowledge injection, capability injection (reading comprehension, intent analysis, task classification), and task‑specific fine‑tuning. Supervised fine‑tuning (SFT) mixes instruction data, internal data, and generic data in two phases to preserve cross‑disciplinary abilities while emphasizing financial expertise.
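The two‑phase SFT mixture described above can be sketched as a sampling schedule. The ratios below are hypothetical (the article gives no exact numbers); the point is that phase 1 weights generic data to preserve cross‑disciplinary ability, while phase 2 shifts weight toward internal financial data.

```python
import random

# Hypothetical phase-wise mixing ratios -- illustrative values only.
SFT_PHASES = {
    "phase_1": {"instruction": 0.4, "internal": 0.2, "generic": 0.4},  # preserve general ability
    "phase_2": {"instruction": 0.3, "internal": 0.5, "generic": 0.2},  # emphasize financial expertise
}

def sample_batch(pools, ratios, batch_size, rng):
    """Draw a mixed training batch according to the phase's source ratios."""
    batch = []
    for source, ratio in ratios.items():
        k = round(batch_size * ratio)
        batch.extend(rng.choices(pools[source], k=k))
    return batch

pools = {
    "instruction": ["inst_ex"] * 100,
    "internal": ["fin_ex"] * 100,
    "generic": ["gen_ex"] * 100,
}
rng = random.Random(0)
batch = sample_batch(pools, SFT_PHASES["phase_2"], batch_size=10, rng=rng)
```

Keeping some generic data in phase 2 is the mechanism that guards against catastrophic forgetting while still concentrating gradient signal on financial tasks.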
Evaluation covers subjective and objective questions across basic, advanced, and professional levels, using human review, automated metrics, and large‑model‑assisted scoring. The assessment also measures ethical and safety compliance.
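Combining human review, automated metrics, and model‑assisted scoring implies some aggregation rule. A minimal sketch follows; the weights and the level taxonomy are assumptions for illustration, not figures from the talk.

```python
from statistics import mean

LEVELS = ("basic", "advanced", "professional")

def aggregate_score(item):
    """Blend human, automated, and LLM-judge scores (0-1 scale); weights are assumed."""
    weights = {"human": 0.5, "auto": 0.2, "llm_judge": 0.3}
    return sum(item[k] * w for k, w in weights.items())

eval_set = [
    {"level": "basic", "human": 0.9, "auto": 0.85, "llm_judge": 0.88},
    {"level": "professional", "human": 0.7, "auto": 0.6, "llm_judge": 0.65},
]

# Per-level breakdown mirrors the basic/advanced/professional tiers in the evaluation.
by_level = {
    lvl: [aggregate_score(i) for i in eval_set if i["level"] == lvl]
    for lvl in LEVELS
}
overall = mean(aggregate_score(i) for i in eval_set)
```

Weighting human review highest reflects its role as the anchor, with the LLM judge scaling coverage across the large subjective‑question pool.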
Effect analysis shows a 4.1% increase in operator efficiency, a 5.6% rise in conversion rate, and a 50% improvement in management efficiency, with incremental version updates yielding modest accuracy and usage gains.
The Q&A section clarifies that performance gains are observed over roughly nine months, proprietary data accounts for about 10% of pre‑training data, and role‑specific data ratios are adjusted per iteration without a fixed numeric target.
DataFunSummit
Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.