Applying Large Language Models in Data Management and Risk Control at Ping An One Wallet
This presentation details how Ping An One Wallet applies large language models to data management and risk control. It is organized into five parts (current application status, data management, risk control, technical architecture, and a Q&A session) and highlights strategies such as vectorized rule storage, prompt engineering, RAG enhancements, and workflow agents that improve efficiency and accuracy in data governance and fraud detection.
A speaker from Ping An One Wallet shares practical experience applying large models, focusing on data management and risk control.
Part 1 – Current status: Large model use cases include marketing outreach, pet community services, data management (classification, metadata retrieval), and risk operations.
Part 2 – Data management: Describes building a data asset management platform, vectorizing classification rules, employing prompt engineering for structured XML outputs, and using Retrieval‑Augmented Generation with a vector knowledge base for metadata search.
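The vectorized-rule approach can be illustrated with a minimal sketch: classification rules are embedded once at index time, and a field description is matched to the nearest rule by vector similarity. Everything below (the rule texts, the `match_rule` helper, and especially the hashed bag-of-words embedder standing in for a real embedding model) is a hypothetical illustration, not the platform's actual implementation.

```python
import math

def embed(text: str, dim: int = 256) -> list[float]:
    """Toy stand-in for a real embedding model: a hashed bag-of-words
    vector, L2-normalized. A production system would call an embedding API."""
    v = [0.0] * dim
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v] if norm else v

# Vectorized rule store: each classification rule is embedded once at index time.
rules = {
    "R1": "phone numbers and contact details are personal information, sensitivity level 3",
    "R2": "transaction amounts are business data, sensitivity level 2",
}
index = {rid: embed(text) for rid, text in rules.items()}

def match_rule(field_desc: str) -> str:
    """Return the id of the rule whose vector is closest to the field description
    (dot product equals cosine similarity here, since vectors are normalized)."""
    q = embed(field_desc)
    return max(index, key=lambda rid: sum(a * b for a, b in zip(q, index[rid])))
```

The same nearest-neighbor lookup pattern also covers the metadata-search use case: table and column descriptions go into the vector knowledge base, and a natural-language query retrieves the closest entries for the RAG prompt.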
Part 3 – Risk control: Shows how large models reduce case handling time from 30 minutes to about one minute by integrating agents, workflow orchestration, and automated script and summary generation.
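The case-handling speedup comes from chaining steps that were previously manual. A minimal workflow sketch is below; the step names, the case fields, and the placeholder bodies (which stand in for real data pulls and LLM calls) are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Workflow:
    """Tiny sequential orchestrator: each step takes and returns the case dict."""
    steps: list[Callable[[dict], dict]] = field(default_factory=list)

    def step(self, fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        self.steps.append(fn)
        return fn

    def run(self, case: dict) -> dict:
        for fn in self.steps:
            case = fn(case)
        return case

wf = Workflow()

@wf.step
def gather_evidence(case: dict) -> dict:
    # Placeholder: a real step would pull transaction logs for the user.
    case["evidence"] = f"txn history for user {case['user_id']}"
    return case

@wf.step
def summarize(case: dict) -> dict:
    # Placeholder: a real step would call an LLM to draft the case summary.
    case["summary"] = f"Case {case['case_id']}: reviewed {case['evidence']}"
    return case

result = wf.run({"case_id": "C-001", "user_id": "U-42"})
```

Automated script and summary generation would slot in as additional steps; the point of the pattern is that the agent executes the whole chain without an analyst stitching the pieces together.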
Part 4 – Technical architecture: Presents a three‑layer architecture (foundation, engineering, platform), RAG improvements such as Incomplete Utterance Rewriting (IUR), HiveToCache for faster retrieval, and rerank for better relevance, plus workflow agents for complex business processes.
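The recall-then-rerank improvement can be sketched as a two-stage pipeline: a cheap first stage returns top-k candidates, and a finer scorer reorders them. The token-overlap recall and the phrase-bonus scorer below are toy stand-ins for vector search and a cross-encoder reranker; none of this code is from the talk.

```python
def recall(query: str, docs: list[str], k: int = 3) -> list[str]:
    """Stage 1: cheap token-overlap recall (stand-in for vector search)."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return ranked[:k]

def rerank(query: str, candidates: list[str]) -> list[str]:
    """Stage 2: stand-in for a cross-encoder; rewards exact phrase matches
    that coarse recall scores cannot distinguish."""
    q = set(query.lower().split())
    def score(d: str) -> float:
        overlap = len(q & set(d.lower().split()))
        phrase_bonus = 2.0 if query.lower() in d.lower() else 0.0
        return overlap + phrase_bonus
    return sorted(candidates, key=score, reverse=True)

docs = [
    "table user profile stores user age and user city",
    "the user profile table",
    "payment logs",
]
top = rerank("user profile table", recall("user profile table", docs, k=2))
```

Recall alone cannot separate the first two documents (equal token overlap); the rerank stage promotes the exact-phrase match, which is the relevance gain the architecture layer is after.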
Part 5 – Q&A: Addresses model latency versus cost for user profiling, asset classification practices, and the challenges of metadata lineage and security in large‑model‑driven data pipelines.
DataFunSummit
Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.