
Evolution and Technical Practices of Du Xiaoman Risk Control Decision Engine

This article presents a comprehensive overview of Du Xiaoman's risk control system evolution—from early rule‑based engines to AI‑enhanced intelligent decision engines—detailing technical practices such as strategy iteration acceleration, decision latency reduction, parallel workflow design, and future trends in data quality, automated strategy optimization, and real‑time analytics.


The presentation by Gong Wen, senior technology expert at Du Xiaoman, introduces the evolution and future outlook of the company's risk control system, highlighting its architectural milestones and technical innovations.

The system progressed through four generations: an embedded rule mode, a domain-specific expert system, a generic decision engine based on Blaze, and finally an intelligent decision engine that leverages machine-learning models and big data for enhanced decision making.
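The key shift from the embedded rule mode to a generic decision engine is that rules become externally configured data rather than application code, so strategy teams can change them without a redeploy. A minimal sketch of that idea (all rule names, variables, and thresholds here are hypothetical illustrations, not Du Xiaoman's actual strategies):

```python
# Rules live as data, not code: a strategy team can edit this table
# without touching or redeploying the application (hypothetical example).
RULES = [
    {"name": "blacklist_hit", "cond": lambda v: v["on_blacklist"],        "action": "reject"},
    {"name": "low_score",     "cond": lambda v: v["score"] < 550,         "action": "reject"},
    {"name": "thin_file",     "cond": lambda v: v["history_months"] < 6,  "action": "manual_review"},
]

def decide(variables, default="approve"):
    """Evaluate rules in priority order; the first matching rule wins."""
    for rule in RULES:
        if rule["cond"](variables):
            return rule["action"]
    return default

print(decide({"on_blacklist": False, "score": 680, "history_months": 24}))  # → approve
```

The later "intelligent" generation replaces or augments thresholds like `score < 550` with model scores, but the engine's evaluate-rules-over-variables loop stays the same.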

From 2015 to 2020 the decision engine evolved from a purchased Blaze solution to a self-built platform, and since 2021 it has incorporated AI and big-data techniques. This transition dramatically increased strategy-iteration efficiency (from fewer than 10 to more than 700 iterations per month) and cut decision latency (from more than 20 seconds to around 5 seconds).

The parallel workflow engine was designed around four key techniques (runtime parallelism, dependency analysis, variable pre-fetch, and gray execution) to address challenges such as long database transactions, dependency handling, and maintaining functional parity with the existing activity-based engine.
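The combination of dependency analysis and runtime parallelism can be sketched as a small DAG scheduler: nodes declare which variables they depend on, and any nodes whose dependencies are already satisfied run concurrently. This is a minimal illustration under assumed semantics, not Du Xiaoman's implementation; all node names and values are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical strategy nodes. The two variable fetches are independent,
# so the scheduler can run them in parallel; the decision node waits for both.
def fetch_credit_score(ctx):
    return 712  # stand-in for a pre-fetched external variable

def fetch_device_risk(ctx):
    return 0.12  # stand-in for another independent variable source

def final_decision(ctx):
    return "approve" if ctx["credit_score"] > 650 and ctx["device_risk"] < 0.5 else "review"

# node name -> (set of dependency node names, computation function)
NODES = {
    "credit_score": (set(), fetch_credit_score),
    "device_risk":  (set(), fetch_device_risk),
    "decision":     ({"credit_score", "device_risk"}, final_decision),
}

def run_parallel(nodes):
    """Repeatedly submit every node whose dependencies are satisfied,
    so independent nodes execute concurrently (runtime parallelism)."""
    results, pending = {}, dict(nodes)
    with ThreadPoolExecutor() as pool:
        while pending:
            ready = [n for n, (deps, _) in pending.items() if deps <= results.keys()]
            futures = {n: pool.submit(pending.pop(n)[1], results) for n in ready}
            for name, fut in futures.items():
                results[name] = fut.result()
    return results

print(run_parallel(NODES)["decision"])  # → approve
```

Gray execution can then be layered on top by running the parallel engine alongside the existing activity-based engine on the same traffic and comparing outputs before cutover.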

Future trends focus on data‑related improvements (quality control, visual variable processing, data value assessment, and standardized usage), intelligent decision enhancements (correctness, explainability, automated strategy diagnosis, full‑link tracing), and real‑time analysis and alerting capabilities.

The Q&A session covered topics such as variable‑center management, handling of parallel nodes, strategy auto‑diagnosis, data value evaluation, database performance considerations, cost control for variable pre‑fetch, and the rationale behind rule‑engine selections.

Tags: machine learning, data quality, risk control, decision engine, parallel workflow
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
