
DKCF Trustworthy Framework for Large Model Applications and AI Security Practices

The article outlines the DKCF (Data‑Knowledge‑Collaboration‑Feedback) trustworthy framework presented at the 2024 Shanghai Cybersecurity Expo, detailing challenges of large AI models, four key trust factors, and Ant Group's practical security implementations for professional AI deployments.

AntTech

On August 2, the 2024 Shanghai Cybersecurity Expo opened in Shanghai, jointly organized by the Shanghai Information Network Security Management Association and the Shanghai Internet Industry Federation. The main forum, themed “New Integration – AI Security,” featured Ant Group Tianchen Lab Deputy Director and senior algorithm expert Zhong Zhenyu, who presented the topic “DKCF Large Model Trustworthy Framework and Cybersecurity Practice,” introducing Ant Group’s advances in trustworthy large‑model applications.

He emphasized the saying "Smart AI only helps a little, stupid AI causes big trouble," noting that while large models appear omnipotent, expectations of their capabilities are often over-optimistic, and misplaced trust can cause serious problems in professional domains.

He cited a 2023 study in an international medical journal showing that standard AI assistance yields only limited diagnostic improvement, and that biased models can even degrade accuracy, which underscores the need for careful verification.

The challenges can be summarized into four aspects: inference verification residual, professional knowledge engineering, feedback‑loop efficiency, and security single points.

1. Inference verification residual – Large models cannot reliably distinguish “known” from “unknown,” leading to hallucinations. Critical tasks therefore require verification of reasoning steps and detection of residual errors.

2. Professional knowledge engineering – True experts differ from pseudo‑experts by deep domain understanding. General‑purpose models, trained on publicly available data, lack the specialized knowledge that must be curated by experts.

3. Feedback‑loop efficiency – Modern control systems rely on fast feedback, yet GPT‑style models update knowledge through costly SFT or RLHF cycles, making rapid incorporation of domain‑specific changes inefficient.

4. Security single points – Handing all Retrieval‑Augmented Generation (RAG) and tool access to a model creates severe security risks, including potential leakage of sensitive information and privilege escalation if the model bypasses safeguards.
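The feedback-efficiency challenge in item 3 can be made concrete with a minimal sketch: instead of folding every domain change into model weights through a costly SFT/RLHF cycle, a curated retrieval store absorbs the change immediately. All names and entries here are illustrative, not from Ant Group's system.

```python
# Hypothetical sketch contrasting the two knowledge-update paths: retraining
# (SFT/RLHF) bakes facts into weights slowly, while a retrieval store used at
# inference time can absorb a domain change the moment an expert curates it.
knowledge_store = {"cve-2024-0001": "patched in v1.2"}

def answer(question: str) -> str:
    # Fast feedback loop: look up the latest curated fact instead of
    # waiting for a fine-tuning cycle to update model weights.
    return knowledge_store.get(question, "unknown")

# An expert adds a new mitigation; it is usable on the very next query.
knowledge_store["cve-2024-9999"] = "mitigate by disabling plugin X"
```

The trade-off is that the store only helps for knowledge that can be expressed as retrievable facts; reasoning-style improvements still require training.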

The DKCF framework—Data, Knowledge, Collaboration, Feedback—integrates these elements into a cohesive, trustworthy architecture for professional large‑model deployment.

Using an engine analogy, the model's "intelligence engine" provides basic logical, mathematical, knowledge-base, and external-call capabilities, while a "knowledge supply" provides domain-specific information, much as high-definition maps are essential for autonomous driving.

To achieve inference white‑boxing, tasks are decomposed into verifiable sub‑steps, each validated against standard operating procedures, and residuals are identified when the model lacks sufficient information or capability.
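The decomposition described above can be sketched in a few lines: each sub-step carries its own SOP check, and any step the model cannot answer, or answers in violation of the SOP, is flagged as a residual for review. The step names and checks below are hypothetical, not Ant Group's actual SOPs.

```python
# Hypothetical sketch of "inference white-boxing": decompose a task into
# sub-steps, validate each against an SOP rule, and flag residuals where
# the model lacks the information or capability to proceed.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    name: str
    output: Optional[str]            # None means the model could not answer
    validate: Callable[[str], bool]  # SOP check for this step's output

def run_whitebox(steps: list[Step]) -> tuple[list[str], list[str]]:
    """Return (verified step names, residuals needing human review)."""
    verified, residuals = [], []
    for step in steps:
        if step.output is None or not step.validate(step.output):
            residuals.append(step.name)  # residual: unknown, or failed SOP check
        else:
            verified.append(step.name)
    return verified, residuals

# Usage: a two-step alert-triage task where the second step fails its SOP check.
steps = [
    Step("classify_alert", "phishing",
         lambda o: o in {"phishing", "malware", "benign"}),
    Step("assign_severity", "urgent-ish",
         lambda o: o in {"low", "medium", "high"}),
]
ok, residual = run_whitebox(steps)
# ok == ["classify_alert"], residual == ["assign_severity"]
```

The point of the structure is that failures surface as named residuals rather than disappearing into a fluent but unverifiable answer.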

Collaboration and feedback involve multiple intelligent agents (planning, orchestration, etc.) working together, with verification mechanisms that feed back residual errors to steer the process correctly.
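The collaborate-and-feed-back loop can be sketched as a worker agent whose next attempt is conditioned on the verifier's residual error, with escalation to a human when the rounds are exhausted. This is a minimal illustration under assumed interfaces, not Ant Group's agent architecture.

```python
# Hypothetical sketch of the collaboration/feedback loop: a worker agent
# proposes an answer, a verifier reports a residual error (or None if the
# answer is accepted), and the residual steers the next attempt.
def solve_with_feedback(attempt, verify, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        answer = attempt(feedback)  # worker agent, conditioned on prior feedback
        residual = verify(answer)   # verifier agent; None means "accepted"
        if residual is None:
            return answer
        feedback = residual         # feed the residual back for the next round
    return None                     # unresolved after max_rounds: escalate

# Usage: the worker corrects itself after seeing the verifier's residual.
attempts = iter(["high sev", "high"])
answer = solve_with_feedback(
    attempt=lambda fb: next(attempts),
    verify=lambda a: None if a in {"low", "medium", "high"}
                     else "use low/medium/high",
)
# answer == "high"
```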

From a security perspective, Ant Group proposes two paradigms: OVTP (Operator‑Voucher‑Traceable Paradigm) and NbSP (Non‑bypassable Security Paradigm). OVTP mandates that access decisions be based on the operator’s chain and voucher credentials, while NbSP ensures that critical security checkpoints cannot be bypassed.
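As stated, OVTP ties access decisions to the operator chain plus voucher credentials, and NbSP makes the checkpoint the only path to the resource. A minimal sketch of how the two paradigms might compose, with illustrative names only:

```python
# Hypothetical sketch: OVTP grants access only when every operator in the call
# chain holds a voucher for the resource; NbSP wraps the actual fetch so the
# check cannot be skipped. Names and data model are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Voucher:
    operator: str  # who in the call chain this voucher was issued to
    resource: str  # which resource it authorizes

def ovtp_allows(call_chain: list[str], vouchers: set[Voucher],
                resource: str) -> bool:
    """Every operator in the chain must hold a voucher for the resource."""
    return all(Voucher(op, resource) in vouchers for op in call_chain)

def nbsp_access(call_chain, vouchers, resource, fetch):
    """Non-bypassable checkpoint: the fetch only runs through this gate."""
    if not ovtp_allows(call_chain, vouchers, resource):
        raise PermissionError(f"access to {resource} denied for {call_chain}")
    return fetch(resource)

# Usage: only the user holds a voucher, so a chain that routes through the
# LLM agent is blocked, preventing the model from escalating privileges.
vouchers = {Voucher("user", "customer_db")}
try:
    nbsp_access(["user", "llm_agent"], vouchers, "customer_db", lambda r: "rows")
except PermissionError:
    pass  # the agent's access is stopped at the checkpoint
```

The design intent is that the model never becomes a single trusted point: authorization is evaluated over the whole chain, outside the model.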

In practice, Ant Group has applied the DKCF framework to its security operations, building intelligent agents for massive alert handling, knowledge graph construction, and SOP‑driven reasoning, thereby processing billions of security logs daily with minimal hallucination.

Overall, the DKCF trustworthy framework consolidates essential elements for safe, reliable large‑model use in professional fields, and its continued adoption is expected to drive transformative advances in AI applications.

Tags: large language models, trustworthy AI, AI safety, feedback loops, security framework, DKCF, knowledge engineering
Written by AntTech

Technology is the core driver of Ant's future creation.