AI-Enabled Security: JD Security’s DEF CON China Presentation on Explainable AI for Security
At DEF CON China, JD Security showcased its AI-driven security research with three accepted papers and a collaborative AI safety report with Penn State, which details a black-box explanation method that uses Gaussian mixture models to make deep-learning decisions transparent for security applications.
DEF CON, the world’s premier cybersecurity conference founded in 1993, held its first event in China in May, where JD Security participated as a presenter and collaborator.
The company had three papers accepted for the conference, a rare achievement, and jointly presented an AI safety research report with Professor Xing Xinyu's team from Penn State during the Hack Villages session.
AI Empowering the Security Era
The presentation highlighted the growing use of deep‑learning AI systems in security domains such as black‑market disruption, malware detection, and reverse engineering, while noting that the decisions of these models remain opaque compared to traditional machine‑learning methods.
Questions like why a model classifies an image as a cat, flags software as potentially malicious, or links a user account to illicit activity illustrate the need for understandable AI decisions.
For JD Security, providing convincing explanations for each AI decision is essential to protect users without unjustly freezing accounts or mislabeling software.
To address this, JD Security’s lab collaborated with Penn State’s Professor Xing Xinyu to develop a technique that explains every AI decision, thereby safeguarding security operations.
Solution method: the team employed a black-box explanation approach that works regardless of the underlying deep-learning model. Using a Gaussian mixture model as a surrogate, they approximate the AI system's decision space, exploiting the mixture model's strong approximation power and interpretability.
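The article does not give the paper's exact formulation, but the surrogate idea can be sketched as follows: query the black-box model for its predictions, fit one Gaussian mixture per predicted class on those queries, and explain an individual decision via the mixture component (a "prototype") most responsible for it. This is a minimal illustrative sketch using scikit-learn, not JD Security's actual method; all model and parameter choices here are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.mixture import GaussianMixture

# Stand-in "black box": any classifier whose internals we cannot inspect.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0).fit(X, y)

# Query the black box (only its outputs are needed) and fit one
# surrogate Gaussian mixture per predicted class.
pred = black_box.predict(X)
surrogates = {
    c: GaussianMixture(n_components=3, random_state=0).fit(X[pred == c])
    for c in np.unique(pred)
}

def explain(x):
    """Return the predicted class and the nearest prototype: the mean of
    the surrogate mixture component most responsible for this input."""
    c = black_box.predict(x.reshape(1, -1))[0]
    gmm = surrogates[c]
    k = gmm.predict(x.reshape(1, -1))[0]  # most responsible component
    return c, gmm.means_[k]

cls, prototype = explain(X[0])
```

The prototype gives a human-inspectable "typical input" the model associates with its decision, which is one common way a mixture surrogate yields interpretability.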
The significance of this work lies in the fact that, internationally, no existing system provides reasonable explanations for AI systems used in security. Current explainable-AI efforts, such as DARPA's XAI program, focus on non-security applications and are ill-suited to the high-dimensional, long-sequence data common in security workloads.
This collaboration therefore represents a breakthrough in intelligent security, also enabling AI‑driven security systems to auto‑patch and self‑repair—essentially using AI to strengthen AI.
In the Hack Villages segment, the team also presented a report on “AI Parsing Systems and Crash Trace Reconstruction,” applying deep learning to analyze program crash dumps, thereby reducing the adverse impact of crashes.
When a program crashes, the operating system generates a core dump containing memory state and executed instructions. JD Security’s AI analyzes this data, reconstructs the program’s pre‑crash states, and automatically identifies root causes, enabling rapid remediation by security researchers.
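The article describes the deep-learning system only at a high level, so the following is purely a toy illustration of the underlying idea, reconstructing program state just before a crash from a record of executed instructions. The miniature instruction trace and register format are invented for this sketch; the real system operates on raw core dumps.

```python
# Toy trace of executed instructions ending in a crash.
# Each entry: (opcode, destination register, source register or constant).
trace = [
    ("mov", "r1", 8),      # r1 = 8
    ("mov", "r2", 0),      # r2 = 0
    ("add", "r1", 2),      # r1 += 2
    ("div", "r1", "r2"),   # crashes: division by zero
]

def replay(trace, upto):
    """Forward-replay the trace up to (not including) index `upto`,
    returning the register state at that point."""
    regs = {}
    for op, dst, src in trace[:upto]:
        val = regs[src] if isinstance(src, str) else src
        if op == "mov":
            regs[dst] = val
        elif op == "add":
            regs[dst] = regs[dst] + val
        elif op == "div":
            regs[dst] = regs[dst] // val  # would raise on zero divisor
    return regs

# Register state immediately before the faulting instruction:
pre_crash = replay(trace, len(trace) - 1)
# The zero in r2 pinpoints the root cause of the division crash.
```

Automating this kind of state reconstruction and root-cause localization at scale, over real core dumps rather than toy traces, is what the reported AI parsing system aims to do.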
This approach extends AI beyond traditional program analysis, opening a new frontier for automated crash investigation; although the technology is still at an early stage, it has already shown significant effectiveness.
By dramatically reducing the human effort and time—often hundreds to thousands of hours—required to respond to crashes, the solution lowers operational costs for JD and minimizes user‑experience degradation caused by attacks.
Overall, JD Security is actively building a global talent pool and research infrastructure for AI security, establishing Silicon Valley R&D centers, offensive‑defensive labs, and deep collaborations with leading academic institutions worldwide to advance AI‑driven security.
JD Tech
Official JD technology sharing platform. All the cutting‑edge JD tech, innovative insights, and open‑source solutions you’re looking for, all in one place.