Tag

AI security

1 view collected around this technical thread.

Architecture Digest
Jun 4, 2025 · Information Security

Toxic Agent Flow: Exploiting GitHub MCP to Leak Private Repositories via Prompt Injection

A newly disclosed vulnerability involving GitHub's Model Context Protocol (MCP) server lets attackers hijack AI agents through crafted GitHub Issues, injecting malicious prompts that cause the assistant to retrieve and expose private repository data; the article also outlines mitigation strategies and defensive code examples.
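A common mitigation for this attack class is to constrain the agent's repository tool calls to an explicit session allowlist, so a prompt injected through a public issue cannot pivot into private repositories. The sketch below is a hypothetical guard illustrating the idea, not GitHub's or the article's actual API:

```python
# Hypothetical guard illustrating the mitigation class: restrict an agent's
# repository-scoped tool calls to an explicit allowlist so an injected prompt
# cannot pivot from a public issue into private repositories.
ALLOWED_REPOS = {"acme/public-website"}  # illustrative session scope

def guard_tool_call(tool: str, repo: str) -> bool:
    """Permit a repo-scoped tool call only for allowlisted repositories."""
    if repo not in ALLOWED_REPOS:
        print(f"blocked: {tool} on {repo} (outside session scope)")
        return False
    return True

print(guard_tool_call("read_file", "acme/public-website"))   # permitted
print(guard_tool_call("read_file", "acme/private-payroll"))  # blocked
```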

AI security · Agent Defense · GitHub
0 likes · 7 min read
Tencent Technical Engineering
Apr 10, 2025 · Information Security

AI-Generated Code Introduces XSS Vulnerabilities: A Case Study and Security Guidance

The Woodpecker team shows that AI‑generated code, exemplified by Simon Willison’s HTML slideshow tool, can embed unsanitized inputs that create exploitable XSS flaws, and they recommend zero‑trust AI prompts, rigorous input filtering, CSP, AI‑assisted scanning, and secure supply‑chain practices to mitigate such risks.
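The flaw class described here can be illustrated with a minimal sketch (hypothetical snippet, not code from the article): interpolating untrusted text into HTML directly versus escaping it first.

```python
import html

def render_slide_unsafe(user_title: str) -> str:
    # Vulnerable: untrusted input is interpolated directly into markup.
    return f"<h1>{user_title}</h1>"

def render_slide_safe(user_title: str) -> str:
    # Escaping HTML metacharacters neutralizes injected tags.
    return f"<h1>{html.escape(user_title)}</h1>"

payload = '<img src=x onerror=alert(1)>'
print(render_slide_unsafe(payload))  # script-capable markup reaches the page
print(render_slide_safe(payload))    # rendered as inert text
```

Output escaping is one layer; the article's other recommendations (CSP, input filtering, scanning) defend in depth when escaping is missed.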

AI security · CSP · Simon Willison
0 likes · 9 min read
Tencent Technical Engineering
Mar 27, 2025 · Information Security

AI Programming Assistants Can Be Hijacked: Configuration File Poisoning and Security Risks

AI programming assistants such as GitHub Copilot and Cursor can be hijacked through poisoned configuration files that hide malicious prompts in invisible Unicode characters, exposing developers to risks such as data leakage, DDoS participation, cryptomining, and trojan injection. Developers should avoid configuration files from unknown sources, sandbox generated code, and employ static analysis and AI‑assisted audits to mitigate these threats.
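The invisible-Unicode technique can be checked for mechanically. A minimal sketch (illustrative, not the article's tooling) flags characters in Unicode's "format" category, which covers zero-width and bidirectional control characters that render invisibly:

```python
import unicodedata

# Unicode "Cf" (format) category: zero-width spaces/joiners, bidi controls,
# and similar characters that can hide text in an otherwise innocuous file.
SUSPICIOUS_CATEGORIES = {"Cf"}

def find_invisible_chars(text: str):
    """Return (index, codepoint, name) for characters that render invisibly."""
    hits = []
    for i, ch in enumerate(text):
        if unicodedata.category(ch) in SUSPICIOUS_CATEGORIES:
            hits.append((i, f"U+{ord(ch):04X}", unicodedata.name(ch, "UNKNOWN")))
    return hits

# A config line with a zero-width space hiding at the end
poisoned = "allow_rule = true\u200b"
print(find_invisible_chars(poisoned))
```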

AI security · Configuration Files · code poisoning
0 likes · 12 min read
Tencent Technical Engineering
Mar 19, 2025 · Information Security

AI Programming Security Risks and Countermeasures

As AI tools come to generate a growing share of software, they dramatically amplify hidden security risks such as hard‑coded secrets, XXE, directory traversal, and privilege escalation, calling for zero‑trust scanning, secret interception, command filtering, privilege‑fuse safeguards, and AI‑native semantic analysis to protect the modern code supply chain.
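Secret interception is typically pattern-based. The toy sketch below shows the idea with two illustrative rules (these are not the article's ruleset; production scanners use far richer pattern libraries and entropy checks):

```python
import re

# Illustrative patterns only; real scanners ship hundreds of tuned rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{16,}['\"]"),
}

def scan_for_secrets(source: str):
    """Return (rule_name, matched_text) pairs for likely hard-coded secrets."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source):
            findings.append((name, match.group(0)))
    return findings

snippet = 'api_key = "abcd1234abcd1234abcd"'
print(scan_for_secrets(snippet))
```

Running such a check as a pre-commit or CI gate is one way to intercept secrets before AI-generated code reaches the repository.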

AI programming · AI security · Risk Mitigation
0 likes · 9 min read
AntTech
Jan 6, 2025 · Artificial Intelligence

2024 Security and Trusted AI Research Highlights from Alibaba, Tsinghua, Zhejiang, and Partner Institutions

This article presents sixteen peer‑reviewed research papers published in top conferences and journals in 2024, covering trusted AI, large‑model applications, network security, adversarial training, deep‑fake detection, secure inference, and related topics from collaborations among Alibaba, Tsinghua, Zhejiang, and other leading institutions.

AI security · Deepfake Detection · Large Models
0 likes · 27 min read
AntTech
Dec 2, 2024 · Artificial Intelligence

Ant Group’s Morse & ARCLab Wins Both Attack and Defense Tracks in NeurIPS 2024 LLM Privacy Challenge

Ant Group’s Morse & ARCLab team secured the champion title in the attack track and the best practical defense award in the LLM Privacy Challenge at NeurIPS 2024, showcasing cutting‑edge methods for extracting training data from large language models and protecting model privacy with data sanitization and differential privacy techniques.

AI security · LLM privacy · NeurIPS
0 likes · 5 min read
AntTech
Nov 13, 2024 · Information Security

Ant Group’s Large‑Model‑Based Security Parallel Plane and Intelligent Threat Detection System

The article details Ant Group’s AI‑driven security parallel plane and intelligent threat detection system, its DKCF‑based architecture, key modules for data correlation, unknown threat discovery, alarm reduction, and knowledge‑graph integration, and its recognition in the 2024 AI Pioneer Case Collection.

AI security · Ant Group · DKCF
0 likes · 5 min read
AntTech
Nov 12, 2024 · Artificial Intelligence

Rhombus: Fast Homomorphic Matrix‑Vector Multiplication for Secure Two‑Party Inference – Paper Overview and Live Presentation

The article introduces the Rhombus protocol, a fast homomorphic matrix‑vector multiplication scheme that reduces ciphertext rotations and achieves O(1) communication complexity, enabling efficient privacy‑preserving two‑party inference, and announces a live streaming session where the first author will discuss its technical details and experimental results.

AI security · Rhombus protocol · homomorphic encryption
0 likes · 3 min read
AntTech
Sep 14, 2024 · Artificial Intelligence

WDTA Releases First International Standard for Large‑Model Supply‑Chain Security

At the 2024 Inclusion·Bund Conference, the World Digital Technology Academy (WDTA) unveiled the first international standard for large‑model supply‑chain security, a collaborative effort by CSA Greater China, Ant Group, Microsoft, Google, Meta, PrivateAI and others, marking a significant step in global AI governance and trust.

AI security · International Standards · Large Models
0 likes · 7 min read
AntTech
Jul 25, 2024 · Information Security

Security Analysis of Code Execution Sandboxes in AI Applications

This report investigates the security of code‑execution sandboxes used by various AI applications, evaluates their isolation mechanisms, presents detailed test results for multiple platforms, and offers recommendations for selecting and hardening sandbox solutions in the era of large language models.

AI security · Deno · Firecracker
0 likes · 23 min read
AntTech
Jul 23, 2024 · Artificial Intelligence

Ant Group’s 11 Papers Accepted at ICML 2024 Cover AI Efficiency, Security, Multimodal Learning, and More

At ICML 2024 in Vienna, Ant Group had eleven papers accepted, spanning topics such as quantization-aware secure inference for transformers, multimodal contrastive captioners, self-cognitive denoising with noisy labels, directed graph embedding, GAN improvement via score matching, and trustworthy alignment of retrieval-augmented large language models.

AI security · Ant Group · ICML2024
0 likes · 18 min read
AntTech
Nov 10, 2023 · Artificial Intelligence

Ant Group and Tsinghua University’s Distributed Collaborative Risk‑Defense System Wins Zhejiang Provincial Science & Technology Progress Award

The award‑winning distributed collaborative risk‑defense system, developed by Ant Group, Tsinghua University and Alipay, leverages AI, privacy‑preserving computing and graph analytics to achieve real‑time, high‑efficiency detection and invisible, precise control of hidden risks in massive digital transactions, earning top provincial honors and extensive industry adoption.

AI security · Award · Big Data
0 likes · 5 min read
Efficient Ops
Sep 26, 2023 · Cloud Native

Unlocking Digital Banking: Cloud‑Native Architecture Behind Bank of China's Open Banking Success

The 2023 China International Service Trade Fair showcased a digital transformation case in which Bank of China's Open Banking platform, built on cloud‑native microservices with unified governance, robust security, and AI integration, demonstrated broad industry impact, an extensive partner ecosystem, and award‑winning innovation.

AI security · Digital Transformation · FinTech
0 likes · 10 min read
Tencent Tech
Jun 6, 2023 · Information Security

How Secure Is WeChat’s Palm‑Print Payment? Inside the AI‑Powered Safeguards

WeChat's palm‑print payment combines unique palm‑texture and vein data with advanced imaging and AI algorithms to verify identity, offering large‑scale deployment, a seamless user experience, and strong anti‑spoofing measures while addressing real‑world challenges such as dirty or injured hands.

AI security · Image Processing · biometric authentication
0 likes · 7 min read
IT Services Circle
Feb 24, 2023 · Information Security

The Dark Side of ChatGPT: Scams, Prompt Injection, and Security Risks

The article examines how the rapid popularity of ChatGPT has spurred both legitimate opportunities and a surge in illicit activities, including account resale, scam scripts generated via prompt injection, and the creation of malware, highlighting the need for stricter regulation and security awareness.

AI misuse · AI security · ChatGPT
0 likes · 6 min read
DataFunSummit
Nov 13, 2022 · Blockchain

A Blockchain‑Based Trusted Federated Learning Architecture: Overview, Progress, and Future Directions

This article presents a comprehensive overview of blockchain‑enabled trusted federated learning, covering privacy computing fundamentals, legal standards, technical classifications, real‑world use cases, the CMFL decentralized framework with committee consensus, experimental results, and future research opportunities.

AI security · Blockchain · Decentralized Architecture
0 likes · 19 min read
DataFunSummit
Jan 26, 2022 · Artificial Intelligence

Applying Graph Neural Networks for Early Fraud Warning and Malicious URL Detection

This article explains how Tencent's security data lab uses graph neural networks to build heterogeneous temporal graphs for early warning of fraud cards used by money‑laundering gangs ("water rooms") and for detecting malicious URLs, detailing the data modeling, graph construction, attention‑based aggregation, model training, and evaluation results.

AI security · Graph Neural Networks · Malicious URL Detection
0 likes · 8 min read
AntTech
Dec 16, 2021 · Information Security

CNCC2021 Technical Forum – Security Challenges in Digital Transformation

CNCC2021’s technical forum, held on December 17, 2021, gathered leading academics and industry experts to discuss privacy computing, secure multiparty computation, AI-driven cybersecurity, single sign‑on privacy, Yao’s garbled circuits, and blockchain smart‑contract security, highlighting emerging risks and solutions for digital transformation.

AI security · Digital Transformation · Secure Multiparty Computation
0 likes · 11 min read
DataFunTalk
Oct 24, 2021 · Artificial Intelligence

Privacy Computing: The Federated Three‑Stage FIRM Architecture and Its Industrial Applications

This article introduces the background of privacy computing, explains the FIRM (Federated system Interconnection Reference Model) three‑stage architecture, details key technologies such as the Ionic Bond communication framework and HeteroDeepFM, and showcases real‑world applications in marketing, risk control, and government sectors.

AI security · Data Collaboration · FIRM architecture
0 likes · 15 min read
DataFunSummit
Oct 23, 2021 · Artificial Intelligence

Privacy Computing: The Federated Learning Three‑Stage FIRM Architecture and Its Industrial Applications

This article introduces the background of privacy computing, explains the three‑stage FIRM reference architecture for federated learning, describes key technologies such as the Ionic Bond communication framework and HeteroDeepFM, and showcases real‑world applications in marketing, risk control, and government sectors.

AI security · Data Collaboration · FIRM architecture
0 likes · 17 min read