
Can AI Auditors Ensure Reliable Software? Highlights from EXPRESS 2025 at ISSTA

The EXPRESS 2025 workshop at ISSTA in Norway will showcase AI‑driven code auditing, present cutting‑edge research on trustworthy software systems, and invite researchers and practitioners to discuss transparency, reliability, and security challenges in modern software engineering.


EXPRESS 2025 Workshop Overview

From June 25‑28, 2025, the International Symposium on Software Testing and Analysis (ISSTA) will be held in Trondheim, Norway. Ant Group is sponsoring the inaugural EXPRESS 2025 workshop, “Explainable and Reliable Software Systems,” scheduled for June 28, 9:00‑12:30.

Goal

The workshop aims to address the growing demand for transparency, reliability, and trustworthiness in modern software systems by gathering international researchers, practitioners, and developers to explore innovative approaches.

Keynote: Human‑like AI Auditor for Code Repositories

Abstract: Large language models (LLMs) show promise for automated code analysis but suffer from context limits and hallucinations. The talk introduces RepoAudit, an LLM‑driven autonomous agent that performs efficient, accurate code‑base audits using abstraction, pointer tracking, and verification mechanisms for demand‑driven, path‑sensitive reasoning. In a controlled experiment, RepoAudit detected 38 real bugs with 65% accuracy, outperforming Meta INFER and Amazon CodeGuru, at a cost of only $2.54 per audit. A broader field study on high‑profile GitHub repositories, including the Linux kernel, uncovered 300 zero‑day vulnerabilities ranging from classic null‑pointer issues to functional bugs, marking a significant step toward LLM‑based IDE‑time auditing.
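To make “demand‑driven, path‑sensitive reasoning” concrete, here is a deliberately tiny caricature of such an audit loop. Everything in it is a hypothetical invention for illustration — the step vocabulary, the `PROGRAM` model, and the `audit` function are not RepoAudit’s design or API; a real LLM agent would read source code, request function summaries on demand from the model, and verify them before trusting them.

```python
# Toy sketch of a demand-driven, path-sensitive null-pointer audit.
# Each "function" is modeled as a list of steps the auditor observes while
# tracking one value; in the real setting an LLM would produce these
# summaries on demand from actual source code.
PROGRAM = {
    "lookup":   ["may_return_null"],                     # source of a maybe-null value
    "get_conn": ["call:lookup", "deref"],                # dereferences it unguarded
    "safe_get": ["call:lookup", "null_check", "deref"],  # guards the dereference
}

def audit(entry, program):
    """Track a maybe-null value through `entry`; report unguarded dereferences."""
    findings = []
    tainted = False   # tracked value may currently be null
    guarded = False   # a null check dominates the upcoming dereference
    for step in program[entry]:
        if step.startswith("call:"):
            callee = step.split(":", 1)[1]
            # Demand-driven: only the callee's effect on the tracked value
            # is requested (an LLM query plus verification in a real agent).
            if "may_return_null" in program[callee]:
                tainted, guarded = True, False
        elif step == "null_check":
            guarded = True   # path-sensitive: this path is now safe
        elif step == "deref" and tainted and not guarded:
            findings.append(entry)
    return findings

print(audit("get_conn", PROGRAM))  # ['get_conn'] -- unguarded path flagged
print(audit("safe_get", PROGRAM))  # []           -- guarded path is clean
```

The key point the toy mirrors: the auditor never analyzes the whole repository up front — it follows one tracked value along one path at a time, pulling in callee behavior only when the path demands it, which is what keeps an LLM‑backed audit within context limits.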

Speaker: Xiangyu Zhang, Professor, Purdue University (focus on AI safety, software analysis, and cyber forensics).

Selected Research Reports

FuseApplyBench: Multilingual Benchmark for Trustworthy Code Edit Applying Task – Ming Liang et al.

Patch the Leak: Strengthening CodeLLMs Against Privacy Extraction Threats – Yongjian Guo et al.

From Large Language Models to Adversarial Malware: How Far Are We? – Shuai He et al.

Towards Source Mapping for Zero‑Knowledge Smart Contracts: Design and Preliminary Evaluation – Pei Xu et al.

TestFlow: Advancing Mobile UI Testing through Multi‑Step Reinforcement Learning – Xiaoxuan Tang et al.

Acknowledgments and Invitation

We thank all authors and program committee members and look forward to an inspiring EXPRESS 2025 experience. Attendees are encouraged to engage in discussions and social activities that foster lasting community relationships, addressing key challenges at the intersection of software engineering and trustworthy AI. See you in Norway on June 28!

Tags: LLM, software reliability, code analysis, AI auditing, EXPRESS workshop, ISSTA 2025
Written by

AntTech

Technology is the core driver of Ant's future creation.
