Observations from ISSTA 2024: Conference Highlights, Awarded Papers, Keynotes, and In‑Depth Reviews
This report covers the 33rd ISSTA conference, held in Vienna in 2024: its acceptance statistics, the Impact Paper Award and the Distinguished Papers, keynotes on software quality assurance in the era of large language models, and in‑depth reviews of selected research spanning fuzzing, program repair, database query simplification, and AI‑oriented code generation.
The 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA), a CCF‑A‑ranked venue, was held in Vienna from September 18–20, 2024. The conference accepted 143 of 694 submissions (a 20.61% acceptance rate), including four papers from Ant Group; the authors share their observations and analyses of the conference below.
ISSTA’s Impact Paper Award went to the paper introducing Defects4J, a widely used collection of real Java faults, while the ACM SIGSOFT Distinguished Paper Award recognized ten outstanding papers on topics such as AI‑driven code generation, reinforcement‑learning‑based fuzzing, and multi‑granularity patch generation.
The keynote by Lingming Zhang from UIUC explored software quality assurance in the era of large language models, discussing LLM‑based testing, verification, and repair, as well as the concept of AI‑oriented programming language grammars that reduce token usage for models like CodeLlama and GPT‑4.
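To make the token‑reduction idea concrete, here is a toy sketch. The compressed grammar and the naive token counter below are invented for illustration only; they are not the grammar proposed in the keynote or any real model tokenizer. The point is simply that a grammar stripped of punctuation a parser can infer yields fewer tokens for the same program:

```python
import re

def count_tokens(src: str) -> int:
    """Naive tokenizer: identifiers, numbers, and single punctuation symbols."""
    return len(re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", src))

# The same function in standard Python...
standard = """
def add(a, b):
    return a + b
"""

# ...and in a hypothetical AI-oriented grammar with terse keywords
# and no redundant punctuation (invented for this sketch).
compressed = """
fn add a b
ret a + b
"""

saved = 1 - count_tokens(compressed) / count_tokens(standard)
print(f"standard:   {count_tokens(standard)} tokens")
print(f"compressed: {count_tokens(compressed)} tokens")
print(f"token reduction: {saved:.0%}")
```

Under this crude count, the compressed form saves roughly a third of the tokens; real savings depend entirely on the model's tokenizer and the grammar design.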
Among the team’s own contributions, the SQLess paper introduced a dialect‑agnostic SQL query simplification technique that achieves a 72.45% average simplification rate on the PINOLO dataset, outperforming prior methods. Other highlighted works include AsFuzzer (assembler grammar inference), AI Coders (AI‑oriented grammar), CodeFast (LLM inference acceleration), MetaMut (LLM‑based mutator), CovRL‑Fuzz (coverage‑guided LLM mutator), WASMaker and Wapplique (WebAssembly fuzzing), AutoCodeRover and SpecRover (autonomous program repair), ThinkRepair (chain‑of‑thought‑guided repair), Calico (knowledge calibration for code tasks), and several studies on static call‑graph soundness and API misuse detection.
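As a flavor of what query simplification means (this is a minimal rule‑based sketch, not the SQLess technique), one of the simplest reductions is dropping conjuncts in a WHERE clause that are trivially true. Real tools must additionally handle SQL dialects, full parsing, and semantics‑preservation checks, all omitted here:

```python
import re

# Matches conjuncts that are trivially true, e.g. "TRUE" or "1 = 1".
TRIVIAL = re.compile(r"^\s*(TRUE|1\s*=\s*1)\s*$", re.IGNORECASE)

def simplify_where(where_clause: str) -> str:
    """Split a conjunction on AND and drop trivially true conjuncts."""
    conjuncts = [c.strip() for c in
                 re.split(r"\bAND\b", where_clause, flags=re.IGNORECASE)]
    kept = [c for c in conjuncts if not TRIVIAL.match(c)]
    return " AND ".join(kept) if kept else "TRUE"

print(simplify_where("x > 0 AND 1 = 1 AND TRUE AND y < 5"))
```

A dialect‑agnostic simplifier like the one the paper describes has to do far more than pattern matching, since what counts as equivalent differs across database engines.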
The article also discusses practical challenges such as maintaining context in fuzzing, reducing computational waste in LLM code generation, and improving API misuse detection through probabilistic graphical models. It concludes with a brief introduction of the Ant Group Program Analysis team and a call for interested candidates to apply.
AntTech