AI Programming Assistants Can Be Hijacked: Configuration File Poisoning and Security Risks
AI programming assistants such as GitHub Copilot and Cursor can be hijacked through poisoned configuration files that embed malicious prompts hidden with invisible Unicode characters. Once an assistant is compromised, developers are exposed to risks including data leakage, participation in DDoS attacks, cryptomining, and trojan injection into generated code. To mitigate these threats, developers should avoid configuration files from unknown sources, sandbox generated code before running it, and apply static analysis and AI-assisted audits.
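One practical static-analysis step is scanning assistant configuration files for invisible Unicode characters before trusting them. The sketch below illustrates this idea; the file names (`.cursorrules`, `.github/copilot-instructions.md`) and the set of suspicious characters are illustrative assumptions rather than an exhaustive or authoritative list.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag invisible Unicode characters in AI-assistant
config files that could hide injected prompts. File names and the
character set below are assumptions, not a complete inventory."""

import sys
import unicodedata
from pathlib import Path

# Characters commonly abused to hide text from human reviewers:
# zero-width characters, bidirectional controls, and Unicode "tag" characters.
SUSPICIOUS = {
    "\u200b", "\u200c", "\u200d", "\u2060", "\ufeff",   # zero-width / BOM
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",   # bidi overrides
    "\u2066", "\u2067", "\u2068", "\u2069",             # bidi isolates
} | {chr(cp) for cp in range(0xE0000, 0xE0080)}          # tag characters

# Hypothetical default targets; pass paths on the command line to override.
CANDIDATE_FILES = [".cursorrules", ".github/copilot-instructions.md"]


def scan(path: Path) -> list[tuple[int, int, str]]:
    """Return (line, column, character name) for each suspicious character."""
    hits = []
    text = path.read_text(encoding="utf-8", errors="replace")
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in SUSPICIOUS:
                name = unicodedata.name(ch, f"U+{ord(ch):04X}")
                hits.append((lineno, col, name))
    return hits


if __name__ == "__main__":
    targets = [Path(p) for p in (sys.argv[1:] or CANDIDATE_FILES)]
    clean = True
    for target in targets:
        if not target.exists():
            continue
        for lineno, col, name in scan(target):
            clean = False
            print(f"{target}:{lineno}:{col}: suspicious character {name}")
    sys.exit(0 if clean else 1)
```

A check like this could run as a pre-commit hook or CI step so that hidden instructions are surfaced before a configuration file ever reaches the assistant.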