Attack surface: What executes
Just as developers rely on project configuration to automate setup, debugging, and routine tasks, AI coding agents and modern developer tools also inherit execution paths from repository files. These artifacts can trigger commands, bootstrap environments, and chain execution through normal workflows.
Opening a project, trusting a workspace, starting a debugger, rebuilding a container, or running a standard setup command may therefore execute attacker-controlled logic under the appearance of legitimate project automation.
Attack surface: What instructs
AI coding agents also consume persistent instruction files that shape how they behave inside a project. These files can influence what the agent prioritizes, what it ignores, which tools it uses, which files it trusts, and which actions it takes automatically.
These files do not need to contain exploit code to be security-relevant. Reusing them across repositories introduces a supply-chain risk, because malicious instructions can be presented as harmless guidance while steering otherwise legitimate agent workflows toward unsafe behavior.
Unlike traditional IDEs that require a human to click run, an agent may parse these instructions and execute them as a prerequisite to a task without the developer ever reviewing the specific instruction block.
Attack surface: What connects
Beyond instructions, coding agents also depend on runtime definitions that determine how they interact with tools, hooks, external services, and local execution contexts. These files define permissions, tool connectivity, external endpoints, and execution paths.
This is where repository-level influence becomes operational control. A malicious or unsafe runtime configuration can expose local commands, remote services, sensitive data, and untrusted model context protocol (MCP) servers to the agent, turning configuration abuse into controlled execution.
Attack surface: What extends
Extensions add another layer of inherited trust and introduce third-party code into editor and browser runtimes, often with broad access to local files, credentials, and developer workflows. This inherited trust can create a supply-chain problem similar to malicious project configurations: Compromised extensions, poisoned update paths, and hijacked publisher accounts can introduce attacker-controlled logic through components that otherwise appear to be standard tooling.
Applying VirusTotal Code Insight in agentic threat intelligence
This taxonomy highlights a fundamental shift in the threat landscape: The risk is no longer just in the syntax of code, but in the semantics of intent.
Traditional security tools are effectively blind to natural language instructions that tell an AI to ignore guardrails or redirect data. The operational questions are then: How can defenders identify these risks systematically? How can they detect the danger before a developer or an agent automatically follows a valid instruction file to a malicious conclusion?
To bridge this gap, we use VirusTotal Code Insight and agentic threat intelligence to perform large-scale semantic analysis. Because malicious repository settings and instruction files are often syntactically correct, they frequently return zero detections from signature-based scanners.
Code Insight solves this problem by using AI to analyze the file’s actual logic and read between the lines, surfacing behavioral risks that are invisible to legacy tooling. This context is further enriched within agentic threat intelligence, where security teams can pivot from a single semantic red flag to investigate broader threat infrastructure and associated campaign activity.
Example 1: A Weaponized tasks.json
One representative example is a file distributed under the path coding-challenge/coding-challenge/.cursor/tasks.json. The sample was first submitted to VirusTotal on March 19, and remained undetected by security engines for several days.
VirusTotal Code Insight flagged it as a risk based on the behaviour implied by the configuration itself. The sample has also been verified as malicious by a Mandiant analyst and marked as associated with a tracked threat actor by Google Threat Intelligence.






