The rapidly-evolving AI threat landscape
Threat actors have been experimenting with AI and are incorporating it into their operations, as John Hultquist discussed earlier this month. Adversaries are using AI to automate and enhance their operations, treating it like software development or knowledge work.
The most concerning developments are:
- AI-powered malware and automated intrusion activity: These scaled, automated, and dynamic attacks are much faster than human-involved attacks, and are harder to defend against.
- Targeting critical infrastructure and supply chains: While targeting health services, energy, grocery stores, and other essential services isn’t new for threat actors, AI is changing the scale and scope of their attacks.
- More aggressive attacks: These include ransomware, which is the easiest way for attackers to monetize vulnerabilities, making personal threats, and vishing.
- Vishing awareness: Attackers are using voice, text, and other channels besides email for delivering phishing messages, and becoming more creative at the same time.
Foundational risks to AI infrastructure
Fundamentally, the risk of losing control of AI infrastructure goes beyond launch processes and software development processes because it’s about more than just writing software. It’s about business processes that could lead to where you might lose control of how AI is being used in any one of those steps — and that makes it an issue of governance.
Google is working on controls to manage key risks to AI generally. These include evaluating:
- Loss of control risk: We strongly recommend implementing an overarching governance of launch, software development, and procedural business processes to prevent losing control of AI.
- Supply chain risk: We advocate for implementing tamper-proof provenance for risks associated with models, orchestration servers, tools called by agents, and third-party security, mirroring but expanding on traditional software supply chain best practices.
- Data risk: Data is the new perimeter. The data used to train models can be poisoned, manipulated, and used to plant a back door.
- Input and output risk: We also recommend treating prompts like code to better manage prompt manipulation risks. This is similar to traditional SQL injection risk management.
Google’s defense strategy and AI agents
We’ve had a lot to say about defense and AI, and how we’re using agents to boost the defender’s daily workflow. Agentic AI is transforming traditional security operations, as agents combine advanced AI models with security tools. They have started to identify, reason through, and take actions to accomplish goals on behalf of defenders.
These capabilities mark a fundamental shift, where agents work alongside security teams and give human analysts more time to focus on challenges that truly demand their expertise. Using agents in the security operations center (SOC) is a key goal of how we’re innovating with AI, and you’ll continue to see more related offerings throughout 2026.
Some areas where we can highlight that work so far include:
- Building semi-autonomous defense: The current focus is on a semi-autonomous SOC that goes faster but keeps humans (including analysts and forensics experts) in the loop, moving toward an eventual autonomous, self-defending state.
- Agentic workflows: These workflows use the same existing tools, teams, and processes but connect steps faster to strengthen analysts. Fully automated tasks include alert triaging and threat hunting.
- Interface and usability: The interface is similar to Gemini, allowing analysts to interrogate and engage with workflows using natural language.
- Prompt reuse: Analysts can save effective prompts for specific use cases and actions in the agentic SOC, and make them available to the rest of the team. This can also help with risk management, by narrowing in on use cases and mitigating prompt injection vulnerabilities.
- Ecosystem integration: The system strings together existing third-party tooling with first-party products (such as Google Security Operations and Google Threat Intelligence) to help teams to benefit from third-party tool upgrades without ripping out existing infrastructure.
- Protection: The ecosystem is protected by Identity and Access Management (IAM), Cloud Armor (acting as a firewall for models), and policies and logging to defend against AI risks like data poisoning and prompt injection.
Learn more about how Google does security
Over the past year, we’ve pulled back the curtain on how Google approaches critical security topics, including implementing AI red teams, finding and fixing software vulnerabilities, using threat intelligence to track down cybercriminals, modernizing threat modeling, and building security programs at a global scale.
To learn more, you can check out all of the new Security Talks presentations here.






