Anthropic has started to “let Claude off the leash” for vulnerability scanning—and the results are a forcing function for the security industry. In this episode, Tom Hollingsworth is joined by Fernando Montenegro and Alan Shimel to break down what it means when an AI system can surface 600+ vulnerabilities across open-source projects, including at least one Ghostscript issue found through an approach security researchers hadn’t tried before.
The conversation moves from the headline number to the operational reality: if AI accelerates discovery, defenders inherit an urgent backlog. The panel discusses why a full list of CVEs won’t appear immediately (responsible disclosure still matters), how prioritization changes when findings scale into the hundreds, and why organizations built around slower change-management and patch pipelines may struggle to keep pace.
The bottom line: AI-assisted research can dramatically increase visibility into latent risk—but the security and engineering machinery behind remediation has to evolve just as fast.
If AI can surface hundreds of vulnerabilities in weeks, what should security teams change first—prioritization, disclosure workflow, or remediation automation? Subscribe to Security Boulevard for weekly analysis on what’s changing in security and what actually matters in practice.