How AI Is Redrawing the Cybersecurity Industry Map and What Leaders Must Do Now

By Staff Writer | Published: May 6, 2026 | Category: Risk Management

Anthropic's Project Glasswing is not just a product launch: it is a stress test for the entire cybersecurity industry, and the way business leaders respond to it will define competitive landscapes for the next decade.

When Anthropic announced Project Glasswing in April 2026, partnering with CrowdStrike, Palo Alto Networks, Microsoft, Google, and a roster of other technology and financial heavyweights, the reaction in financial markets was immediate and telling. Cybersecurity stocks, many of which had already absorbed a bruising selloff the previous February, climbed cautiously on the news. Investors took some comfort in the fact that established security vendors were inside the tent rather than being shown the door. But the deeper anxiety did not dissipate. It merely changed shape.

James Rundle's reporting in the Wall Street Journal captures a pivotal inflection point: AI has matured to the point where it can, in Anthropic's own words, “surpass all but the most skilled humans at finding and exploiting software vulnerabilities.” That is not a marketing claim to be absorbed quietly. It is a structural challenge to an industry whose core value proposition has long rested on specialized human expertise, proprietary detection logic, and years of accumulated threat intelligence. The question business leaders must now answer is not whether AI will change cybersecurity—that question has already been settled—but which businesses are built to absorb that change and which ones will be consumed by it.

The Scale of the Disruption Signal

The February 2026 selloff that Rundle describes deserves careful examination, not because market reactions are always rational, but because this particular reaction illuminated something real. When Anthropic unveiled new code-generation and security capabilities in its Claude models on February 24, shares of CrowdStrike, Datadog, and Zscaler fell between 10 and 11 percent within days. Okta, SentinelOne, and Fortinet saw more modest declines. As TD Cowen managing director Shaul Eyal put it, “There was no mercy here. Everybody, for the most part, got impacted.”

The market's indiscriminate reaction was, paradoxically, both an overreaction and a signal worth heeding. It was an overreaction because it treated all cybersecurity companies as interchangeable, failing to distinguish between platform businesses with deep institutional integration and narrower point-solution providers whose value rests on capabilities AI can now replicate. It was a legitimate signal because it reflected a dawning consensus among sophisticated investors that the cost trajectory for certain security functions is about to collapse.

This dynamic—AI compressing the cost of tasks that once required armies of human analysts—has been playing out across the technology sector for years. What makes cybersecurity different is the speed and specificity of the threat. Vulnerability discovery, bug hunting, security operations center work, and code scanning are not peripheral activities in the security stack. They are, for many vendors, the core of the value proposition.

Where the Moats Hold—and Where They Don’t

Netskope CEO Sanjay Beri offered perhaps the clearest framework for thinking through AI's uneven impact on the industry. Companies focused on infrastructure will benefit from machine-speed operations. Firms operating exclusively in areas like code scanning will struggle as those capabilities become commoditized. This is a crucial distinction, and it maps closely to what strategic management scholars call “structural moats”—the durable competitive advantages that protect businesses from new entrants and technological substitution.

Established platform vendors like CrowdStrike and Palo Alto Networks possess several such moats. First, they hold massive proprietary datasets—years of threat telemetry, attack signatures, and behavioral baselines—that no AI model, however capable, can instantly replicate or replace. CrowdStrike's fourth-quarter revenue of $1.31 billion, up 23 percent year-over-year, reflects the stickiness of these platforms. Enterprises have integrated these tools deeply into their security architecture, their compliance frameworks, and their incident response workflows. Tearing them out in favor of an AI-native alternative is not a technology decision; it is an organizational, legal, and operational undertaking of significant complexity.

Second, and perhaps more importantly, enterprise cybersecurity purchasing decisions are not made on pure capability grounds. Trust, regulatory compliance, liability allocation, and vendor accountability all factor into the calculation. When a breach occurs—and in the current threat environment, breaches are a matter of when, not if—a chief information security officer needs to be able to point to an established vendor relationship with defined service-level agreements and legal accountability. An AI model, however capable, does not yet provide that institutional anchor.

This point is supported by research from Gartner, which has consistently found that cybersecurity platform consolidation, rather than best-of-breed point solutions, is the dominant procurement trend among enterprise buyers. According to Gartner's 2024 CISO survey, 75 percent of security leaders were actively pursuing vendor consolidation strategies, up from 29 percent in 2020. That consolidation bias favors established platform vendors and creates real switching costs that buffer them from disruption, at least in the near term.

The Vulnerability in the Middle Market

The picture looks considerably more precarious for mid-tier and specialized security vendors. Companies whose primary differentiation rests on automated vulnerability scanning, code analysis, or repetitive security operations center functions face a genuine existential question. Anthropic's claim that its Claude Mythos2 Preview model has already found thousands of high-severity vulnerabilities across every major operating system and web browser is not a distant threat—it is a present-tense capability statement.

The venture capital data cited in Rundle's piece adds another layer of texture. While total VC investment in cybersecurity rose to $16.5 billion in 2025 from $13.6 billion the previous year, deal counts fell to 868—the lowest since 2017. Investors are writing larger checks to fewer companies, concentrating capital in AI-native security platforms and later-stage businesses with demonstrated revenue scale. Smaller point-solution vendors that cannot articulate a credible AI integration strategy are finding it harder to raise capital on favorable terms, regardless of their underlying technology quality.

This selective capital deployment is a leading indicator of industry consolidation. When the funding environment narrows, smaller vendors face a stark choice: find a strategic acquirer, pivot their value proposition, or face an increasingly difficult path to growth. For security leaders at mid-sized companies evaluating their vendor relationships, this consolidation pressure is directly relevant. A vendor's ability to survive the next three years should factor into procurement decisions today.

The Dual-Use Problem No Partnership Fully Solves

Project Glasswing is framed as a defensive initiative—Anthropic providing access to Claude Mythos2 specifically for defensive cybersecurity work. The participant list, which includes major tech companies, financial institutions, and infrastructure providers, lends it institutional credibility. But the underlying capability that makes the model useful for defense is inseparable from the capability that makes it useful for offense.

This dual-use tension is not unique to AI; it has characterized cybersecurity tools since the earliest days of penetration testing. What is different is the scale and accessibility. A vulnerability discovery capability that previously required a team of elite human researchers can now, in principle, be accessed through an API. The barriers to entry for sophisticated cyberattacks are falling, even as the tools for defense become more powerful.

Research published by MIT Sloan Management Review in 2024 examined AI's dual-use implications across several technology sectors and concluded that industries characterized by asymmetric information and high stakes—finance, healthcare, and cybersecurity among them—face the greatest governance challenges as AI capabilities commoditize. The study found that regulatory frameworks consistently lag behind capability development by an average of four to seven years, a gap that creates substantial risk for organizations that assume compliance equals security.

For business leaders, this means that participation in initiatives like Project Glasswing, while potentially advantageous for capability access, does not resolve the deeper governance question of how AI-generated vulnerability intelligence is managed, shared, and protected from misuse. The compliance and risk management implications of integrating frontier AI models into security operations are still being worked out in real time, and boards should be pushing their security leaders to articulate clear policies on this before those policies are imposed by regulators.

Identity and Access Management: The Next Pressure Point

Rundle's article flags identity and access management as a sector poised for disruption despite having been a recent investor darling. The logic is straightforward: as autonomous AI agents proliferate across enterprise networks, the traditional paradigms of identity verification—built around human users, defined roles, and relatively predictable access patterns—are increasingly inadequate.

The explosion of agentic AI—AI systems that autonomously initiate actions, call APIs, and interact with other systems without direct human instruction at each step—creates an identity management challenge of a fundamentally different character. Verifying the identity and authority of a bot that spawns sub-agents, which in turn interact with external services, requires identity infrastructure that most enterprises simply do not have today. Companies like Okta and SailPoint have recognized this and are actively repositioning their platforms to address machine identity at scale, but the transition is neither cheap nor fast.
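The delegation problem described above can be sketched in miniature: every sub-agent credential should be short-lived and strictly narrower in scope than its parent's, so that a chain of spawned agents can only ever shed authority, never accumulate it. The sketch below is illustrative only; the token format, scope names, and signing setup are assumptions for demonstration, not any vendor's actual API (a production system would use a managed identity platform and hardware-backed keys rather than a hard-coded secret).

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # hypothetical; a real system would use a KMS/HSM


def issue_token(agent_id, scopes, parent=None, ttl=300):
    """Issue a short-lived credential for an agent or sub-agent.

    A sub-agent's scopes must be a subset of its parent's, so authority
    can only narrow as the delegation chain grows.
    """
    if parent is not None:
        parent_claims = verify_token(parent)
        if not set(scopes) <= set(parent_claims["scopes"]):
            raise PermissionError("sub-agent scopes must narrow the parent's")
    payload = {"sub": agent_id, "scopes": sorted(scopes), "exp": time.time() + ttl}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig


def verify_token(token):
    """Check the signature and expiry, then return the token's claims."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        raise PermissionError("expired")
    return claims


# An orchestrating agent holds read/write scopes; the sub-agent it spawns
# gets a shorter-lived, read-only credential derived from the parent token.
root = issue_token("orchestrator", ["crm:read", "crm:write"])
child = issue_token("report-bot", ["crm:read"], parent=root, ttl=60)
print(verify_token(child)["scopes"])  # ['crm:read']
```

The key design point is the subset check at issuance time: even if a sub-agent is compromised, any credentials it mints for further sub-agents cannot exceed its own authority, and short expiries bound the damage window. Enterprise identity platforms are converging on this pattern for machine identities, though the concrete mechanisms differ.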

This is an area where the gap between current enterprise security posture and the security requirements of an agentic AI environment is most acute. A 2025 report from the Identity Defined Security Alliance found that 84 percent of organizations had experienced an identity-related breach in the previous year, a figure collected before the widespread deployment of autonomous AI agents across enterprise workflows. The implication is that identity security investment needs to accelerate precisely as the technology is being most aggressively disrupted.

What Business Leaders Should Do Now

Several practical imperatives emerge from a clear-eyed assessment of where the cybersecurity industry stands. First, audit vendor exposure: distinguish platform providers with durable data and integration moats from point-solution vendors whose core capabilities AI is commoditizing. Second, factor vendor viability into procurement; as the funding environment narrows, a supplier's ability to survive the next three years belongs in today's purchasing decisions. Third, push security leaders to articulate clear policies for how AI-generated vulnerability intelligence is managed, shared, and protected before regulators impose those policies from outside. Fourth, accelerate identity security investment now, before autonomous agents outrun the identity infrastructure meant to govern them.

None of this counsels panic. The fundamentals of established platform vendors remain strong. CrowdStrike's 23 percent revenue growth, Palo Alto Networks' continued expansion, and sustained VC investment in AI-native security companies all point to an industry in transformation, not one in terminal decline.

The Judgment Machines Cannot Yet Replace

There is a final point worth making, one that tends to get lost in discussions about AI capability. Cohesity CEO Sanjay Poonen's caution that one cannot be cavalier about disruption is well taken. But TD Cowen's Eyal makes an equally important point when he asks, rhetorically, which CIO would tear out a CrowdStrike or Palo Alto Networks platform in favor of AI-generated security code.

The answer, for now, is none—and not merely because of switching costs. Enterprise security decisions involve human judgment about risk tolerance, regulatory exposure, organizational culture, and strategic priorities that AI models are not yet positioned to exercise. The judgment about what to protect, how much risk is acceptable, and how to balance security investment against business agility remains a fundamentally human responsibility.

What AI changes is the cost and speed of executing against security strategies, not the wisdom required to formulate them. The cybersecurity firms that thrive in the coming decade will be those that understand this distinction clearly—augmenting human strategic judgment with machine execution capability, rather than allowing either to operate in isolation. For business leaders watching this industry reshuffle, that distinction is the most important one to hold onto.