Why CIOs Must Play Defense Before AI Offense: A Liberty Mutual Case Study

By Staff Writer | Published: March 5, 2026 | Category: Leadership

Liberty Mutual's global CIO argues that organizations haven't earned the right to deploy AI as first movers until they establish secure, stable systems with accessible data infrastructure.

AI Hype vs. Reality: Why “Defense Before Offense” Wins

The enthusiasm surrounding artificial intelligence has created a dangerous narrative: that organizations must move fast or be left behind. Monica Caldas, global CIO at Liberty Mutual Insurance and recipient of the 2025 MIT Sloan CIO Leadership Award, offers a more nuanced perspective that challenges this conventional wisdom.

In her recent interview with MIT Sloan Management Review, Caldas presents a framework that should resonate with technology leaders across regulated industries: you must master defense before playing offense. This philosophy isn’t about moving slowly. Rather, it recognizes that sustainable AI adoption requires foundational capabilities that many organizations have not yet built.

The Defense and Offense Framework for Technology Leadership

Caldas describes the CIO role through two simultaneous lenses: defense and offense. Defense involves protecting data, ensuring system security, and maintaining stability so the company can operate. Offense means building new features and functionality that drive competitive advantage. While this duality isn’t new, the AI revolution has amplified the tension between these responsibilities.

“That framework is evergreen, regardless of the time and industry you’re leading in. Today, if you want to take advantage of the next generation of data and AI capabilities, you have to approach offense and defense with the same intensity.”

This balanced approach contradicts the move-fast-and-break-things ethos that has dominated technology leadership discourse for the past two decades. Research from Gartner’s 2024 CIO Survey supports Caldas’s position, finding that 63% of CIOs report AI governance frameworks as critical for successful deployment. Organizations that rush AI implementation without adequate governance face heightened risks of data breaches, compliance violations, and reputational damage.

For Liberty Mutual, operating in the heavily regulated insurance industry means these risks carry existential weight. But Caldas’s argument extends beyond regulatory compliance. She contends that organizations with fragmented data architectures and legacy systems simply cannot extract meaningful value from generative AI, regardless of their industry.

The Infrastructure Prerequisite: Why AI Needs Modern Architecture

One of Caldas’s most striking observations concerns the relationship between AI capabilities and underlying infrastructure.

“If you have structured and unstructured data that is isolated in a variety of systems and you can’t quickly access it, you’re going to hit a limit of what you can do with advanced generative AI capabilities.”

This infrastructure-first perspective finds support in McKinsey’s research, which demonstrates that organizations with strong data foundations are three times more likely to realize significant AI benefits compared to those with fragmented data ecosystems. Yet many organizations pursue AI initiatives while their data remains siloed across incompatible systems, inaccessible to the algorithms that promise to transform their operations.

Caldas shares a particularly memorable caution about using generative AI for code modernization. When organizations attempt to convert legacy COBOL code to Java using AI without reimagining the underlying architecture, they create what she calls “Jobol”—code that technically runs in a modern language but lacks modern architectural capabilities.

“GenAI is not a magic wand where you press a button and new code comes out. Yeah, code comes out, but it’s not ready for production. You’ve got to add a lot of nonfunctional requirements, like security protocols.”

This observation highlights a critical misunderstanding about AI-assisted development. While generative AI can accelerate certain aspects of software creation, it cannot substitute for architectural thinking, security design, or the deep domain knowledge required to build production-grade systems. Organizations that treat AI as a shortcut rather than an accelerator will find themselves with technically functional but strategically inadequate solutions.
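The "Jobol" caution can be made concrete with a small hypothetical sketch. Nothing here is Liberty Mutual code; the class, field names, and premium arithmetic are invented for illustration. It shows what a mechanical COBOL-to-Java translation often looks like: the WORKING-STORAGE section becomes shared mutable static fields, each COBOL paragraph becomes a void method that mutates those globals, and none of the nonfunctional requirements (validation, error handling, security controls) come along for the ride.

```java
// Hypothetical illustration of "Jobol": Java syntax, COBOL architecture.
// A mechanical translation preserves the legacy program's shape instead
// of reimagining it -- exactly the trap Caldas warns about.
public class PolicyPremiumJobol {
    // COBOL WORKING-STORAGE translated as shared mutable static state
    static double wsBasePremium;
    static double wsRiskFactor;
    static double wsTotalPremium;

    // Each COBOL paragraph becomes a void method mutating the globals
    static void initializeFields() {
        wsBasePremium = 500.0;
        wsRiskFactor = 1.25;
    }

    static void computePremium() {
        wsTotalPremium = wsBasePremium * wsRiskFactor;
    }

    public static void main(String[] args) {
        // The PROCEDURE DIVISION becomes a fixed call sequence
        initializeFields();
        computePremium();
        System.out.println(wsTotalPremium); // prints 625.0
        // It compiles and runs -- but there is no input validation,
        // no error handling, no encapsulation, no security controls.
        // "Code comes out, but it's not ready for production."
    }
}
```

The point of the sketch is that the compiler is satisfied while the architecture is unchanged: global state, no seams for testing, no boundaries for security review. Genuine modernization would restructure this into encapsulated, validated, testable components rather than transliterating paragraph by paragraph.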

Responsible AI Governance: Committees That Actually Work

When Liberty Mutual began its generative AI journey, the first step wasn’t deploying tools or running pilots. Instead, the organization launched a responsible AI steering committee to orient itself to risks and establish navigation principles. This governance-first approach might seem bureaucratic, but it enabled Liberty Mutual to move faster once deployment began.

The committee established experimentation frameworks that help employees develop intuition about AI usage while managing associated risks. Critically, Liberty Mutual requires mandatory training before employees can access generative AI tools. This training covers hallucinations, appropriate use cases, and organizational expectations.

Critics of governance-heavy approaches often argue that committees slow innovation and that organizations should experiment freely and deal with problems as they emerge. However, this perspective ignores the compounding costs of rushed implementation in regulated industries. A single data privacy incident or algorithmic bias lawsuit can eliminate years of productivity gains.

Moreover, well-designed governance doesn’t necessarily slow deployment. By establishing clear guardrails upfront, Liberty Mutual’s approach enables employees to move confidently within defined boundaries rather than seeking approval for each decision. The training investment pays dividends through reduced risk exposure and more sophisticated AI utilization across the workforce.

Redefining Productivity in the AI Era

Caldas challenges another prevailing assumption about AI value: that productivity primarily means doing more work in less time.

“It’s not just about doing more. Productivity is multidimensional. It’s about quality, time to make decisions, and serving customers with better products.”

This multidimensional productivity framework has important implications for how organizations measure AI success. Simple metrics like lines of code written or customer service tickets resolved may miss the more substantial benefits of improved decision quality, reduced error rates, or enhanced customer experiences.

Liberty Mutual’s deployment of an internal AI agent called Libby illustrates this broader conception of productivity. Connected to the company’s knowledge database and instrumented to detect environment issues, Libby predicts where employees might encounter problems and automates previously manual workflows. The measurable outcome isn’t just faster ticket resolution but the redeployment of help desk employees to higher-value work that had been languishing in the backlog.

In software development, Liberty Mutual determined that generative AI could support engineers across roughly 35% of the development lifecycle. However, implementation revealed important nuances: senior engineers experienced dramatic productivity improvements, while junior engineers required more mentoring to capitalize on AI capabilities.

This experience level differential matters for workforce planning and talent development. Organizations cannot simply assume that AI tools will level the playing field between experienced and novice employees. Instead, they may need to reimagine training programs, mentorship structures, and career progression pathways for an AI-augmented workplace.

The Modernization Journey: Beyond Lift and Shift

Caldas describes Liberty Mutual’s technology modernization as involving multiple strategies: cleaning up and retiring systems, lifting and shifting some applications, and genuinely transforming others. This nuanced approach contradicts the simplistic modernization narratives often promoted by cloud vendors and consulting firms.

“Some of it is cleaning up and retiring things. Some of it is actual transformation.”

Organizations must resist the temptation to modernize everything uniformly. Some legacy systems may deserve retirement rather than renovation. Others may function adequately with minimal modification. Still others require fundamental reimagining to support future capabilities.

The challenge lies in making these distinctions strategically rather than opportunistically. Too often, organizations modernize based on vendor relationships, budget availability, or executive pet projects rather than systematic assessment of business value and technical dependencies.

Liberty Mutual’s experience suggests that generative AI can play a role in modernization, but only when coupled with architectural thinking and clear transformation objectives. AI-assisted code conversion without strategic intent produces technical debt in modern syntax rather than genuine capability advancement.

Counterarguments and Alternative Perspectives

While Caldas’s framework offers valuable guidance, it’s worth examining alternative perspectives. Some technology leaders argue that excessive emphasis on governance and infrastructure readiness creates paralysis. They contend that organizations learn more from deploying imperfect AI solutions and iterating than from extended planning and preparation.

This rapid experimentation approach has merit, particularly for organizations in less regulated industries or those pursuing AI applications with limited downside risk. A retail company testing AI-generated product descriptions faces different risk profiles than an insurance company using AI to make coverage decisions.

Additionally, the defense-before-offense framework may underestimate the motivational power of visible AI wins. Organizations struggling to secure funding for infrastructure modernization might find that successful AI pilots generate the executive support and budget necessary for deeper transformation. In this view, small offensive moves can create momentum for defensive improvements.

There’s also the question of competitive dynamics. While Liberty Mutual deliberates and governs, might more aggressive competitors capture market share through faster AI deployment? Caldas would likely argue that unsustainable advantages built on inadequate foundations will eventually collapse, but in the meantime, market position may shift.

Finally, some observers question whether the responsible AI committee model truly prevents problems or simply creates the appearance of diligence. Committee structures can become bureaucratic rituals that check compliance boxes without meaningfully improving outcomes. The effectiveness of Liberty Mutual’s governance depends on factors the interview doesn’t fully address: committee composition, decision-making authority, and accountability mechanisms.

Practical Implications for Technology Leaders

What should CIOs and technology leaders take from Caldas’s framework? Several principles emerge, chief among them the idea of earned trust.

The “Earning Trust” Principle

The most powerful element of Caldas’s framework may be her assertion that organizations must earn the right to be AI first movers.

“In a heavily regulated industry like ours, if you don’t have secure, stable systems, you have not earned the right to do so.”

This earning-trust principle applies beyond regulated industries. Organizations owe their customers, employees, and stakeholders responsible stewardship of data and reliable system operation. AI deployment that compromises these foundational responsibilities betrays that trust, regardless of the productivity gains or competitive advantages promised.

The earning-trust principle also reframes the conversation about AI adoption speed. The relevant question isn’t whether your organization moves as fast as others but whether your pace appropriately balances innovation with responsibility given your specific circumstances. A financial services company and a social media startup face different trust equations and should move at different speeds.

Looking Forward: AI Maturity Models

Caldas’s framework implicitly suggests an AI maturity model where organizations progress through stages: establishing governance, modernizing infrastructure, deploying targeted use cases, and eventually achieving systematic AI integration. This staged approach differs from narratives that treat AI as a switch to flip or a single decision to make.

Organizations at different maturity stages require different leadership priorities. Early-stage organizations need governance frameworks and infrastructure assessment. Mid-stage organizations benefit from targeted pilots and workforce training. Advanced organizations focus on scaling successful use cases and measuring multidimensional productivity.

The challenge for technology leaders is accurately assessing their organization’s current stage and resisting the pressure to skip ahead. The promise of transformative AI capabilities creates temptation to pursue advanced use cases before establishing foundational capabilities. But as Caldas’s experience demonstrates, sustainable AI advantage requires patience and sequencing.

Conclusion: Simultaneous Intensity

Monica Caldas offers technology leaders a framework that acknowledges AI’s transformative potential while insisting on responsible implementation. Her defense-and-offense model doesn’t counsel either reckless speed or timid caution. Instead, it calls for simultaneous intensity on both fronts: building secure, stable, modern infrastructure while pursuing innovation that drives competitive advantage.

This balanced approach may seem obvious, yet organizational behavior suggests otherwise. Too many companies lurch between extremes, alternately pursuing innovation without adequate safeguards and imposing governance without supporting innovation. The discipline lies in maintaining focus on both priorities even when resource constraints, competitive pressure, and executive impatience push toward one or the other.

For regulated industries, Caldas’s framework provides clear guidance: governance and infrastructure first, then aggressive innovation. For less regulated sectors, the principles still apply though the specific implementation may differ. All organizations must wrestle with the fundamental question her framework raises: Have we earned the right to deploy this technology at this pace given our current capabilities and responsibilities?

As artificial intelligence continues evolving and new capabilities emerge, this question will need to be asked repeatedly and answered honestly. The organizations that answer truthfully and act accordingly will build sustainable competitive advantages. Those that deceive themselves about their readiness will join the growing list of AI initiatives that promised transformation but delivered disappointment.

The path forward requires what Caldas demonstrates throughout her interview: clear-eyed assessment of current state, honest acknowledgment of gaps, systematic capability building, and disciplined deployment. It’s less dramatic than the AI revolution rhetoric that dominates conference stages and consulting presentations. But it’s more likely to deliver the lasting value that organizations and their stakeholders deserve.