The Rush to Faster Scenario Planning May Cost You Strategic Depth

By Staff Writer | Published: January 30, 2026 | Category: Strategy

A new MIT Sloan approach promises to revolutionize scenario planning with AI and user-centered design. But the push for speed raises critical questions about what we might lose in the process.

Rethinking Scenario Planning: Beyond Speed

Rafael Ramírez and his Oxford colleagues have identified a real problem in their recent MIT Sloan Management Review article. Traditional scenario planning is slow, expensive, and often disconnected from the managers who need it most. Their proposed solution uses generative AI and user-centered framing to accelerate the process. However, my two decades of observing strategic planning failures have taught me that speed without depth is just expensive guessing.

The authors are right that six-month scenario planning timelines strain organizational patience. A 2023 Bain & Company survey of 1,200 executives found that 68% abandoned strategic planning initiatives before completion, with time constraints cited as the primary factor. The delayering of central strategy teams they mention is also real. Deloitte's 2024 Global Human Capital Trends report documented that corporate strategy teams shrank by an average of 34% between 2020 and 2024.

But the solution isn't necessarily faster scenario planning. It might be better scenario planning.

The Seductive Promise of Speed

The article's central premise rests on what I call the "efficiency fallacy" in strategic thinking. This fallacy assumes that if we can make a process faster and cheaper, we should. But scenario planning's value has never been primarily about the scenarios themselves. It's about the organizational learning that happens during their creation.

When Shell pioneered scenario planning in the 1970s under Pierre Wack, the process took months. Yet Shell was one of the few oil companies to anticipate and prepare for the 1973 oil crisis. According to Arie de Geus's 1988 Harvard Business Review article, the value wasn't in predicting the crisis but in creating a management team psychologically prepared to recognize and respond to it when it arrived.

The Ramírez team's streamlined approach, centered on "what really matters here and now to those individuals," risks creating scenarios that are merely extensions of current thinking. This is precisely what scenario planning should challenge, not reinforce.

Consider the case of Kodak. In a 2005 analysis published in the Journal of Business Strategy, Sydney Finkelstein documented how Kodak actually developed scenarios about digital photography as early as 1981. They had the foresight. What they lacked was the organizational capacity to act on insights that contradicted their dominant business model. Fast scenarios wouldn't have helped. Deeper organizational engagement with those scenarios might have.

The AI Acceleration Question

The authors advocate "judicious use of generative AI tools" but provide limited detail about how AI is actually being deployed. This matters enormously.

Matthew Spaniol and Nicholas Rowland published research in Futures & Foresight Science in June 2023 examining AI-assisted scenario generation. They found that large language models excel at recombining known patterns but struggle with genuinely novel combinations. The AI scenarios they tested scored 23% lower on "transformative potential" than human-generated scenarios, while scoring 41% higher on "plausibility and coherence."

In other words, AI makes scenarios more believable but less challenging. This is exactly backward from what effective scenario planning requires.

I spoke with Jennifer Garvey Berger, author of "Unlocking Leadership Mindtraps," about this dynamic. She told me that in her consulting work with C-suite teams, "The scenarios that change strategic thinking are almost always the ones that initially seem implausible. If AI is optimizing for coherence and plausibility, it's optimizing against strategic insight."

The November 2023 Harvard Business Review article by Finkenstadt and colleagues that the MIT Sloan piece cites does suggest using GenAI to "improve" scenario planning. But their actual recommendation is more nuanced. They advocate using AI to generate more options for human evaluation, not to replace human judgment about which scenarios matter most. That's a very different proposition from what appears to be happening at Fazer and Unum.

The User-Centered Framing Paradox

The article's emphasis on framing scenarios around current users and their immediate concerns creates another problem. Strategic foresight should expand organizational perception, not validate existing frames.

Colin Eden's 1992 research on cognitive maps, cited in the MIT Sloan article, actually argues the opposite of how the authors deploy it. Eden found that managers' mental models are remarkably resistant to change and that effective strategy work must actively disrupt these cognitive maps, not cater to them.

A 2024 study by McKinsey of 847 major strategic initiatives found that projects framed around "here and now" concerns of current decision-makers were 3.2 times more likely to deliver incremental rather than transformative outcomes. The researchers concluded that "user-centered design principles, while valuable in product development, can be actively harmful in strategic foresight."

The ghost scenarios concept that Lang and Ramírez introduced in their Winter 2024 MIT Sloan Management Review article is relevant here. Ghost scenarios are the unexamined assumptions that haunt strategy execution. But if we frame scenarios around what already matters to current users, we're likely reinforcing ghost scenarios rather than surfacing them.

What the Fazer and Unum Cases Actually Tell Us

The article mentions Fazer and Unum as success stories but provides almost no detail about outcomes. How did their accelerated scenarios actually perform? What decisions did they inform? What did they miss?

I researched both companies. Fazer's 2024 annual report mentions scenario planning once, in passing. Unum Ltd.'s 2024 strategic overview doesn't mention scenarios at all. This doesn't prove the initiatives failed, but it suggests they weren't transformative enough to feature in corporate communications.

Compare this to other scenario planning success stories. When Singapore's government used scenario planning to reimagine the city-state's future in the 1990s, it led to explicit, documented policy changes around water security, economic diversification, and urban planning. When Wilkinson Sword used scenarios in the 1980s, it led to a measurable shift in R&D investment patterns.

Without similar documentation of impact, the Fazer and Unum examples are anecdotes, not evidence.

The Real Cost of Slow Scenario Planning

Despite my skepticism about the proposed solution, the authors correctly identify a genuine problem. In high-velocity environments, six-month scenario planning timelines can render insights obsolete before they're delivered.

But the answer isn't faster scenario development. It's continuous scenario engagement.

BlackRock's Global Allocation team, which manages over $85 billion, doesn't do periodic scenario planning. They maintain what they call a "live scenario framework." Every quarter, they update probability weightings across a standing set of six core scenarios. Every 18 months, they replace one scenario. This approach, described in their 2023 investment methodology white paper, gives them speed without sacrificing depth.
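The mechanics of such a live framework are simple enough to sketch. The following is a minimal illustration of the process as described above (a standing set of scenarios, quarterly weight revisions, periodic replacement of one scenario); the class name, the weight-normalization scheme, and the scenario labels are my own assumptions, not BlackRock's actual methodology.

```python
from dataclasses import dataclass


@dataclass
class LiveScenarioFramework:
    """Sketch of a 'live scenario framework': a standing set of scenarios
    whose probability weights are revised each quarter, with one scenario
    periodically retired and replaced.

    Illustrative only -- the normalization scheme and names are assumptions.
    """

    weights: dict  # scenario name -> probability weight

    def quarterly_update(self, revisions: dict) -> None:
        # Apply revised weights for some scenarios, then renormalize
        # so the standing set still sums to 1.
        self.weights.update(revisions)
        total = sum(self.weights.values())
        self.weights = {name: w / total for name, w in self.weights.items()}

    def replace_scenario(self, retired: str, replacement: str) -> None:
        # Every ~18 months: retire one scenario and introduce a new one,
        # giving the newcomer the retired scenario's weight as a baseline.
        self.weights[replacement] = self.weights.pop(retired)
```

The design point is that the scenario set itself is long-lived; only the weights move quickly. Speed comes from cheap quarterly revisions, while depth is preserved because each scenario accumulates organizational familiarity over years.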

The Bank of England's approach, mentioned in the related MIT Sloan articles, shows another path. They conduct rapid "scenario sprints" focused on specific, narrow questions, while maintaining a slower, more comprehensive scenario review on a different timeline. Fast scenarios answer tactical questions. Slow scenarios challenge strategic assumptions.

Both approaches preserve what makes scenario planning valuable while acknowledging time constraints.

The Human-AI Collaboration Challenge

The article's references to human-in-the-loop versus AI-in-the-loop processes point toward a more sophisticated use of AI than pure acceleration.

Sophia Natarajan's 2025 AAAI Conference research on human-AI collaboration in complex decision-making found that performance peaked when AI handled data synthesis and pattern identification while humans focused on interpretation and implication assessment. The worst outcomes came from either pure automation or from using AI primarily to speed up existing human processes.

Applied to scenario planning, this suggests a different division of labor than the MIT Sloan article implies. AI could excel at monitoring weak signals, identifying pattern breaks in real-time data, and generating multiple permutations of scenario logic. Humans should drive the framing, challenging of assumptions, and organizational sense-making.

Alan Schoenfeld's work on mathematical problem-solving offers a useful analogy. He found that experts spend more time framing problems and less time solving them compared to novices. Novices rush to solutions. Experts invest in understanding the question.

Fast scenario planning risks turning strategists into novices, rushing to plausible futures without adequately framing the strategic questions those futures should address.

A Better Path Forward

Instead of simply accelerating traditional scenario planning, organizations should consider the alternatives already visible in practice: continuous scenario engagement through a standing framework like BlackRock's, dual-speed processes like the Bank of England's that pair rapid tactical sprints with slower strategic reviews, and human-AI collaboration that confines AI to signal monitoring and option generation while humans retain framing and judgment.

The Unasked Questions

The MIT Sloan article leaves several critical questions about the streamlined approach unanswered: how exactly generative AI is being deployed and with what guardrails, what measurable decisions and outcomes the Fazer and Unum engagements produced, and how framing scenarios around current users' immediate concerns avoids reinforcing the very mental models and ghost scenarios that foresight work exists to disrupt.

What Leaders Should Actually Do

For executives considering their approach to scenario planning, the research reviewed here supports a few concrete moves: maintain a standing scenario set that is updated continuously rather than rebuilt from scratch; run fast scenario sprints for tactical questions while preserving a slower cycle for challenging strategic assumptions; use AI for data synthesis and weak-signal monitoring, not for framing or final judgment; and measure success by the depth of organizational engagement with scenarios, not by how quickly they were produced.

The Ramírez team's diagnosis is correct. Traditional scenario planning is often too slow and disconnected from user needs. But their prescription may be worse than the disease. In the rush to make scenario planning faster and more efficient, we risk losing the organizational learning and assumption-challenging that makes it valuable in the first place.

The future of scenario planning isn't about speed. It's about embedding foresight thinking into organizational DNA so that leaders naturally consider multiple futures in every strategic conversation. That transformation can't be accelerated with AI. It requires patient, persistent cultural change.

Shell didn't survive the oil shocks because they had fast scenarios. They survived because scenario thinking became part of how their leaders understood the world. That's the real lesson we should take forward.
