The AI Investment Paradox: Why Leaders Need Clarity, Not Panic, Over Returns
By Staff Writer | Published: February 9, 2026 | Category: Strategy
Organizations spent billions on AI in 2025, but leaders are asking the wrong question about returns. The issue isn't whether AI delivers value but whether companies know how to measure and capture it.
Where Are the Profits in AI?
The anxiety is palpable in boardrooms across corporate America. After pouring billions into artificial intelligence throughout 2025, with seven in ten CEOs committing to sustained spending in 2026, executives face a question that keeps them up at night: Where are the profits?
Korn Ferry's recent analysis captures this paradox perfectly. Leaders feel trapped between the fear of falling behind competitors if they reduce AI investments and the frustration of unclear returns if they continue spending. Meanwhile, employees worry that failing to master AI skills will cost them their jobs, with some resorting to faking competency. Nearly three-quarters of CEOs worry that AI blunders will trigger their own dismissals within two years.
I've watched enough technology transitions to recognize a familiar pattern. The problem isn't AI. The problem is that we're asking the wrong questions and measuring the wrong things.
The ROI Question Is a Red Herring
When leaders ask, "When will AI boost profits?" they're repeating a mistake corporations made with previous transformative technologies. In 1987, economist Robert Solow famously observed, "You can see the computer age everywhere but in the productivity statistics." It took another decade before those gains materialized in national productivity data.
MIT economist Erik Brynjolfsson later explained this as the "productivity J-curve." Organizations initially see productivity decline as they invest in new technology and reorganize work processes. Only after workers develop complementary skills and companies restructure operations around the technology do productivity gains appear.
We're in the trough of that J-curve with AI right now. Microsoft reported in their 2024 Work Trend Index that while 70% of Copilot users said they were more productive, actual measurable output gains were harder to quantify in traditional metrics. The reason? AI doesn't just speed up existing work. It changes what work gets done.
Consider GitHub Copilot, the AI coding assistant. Accenture's analysis of their own developers using the tool found a 50% reduction in time to complete coding tasks. But here's what matters more: developers used that saved time to write better tests, improve documentation, and tackle more complex problems they previously didn't have time to address. Traditional productivity metrics missed this entirely.
The Korn Ferry analysis notes that AI saves employees an average of 7.5 hours per week. The real question isn't whether to use this for productivity or reduced burnout. That's a false choice. Smart organizations are using it for both, strategically.
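A back-of-the-envelope calculation, assuming a 40-hour week and roughly 48 working weeks a year, shows the stakes: 7.5 hours is nearly a fifth of each employee's week, or about 360 hours a year. Across a 1,000-person organization, that is roughly 360,000 hours annually, close to 190 full-time equivalents. Capacity on that scale compounds if it's redirected deliberately and evaporates if it isn't.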
What Leaders Should Measure Instead
During my conversations with executives who feel they're getting AI right, a pattern emerges. They stopped trying to measure AI ROI as a monolithic concept and started tracking specific, granular outcomes tied to business problems.
Take Walmart's approach to AI-powered inventory management. They didn't ask "What's our AI ROI?" They asked "Can AI reduce out-of-stock incidents?" and "Can it decrease food waste?" They could measure both precisely. Out-of-stocks dropped 30% in pilot stores. Food waste fell 20%. These numbers connected directly to revenue and cost savings.
Or look at Morgan Stanley's AI assistant for financial advisors. They measured time spent searching for information (reduced by 60%), number of client interactions per day (increased by 25%), and advisor satisfaction scores (up 15 points). Each metric tied to a specific business outcome: better client service, higher advisor retention, more assets under management.
The difference between companies succeeding with AI and those struggling isn't the technology. It's measurement clarity. Successful implementers define success metrics before deployment, not after.
Here's a framework I've seen work (a short sketch of the threshold check in code follows the list):
- Identify the specific business problem: Not "improve customer service" but "reduce average handle time in customer support from 12 minutes to 8 minutes while maintaining satisfaction scores above 4.2."
- Map the current state precisely: Document exact current metrics, costs, and pain points. Klarna did this before implementing their AI customer service assistant, tracking 35 million customer interactions annually averaging 11 minutes each.
- Define success thresholds: Klarna's AI now handles two-thirds of customer service chats, performing work equivalent to 700 full-time agents. But more importantly, they defined success as maintaining or improving customer satisfaction scores, which rose from 4.1 to 4.3.
- Measure second-order effects: This is where most companies fail. When Klarna's human agents got freed from routine queries, what did they do? The company tracked how many complex cases got resolved, customer retention among high-value customers, and agent satisfaction. All three improved.
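To make the threshold step concrete, here is a minimal sketch of the kind of check a team could automate before and after a pilot. It borrows the handle-time and satisfaction numbers from the example above; the metric names and values are illustrative, not any company's actual system.

```python
# Illustrative sketch: evaluate an AI deployment against pre-agreed
# success thresholds. All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Threshold:
    metric: str
    baseline: float        # value documented before deployment
    target: float          # value that counts as success
    higher_is_better: bool

def meets_target(t: Threshold, observed: float) -> bool:
    """True if the observed post-deployment value hits the target."""
    return observed >= t.target if t.higher_is_better else observed <= t.target

# Thresholds agreed on *before* the pilot, per the framework above.
thresholds = [
    Threshold("avg_handle_time_minutes", baseline=12.0, target=8.0,
              higher_is_better=False),
    Threshold("csat_score", baseline=4.2, target=4.2,
              higher_is_better=True),
]

# Observed values after the pilot (hypothetical).
observed = {"avg_handle_time_minutes": 8.4, "csat_score": 4.3}

for t in thresholds:
    status = "met" if meets_target(t, observed[t.metric]) else "missed"
    print(f"{t.metric}: baseline {t.baseline} -> observed "
          f"{observed[t.metric]} (target {t.target}, {status})")
```

The code is trivial by design. The discipline it encodes is not: success is a comparison against numbers you committed to in advance, not a narrative assembled after the fact.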
The "when will we see profits" question assumes AI is a single investment with a single payback period. That's like asking "when will our workforce generate ROI?" The answer depends entirely on what you're having them do.
The Faker Problem Is Actually a Leadership Problem
The Korn Ferry analysis highlights a phenomenon I find particularly telling: employees feeling pressure to fake or exaggerate their AI skills. This "AI poseur" issue reveals something more fundamental than a skills gap. It reveals a trust and communication gap.
When employees feel they must pretend to have skills they lack, it means they don't feel safe admitting what they don't know. That's not an AI problem. That's an organizational culture problem that AI is exposing.
In my experience working with companies on technology transitions, organizations that handle this best make skill development a visible, celebrated process. At Salesforce, when they rolled out their Einstein AI capabilities, they created a public leaderboard for employees completing AI training modules. They made learning the norm, not something to hide.
Similarly, when Unilever deployed AI tools across their marketing teams, they paired each deployment with mandatory training and created "AI champions" in each department. These champions were explicitly not the most senior people. They were the most curious learners. This sent a clear message: asking questions and learning openly is what we value.
The solution to AI poseurs isn't better lie detection. It's creating environments where saying "I don't know how to do this yet" is acceptable and where learning resources are genuinely accessible.
According to a 2024 survey by Udemy, 60% of workers felt pressure to appear more AI-competent than they were, but only 30% had access to formal AI training from their employers. That gap is inexcusable. If you're deploying AI tools, you must deploy training simultaneously. Not as an afterthought or optional resource, but as a mandatory part of the implementation.
The CEO Fear Factor Is Misplaced
The statistic that nearly three-quarters of CEOs worry AI blunders will lead to their firing within two years struck me as particularly revealing, not about AI risks but about CEO psychology.
Boards are more sophisticated about technology risk than this fear suggests. I've sat in board meetings where AI initiatives were discussed, and directors understand they're making bets on emerging technology. What actually gets CEOs fired isn't trying new technology and having some initiatives fail. It's failing to try, falling behind competitors, and having no strategic response.
Consider the cautionary tale of IBM Watson Health. IBM invested billions in AI-powered healthcare solutions starting in 2011. By 2022, it had sold off most of the division, a retreat widely viewed as a failure. Did the bet cost CEO Ginni Rometty her job? No. She stepped down on her own terms in 2020, and the board understood IBM had made a calculated bet in an uncertain domain.
What does get CEOs in trouble is deploying AI without governance, without risk frameworks, and without clear accountability. When Facebook (now Meta) faced backlash over algorithmic bias and misinformation spread, the issue wasn't that they used AI. It was that they deployed it at scale without adequate safeguards.
CEOs should worry less about AI blunders and more about AI governance gaps. Here's what that looks like in practice:
- Establish clear decision rights: At Capital One, every AI model deployment requires sign-off from three groups: the business unit deploying it, the risk management team, and the ethics committee. This creates natural checkpoints.
- Define risk thresholds: Not all AI applications carry equal risk. An AI system recommending marketing email send times carries different risk than one making credit decisions. Companies need risk tiers, with governance appropriate to each tier (a sketch follows this list).
- Create transparency mechanisms: When Mastercard deployed AI for fraud detection, they built explanation capabilities into the system from the start. When a transaction gets flagged, the system can articulate why. This creates accountability and enables error correction.
- Plan for failure: Netflix's approach to AI recommendations includes constant A/B testing and quick rollback capabilities. When a recommendation algorithm performs poorly, they can revert to the previous version within hours, not weeks.
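To illustrate the tiered idea, governance can be expressed as a simple mapping from risk tier to required sign-offs. This is a hypothetical sketch, not Capital One's or any company's actual process; every name in it is invented.

```python
# Hypothetical sketch of tiered AI governance: higher-risk
# applications require more sign-offs before deployment.
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g., email send-time optimization
    MEDIUM = "medium"  # e.g., marketing content generation
    HIGH = "high"      # e.g., credit or hiring decisions

# Required approvals per tier (illustrative, not any company's policy).
REQUIRED_SIGNOFFS = {
    RiskTier.LOW: {"business_unit"},
    RiskTier.MEDIUM: {"business_unit", "risk_management"},
    RiskTier.HIGH: {"business_unit", "risk_management", "ethics_committee"},
}

def may_deploy(tier: RiskTier, signoffs: set[str]) -> bool:
    """A deployment clears governance only when every group
    required for its tier has signed off."""
    return not (REQUIRED_SIGNOFFS[tier] - signoffs)

# A credit-decision model with only business-unit approval is blocked.
print(may_deploy(RiskTier.HIGH, {"business_unit"}))        # False
print(may_deploy(RiskTier.HIGH, {"business_unit",
                                 "risk_management",
                                 "ethics_committee"}))     # True
```

The value of writing the policy down this way is that it becomes checkable: a high-risk deployment cannot clear governance with a single signature.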
The CEO who should worry about their job is the one who either avoids AI entirely or deploys it recklessly. The CEO who builds thoughtful governance around measured AI deployment is doing their job well, even if some initiatives fail.
The Human Integration Challenge Is the Real Work
Daniel Goleman's point, referenced in the Korn Ferry analysis, cuts to the heart of the matter: just adding AI into the workplace won't generate amazing results. The technology is only valuable when integrated thoughtfully into human work processes.
I've observed this repeatedly. The companies getting AI right aren't the ones with the most sophisticated algorithms. They're the ones who've thought hardest about the human-AI collaboration model.
At Cleveland Clinic, radiologists initially resisted AI-powered diagnostic assistance. They viewed it as threatening their expertise. The breakthrough came when the implementation team reframed the AI not as a replacement but as a junior colleague offering a second opinion. Radiologists could accept or reject AI suggestions, but they had to document why they rejected them. This created a learning loop for both humans and machines.
Within six months, radiologists who initially opposed the system became its strongest advocates. Why? Because it caught things they missed. One radiologist told me, "It's like having a well-rested colleague who never gets fatigued looking over my shoulder. I still make the final call, but I'm making better calls."
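The mechanism doing the work here is the documented rejection: radiologists could override the AI, but they had to say why, and those reasons became learning material for both sides. A minimal sketch of that kind of audit record, with every name and field hypothetical:

```python
# Minimal sketch of a human-override audit log, the kind of record
# that turns accept/reject decisions into a learning loop.
# All names and fields are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AISuggestionReview:
    case_id: str
    ai_finding: str
    accepted: bool
    rejection_reason: str = ""   # required whenever accepted is False
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        # Accepting is frictionless; rejecting without a reason is not.
        if not self.accepted and not self.rejection_reason:
            raise ValueError(
                "A rejected suggestion must document why; "
                "that documentation is what feeds the learning loop.")

review = AISuggestionReview(
    case_id="case-0421",
    ai_finding="possible nodule, right upper lobe",
    accepted=False,
    rejection_reason="artifact from patient motion, confirmed on prior scan",
)
```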
Getting there required significant change management that had nothing to do with technology:
- Radiologists received training not just on using the AI system but on when to trust it and when to question it
- Performance reviews were updated to reward appropriate AI collaboration, not just independent diagnosis
- The department head publicly shared cases where AI caught errors, normalizing the concept that AI assistance makes doctors better, not weaker
- They created forums for radiologists to discuss AI suggestions they disagreed with, building collective intelligence
None of this is plug-and-play. It requires leadership attention, cultural evolution, and patience.
The same pattern appears across industries. When JPMorgan deployed its AI contract-analysis software, COiN, the system could review commercial loan agreements in seconds, work that had previously consumed an estimated 360,000 hours of lawyers' and loan officers' time every year. But the real value came from redirecting those lawyers to higher-judgment work: negotiating better terms, identifying risk patterns, building client relationships.
This redirection didn't happen automatically. It required workforce planning, training on higher-value skills, and management systems that recognized and rewarded the new work patterns.
A Framework for Leaders Moving Forward
Based on both the challenges Korn Ferry identifies and what I've observed in organizations navigating them successfully, here's what leaders should do differently:
- First, stop asking when AI will boost profits and start asking where it already is. Demand specific metrics on specific deployments. If your team can't tell you precisely what problem an AI initiative solves and how they're measuring success, don't fund it.
- Second, treat AI implementation as a change management initiative, not a technology project. Assign your best change management leaders to AI deployments, not just your best technologists. The companies succeeding with AI have HR leaders and operational leaders co-owning implementations with IT.
- Third, make learning visible and mandatory. If you're deploying AI tools, deploy training simultaneously with clear completion requirements. Make skill development a performance expectation, not an optional perk. And measure training completion and skill acquisition as rigorously as you measure the AI deployment itself.
- Fourth, build governance before you scale. Start with low-risk deployments, learn what governance you need, and then scale both the technology and the governance together. Capital One's approach of tiered risk governance provides a useful model.
- Fifth, optimize for learning speed, not deployment speed. The companies winning with AI aren't necessarily the first movers. They're the fast learners. They deploy, measure, learn, adjust, and redeploy faster than competitors. This requires creating feedback loops and actually using them.
- Sixth, redefine productivity. The 7.5 hours per week that AI saves employees is real, but only if you intentionally direct how that time gets used. Default behavior is that saved time gets filled with more meetings and email. High-performing organizations explicitly redirect saved time toward strategic priorities: innovation, relationship building, complex problem-solving, skill development.
At Adobe, when they deployed AI tools that automated routine design tasks, they simultaneously launched a program requiring designers to spend at least four hours weekly on experimental projects. The productivity gain from AI became an innovation gain through intentional time reallocation.
The Real Risk Isn't AI, It's Aimlessness
The anxiety Korn Ferry documents is real and understandable. But it's misplaced. The risk facing organizations isn't AI investment without immediate returns. The risk is AI investment without strategic clarity.
Every transformative technology creates this moment of uncertainty. Electricity, telephones, computers, the internet—each generated similar anxiety about costs, disruption, and unclear payoffs. In each case, the winners weren't the companies that spent most or deployed fastest. They were the companies that connected technology deployment to clear business strategy.
AI is a tool, not a strategy. The companies succeeding with it know exactly what business problems they're solving, how they'll measure success, how they'll integrate AI into human workflows, and how they'll build the organizational capabilities to scale what works.
The companies struggling ask vague questions like "How do we use AI?" and "When will we see ROI?" These questions have no useful answers.
Instead, ask: What are our top ten business problems where AI could help? How will we measure improvement on each? What organizational capabilities do we need to build? How will we govern risk? How will we help our people develop new skills? How will we learn and adjust as we go?
Those questions have answers. And pursuing those answers systematically is what separates AI success from AI theater.
The 70% of CEOs planning to maintain AI spending in 2026 are right to do so. But they should spend with purpose, measure with precision, and lead with clarity. The technology works. The question is whether leadership will.