Why Intelligence, Not Data, Defines the New Competitive Advantage
By Staff Writer | Published: February 13, 2026 | Category: Strategy
The shift from selling data to selling intelligence powered by generative AI promises to unlock unprecedented value, but only for organizations that can navigate complex technical, organizational, and ethical challenges.
Generative AI and the Shift From Data to Intelligence
The promise is seductive: Companies sitting on mountains of underutilized data can now, thanks to generative AI, transform those assets into autonomous intelligence engines that drive decisions and actions in real time. According to a recent McKinsey analysis, this represents not merely an incremental improvement in data monetization but a fundamental reimagining of how organizations extract value from information.
The thesis, articulated by Ben Ellencweig, Guilherme Cruz, and Vishnu Kamalnath, centers on a provocative claim that data itself has become commoditized. The real competitive advantage now lies in intelligence: the ability to not just analyze data but to act on it autonomously within business workflows. Their research indicates that 86 percent of business leaders believe their companies possess underutilized assets, with data topping the list at 28 percent.
This argument arrives at a critical inflection point. After decades of investment in data infrastructure, warehouses, and analytics, many organizations have achieved data competency without commensurate business impact. The McKinsey team suggests generative AI provides the missing link, enabling what they call a leap up the data, information, knowledge, and wisdom (DIKW) pyramid toward true actionable intelligence.
The Core Thesis: From Data to Intelligence
The central premise rests on generative AI's unique capabilities. Unlike previous analytics approaches, gen AI can process unstructured data at scale, create semantic connections across disparate data sources, and generate contextualized, personalized insights that adapt in real time. More significantly, through agentic AI architectures, these systems can move beyond recommendation to autonomous action.
Consider the Walmart example prominently featured in the analysis. Walmart Data Ventures launched the platform, originally named Luminate and since rebranded Scintilla, in October 2021, transforming the retailer's massive shopper behavior dataset into an intelligence platform for suppliers. The results appear compelling: 80 percent quarter-over-quarter revenue growth in year one, 173 percent year-on-year customer growth by 2024, and 100 percent renewal rates with three-year commitments. The recent addition of Scintilla Insights Activation takes this further, using AI to automatically convert insights into real-time audience targeting and advertising recommendations.
This progression illustrates the McKinsey framework. Walmart moved from possessing raw transaction data (data layer) through organized reporting (information layer) to predictive analytics about shopper behavior (knowledge layer) and finally to automated, real-time marketing activation (wisdom layer). Gen AI, the authors argue, accelerates movement through these stages and makes the wisdom layer economically viable at scale.
The Unstructured Data Opportunity
One of the more compelling supporting arguments addresses unstructured data, which comprises over 90 percent of organizational information according to IDC research. Call transcripts, emails, social media posts, images, and documents have historically remained largely untapped for monetization because traditional analytics struggled to process them efficiently.
Generative AI's natural language processing capabilities change this equation. The technology can extract structured meaning from text, create knowledge graphs that map relationships across data types, and build semantic layers that provide unified, contextualized views. This connectivity transforms isolated signals into strategic intelligence.
The practical example offered involves call center operations. A customer complaint transcript can now be automatically linked to transaction history, product SKUs, service tickets, geographic location, and purchase preferences. This enables the call center operator to send a personalized digital coupon redeemable at the customer's nearest store. The system understands not just individual data points but their relationships and implications.
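A minimal sketch makes the linkage concrete. The record types and field names below are hypothetical illustrations, not any vendor's schema; in practice a semantic layer would resolve these entities across separate operational stores (ticketing, orders, CRM).

```python
from dataclasses import dataclass, field

# Hypothetical record types standing in for separate source systems.
@dataclass
class Complaint:
    customer_id: str
    transcript: str
    mentioned_sku: str

@dataclass
class CustomerProfile:
    customer_id: str
    nearest_store: str
    recent_skus: list = field(default_factory=list)

def link_complaint(complaint, profiles):
    """Join an unstructured complaint to structured context, the way a
    semantic layer resolves one entity across disparate sources."""
    profile = profiles[complaint.customer_id]
    return {
        "customer": complaint.customer_id,
        "sku": complaint.mentioned_sku,
        "prior_purchase": complaint.mentioned_sku in profile.recent_skus,
        "coupon_store": profile.nearest_store,  # target for the digital coupon
    }

profiles = {"c42": CustomerProfile("c42", "Store #117", ["SKU-9"])}
ctx = link_complaint(Complaint("c42", "my blender broke", "SKU-9"), profiles)
print(ctx["coupon_store"])  # Store #117
```

The toy join is trivial; the hard part, as the next paragraph notes, is maintaining the ontology that makes such joins reliable at enterprise scale.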
Yet this capability also surfaces significant questions about implementation complexity. Building knowledge graphs and ontologies that accurately map business domain relationships requires substantial domain expertise and data engineering. The semantic layer must be maintained as business contexts evolve. For most organizations, this represents a capability gap that cannot be quickly closed.
The Declining Data Brokerage Model
The analysis makes a persuasive case that traditional data brokerage faces existential pressure. Three forces converge: raw data prices are falling as data becomes commoditized, regulatory scrutiny is tightening access to personal information, and synthetic data generated by AI models offers comparable performance at lower cost and risk.
Gartner research cited in the article predicts that by 2026, three-quarters of businesses will use generative AI to create synthetic customer data, up from less than 5 percent in 2023. This shift undermines the value proposition of data brokers who aggregate and resell information. If synthetic data can train models effectively while eliminating privacy concerns and licensing costs, why purchase real-world datasets?
This argument, however, may underestimate synthetic data's limitations. While synthetic data works well for certain use cases, particularly augmenting training datasets or filling gaps, it cannot fully replace real-world data for understanding actual customer behavior, market dynamics, or emerging trends. Synthetic data reflects patterns in the models that generate it, potentially missing novel signals or reinforcing existing biases.
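A toy simulation illustrates the failure mode, with made-up numbers and a deliberately naive generator rather than any real synthetic-data product: a model fitted only to the dominant pattern in "real" purchase data reproduces that pattern faithfully while erasing an emerging high-value segment.

```python
import random
import statistics

random.seed(0)

# "Real" purchase amounts: mostly around 50, plus a small
# emerging segment of high-value customers near 500.
real = [random.gauss(50, 10) for _ in range(950)] + \
       [random.gauss(500, 20) for _ in range(50)]

# A naive generator fitted only to the bulk of the data.
mu = statistics.mean(real[:950])
sigma = statistics.stdev(real[:950])
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

# The synthetic sample reproduces the dominant pattern but contains
# no trace of the emerging segment: the novel signal is lost.
print(max(real) > 400, max(synthetic) > 400)  # True False
```

Real synthetic-data pipelines are far more sophisticated, but the structural point stands: the output can only reflect patterns the generating model captured.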
The regulatory squeeze is real, particularly in jurisdictions with stringent data protection regimes like the European Union's GDPR. But this creates opportunities for compliant data monetization approaches, not just threats. Organizations that build privacy-preserving data products with proper consent frameworks may command premium pricing precisely because compliant data becomes scarcer.
The Intelligence Products Vision
The McKinsey framework identifies two primary modes for gen-AI-enabled data products: personalized content generation and real-time decision-making through agentic AI. The personalized content approach positions AI as an always-on analyst, automatically generating customized reports, commentary, and recommendations for each customer.
The automotive manufacturer example illustrates this well. By building an AI-powered analytics stack that integrated ERP, CRM, and external data into a digitized installed base, the company created a lead engine generating personalized product and service recommendations. Gen AI qualified leads and crafted tailored outreach integrated into CRM workflows, driving 15 to 25 percent increases in qualified leads and 25 to 30 percent uplifts in parts and services sales.
This represents a meaningful improvement over static analytics products. Rather than providing generic benchmarks or dashboards requiring interpretation, the system delivers decision-ready recommendations contextualized to specific situations. The value proposition shifts from “here's data” to “here's what you should do.”
The agentic AI vision goes further, envisioning autonomous systems that not only recommend actions but execute them. The financial services collections optimization example shows AI analyzing data to identify which customers to contact, when, and through which channels, then making these decisions in real time to maximize promise-to-pay rates.
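The decision logic in the collections example can be sketched in a few lines. The segments, channels, and promise-to-pay rates below are invented for illustration; a production system would estimate them from historical outcomes and update them continuously rather than hard-code them.

```python
# Toy policy: for each delinquent customer, pick the contact channel
# with the highest estimated promise-to-pay rate for their segment.
def best_action(customer, rates):
    """Return (channel, estimated_rate) maximizing expected promise-to-pay."""
    segment_rates = rates[customer["segment"]]
    channel = max(segment_rates, key=lambda ch: segment_rates[ch])
    return channel, segment_rates[channel]

# Illustrative, made-up rate estimates per segment and channel.
rates = {
    "early_delinquency": {"sms": 0.32, "email": 0.21, "call": 0.28},
    "late_delinquency": {"sms": 0.12, "email": 0.08, "call": 0.35},
}

customer = {"id": "a1", "segment": "late_delinquency"}
channel, p = best_action(customer, rates)
print(channel, p)  # call 0.35
```

The agentic step is what follows the `return`: instead of surfacing the recommendation to an analyst, the system places the call or sends the message itself.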
Looking ahead, the authors envision AI agents acting fully autonomously across complex environments. In e-commerce, embedded agents could analyze real-time user behavior to trigger intelligent upselling. Service providers could encapsulate domain expertise into autonomous agents sold as white-labeled APIs delivering “intelligence-as-a-product.”
This vision, while compelling, raises critical questions about accountability, transparency, and control that receive limited attention in the analysis. When AI agents make autonomous decisions affecting customers or business operations, who bears responsibility for errors or biases? How do organizations maintain oversight without undermining the efficiency gains from automation? What happens when multiple AI agents with different objectives interact in unexpected ways?
The Implementation Reality
The article's most valuable contribution may be its six-pillar implementation framework: strategy, go-to-market, technology, people, operations, and capital. This acknowledges that gen-AI-enabled data monetization requires enterprise-wide transformation, not just technology deployment.
The strategy pillar emphasizes grounding data products in proprietary advantages. Walmart's scale of shopper data, Bloomberg's 40 years of financial information, or domain-specific infrastructure create defensible moats. Organizations without clear proprietary advantages may struggle to differentiate AI-powered data products in increasingly crowded markets.
The go-to-market evolution from traditional software licensing to usage-based, outcome-based, and adaptive pricing models reflects the shift from selling data to selling intelligence. When products continuously adapt and deliver personalized value, rigid pricing feels misaligned. Post-sale customer success becomes critical for identifying new use cases and driving expansion revenue.
The technology requirements are substantial: cloud-native infrastructure, modular architectures, automated data pipelines, multi-agent systems, and robust governance frameworks. Bloomberg's BloombergGPT, a 50-billion-parameter domain-specific model trained on 40 years of financial data, illustrates the scale of investment required for differentiated AI capabilities.
The people challenge may prove most difficult. The blend of technical AI expertise, commercial acumen, and product thinking required to build and monetize AI-powered data products is exceptionally rare. The example of a financial institution building a 700-person data chapter with senior executive oversight demonstrates the organizational commitment required.
Operational complexity increases dramatically with gen-AI products. LLM governance, versioning, observability, regulatory compliance, data rights, and intellectual property management all require new processes and capabilities. The authors rightly highlight that unclear ownership creates legal and reputational risks, yet many organizations are still developing the frameworks to manage them.
The capital intensity cannot be overstated. Beyond initial R&D, ongoing costs for model training, inference at scale, low-latency delivery, and continuous retraining can quickly erode margins. Organizations must approach funding as both growth enabler and margin safeguard, aligning capital deployment with product maturity and customer adoption.
The Three-Stage Maturity Journey
The proposed maturity model provides useful scaffolding: internal optimization, opportunistic monetization, and full marketplace monetization. Most organizations realistically operate in stage one, using gen AI internally for efficiency gains. The leap to stage three, building standalone data businesses with robust commercial models, remains aspirational for all but the most sophisticated organizations.
This progression acknowledges that data monetization maturity develops over time. Organizations must demonstrate internal value before external customers will pay for data products. Early external monetization often occurs through partnerships with existing customers rather than open market sales.
However, the framework may underestimate how long these transitions take and how many organizations will successfully complete them. Building a scalable data product business requires capabilities many traditional enterprises lack: product management, developer relations, API design, marketplace operations, and usage-based billing infrastructure.
Critical Gaps and Counterarguments
Several important considerations receive insufficient attention in the analysis. First, the regulatory environment for AI-powered data products remains highly uncertain. The EU AI Act, potential US federal AI legislation, and sector-specific regulations like HIPAA create compliance complexity that could slow adoption significantly.
Second, the ethical implications of autonomous AI agents making decisions affecting individuals deserve deeper examination. Algorithmic bias, fairness, explainability, and accountability are not merely compliance checkboxes but fundamental design challenges. The brief mention of responsible AI frameworks understates the difficulty of implementing these practices effectively.
Third, customer receptiveness to AI-generated intelligence versus human analysis may vary significantly by domain. In high-stakes contexts like healthcare, legal, or financial advice, customers may demand human oversight and accountability that limits the economic advantages of automation.
Fourth, the competitive dynamics of AI-powered data products remain unclear. If multiple providers can access similar foundation models and data sources, sustainable differentiation may prove elusive. Network effects and data flywheels that create winner-take-most dynamics in some markets may not materialize in others.
Fifth, the analysis focuses primarily on successful examples like Walmart and Bloomberg without examining failures. Selection bias makes it difficult to assess base rates of success or identify common failure modes. How many organizations have invested heavily in AI-powered data products without achieving commercial viability?
Alternative Perspectives
Academic research on the DIKW pyramid, while widely referenced, has faced criticism for oversimplifying how knowledge is created and applied. The neat progression from data to wisdom may not reflect how intelligence actually develops in complex organizational contexts. Tacit knowledge, contextual understanding, and human judgment may not be as easily automated as the framework suggests.
Some data strategy experts argue that focusing on “intelligence” over “data” creates a false dichotomy. High-quality, well-governed data remains foundational for any AI application. Organizations that neglect data fundamentals in pursuit of AI-powered intelligence may build on unstable foundations.
Privacy advocates raise concerns that the push toward more sophisticated data monetization, even with AI-generated synthetic data and anonymization, perpetuates surveillance capitalism. The ability to extract ever more granular insights from data may conflict with individuals' reasonable expectations of privacy.
Technology analysts note that gen AI capabilities, while advancing rapidly, still face significant limitations: hallucinations, inconsistent performance, high computational costs, and difficulty with reasoning and planning. Autonomous agentic systems remain largely experimental rather than production-ready for most use cases.
Practical Recommendations
For business leaders considering AI-powered data monetization strategies, several principles emerge:
- Start with proprietary advantages. Data products succeed when they leverage unique data assets, domain expertise, or customer relationships that competitors cannot easily replicate. Generic data or commoditized analytics face margin pressure regardless of AI capabilities.
- Prioritize internal validation before external monetization. Demonstrating measurable business impact from AI-powered intelligence internally builds organizational capability and provides proof points for external customers.
- Invest in foundational capabilities before advanced use cases. Data governance, quality, integration, and security must be robust before layering AI capabilities. Technical debt in data infrastructure will constrain AI initiatives.
- Build cross-functional teams combining data science, product management, commercial strategy, and domain expertise. The capability gap is real; organizations must develop or acquire rare talent combinations.
- Design for transparency and oversight even when building autonomous systems. Human-in-the-loop architectures that preserve accountability while gaining efficiency advantages represent a pragmatic middle ground.
- Approach pricing and business models iteratively. Usage-based and outcome-based pricing align incentives but require different sales motions, customer success approaches, and revenue recognition practices than traditional software licenses.
- Plan for regulatory compliance and responsible AI from the start. Retrofitting governance frameworks onto existing AI systems is far more difficult and expensive than building them in from the beginning.
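The human-in-the-loop recommendation above can be reduced to one common pattern: the agent acts autonomously only when its confidence clears a threshold, and escalates everything else for review. The threshold value and confidence scores here are illustrative assumptions, not calibrated figures.

```python
# Confidence-gated routing: auto-execute high-confidence decisions,
# queue the rest for human review. Threshold is an assumed example value.
REVIEW_THRESHOLD = 0.90

def route(decision):
    """Return 'execute' for high-confidence decisions, else escalate."""
    if decision["confidence"] >= REVIEW_THRESHOLD:
        return "execute"
    return "human_review"

decisions = [
    {"action": "send_coupon", "confidence": 0.97},
    {"action": "close_account", "confidence": 0.62},
]
print([route(d) for d in decisions])  # ['execute', 'human_review']
```

The design trade-off is explicit: lowering the threshold captures more automation efficiency, raising it preserves more human accountability, and high-stakes actions can be forced to review regardless of confidence.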
The Path Forward
The McKinsey analysis makes a strong case that generative AI fundamentally changes the economics of data monetization. The ability to process unstructured data at scale, generate personalized insights, and embed intelligence directly into workflows creates opportunities that did not exist with previous analytics approaches.
However, the path from data assets to monetizable AI-powered intelligence is far more complex than the article suggests. Success requires not just technology deployment but enterprise-wide transformation across strategy, operations, talent, governance, and business models. The capital intensity, talent scarcity, and implementation complexity will limit which organizations can successfully execute this vision.
The organizations most likely to succeed possess several characteristics: proprietary data advantages that create defensible moats, existing product and commercial capabilities to take solutions to market, substantial capital to invest in AI infrastructure and talent, risk management and governance frameworks to address ethical and regulatory challenges, and executive commitment to multi-year transformation journeys.
For most organizations, the realistic path involves incremental progress: using gen AI for internal optimization, selectively monetizing insights with existing customers, and building capabilities over time rather than attempting to leapfrog to fully autonomous intelligence marketplaces.
The shift from data to intelligence as the primary value driver is real and significant. But the timeline for widespread adoption of AI-powered autonomous data products will likely extend longer than technology optimists predict. Implementation challenges around governance, trust, regulatory compliance, and organizational change will slow progress even as the underlying technology advances rapidly.
Business leaders should view gen-AI-enabled data monetization as a strategic imperative requiring sustained investment and transformation, not a quick win from deploying new technology. Those who approach it with appropriate ambition tempered by realistic assessment of implementation challenges will be best positioned to capture value as the market matures.
The future envisioned where AI agents autonomously negotiate data trades, create synthetic data exchanges, and orchestrate cross-industry data syndication may indeed arrive. But between today's reality and that vision lies substantial work to build technical capabilities, develop governance frameworks, navigate regulatory uncertainty, and earn customer trust in AI-generated intelligence. The organizations that execute this multi-year transformation most effectively will define the next generation of data-driven competitive advantage.