Why AI Automation Is More Marathon Than Sprint and What Leaders Must Do
By Staff Writer | Published: January 1, 2026 | Category: Digital Transformation
The narrative of rapid AI-driven job displacement misses a critical reality: automation advances unevenly, with friction factors determining which tasks transform quickly and which remain stubbornly human.
The prevailing narrative around artificial intelligence and work has oscillated between two extremes: breathless predictions of mass unemployment on one hand, and dismissive assurances that this time is no different on the other. Both perspectives miss the nuanced reality unfolding in organizations today. In their recent MIT Sloan Management Review article, Will Drover and Laura Huang offer a more sophisticated framework for understanding how AI actually transforms work, arguing that automation follows an uneven path shaped by distinct friction factors. This perspective deserves serious attention from business leaders, though it also warrants critical examination and extension.
The Jagged Frontier Is Real But More Complex Than Presented
The authors correctly identify that AI's capabilities form what researchers have termed a "jagged frontier," excelling dramatically at certain tasks while struggling with others. This observation aligns with empirical evidence from multiple domains. A 2023 study published in Science examined GPT-4's performance across professional tasks and found striking variability: the system matched or exceeded expert performance on standardized tests in law and medicine, yet struggled with tasks requiring spatial reasoning or multi-step physical world understanding.
However, the jagged frontier metaphor, while useful, obscures an important reality: the frontier itself is constantly shifting, and not always in predictable directions. Consider radiology, frequently cited as ripe for AI disruption. Early AI systems showed promise in detecting specific pathologies in controlled datasets, leading to predictions of radiologist obsolescence. Yet the reality has proven far more complex. AI has indeed transformed certain narrow detection tasks, but radiologists' roles have simultaneously expanded to include AI system oversight, integration of multimodal data, and complex clinical decision support that current AI cannot replicate.
This suggests that the three-stage model of assist, reshape, and replace, while conceptually clean, may not capture the recursive nature of human-AI work evolution. Roles don't necessarily progress linearly through these stages. Instead, they often oscillate, with humans reclaiming tasks from AI when systems prove unreliable, or finding that automation of routine tasks reveals new layers of complexity requiring human judgment.
The Friction Factor Framework Needs Expansion
The authors identify key friction factors including repetition, regulation, judgment, and human assurance that slow AI adoption. This framework provides valuable analytical purchase, but it requires significant expansion to fully account for the forces shaping AI integration in organizations.
First, the framework underemphasizes organizational and cultural friction. Research from MIT's Initiative on the Digital Economy reveals that technical capability represents only a fraction of successful AI implementation. A 2024 study examining AI adoption across 3,000 firms found that organizational factors like change management capability, worker trust, and middle manager buy-in predicted successful implementation far more reliably than technical readiness or task suitability for automation. Leaders facing resistance to AI-enabled process changes often discover that the primary friction isn't technical but social.
Second, the framework doesn't adequately address economic friction. Even when AI can technically perform a task, adoption depends on complex cost calculations that extend beyond simple labor replacement. Implementation costs, integration with legacy systems, ongoing maintenance, and risks of system failure all factor into adoption decisions. A manufacturing firm might technically be able to automate quality inspection, but if the fully loaded cost of the AI system exceeds current labor costs when factoring in failure risks and integration complexity, rational economic actors will delay adoption regardless of technical capability.
Third, the authors mention regulation but don't fully explore how regulatory uncertainty creates distinct patterns of friction. In healthcare, financial services, and other heavily regulated industries, the absence of clear regulatory frameworks for AI accountability creates a chilling effect on adoption. Organizations rationally hesitate to deploy systems when liability frameworks remain undefined. This regulatory friction doesn't necessarily correlate with technical capability or task complexity.
The Task-Level Analysis Reveals Critical Leadership Opportunities
The article's emphasis on task-level rather than job-level analysis represents its most valuable contribution. The McKinsey research cited, which projects diverging demand trajectories for different occupations based on their task composition, points toward a fundamental shift in how leaders should think about workforce planning.
Traditional workforce planning operates at the job or role level, asking questions like: How many accountants do we need? How many customer service representatives? The task-level lens reframes these questions: What specific tasks comprise these roles? Which tasks face low friction for automation? Which require continued or expanded human involvement? This analytical shift enables far more sophisticated strategic planning.
Consider software development, where AI coding assistants like GitHub Copilot have achieved significant adoption. A task-level analysis reveals that these tools excel at code completion, boilerplate generation, and syntax correction—tasks that are repetitive, have clear correctness criteria, and involve minimal contextual judgment. They struggle with system architecture decisions, security considerations, and understanding complex business requirements. Rather than replacing developers, these tools are reshaping the role toward higher-level design and judgment while automating routine implementation.
This pattern appears across domains. In legal services, AI excels at document review and precedent search but struggles with legal strategy and client counseling. In financial services, AI handles transaction processing and fraud detection but requires human oversight for complex risk assessment and relationship management. The common thread: AI automation advances fastest for tasks that are repetitive, rule-based, and have clear success criteria, while human judgment remains essential for tasks involving ambiguity, context, and stakeholder management.
For leaders, this task-level perspective creates opportunities for proactive role redesign. Rather than waiting for AI to automate jobs, forward-thinking organizations can systematically identify high-friction automation tasks, invest in augmentation tools, and simultaneously redesign roles to emphasize uniquely human capabilities. This approach requires significant change management investment but offers a path toward productivity gains without the organizational trauma of mass displacement.
The Midpoint Scenario May Underestimate Transformation Speed
The article cites McKinsey's midpoint scenario projecting significant but manageable workforce shifts by 2030. This framing may provide false comfort to leaders unprepared for potentially faster transformation in specific domains.
Recent advances in large language models and multimodal AI systems have compressed timelines that seemed reasonable just two years ago. Tasks that appeared to require irreducible human judgment—creative writing, complex analysis, strategic planning—are increasingly performed at acceptable quality levels by AI systems. A 2024 study from researchers at MIT and the University of Pennsylvania found that GPT-4 could perform at the level of skilled professionals on 80 percent of tasks across 19 different occupations, up from less than 50 percent just two years prior.
This acceleration creates particular challenges for organizations in the reshape stage. Companies investing heavily in augmentation strategies may find their newly redesigned roles become automatable faster than expected. The half-life of role redesign may be shrinking, requiring more flexible organizational structures and continuous reskilling rather than one-time role transformations.
Moreover, the McKinsey projections assume relatively linear AI capability growth. But AI development has historically proceeded in fits and starts, with sudden capability jumps following architectural innovations. The possibility of breakthrough advances in areas like reasoning, planning, or physical world understanding could rapidly shift tasks from high-friction to low-friction categories.
Leaders should therefore adopt scenario planning approaches that include not just midpoint projections but also accelerated transformation scenarios. What if capabilities expected in 2030 arrive in 2027? Which critical roles become vulnerable? What reskilling investments would need to accelerate? This kind of strategic contingency planning remains rare but increasingly necessary.
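The stress test described above can be made concrete. The sketch below is purely illustrative: the role names, capability-arrival years, and reskilling lead times are invented assumptions, not projections from the article or from McKinsey. The point is the mechanics of asking "what if 2030 capabilities arrive in 2027?" against a workforce plan.

```python
# Hypothetical sketch: stress-testing a workforce plan against an
# accelerated-capability scenario. All role data below are illustrative
# assumptions, not projections from the article or McKinsey.

ROLES = {
    # role: (year automation capability is expected under the midpoint
    #        scenario, years of reskilling lead time the role needs)
    "claims processing": (2030, 2),
    "contract review": (2029, 3),
    "tier-1 support": (2027, 1),
}

def reskilling_deadlines(roles, acceleration_years=0, current_year=2026):
    """Return roles whose reskilling must begin by current_year, after
    shifting capability arrival earlier by `acceleration_years`."""
    must_start_now = {}
    for role, (arrival, lead_time) in roles.items():
        start_by = arrival - acceleration_years - lead_time
        if start_by <= current_year:
            must_start_now[role] = start_by
    return must_start_now

midpoint = reskilling_deadlines(ROLES)
accelerated = reskilling_deadlines(ROLES, acceleration_years=3)

print("Start reskilling now (midpoint):", sorted(midpoint))
print("Start reskilling now (accelerated):", sorted(accelerated))
```

Under the midpoint assumptions only two roles demand immediate action, but pulling capability arrival forward by three years puts every role past its reskilling deadline, which is precisely the contingency gap the scenario exercise is meant to expose.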
Human Assurance as Competitive Advantage
One of the friction factors the authors identify—human assurance—deserves particular attention as a strategic consideration. Many services maintain human involvement not because AI lacks capability but because customers value human interaction for trust, empathy, or status reasons.
This dynamic creates interesting strategic choices. Some organizations may choose to maintain human touchpoints as a premium service differentiator, even as competitors automate. Others may find that customer preferences shift faster than expected, as younger demographics show greater comfort with AI interaction.
Research from the Customer Contact Council found that customer preferences for human versus AI interaction vary dramatically by context. For routine transactions, most customers prefer speed and convenience over human interaction. But for high-stakes decisions, complaint resolution, or emotionally charged situations, preference for human interaction remains strong and shows little sign of changing.
This suggests a strategic segmentation opportunity. Rather than blanket automation or retention of human roles, sophisticated organizations will segment customer interactions by stakes, complexity, and emotional valence, deploying AI for routine interactions while preserving and even enhancing human involvement for high-value situations. This approach requires careful service design but offers a path toward cost efficiency without sacrificing customer satisfaction.
The Overlooked Challenge of AI-Enhanced Inequality
While the article focuses on task-level transformation, it doesn't adequately address how AI-driven automation may exacerbate workplace inequality. Research from the National Bureau of Economic Research has found that AI adoption tends to disproportionately benefit high-skill workers who can effectively direct and supervise AI systems, while displacing middle-skill workers whose tasks become automatable.
This dynamic creates particular challenges for organizational leaders committed to equity and inclusion. As AI automates routine tasks, organizations face pressure to shed workers who performed those tasks while retaining and rewarding workers who can effectively leverage AI for higher-order work. Without intentional intervention, this pattern risks creating a two-tier workforce.
Forward-thinking organizations are addressing this through aggressive reskilling programs that help workers transition from automatable tasks to AI-augmented roles. AT&T's Future Ready initiative, which has reskilled over 200,000 workers for digital roles, provides one model. But such programs require sustained investment and leadership commitment that extends beyond typical training budgets.
Leaders must also grapple with uncomfortable questions about the pace and scope of reskilling. Can all workers displaced by automation be successfully reskilled? What responsibility do organizations bear for workers whose skills become obsolete? These questions extend beyond operational considerations into the realm of corporate social responsibility and stakeholder capitalism.
Strategic Implications for Workforce Planning
The uneven nature of AI automation creates several imperatives for strategic workforce planning.
First, organizations need sophisticated task-level workforce analytics. Traditional headcount planning by role or function provides insufficient granularity. Leaders need detailed understanding of what tasks their workers actually perform, which tasks face low friction for automation, and which require continued human involvement. This requires investment in workforce analytics capabilities that most organizations currently lack.
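One minimal form such task-level analytics could take is sketched below. The task names, hours, and friction scores are hypothetical placeholders, and the equal-weighted scoring across the article's four friction factors is a simplifying assumption, not a method from the article.

```python
# Hypothetical sketch: triaging the tasks within a role by automation
# friction. Tasks, hours, and 1-5 scores are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_week: float
    # Friction scores on a 1-5 scale (5 = high friction, slow to automate)
    repetition: int       # low repetition scores as higher friction
    regulation: int
    judgment: int
    human_assurance: int

    def friction(self) -> float:
        """Equal-weighted average across the four friction factors."""
        return (self.repetition + self.regulation +
                self.judgment + self.human_assurance) / 4

def triage(tasks: list[Task], threshold: float = 3.0):
    """Split a role's tasks into automation candidates vs. human-retained."""
    automate = [t for t in tasks if t.friction() < threshold]
    retain = [t for t in tasks if t.friction() >= threshold]
    return automate, retain

role = [
    Task("invoice data entry", 10,
         repetition=1, regulation=2, judgment=1, human_assurance=1),
    Task("vendor dispute resolution", 6,
         repetition=4, regulation=3, judgment=5, human_assurance=5),
    Task("monthly close review", 8,
         repetition=2, regulation=4, judgment=4, human_assurance=3),
]

automate, retain = triage(role)
print("Low-friction candidates:", [t.name for t in automate])
print("Human-retained tasks:", [t.name for t in retain])
```

Even this toy version illustrates the shift the section describes: the unit of analysis is the task, not the headcount, and the output is a redesign agenda (automate data entry, deepen the judgment-heavy work) rather than a layoff number.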
Second, workforce planning must become more dynamic and scenario-driven. The days of five-year workforce plans based on linear extrapolation are over. Organizations need planning processes that continuously monitor AI capability advances, reassess automation friction for specific tasks, and adjust workforce strategies accordingly.
Third, reskilling must shift from episodic to continuous. When automation was confined to routine physical tasks, reskilling could occur through periodic interventions. As AI increasingly automates cognitive tasks, the half-life of skills shortens. Organizations need to build cultures and systems for continuous learning rather than point-in-time training.
Fourth, role design must become a core organizational capability. As the authors note, most jobs will evolve rather than disappear entirely. But this evolution requires intentional redesign of roles to emphasize tasks that remain distinctively human. This capability—understanding how to decompose and redesign roles around changing technology capabilities—will increasingly differentiate successful from struggling organizations.
The Leadership Challenge Goes Beyond Technical Understanding
The article provides a valuable framework for understanding AI's uneven progress, but implementing insights from this framework requires leadership capabilities that extend well beyond technical understanding of AI.
Leaders must navigate profound organizational change while maintaining employee trust and engagement. Workers understandably feel anxious about AI adoption, and reassurances that jobs will evolve rather than disappear ring hollow without concrete examples and committed reskilling investment.
Effective leaders in this transition demonstrate several characteristics. They communicate transparently about AI adoption plans and their implications for work. They involve workers in redesigning roles rather than imposing changes from above. They invest credibly in reskilling and show commitment to retaining workers through transitions. And they think carefully about how to maintain organizational culture and cohesion as work fundamentally changes.
Research on organizational change suggests that middle managers play a particularly critical role in AI adoption. They serve as translators between executive strategy and frontline reality, and their buy-in or resistance significantly impacts implementation success. Yet middle managers often feel most threatened by AI adoption, as many middle management tasks face moderate friction for automation. Organizations that successfully navigate AI transitions invest heavily in helping middle managers understand their evolving roles and develop capabilities for AI-augmented management.
Moving Forward With Strategic Clarity
The reality that AI automation progresses unevenly, with friction factors shaping adoption speed, should neither comfort nor alarm leaders. Instead, it should drive strategic clarity and action.
The uneven nature of AI progress means organizations have time to prepare, plan, and invest in workforce transitions. But it also means that competitive advantage will increasingly flow to organizations that understand friction factors, identify low-friction automation opportunities, and successfully redesign work around human-AI collaboration.
Leaders should resist both extremes—neither assuming AI will rapidly automate entire functions, nor dismissing AI as merely the latest technology hype. The truth lies in careful, task-level analysis of where AI can create value, what friction factors slow adoption, and how roles can be redesigned to combine AI efficiency with human judgment.
The organizations that thrive in this transition will be those that develop sophisticated capabilities in workforce analytics, role redesign, and continuous reskilling. They will segment work carefully, automating where friction is low while investing in human capabilities where friction remains high. And they will recognize that successful AI adoption is ultimately less about technology selection than about organizational change management.
The forces shaping AI's uneven progress—judgment requirements, regulation, human assurance needs, and error tolerance—provide a useful framework for analysis. But leaders must extend this framework to include organizational, cultural, and economic friction factors. They must plan not just for midpoint scenarios but for potential acceleration of AI capabilities. And they must grapple honestly with how AI-driven automation may exacerbate inequality while building organizational capabilities to navigate these challenges.
The future of work won't be determined by AI capabilities alone, as the authors correctly note. But neither will it be determined primarily by the technical friction factors they identify. Instead, it will be shaped by the quality of leadership in organizations implementing AI, the thoughtfulness of role redesign, the commitment to reskilling workers, and the organizational capabilities for managing continuous change. The marathon has begun, but success requires more than understanding the terrain. It requires building organizational endurance, adaptability, and commitment to bringing workers along on the journey.