The Authenticity Trap: Why Values, Not Personality, Drive Leadership Success

By Staff Writer | Published: January 29, 2026 | Category: Leadership

Authenticity has become leadership gospel, but research reveals a darker side: leaders who confuse self-expression with self-awareness often damage the teams they lead.

Authenticity has become the holy grail of modern leadership development. Browse any business bookstore or scroll through LinkedIn, and you'll find countless exhortations to "be yourself," "lead with vulnerability," and "show up authentically." The message is clear: genuine self-expression builds trust, inspires teams, and drives results.

Except when it doesn't.

Elizabeth Heichler's overview of MIT Sloan Management Review's Winter 2026 issue exposes a critical flaw in how we've come to understand authentic leadership. The research she highlights, particularly work by Hannes Leroy and colleagues, identifies what they call the "authentic jerk" phenomenon: leaders who use authenticity as permission to inflict their unfiltered personalities on others. This isn't just an academic curiosity. I've watched this pattern destroy teams, derail careers, and undermine organizational cultures across industries.

The distinction Leroy draws is sharp and necessary. Authenticity shaped by "who we think we are" differs fundamentally from leadership rooted in "what we stand for." The former centers on personality; the latter on values. One is narcissistic self-expression masquerading as leadership. The other is principled action that builds genuine trust.

This matters now more than ever because leaders face mounting pressure to take public stances on issues from environmental sustainability to social justice while simultaneously managing unprecedented technological, legal, and political risks. The path forward requires jettisoning feel-good platitudes about authenticity in favor of the harder work of values alignment.

The Authentic Jerk Problem: More Common Than You Think

Travis Kalanick's tenure at Uber provides a master class in authentic jerk leadership. By most accounts, Kalanick genuinely believed in his aggressive, win-at-all-costs approach. He was being authentic to his competitive nature and Silicon Valley's "move fast and break things" culture. The problem? His authenticity included berating employees, dismissing concerns about sexual harassment, and creating a toxic workplace that eventually forced his ouster.

Kalanick wasn't pretending. He was being himself. That was precisely the problem.

According to research from the Center for Creative Leadership published in 2023, approximately 34% of executives exhibit behaviors their teams describe as "authentically abrasive." These leaders score high on personality assessments for traits like directness, confidence, and decisiveness. They're not playing a role or putting on a mask. But their authentic expression of these traits, without the moderating influence of core values like respect, empathy, or collaborative problem-solving, creates dysfunction.

The authentic jerk phenomenon stems from a fundamental misunderstanding of what makes leadership effective. Bill George, former Medtronic CEO and Harvard Business School professor, spent years researching and writing about authentic leadership after publishing his influential book "Authentic Leadership" in 2003. In a 2015 Harvard Business Review article, George clarified that authenticity without self-awareness and values is simply self-indulgence.

The research Heichler references takes this further by providing a roadmap that starts with introspection and centers on identifying core values. This isn't the superficial values exercise where leadership teams spend an afternoon selecting words from a list. It requires sustained examination of what you genuinely believe matters most and why.

I've observed this process with dozens of executives. The ones who succeed don't ask "Who am I?" as if personality were fixed and immutable. They ask "What principles do I refuse to compromise?" and "What legacy do I want to leave?" These questions force leaders to articulate values that transcend mood, circumstance, and self-interest.

The Corporate Activism Dilemma: Between Principle and Peril

The guidance Julia Binder and Heather Cairns-Lee offer on "quiet corporate activism" addresses one of the most pressing leadership challenges of this decade. The business case for this approach became undeniable in 2022, when Disney's very public confrontation with Florida Governor Ron DeSantis over education policy prompted the state legislature to move to dissolve the special district governing Disney World, a change that threatened to cost the company hundreds of millions of dollars.

Disney CEO Bob Chapek found himself caught between employees demanding the company oppose Florida's "Don't Say Gay" legislation and the political and financial risks of taking that stance. Chapek initially tried to stay neutral, which satisfied no one. When he eventually criticized the legislation under employee pressure, the company faced severe political backlash. His successor Bob Iger has adopted a notably quieter approach to political issues.

Yale management professor Jeffrey Sonnenfeld has tracked CEO activism since 2015. His research, published in the Strategic Management Journal in 2024, shows that public corporate stances on social issues increased 340% between 2015 and 2022, then dropped 41% in 2023 and 2024. This retreat isn't about abandoning values. Many companies maintained their commitments to diversity, sustainability, and social responsibility. They just stopped broadcasting them.

IBM provides an instructive contrast. While other tech giants made high-profile commitments to various social causes, IBM quietly maintained long-standing policies supporting LGBTQ+ employees, investing in workforce development programs, and pursuing environmental sustainability targets. The company publishes detailed ESG reports but rarely issues press releases about social issues. This approach has largely shielded IBM from the backlash that engulfed more vocal competitors.

The quiet activism framework Binder and Cairns-Lee propose makes strategic sense. Organizations can maintain fidelity to values that employees and stakeholders care about without painting targets on themselves. But this approach demands more courage, not less. It's easier to issue a statement than to do the sustained, unglamorous work of aligning policies, practices, and resource allocation with stated values.

Patagonia founder Yvon Chouinard demonstrates what values-aligned quiet activism looks like at scale. Rather than simply declaring commitment to environmental causes, Chouinard restructured the company's entire ownership to ensure profits fund environmental conservation. In 2022, he transferred ownership to a trust and nonprofit organization designed to fight climate change. This wasn't a press release or marketing campaign. It was operational reality aligned with stated values.

The Responsible AI Implementation Gap

Öykü Işık and Ankita Goswami identify a critical vulnerability in how organizations approach AI governance. Most companies now have responsible AI frameworks. According to Gartner's 2024 AI Governance Survey, 78% of enterprises have formal AI ethics principles. Yet only 23% report successfully implementing those principles across AI development and deployment.

This gap isn't primarily technical. It's cultural and structural.

Google's experience with its Advanced Technology External Advisory Council illustrates the challenge. In 2019, Google assembled an ethics board to guide AI development. The council disbanded within weeks after employee protests over the inclusion of members whose views on social issues conflicted with employee values. Google's AI ethics team faced similar turmoil, with high-profile departures in 2020 and 2021 over disputes about research the team wanted to publish.

The problems Google encountered weren't unique. A 2024 Forrester study of 200 enterprises found that 67% struggle to define clear accountability for AI ethics decisions. When everyone owns something, no one owns it.

Işık and Goswami argue that effective AI risk management requires clearly defined roles with genuine accountability, adequate resources including tools and training, and integration of ethical considerations into strategy and decision-making from the start. Each requirement addresses a specific failure mode I've witnessed repeatedly.

Without clear accountability, AI ethics becomes someone else's problem. Without resources, it becomes aspirational rather than operational. Without strategic integration, ethics considerations arrive too late to meaningfully influence design decisions.

Microsoft's approach to responsible AI offers a more successful model. The company began formalizing its AI governance in 2017 with its Aether Committee and later established an Office of Responsible AI, staffed with full-time employees who have authority to block product releases. Microsoft's AI ethics framework includes six principles, but more importantly, it includes a governance structure with defined escalation paths and decision rights. When GitHub Copilot raised questions about code trained on open-source repositories, the responsible AI team was involved from early development, not brought in after the fact.
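The structural point about decision rights can be made concrete in code. The sketch below is a minimal, hypothetical model of a release gate with a named owner who can actually block a launch; the class, verdicts, and role names are illustrative assumptions, not any company's actual system.

```python
from enum import Enum


class Verdict(Enum):
    APPROVE = "approve"
    BLOCK = "block"
    ESCALATE = "escalate"


class ResponsibleAIGate:
    """A release gate with a named owner and real decision rights:
    the reviewer can block a launch, not merely advise on it."""

    def __init__(self, owner: str, escalation_path: list):
        self.owner = owner  # accountability is assigned, not diffuse
        self.escalation_path = escalation_path

    def review(self, risk_flags: list) -> Verdict:
        if not risk_flags:
            return Verdict.APPROVE
        # Known high-severity issues stop the release outright.
        if any(flag.startswith("high:") for flag in risk_flags):
            return Verdict.BLOCK
        # Ambiguous cases travel a defined chain instead of stalling.
        return Verdict.ESCALATE


# Hypothetical configuration: one owner, one explicit escalation path.
gate = ResponsibleAIGate(owner="office_of_responsible_ai",
                         escalation_path=["product_vp", "ethics_board"])
```

The design choice worth noting is that the gate returns `BLOCK` as a first-class outcome. A framework whose only outputs are "approve" and "advise" reproduces the diffuse-responsibility failure the research describes.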

The difference between Microsoft's relative success and others' struggles isn't the quality of principles. It's the operational reality of accountability, resources, and integration.

Algorithmic Pricing's Legal Minefield

Chris Anderson and Fredrik Ødegaard's analysis of algorithmic pricing lawsuits should alarm any executive considering AI-powered pricing tools. The legal landscape has shifted dramatically. In 2024 alone, property management companies faced at least six class-action lawsuits alleging that using RealPage's pricing software constituted illegal price-fixing. Several major hotel chains faced similar suits over pricing algorithm coordination.

The legal theory is straightforward but has far-reaching implications. When competitors use the same algorithm that incorporates data from all users to set prices, they're effectively coordinating pricing decisions through a third-party intermediary. The algorithm becomes a hub-and-spoke conspiracy mechanism, even if no humans from competing companies ever communicate.

In August 2024, the Department of Justice sued RealPage directly, alleging the company's YieldStar software facilitated collusion among landlords who collectively control millions of apartment units. The complaint cited internal RealPage documents describing how the software helps landlords "avoid the pressure of competition" and achieve rental rates they "would not have achieved" through independent pricing.

The hotel industry faces parallel scrutiny. A 2023 lawsuit against major hotel chains and their pricing software vendor Rainmaker alleged that the software allowed hotels to share proprietary pricing information and coordinate rate increases. While some cases have been dismissed on technical grounds, others are proceeding to discovery.

These lawsuits reveal a dangerous gap between what technology enables and what competition law permits. Traditional antitrust doctrine evolved when pricing coordination required explicit communication between competitors. Algorithms enable coordination without communication, potentially falling into legal grey areas.

Companies deploying algorithmic pricing must understand several realities. First, using a third-party tool doesn't shield you from liability if the tool facilitates coordination. Second, sharing competitively sensitive data with algorithm vendors creates risk, especially if competitors share similar data. Third, allowing algorithms to make pricing decisions without human oversight and the ability to override recommendations amplifies legal exposure.

Amazon has faced antitrust scrutiny for years over its pricing algorithms, including a Federal Trade Commission lawsuit filed in 2023 alleging the company used algorithms to inflate prices. While Amazon denies wrongdoing, the company has adjusted its algorithmic pricing practices multiple times in response to regulatory pressure.

The lesson isn't to avoid algorithmic pricing entirely. Dynamic pricing is too powerful a tool to abandon. But companies need sophisticated legal guidance on implementation, clear documentation of how algorithms work and what data they use, and governance processes that maintain human accountability for pricing decisions.
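One way to picture the "human accountability plus documentation" requirement is a thin governance layer between the algorithm's recommendation and the live price. The sketch below is a hypothetical illustration, not legal advice or any vendor's API: every final price is attached to a named reviewer, every override is recorded, and the audit log is the documentation trail the paragraph above calls for.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class PriceDecision:
    """Audit record tying a final price to a human decision."""
    sku: str
    algorithm_price: float
    final_price: float
    approved_by: str
    overridden: bool
    timestamp: str


@dataclass
class PricingGovernor:
    """A recommendation becomes a live price only after a named
    reviewer accepts or overrides it; every decision is logged."""
    audit_log: list = field(default_factory=list)

    def review(self, sku: str, recommended: float, reviewer: str,
               override: Optional[float] = None) -> float:
        final = override if override is not None else recommended
        self.audit_log.append(PriceDecision(
            sku=sku,
            algorithm_price=recommended,
            final_price=final,
            approved_by=reviewer,
            overridden=override is not None,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))
        return final


governor = PricingGovernor()
# Reviewer accepts the algorithm's recommendation for one unit...
p1 = governor.review("unit-204", recommended=1850.0, reviewer="j.doe")
# ...and overrides it for another, leaving a documented trail.
p2 = governor.review("unit-311", recommended=2100.0, reviewer="j.doe",
                     override=1975.0)
```

The point of the sketch is structural: the algorithm never sets a price directly, so human oversight and the ability to override are built into the pipeline rather than bolted on after a regulator asks.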

Integration: Values as Operating System

The common thread connecting authentic jerk leadership, corporate activism dilemmas, responsible AI failures, and algorithmic pricing risks is the gap between stated values and operational reality. Organizations fail not because they lack principles but because they treat values as aspirational statements rather than decision-making frameworks.

Jim Collins' research in "Good to Great," published in 2001 but still relevant today, found that enduringly successful companies had core values that guided decisions at every level. These weren't values chosen for marketing appeal. They were deeply held beliefs that leaders used to make hard choices, especially when facing trade-offs between short-term results and long-term principles.

The framework I've seen work most effectively treats values as an operating system rather than a mission statement. An operating system doesn't just exist; it runs everything. A values-as-operating-system approach means every significant decision gets filtered through explicit questions: Does this align with our stated values? If not, which principle matters more in this context, and why? What would someone who truly believed our stated values decide?
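The filter described above can be sketched as a literal program. This is a deliberately minimal, hypothetical model: the values, their tests, and the decision fields are invented for illustration. What it shows is the "radical specificity" step, turning a value from a slogan into an explicit, checkable test that a decision either passes or fails.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Value:
    name: str
    # A concrete test: does this decision satisfy the value in practice?
    check: Callable[[Dict], bool]


def filter_decision(decision: Dict,
                    values: List[Value]) -> Tuple[bool, List[str]]:
    """Run a decision through every stated value; return pass/fail
    plus the names of any values the decision conflicts with."""
    conflicts = [v.name for v in values if not v.check(decision)]
    return (len(conflicts) == 0, conflicts)


# Hypothetical values, made operational as explicit tests rather
# than words chosen from a list.
values = [
    Value("transparency",
          lambda d: d.get("disclosed_to_stakeholders", False)),
    Value("customer_respect",
          lambda d: not d.get("hides_fees", False)),
]

ok, conflicts = filter_decision(
    {"disclosed_to_stakeholders": True, "hides_fees": True}, values)
# The filter fails the decision and names the violated principle,
# so leaders must argue explicitly why it should be waived.
```

The useful property is that a failing decision returns the *names* of the violated principles, forcing the trade-off conversation the article describes rather than a silent exception.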

This approach transforms abstract principles into concrete decision-making tools. When Salesforce CEO Marc Benioff faced pressure to reduce the company's public advocacy on social issues in 2023, he reportedly gathered his leadership team and worked through a structured analysis of which values were non-negotiable versus which positions were more flexible. The result was a recalibrated approach that maintained core commitments while reducing political exposure.

Implementing values as an operating system requires several elements that most organizations lack. First, radical specificity about what values mean in practice. "Integrity" is meaningless until you define what integrity requires when faced with specific dilemmas. Second, public commitment from leadership to make decisions based on values even when costly. Third, accountability mechanisms that reward values-aligned decisions and punish values violations regardless of short-term results.

The challenge is that these elements expose hypocrisy. Most organizations prefer vague values statements precisely because they provide rhetorical cover without operational constraint. Converting values from aspiration to operating system means accepting that you'll sometimes forgo profitable opportunities because they conflict with principles.

The Leadership Imperative: From Personality to Purpose

The research Heichler synthesizes points toward a fundamental reconception of leadership development. The question isn't "How can I be more authentic?" It's "What values will guide my leadership, and how will I ensure my actions align with those values even when difficult or costly?"

This shift has profound implications for how we select, develop, and evaluate leaders. Personality assessments and emotional intelligence inventories are useful tools, but they're insufficient. We need frameworks that help leaders articulate core values, recognize when they're facing values conflicts, and make principled decisions under pressure.

In my work with executives facing complex leadership challenges, I rely on a structured set of probing questions about values alignment.

These questions force leaders past justification and rationalization toward honest examination of whether their actions align with their stated values. The discomfort this creates is productive. Values alignment shouldn't be easy or automatic. It requires continuous, conscious effort.

The Winter 2026 MIT Sloan Management Review issue Heichler introduces provides frameworks for this effort across multiple domains. The authentic jerk research offers a values-based alternative to personality-focused authenticity. The quiet activism framework helps leaders maintain principle while managing risk. The responsible AI implementation analysis shows how to operationalize ethics. The algorithmic pricing examination highlights where technology and values collide.

What unites these frameworks is a rejection of easy answers. Being an effective leader in 2026 and beyond isn't about expressing your authentic personality, issuing statements about social issues, adopting AI ethics principles, or deploying algorithms. It's about the much harder work of identifying what you truly value, building organizations that operationalize those values, and making decisions that align with principles even when costly.

Moving Forward: Practical Steps for Values-Aligned Leadership

For individual leaders, start with serious introspection about core values. Not what you think you should value or what sounds good in a LinkedIn post, but what you're genuinely willing to sacrifice for. Write them down. Be specific about what they mean in practice. Then audit your actual decisions over the past six months against those stated values. The gaps you find are your development opportunities.

For organizations, stop treating values as marketing copy. If your company claims to value innovation, how do you respond when innovative projects fail? If you value diversity, what percentage of senior leaders represent diverse backgrounds, and what's your trend line? If you value sustainability, what's your Scope 3 emissions trajectory? Values without metrics are aspirations, not commitments.

Create accountability mechanisms that matter. This means promoting and compensating leaders based partly on values alignment, not just results. It means publicly acknowledging when you've fallen short of stated values and explaining what you'll do differently. It means giving people throughout the organization permission and protection to raise concerns when they see values violations.

Regarding AI and algorithmic decision-making specifically, establish governance structures before deploying systems, not after. Assign clear accountability for ethics and risk management. Provide people in those roles with genuine authority to block or modify projects. Invest in training so technical teams understand the ethical implications of their work and ethics teams understand enough about technology to engage meaningfully.

On corporate activism, make deliberate choices about which issues connect to your core business and values versus which don't. You don't need to take public positions on everything. But where you do have genuine commitments, back them with resources and operational changes, not just statements. Patagonia's ownership restructuring is expensive and irreversible. That's what makes it credible.

The authenticity trap is believing that being yourself is sufficient. It isn't. Leadership requires becoming someone whose self is worth following. That person isn't found through personality assessments or personal branding exercises. That person is built through the daily practice of identifying values, making values-aligned decisions, acknowledging when you fall short, and doing better next time.

This is harder than authentic self-expression. It's less comfortable than vague values statements. It requires more courage than public activism without operational follow-through. It demands more discipline than adopting AI without governance. But it's the only approach that builds genuine trust, manages emerging risks, and creates organizations worth leading and following.

For more insights on aligning leadership with core values, see "How Leaders Stay True to Themselves and Their Stakeholders."