I once sat in a post-decision debrief where a leadership team spent an hour walking through the analysis that had led to a major strategic choice. The analysis was genuinely good — comprehensive, logically structured, grounded in real data. By the end of the hour, it was clear that the decision had followed from the analysis in a principled and coherent way. What became clear to me over the following six months, as I worked more closely with that team, was that the analysis had been commissioned specifically to validate a conclusion that had actually been reached weeks earlier, in a different room, by a subset of the team.
The conclusion was probably right. I'm not telling the story as an example of a bad decision. I'm telling it because it illustrates something important and rarely acknowledged in organizational life: the relationship between analysis and decision is frequently the opposite of what the formal process implies. The analysis often follows the decision rather than generating it. The formal decision-making process often ratifies a social and political process that happened before anyone opened a spreadsheet.
This is not corruption. It's how organizations actually work. Understanding it is not cynicism — it's the beginning of being able to make better decisions, because you can't improve a process you haven't honestly described.
The rational model and its valuable but partial truth
The rational decision-making model — gather information, generate options, evaluate options against criteria, select the best — is not wrong. It describes one important component of how good decisions get made, particularly in situations where the information is tractable, the options are enumerable, and the decision-maker is operating in relative isolation from organizational and social pressures.
MBA programs spend enormous energy teaching this model, partly because it's genuinely useful in those conditions and partly because it's teachable in a way that the messier realities of organizational decision-making are not. Students learn decision trees, expected utility calculations, Bayesian updating, game theory. They leave with a model of how good decisions should be made — a normative model, in the technical sense — that they can apply, and that does improve the quality of their individual reasoning.
Then they go to work, and discover that the model is dramatically incomplete. They see decisions made on incomplete information and intuition. They see analyses that start from conclusions and work backward to justify them. They see organizational politics trumping careful analysis. They see intelligent leaders making choices that no purely rational model would predict. The model isn't wrong about what it describes — it just doesn't describe most of what's actually happening.
The gap between the rational decision-making model and actual organizational decision-making is where most leadership difficulty lives. Not because organizations are uniquely irrational, but because the rational model says almost nothing about the three layers of decision-making that typically have more influence than the rational layer: individual cognitive biases, organizational incentive distortions, and social and political dynamics.
The behavioral economics layer — well-documented and under-applied
The behavioral economics revolution — Kahneman, Tversky, Thaler, and the researchers who followed them — has spent fifty years documenting the specific, predictable ways in which human judgment departs from rationality. The list is now long and well-established: loss aversion, anchoring, availability bias, confirmation bias, overconfidence, the sunk cost fallacy, the planning fallacy, the peak-end rule, attribute substitution. Each of these has been replicated dozens of times across populations and contexts.
What's less often discussed is what these departures from rationality mean specifically for decision-making at the level of teams and organizations. The individual biases are well documented. Their organizational implications are far less well mapped.
Consider confirmation bias — the tendency to seek out and weight information that confirms existing beliefs while discounting information that challenges them. In individual decision-making, it's a documented and real cognitive limitation. In organizational decision-making, it's amplified by structure. The leader who has already formed a view about a strategic direction will shape the information environment around them — which meetings they attend, which reports they commission, which voices they solicit — in ways that systematically generate confirming rather than disconfirming evidence. This isn't intentional. It's the bias operating through the organizational levers available to someone in a senior position.
Or consider loss aversion — the tendency to weight losses roughly twice as heavily as equivalent gains. At the individual level, it produces systematic risk aversion and reluctance to cut losses. At the organizational level, it produces something additional: the politically toxic nature of decisions that reverse previous commitments. A leader who reverses a public commitment isn't just experiencing personal loss aversion — they're navigating an organizational environment in which reversal is costly to status and credibility in ways that are independent of whether the reversal is correct. The organizational context doesn't just fail to correct for loss aversion; in many cases it amplifies it.
The incentive distortion layer — what the model systematically ignores
Individual cognitive biases are real and significant. The organizational incentive layer that sits below them is, in my observation, at least as influential on decision quality, and significantly less discussed in mainstream leadership development.
The incentive distortion layer works like this: organizations create incentive structures — formal and informal, financial and reputational — that shape the behavior of the people inside them. Those incentive structures frequently create systematic divergences between what's good for the individual acting inside the organization and what's good for the organization itself.
A person can privately doubt a decision they voted for in public, because opposing the room's apparent direction would have been socially costly. A person can know a project is failing while having strong incentives to report that it's on track, because their career advancement is tied to project success. A person can believe a hire is wrong while understanding that pushing back against their boss's enthusiasm would be relationship-damaging in a way that affects their own future. In each case, the individual's behavior looks irrational until you account for the organizational context — at which point it looks entirely rational, just optimizing for different objectives than the ones the organization officially values.
This is what I was observing in the post-decision debrief I described at the opening. The analysis that followed the conclusion wasn't an accident of sloppy process. It was a rational response by the people involved to an organizational environment in which the most senior person had already signaled a direction, and the cost of challenging that direction was higher than the benefit of improving the analysis. The decision might still have been the same. But the quality of the deliberation that led to it was shaped by incentives that had nothing to do with getting the decision right.
What it actually means to be a more honest decision-maker
The leaders who navigate this reality best are not the ones who have somehow transcended human cognitive limitations — nobody does that. They're the ones who are honest about the limitations and who build practices that compensate for them.
They acknowledge that they have biases, rather than believing that biased thinking is something other people do. This acknowledgment is not therapeutic humility — it's a practical orientation that leads to useful behavior. If you believe you're not biased, you don't build processes to check your biases. If you believe you are biased — specifically and predictably biased in ways the research identifies — you build process friction between your initial judgment and your final commitment.
They create decision processes that introduce friction between intuition and commitment: requiring explicit consideration of alternatives before deciding, commissioning analysis from people who weren't part of the direction-setting conversation, building in a structured interval between preliminary consensus and final commitment. These friction mechanisms don't eliminate bias, but they create opportunities for it to become visible and be corrected.
They actively seek out information that would disconfirm their current view. Confirmation is cheap and emotionally satisfying — there's always more evidence available to support a position you already hold. Disconfirmation is expensive and uncomfortable — it requires looking for the case against yourself, engaging with it seriously, and updating your view when the case is strong. The leaders who develop this habit — who ask not "what supports my current view?" but "what would tell me I'm wrong?" — make significantly better decisions over time, because they're operating on a broader and more honest information base.
The organizational conditions that matter more than individual rationality
Perhaps most importantly, the leaders who make the best organizational decisions think carefully about the conditions that make good decisions more or less likely — which is a different question from how to improve their own individual reasoning.
A leader who has built a culture where people feel genuinely safe raising concerns will get better information than an equally smart leader who hasn't, because good decisions depend on the quality of the information that reaches the decision-maker. That information quality is a function of the organizational environment — specifically, whether the people who hold relevant information believe that sharing it is safe and valued.
A leader who has built clear decision rights — who has designed the organization so that it's explicit who makes which decisions and under what conditions — will make better decisions more efficiently than an equally smart leader who hasn't, because ambiguity around decision authority is one of the most reliable generators of political behavior and incentive distortion.
The myth of the rational decision-maker is not just intellectually incorrect. It's practically harmful, because it leads leaders to focus on the quality of their individual reasoning rather than the quality of the system within which decisions are made. Building an organization that can think honestly about difficult things — that creates genuine space for dissent, that aligns incentives with truth-telling rather than with comfort, that maintains the conditions for real deliberation rather than performative consensus — is hard, mostly invisible work. It's also the work that most determines whether good decisions get made at scale.
Related:
- When to Trust Your Gut — and When Your Gut Is the Problem
- Post-Mortem Culture: Learning from Decisions Without Manufacturing Blame
