The Failure of Risk Management: Why It's Broken and How to Fix It
Douglas W. Hubbard
A deep dive into why traditional risk assessment fails and how quantitative methods like Monte Carlo simulation provide a more accurate and scientific approach to managing organizational uncertainty.

1 min 47 sec
Think about the last time you checked a weather forecast before planning an outdoor event. You looked at a percentage—perhaps a thirty percent chance of rain—and made a choice. That forecast was the result of incredibly complex mathematical models, massive data sets, and a rigorous scientific process. Because of that rigor, you can decide with a high degree of confidence whether to pack an umbrella or cancel the picnic entirely. For most of us, the stakes are low; a bit of rain is just an inconvenience. But in the world of global finance, infrastructure, and international security, the ‘weather’ is a volatile market or a potential natural disaster, and the stakes are measured in billions of dollars and human lives.
In these high-stakes environments, you might assume that the methods used to predict and mitigate disaster are as sophisticated as the ones used by meteorologists. Surprisingly, this is often not the case. Many of the world’s largest organizations rely on methods that are closer to guesswork than science. They use colorful charts and subjective labels that provide a comforting illusion of control while leaving the organization vulnerable to catastrophic failure. This is the core problem we are exploring today.
We are going to look at why current risk management is fundamentally broken and, more importantly, how we can fix it. We will dive into the history of how we measure danger, the psychological biases that cloud our judgment, and the powerful quantitative tools that can actually help us see what’s coming. From the battlefields of World War II to the boardrooms of modern corporations, the evolution of risk management is a story of moving from intuition to evidence. By the end of this journey, you’ll understand how to bridge the gap between ‘feeling’ safe and actually ‘being’ safe through better data and smarter analysis.
2 min 02 sec
Before we can fix risk management, we have to agree on what it actually is and how we measure the impact of unwanted events.
2 min 03 sec
The history of risk management moved from ancient survival tactics to the high-stakes mathematical calculations of wartime strategy.
2 min 06 sec
Popular scoring systems and vague labels often create more confusion than clarity, leading to dangerous misunderstandings of actual threats.
1 min 59 sec
Human intuition is a poor substitute for data, as even the most experienced experts are prone to overconfidence and memory errors.
1 min 53 sec
We can’t eliminate human bias, but we can train experts to recognize their own uncertainty through a process called calibration.
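One way to picture calibration: ask an expert for 90% confidence intervals on a batch of questions, then check how often the true values actually land inside. The short Python sketch below scores an invented set of answers; the questions, intervals, and actual values are made up purely for illustration.

```python
# Sketch of how calibration can be scored: an expert gives a 90% confidence
# interval for each quantity, and we check how often the true value actually
# lands inside it. All data below is invented for illustration.

def hit_rate(intervals):
    """intervals: list of (low, high, actual) tuples."""
    hits = sum(low <= actual <= high for low, high, actual in intervals)
    return hits / len(intervals)

# A well-calibrated expert's 90% intervals should contain the truth ~90% of the time.
expert_answers = [
    (200, 400, 350),        # (low, high, actual value)
    (10, 25, 30),           # a miss: the interval was too narrow
    (1_000, 5_000, 2_200),
    (50, 80, 75),
    (5, 15, 18),            # another miss, a sign of overconfidence
]

rate = hit_rate(expert_answers)
print(f"Stated confidence: 90%, observed hit rate: {rate:.0%}")
# A hit rate well below 90% signals overconfidence; repeated rounds of
# feedback like this are the core of calibration training.
```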
2 min 03 sec
The Monte Carlo simulation allows us to run thousands of ‘what-if’ scenarios, revealing the true range of possible outcomes for any decision.
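To make that concrete, here is a minimal Python sketch of a Monte Carlo simulation. The cost drivers, distributions, and budget figure are all invented for illustration; the point is simply that drawing thousands of scenarios from ranges, rather than from single-point guesses, yields a distribution of outcomes you can reason about.

```python
# Minimal Monte Carlo sketch: estimate the range of total project cost
# from three uncertain inputs. Every distribution and number here is an
# illustrative assumption, not a figure from the book.
import random

TRIALS = 10_000
BUDGET = 1_200_000  # hypothetical budget threshold

def one_scenario():
    # Each trial draws one plausible value for every uncertain input.
    labor_hours = random.normalvariate(5_000, 800)        # roughly 3.7k-6.3k at 90%
    hourly_rate = random.uniform(120, 180)                 # dollars per hour
    hardware    = random.triangular(200_000, 500_000, 300_000)
    return labor_hours * hourly_rate + hardware

results = sorted(one_scenario() for _ in range(TRIALS))

p10, p50, p90 = (results[int(TRIALS * q)] for q in (0.10, 0.50, 0.90))
prob_over = sum(r > BUDGET for r in results) / TRIALS

print(f"10th-90th percentile cost: {p10:,.0f} - {p90:,.0f}")
print(f"Median cost: {p50:,.0f}")
print(f"Chance of exceeding budget: {prob_over:.0%}")
```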
1 min 50 sec
Even when data seems non-existent, we can calculate the odds of rare events by breaking them down into their smaller, measurable parts.
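As a hypothetical illustration of that decomposition, the sketch below assembles the annual probability of a 'major outage' from smaller component probabilities that could each be estimated or observed separately. The event, the structure, and every number are assumptions invented for this example.

```python
# Sketch of decomposing a 'rare, unmeasurable' event into measurable pieces.
# Hypothetical example: the annual probability of a major outage, built from
# smaller quantities that can each be estimated or observed.

p_power_failure  = 0.02   # annual chance the primary power feed fails
p_backup_fails   = 0.10   # chance the backup generator fails when called on
p_cyber_incident = 0.05   # annual chance of a disabling cyber incident

# A major outage happens if power fails AND backup fails, OR a cyber incident occurs.
p_power_outage = p_power_failure * p_backup_fails           # assumes independence
p_major_outage = 1 - (1 - p_power_outage) * (1 - p_cyber_incident)

print(f"Power-chain outage: {p_power_outage:.3%}")
print(f"Any major outage:   {p_major_outage:.2%}")
```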
1 min 44 sec
Testing your model against reality and calculating the ‘cost of being wrong’ helps you decide if more data is worth the investment.
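One common way to frame the 'cost of being wrong' is as the expected opportunity loss of your best decision, which also caps what further measurement could be worth. The sketch below works through a hypothetical go/no-go decision; the probability and payoffs are invented.

```python
# Sketch of the 'cost of being wrong' idea: the expected opportunity loss of
# the best decision is an upper bound on what perfect information is worth.
# The scenario and numbers are hypothetical.

p_success = 0.60           # calibrated estimate that the project succeeds
gain_if_success = 500_000  # payoff if it succeeds
loss_if_failure = 300_000  # loss if it fails

expected_value = p_success * gain_if_success - (1 - p_success) * loss_if_failure

if expected_value > 0:
    # Best bet is to proceed; we are 'wrong' only when the project fails.
    expected_opportunity_loss = (1 - p_success) * loss_if_failure
else:
    # Best bet is to pass; we are 'wrong' only when it would have succeeded.
    expected_opportunity_loss = p_success * gain_if_success

print(f"Expected value of proceeding: {expected_value:,.0f}")
print(f"Expected cost of being wrong: {expected_opportunity_loss:,.0f}")
# If a further study costs more than this, the extra data is not worth buying.
```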
1 min 49 sec
Effective risk management requires breaking down internal barriers and creating a centralized library of standardized data.
1 min 28 sec
The world is only getting more complex, and the stakes of our decisions are only getting higher. We have seen that the traditional way of handling this complexity—relying on gut feelings, subjective scores, and colorful heat maps—is not only outdated but dangerous. These methods provide a false sense of security while ignoring the mathematical reality of probability and the very real flaws in human psychology.
The shift to a truly effective risk management strategy requires a fundamental change in mindset. It means moving away from vague labels like ‘medium risk’ and toward calibrated, quantitative models. It means recognizing that even the most seasoned experts are prone to overconfidence and training them to measure their own uncertainty. It means using the power of Monte Carlo simulations to run thousands of scenarios and uncover the hidden relationships between different threats.
Perhaps the most important takeaway is that nothing is truly immeasurable. By deconstructing complex systems and focusing on the variables that matter most, we can quantify risks that previously seemed invisible. The goal is not to eliminate risk—that’s impossible in any meaningful endeavor—but to be smart about the chances we take. When you use the right tools, you can allocate your resources effectively, protect your organization from catastrophic failure, and make decisions with the confidence of a scientist rather than the hope of a gambler. The tools are available; the only question is whether you are ready to stop guessing and start measuring.
The Failure of Risk Management serves as a critical wake-up call for leaders who rely on subjective scoring and gut feelings to protect their organizations. Douglas W. Hubbard argues that the most popular methods for assessing risk—like heat maps and qualitative scales—are not just imprecise, but often mathematically unsound, leading to a false sense of security. The book promises a path toward more rigorous, evidence-based decision-making. By exploring the history of risk analysis and the psychological traps that catch even the most seasoned experts, Hubbard offers practical tools for fixing a broken system. You will learn how to calibrate your own estimates, utilize sophisticated computer simulations, and measure even the most seemingly intangible risks.
Douglas W. Hubbard is the developer of a decision-analysis method known as Applied Information Economics. He is also the founder of Hubbard Decision Research and author of How to Measure Anything: Finding the Value of Intangibles in Business.
Listeners find this book to be a challenging and thorough resource for quantitative risk evaluation. Although perspectives differ regarding the author's harsh stance on alternative methods, many are convinced by the strong case for utilizing scientific modeling and Monte Carlo simulations instead of qualitative tactics. Furthermore, they value the insights into the shortcomings of standard tools like risk matrices, with one listener highlighting the book’s role as a useful "application guide" for methodical decision-making. Listeners also point to the functional upsides of expert calibration training in enhancing the reliability of probability forecasts.
If you've ever looked at a colorful risk matrix and felt like something was missing, this is your wake-up call. Hubbard provides a provocative and incredibly rigorous guide to why the standard 'heat maps' used by most corporations are essentially astrology. The truth is, most risk management is just guessing disguised as a process. I found the section on Monte Carlo simulations particularly illuminating because it offers a mathematical path out of the 'snake oil' sold by big consultants. While the author is definitely critical of the status quo, his arguments for scientific modeling over qualitative intuition are hard to ignore. It’s an essential application guide for anyone who actually wants to make disciplined decisions based on data rather than gut feelings. It's not always an easy read, but the payoff for your organization's bottom line is worth the effort.
Hubbard basically takes a sledgehammer to the 'snake oil' of modern management consulting in this volume. It’s a dense but necessary exploration of why our current risk assessments are failing. I particularly appreciated the detailed breakdown of the 'Risk Paradox,' where the biggest decisions often get the least amount of rigorous analysis. The book pushes for a transition toward Loss Exceedance Curves and actual probabilistic measurements, which might scare off some readers who prefer 'simple' red-yellow-green grids, but that’s the point. Those simple tools give a false sense of security. The section on calibrating experts was eye-opening—it turns out you can actually measure and improve how people estimate uncertainty. If you’re tired of the 'not-invented-here' syndrome blocking real progress in your firm, use this book as your roadmap for a scientific overhaul.
Finally, a book that treats risk management as a science rather than an art form. As someone who works with ISACA and NIST standards, I was shocked to see Hubbard deconstruct them so effectively as 'worse than useless.' He argues that many of these codified standards are just spreading a dangerous virus of misinformation. The highlight for me was the concept of the Risk Tolerance Curve; it provides a way to unambiguously quantify what losses a company is actually willing to accept. No more vague 'high probability' labels that mean different things to different people. This is a rigorous, provocative guide that should be mandatory reading for anyone with a PMP designation. It’s not a light read, but if you want to move beyond corporate astrology and into real decision science, this is the text you need.
As someone who deals with risk daily, I found this a refreshing bit of iconoclasm. Hubbard takes the field to task for its reliance on 'snake oil' methods and provides a clear, math-based path forward. The idea that you can 'make up' data for a model and adjust as observations come in is a powerful counter-argument to those who say quantification is impossible for lack of data. I also loved the 'Equivalent Bet Test' for calibration—it’s a simple but effective way to gauge how confident an expert actually is. The book is definitely heavy on the criticism, but considering how broken most corporate risk processes are, a bit of a rant is probably justified. It’s easily one of the best business books I’ve read in years. If you want to actually understand the risks your organization is taking, read this.
After hearing so much hype about 'How to Measure Anything,' I jumped into this expecting a similar tutorial vibe. Instead, Hubbard takes a much more aggressive stance here, taking aim at the 'Four Horsemen' of risk management and various industry standards. He makes a compelling case against ordinal scales—like those 1-to-5 ratings—showing how they can actually lead to worse-than-random decisions. The technical depth regarding expert calibration training was the highlight for me; it’s fascinating to see how we can actually improve human estimation through feedback. My only gripe is that the tone occasionally veers into smug territory, especially when he’s critiquing other experts like Taleb. Still, the core message about replacing intuition with probabilistic modeling is a powerful one that every C-suite executive should probably hear.
The chapter on expert calibration was a total game-changer for my team’s workflow. Frankly, I didn’t realize how biased our internal estimates were until we started applying Hubbard’s techniques for repetition and feedback. The book does a fantastic job of exposing the flaws in popular tools like risk matrices, explaining how they can mistakenly prioritize smaller risks over catastrophic ones. However, I have to admit the writing is a bit dry and repetitive in places. He makes the same point about low-fidelity methods for 150 pages before getting to the real solutions. It’s a very handy work for security and risk analysts, but be prepared to trudge through some academic-level statistics and a fair amount of 'other statisticians are idiots' rhetoric. Overall, it’s a solid addition to the field for those with the patience for it.
Ever wonder why big companies with massive risk departments still fail spectacularly? Hubbard’s answer is that they are using the wrong tools, and his logic is pretty airtight. He shows how simple rankings and colored grids create a 'false sense of control' that can be more dangerous than having no process at all. I found the discussion on 'first principles' and the need for measurable impacts on performance very compelling. While the Audible version might be a lighter 'read,' the physical book is better for referencing the specific charts on Loss Exceedance. My only real criticism is that the author seems to enjoy the 'other people are idiots' narrative a bit too much. If you can get past the prickly personality, there is a wealth of actionable information here about scientific modeling and the power of Monte Carlo analysis.
To be fair, Hubbard is clearly brilliant, but this book is an absolute slog to get through. It reads less like a management guide and more like a technical manual mixed with a polemic. He spends an inordinate amount of time attacking Taleb’s 'Black Swan' concepts and the turkey problem, which felt a bit like splitting hairs. The irony is that he critiques others for being abrasive while maintaining a tone that is incredibly smug. That said, the actual math—specifically the push for Monte Carlo simulations—is the right way to do things. I just wish he’d spent more time on the 'management' side of things instead of just 'analysis.' It lacks tangible implementation tools for the project managers on the ground who don't have a PhD in stats. A bit of a missed opportunity to reach a broader audience.
Look, I really wanted to like this book because the core premise—that we need more math in risk management—is sound. However, the delivery is so shrill and self-congratulatory that it’s hard to stay engaged. Hubbard spends what feels like half the book shouting about how everyone else is an idiot while contributing surprisingly little new material if you’ve already read his previous work. There’s a distinct 'low emotional intelligence' feel to his anecdotes about shaming clients into admitting they're wrong. He even takes odd, abrasive swipes at Nassim Taleb that felt more like a personal grudge than a scientific critique. If you want the actual techniques, just stick to 'How to Measure Anything.' This volume is mostly a 300-page rant against the industry that could have been summarized in a single white paper.
Gotta say, I'm sorry, but I really disliked almost everything about this book's delivery. I am a big believer in quantitative risk management, but Hubbard’s arrogance makes it almost impossible to appreciate his points. He presents a very narrow view where the only valid purpose of risk analysis is decision support for senior executives, completely ignoring the employees who have to actually manage risks on the front lines. The book is bloated with redundant stories of him publicly shaming clients and 'I won't name names' diatribes. For a text that claims to be about the failure of risk management, it mostly declines to tackle actual management in favor of undergraduate-level economics. It felt like 90% rehash of his better books but with a much nastier attitude. Save your money and just buy a statistics textbook; you'll learn more and be less annoyed.