You can hone your crystal ball–gazing skills to better forecast healthcare trends.

The best forecasters read widely about all kinds of developments, even about trends that have nothing to do with their fields of expertise.

You need to become a better futurist. You need to up your game at least to “prosumer” level.

Why? Because every decision you make, whether about your organization’s strategy or your own career, is based on a view of the future, an implicit or explicit story about where things are headed. And that view is flawed. Any view of the future is flawed.

The question is not whether we can eliminate the flaws and predict the future. We can’t. The question is whether we can hone our vision to at least tell us something useful about the future, about what is more likely to happen, what is more likely to make a difference, and what would be the leading indicators of which way it will go.

So, one reason you might consult with a futurist or hear one speak is to hear what he or she thinks about the future — but the other reason is to hear how he or she thinks about it. Because you need to think about your future and your organization’s in a much more fine-grained way.

How do you do that? I could just say, “Here’s how I do it.” Because it’s not magic. I mean, I do have three crystal balls right here on my desk, but they are non-operational. They are here to remind me of the key to good thinking about the future: insight. That is, not so much numbers and extrapolating trends, but creating narratives based on thoughtful insight, then testing the narratives by asking such questions as: What would have to be true for this to happen? What would be the signs that we are headed in this direction? Is this happening anywhere yet?

Evidence-based future forecasting

Researchers have studied forecasting directly. They found a number of people who could forecast better than our intelligence services, without any special access to secret information, and then studied how those people work. Let’s take a brief look at what they found, and at how it might apply to our thinking about healthcare.

After the shock of 9/11, U.S. intelligence services did a lot of introspection, asking themselves how they could improve their work. One of the things this resulted in was a crowd-sourcing experiment, the Good Judgment Project, sponsored starting in 2011 by the Intelligence Advanced Research Projects Activity. A multi-disciplinary team from the University of California, Berkeley and the University of Pennsylvania put together a process in which random people would sign up to be forecasters and get graded on their success.

The forecasters could be anybody: professional statisticians, bus drivers, professors, checkers at Wal-Mart — didn’t matter. They would be given a set of questions on which they could choose to make predictions, things like, “What are the chances that in the next six months there will be a coup in Liberia?” The questions were time-limited, framed so that they would have definite answers, and neither too easy nor impossible. The forecasters would make a prediction based on whatever information they could find. They could update the prediction as often as they liked, but each update was treated as a new prediction.
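Grading probabilistic predictions like these is typically done with a Brier score, which rewards forecasters for being both right and well calibrated. As a rough sketch (the numbers here are illustrative, not from the actual tournament):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and what actually
    happened (1 if the event occurred, 0 if not). Lower is better:
    0.0 is perfect, 0.25 is what a constant 50% shrug earns, 1.0 is worst."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 80% and the event happened, then 30% and it didn't:
score = brier_score([0.8, 0.3], [1, 0])
print(round(score, 3))  # 0.065
```

Note that hedging everything at 50% scores worse than a confident forecaster who is usually right, which is why the scoring can separate genuine skill from caution or luck over enough questions.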

The result? Over a year, most people did pretty well, better than random chance. A small group, though, did astonishingly better. That group was given their own discussion forum, and followed for another year as the experiment continued. Over that second year, some “reverted to the mean,” doing no better than most people. Their great score in the first year seemed to have been mostly luck. But some of them did spectacularly better the second year as well.


As the experiment continued, some people stayed in that most successful group year after year. These were the “superforecasters.” The researchers studied them to see if there was anything about their success that could be emulated. It turned out on close inspection that they did have certain ways of working in common — and they were not what you might expect. Those methods form the meat of the book Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner.

When I read the book, I found that the methods of the superforecasters turned out to be not at all surprising. In fact, they were a pretty good match for the way I think about the future. There is a lot to those methods, but let me try to summarize them. Principally, they were not about mathematically extrapolating trends. They were about building narratives.

The best forecasters read widely even when they weren’t working on a problem. Their working model of the world was well constructed with a wide variety of inputs. They would cast a wide net, often wider than the inside experts in that field might cast. (I study, for instance, among many other things, why shipping traffic is increasing past the Cape of Good Hope, and why growing numbers of barges are being laid up on the Mississippi.)

In considering a problem, they would formally or informally construct a scenario, a story about that thing happening, putting in place what would have to happen for that to be true. Then they would sift all the data they could find to prove or disprove that story or to modify it.
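This sifting of evidence for and against a scenario can be sketched, loosely, as Bayesian updating: start with a rough probability for the story, then revise it each time a sign appears that is more (or less) likely if the story is true. The numbers below are purely hypothetical:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Revise the probability of a scenario H after seeing evidence E,
    given how likely E is if H is true vs. if H is false (Bayes' rule)."""
    numer = prior * p_e_given_h
    return numer / (numer + (1 - prior) * p_e_given_not_h)

# Hypothetical: a scenario starts at 20%. A leading indicator then appears
# that is three times as likely if the scenario is unfolding (60% vs. 20%).
p = bayes_update(0.20, 0.60, 0.20)
print(round(p, 3))  # 0.429
```

The point is not the arithmetic but the discipline: each scenario comes with named indicators, and each indicator, when it does or doesn't appear, moves the estimate by a defensible amount rather than by gut feel.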

The problem with statistical forecasting

The problem with quant-based trend-following techniques of forecasting is that the devil is not in the details. The devil is in the hidden assumptions you are making when you decide which trend to extrapolate, and which numbers reliably encapsulate the problem you are researching.

In choosing the key numbers and trends, you are automatically narrowing the kinds of data that can influence your model — and healthcare especially is an ineluctably complex system, exhibiting all the features predicted by the science of complexity. These assumptions built into choosing key numbers and trends can be rooted out only by a discipline of creating specific scenarios incorporating a broad array of data, then testing those scenarios against more data.

The importance of not knowing

An important part of the discipline is a dogged “suspension of belief,” adding in the right amount of “I don’t know,” of “living in the question.” And this is a principal way in which the experience of the superforecasters is interesting and instructive but insufficient for our purposes. For the purposes of the test, they of course had to make predictions, and they would find out if they were right at a date certain — and being right was the prize.

In the real world, forecasting is only one piece of strategy design, and the rest of the design influences how you think about forecasting. The questions are not abstract; they are about “What do we do now to prepare for the future?” And our organizations and careers depend on getting the answers right — or at least right enough to power some useful thinking. If we lock down our opinions too soon, we often miss the next big thing that could change that opinion or give us a deeper insight into the problem.

The question for the superforecasters was just, “What can we know about this particular future? With what degrees of probability?”

For our purposes, we have to add further questions, such as, “What do I really need to know? How soon do I need to know that in order to act on it in a meaningful way? What would be interesting to know but not useful to drive strategy? What will we really need to know, but not until a year from now, when we will have more information?”

So, forecasting for driving strategy is different from pure forecasting. Here are some parameters for forecasting for strategy:

  • Treat forecasting as a serious discipline, a key part of your continual strategy planning effort. On the other hand:
  • Live in the question as long as possible. Keep your mind open until you have to take action — and even then, keep your mind open, watch the signs and be ready to shift if necessary.
  • Build capacity in advance of the need as often as possible. For instance, if you imagine going into the on-site clinic business, start one now as a pilot to build your capacity. Organizational capacity — the workforce, expertise and experience that will be needed — is the hardest to build. Second hardest is finding the necessary capital and/or partners and affiliates to back the plan. Physical plant, the sheetrock and shelves and machines, is much more malleable. Some things take years to ramp up to full capacity, so start on suspicion, start at least something that will begin to expand your organizational experience and capacity in that direction.
  • Stake out territory ahead of time using these pilots and forays, by building capacity and engaging with possible clients and partners.
  • Seek out need. Don’t just run scenarios on the businesses you are currently in. As part of your forecasting, seek out specific needs and construct scenarios in which you could provide a solution for which someone might pay you, even if it’s not someone you are used to thinking of as a payer. Think, “Okay, here’s a need to which we could provide a solution. For whom else is this a problem? The state? Centers for Medicare & Medicaid Services? Area employers? Possible distant clients in a ‘medical tourism’ model? Could we provide a better and cheaper solution to their problem?”
  • Build generalized reserve capacity as much as possible: organizational capacity, financial reserves, bonding capacity, as well as general networking and affiliation strengths. Much of the recent consolidation in the industry is driven by this need to simply be bigger, in order to have reserve capacity to deal with unanticipated change. For smaller and rural organizations, this is the compelling argument for affiliation, if not outright sale to some larger organization or network: As things shift, sometimes radically, sometimes more quickly than imagined, smaller organizations that have been operating close to the line often do not have the reserve capacity they need to survive. Such affiliation need not be outright sale, but it has to have an interdependent form that puts others at some risk for your survival.

Are you taking this seriously?

If you’re not, that’s fine. You don’t have to worry about it. Organizations and individual healthcare leaders and managers who don’t take it seriously won’t have to worry about it for long, because they most likely will find the whole problem taken out of their hands.

Those who don’t continually try to parse the future and shift their strategy based on their shifting vision of the future will find the next several years to be a real struggle. Those who take seriously the discipline of thinking about the future in a sophisticated way will be much better armed and prepared for that future.

This article was first published in the American Hospital Association’s Hospitals and Health Networks Daily on May 23, 2016.