Is Visualizing the Hurricane Problem Solvable?


“Superstorm” Sandy took the Northeast by surprise last fall. However, it shouldn’t have. Sandy’s unusual path, size, and intensity were well anticipated by computer forecast models and storm specialists at the National Hurricane Center.

NHC Forecast Swath for Sandy (2012)

Forecast swaths for all official forecasts made for Sandy. Darker tracks are the most recent; lighter tracks are the oldest.

Note the graphic here showing the official forecast envelope (all forecasts issued) for Sandy: the tracks were very tightly clustered, and the turn back toward the coast was captured well in advance.

However, in the aftermath of Sandy, many East Coast residents said they were taken by surprise by the record storm surge. In hindsight, many individuals made poor decisions about when to evacuate and how to mitigate surge damage, even though the record storm surge was forecast extremely well days in advance.

Back in June, I had a great late-night Twitter exchange with Matt Wensing of Stormpulse about broad-scale risk dissemination and translation, focusing on how Sandy could have been approached better using visual data.

Businesses and East Coast residents had great information available regarding the surge risk from Sandy, and the forecasts were excellent…so how did the process break down?

To illustrate this point, I thought it would be interesting to visualize the data products available for Sandy 24 hours before landfall.

Back in June, I pulled the “Surge Exceedance” GIS data set from the NHC servers, taken from the 5:00 PM advisory package issued Sunday, October 28th. These data were available in the lead-up to Sandy’s landfall and were updated as new forecast information came in. They were also freely available for the post-storm analysis I was attempting, although as of 10/14/2013 they may be unavailable due to the US Government shutdown.
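For anyone curious how these layers can be pulled apart, here is a minimal sketch of reading one with open-source tools. I’m assuming geopandas and a hypothetical file name here; actual advisory packages are named by storm ID and advisory number on the NHC GIS page, and this is not necessarily the exact workflow I used:

```python
# Minimal sketch: read an NHC surge exceedance GIS layer with geopandas.
# The file name is hypothetical; real packages are named by storm ID and
# advisory number (see http://www.nhc.noaa.gov/gis/).
import geopandas as gpd

surge = gpd.read_file("al182012_028_surge_exceedance.shp")  # hypothetical name

print(surge.crs)       # coordinate reference system of the polygons
print(surge.columns)   # attribute fields (e.g., exceedance height, percent)
print(surge.head())    # first few surge polygons
```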

Surge information is relayed to users of the National Hurricane Center website through a tool called storm surge “Exceedance” guidance. This product expresses the chance that any given location will observe a storm surge meeting or exceeding a given height above sea level.

The Surge Exceedance product has 10 different options ranging from a 10% to a 90% chance, and it was (and is) my belief that meteorologists unfamiliar with the data structure either didn’t know the surge products existed at the time or didn’t know how to translate them into actionable information for their viewers and customers.
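To make the idea concrete, here is a toy illustration of how exceedance guidance reads at a single point. The numbers are made up for illustration, not actual Sandy guidance:

```python
# Toy example: an exceedance "curve" for one coastal point maps surge
# height (ft) to the probability the surge meets or exceeds that height.
# The numbers below are illustrative, not real Sandy data.
def exceedance_height(curve, p):
    """Return the highest height whose exceedance probability is still >= p."""
    qualifying = [height for height, prob in curve.items() if prob >= p]
    return max(qualifying) if qualifying else None

point = {4: 0.95, 6: 0.80, 8: 0.65, 10: 0.50, 12: 0.25}

print(exceedance_height(point, 0.50))  # -> 10, the "coin-flip" surge height
```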

To make these data actionable and clear, I wanted just one graphic (not 10), so as a case study, I picked the 50% data set. I felt that would be a compelling visual: people can act on data if they understand that “there’s a coin-flip chance that Atlantic City, NJ could experience a surge 10 feet high”. Here’s a snapshot of that chart for Monmouth County, NJ. Again, keep in mind this information was available a day before Sandy crossed the coast:

NHC Advisory 28, Expected Surge Exceedance for Sandy, 1 day before landfall

October 28th, 2012 forecast showing a 50% chance of a storm surge over a given height for Monmouth County, NJ, about a day before Sandy made landfall.
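As a rough sketch of how a county-level graphic like this could be built from the GIS layer, the snippet below keeps only the 50% layer and maps its heights over Monmouth County. The attribute names (“PERCENT”, “HEIGHT_FT”) and the bounding box are assumptions and would need to be checked against the actual shapefile:

```python
# Sketch: collapse the many exceedance options into one actionable map by
# keeping only the 50% ("coin-flip") layer and clipping to Monmouth County.
# Field names and the lon/lat box are assumptions, not the real schema.
import geopandas as gpd
import matplotlib.pyplot as plt

surge = gpd.read_file("al182012_028_surge_exceedance.shp")  # hypothetical file

fifty = surge[surge["PERCENT"] == 50]          # 50% exceedance layer only
monmouth = fifty.cx[-74.4:-73.9, 40.1:40.5]    # approx. box around Monmouth, NJ

ax = monmouth.plot(column="HEIGHT_FT", cmap="YlOrRd", legend=True)
ax.set_title("50% exceedance surge height (ft), Monmouth County, NJ")
plt.show()
```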


I posted this chart to Facebook on a Friday afternoon this spring, and within a few minutes I picked up a few NY TV station meteorologist followers. More than one said they could not believe this information wasn’t made available before the storm. As it turned out, the information was available; people just couldn’t find it.

The NHC is the source for all US tropical cyclone forecasts. They produce “products” designed to explain and translate hurricane forecasts for what they call “users”. Users of the site fall into two general categories: the general public and emergency managers.

The general public is essentially the non-scientifically inclined average US citizen who is literate enough to use a computer, connect to the internet, and find the NHC on the web.

Emergency managers are exactly what the title suggests: managers trained, qualified and paid to keep their local communities safe, and to manage disasters before, during and after they happen.

This is where I think the system breaks down. Meteorologists are expected to be able to interpret and communicate the NHC forecast products. However, during a landfall situation, TV mets (for example) do not have a lot of time to hunt for data on the weather servers, which means they take the most readily available data and pass it on as best they can.

In this case, the TV met also doesn’t have time to dig through layers of text products to find surge data.

The NHC produces the cone of uncertainty, the expected forecast track, and maximum wind forecasts, all in graphical form. However, the surge products have two big flaws:

1. It’s very difficult to determine how to use the maps. There are 20 different options with almost no instruction on how to use them (let alone how to interpret them for the public).

2. They are virtually impossible to find, sitting several mouse clicks away from the NHC site’s landing page.

I believe Sandy highlighted a critical flaw in the hurricane forecast infrastructure. Until 1996, the NHC technical forecast discussion (like this one for Sandy) wasn’t even available to the public. Information dissemination was carefully controlled through releases to local NWS offices and broadcast meteorologists.

Since then, the available information has been rapidly expanding, but risk communication has not kept pace with information quality. The scientific information is available, but it’s not translated well.

It is my belief that forecasts need to be turned into visual data. Visuals create “working memory” that helps people keep track of geospatial data, which is a complex job. Most of the NHC visuals are either outdated or hard to find.

They get this. Dr. Knabb and the head of the storm surge unit at the NHC, Jamie Rhome, have been stressing the need for better surge awareness and risk communication, and they are working to adapt products to better explain hurricane risk.

Forecasters and people within the weather enterprise need to translate risk, not just understand how the atmosphere behaves. It is my belief that better data visualization can help fill this gap, if it is done in a way that clearly defines the risk and is easily understood by all users of forecast data. The quality of information needs to keep up with the quantity. How much money could have been saved if the storm surge graphic above had been front and center before landfall? How many lives could have been spared?

These questions are impossible to answer now. However, we can work to find better ways to communicate that risk the next time a coastal population center is threatened by a powerful storm. It will happen again; we need to be ready.

Hurricane Analytics was started on this principle. Stormpulse is already doing this very well for weather risk across the board, but my idea and hope was to help change how hurricane-specific information is visualized, communicated, and acted on. The 2013 Atlantic season has not produced a situation in which to test these concepts. Moreover, funding concerns and the lack of a “hurricane problem” this season have forced me to reevaluate this business model in order to survive (more on this soon).

I still think the United States has a huge hurricane information problem, but as long as the Atlantic isn’t producing hurricanes, I am concerned the lessons learned from Sandy will fade without a clear solution being found before the next significant threat evolves.


About Michael Watkins

Mike Watkins is the founder of Hurricane Analytics, a private organization specializing in data visualization and predictive analytics, with a special focus on tropical meteorology. The firm analyzes complex meteorological data and communicates that information in easy-to-understand terms, helping clients anticipate and prepare for the disruptive impact of Atlantic hurricanes.