Weather radar works by emitting microwave radiation into the sky and then listening for the signal that’s reflected back. It’s a meteorological game of Marco Polo.
All sorts of targets reflect the microwaves – raindrops, snowflakes, hailstones, bats, airplanes, and even swarms of insects. How well a given target reflects microwaves depends on its composition, size, and shape. For instance, liquid water is a better reflector of radar energy than ice.
When a meteorologist looks at a radar display, she’s seeing the reflected signal from all those targets in a given slice of sky. The radar doesn’t “know” which piece of reflected energy came from a bird and which piece came from the hailstone that moments later cracked your car windshield. The radar simply aggregates the reflected signal. It’s up to the meteorologist to interpret the results.
Until just a few years ago, the National Weather Service’s network of weather radars collected information about only two quantities: the reflected energy from a given section of sky (reflectivity) and the velocity of the targets within that section (mean radial velocity and spectrum width). In complex meteorological situations like winter weather events or severe storms, these two pieces of information provide only an incomplete picture of the type of precipitation that’s falling. When you’re just looking at reflectivity and velocity data, for instance, it can be difficult to tell the difference between hail and heavy rain. Yet on the ground, knowing the difference can be critical.
Enter dual-polarization radar technology. If you’ve ever owned polarized sunglasses, you’re already familiar with the principle of polarization. The short-n-sweet version is that electromagnetic waves (like radio waves emitted by radar or visible light waves emitted by the sun) can be oriented along a certain axis.
Tilt your head from side to side while wearing polarized sunglasses, and you’ll notice that the image you see changes – the color of the sky darkens and lightens, glare off the pavement appears and disappears. As you tilt your head, you’re actually changing the polarization of the light that’s being let through your sunglasses, and that gives you additional information about the world around you.
The same is true with weather radar. Conventional radar sends out radio pulses polarized only in the horizontal direction, so the reflected signal carries only one-dimensional information. Dual-polarization (or “dual-pol”) radar, on the other hand, sends out both horizontally polarized pulses and vertically polarized pulses, so the reflected signal carries two-dimensional data.
This may seem rather trivial until you consider that precipitation types have characteristic shapes. Small raindrops are spherical, while big raindrops flatten out like a Frisbee. Hailstones are roughly spherical when they’re dry but can become oblong as their outer layers melt. The two-dimensional data provides invaluable insight into what types of precipitation are present within a storm.
Here in Florida, we don’t have to worry too much about winter weather, but hail is another matter. In the lightning capital of the United States, thunderstorms are part of the scenery for much of the year, and any thunderstorm that grows strong enough and reaches high enough into the atmosphere can produce hail.
But that hail doesn’t always reach the ground. In warm, moist atmospheres, hail melts as it falls toward the ground. If the hail starts out small or if the freezing level is high in the atmosphere, hail can melt completely before reaching the ground. Dual-pol radar data can reveal whether a storm is producing hail aloft, and by examining radar data at different heights within the storm, meteorologists can determine whether and how much that hail is melting before it reaches the surface (and people’s cars and houses).
Dual-pol radar adds three more tools to the meteorologist’s kit. Each of these tools provides unique information about the size, shape, and mixture of precipitation types within a storm.
Correlation Coefficient (CC)
Correlation coefficient measures how similarly the returned horizontal and vertical pulses are behaving. It’s like looking at the world under a strobe light. From one flash to the next, how much does the image change? When the targets within a given region are of the same shape and type (for example, all medium-sized raindrops), one pulse will look much like the next, and the correlation coefficient will be high. If, on the other hand, precipitation types are mixed (like rain and hail swirling together), correlation coefficient values will be lower. Generally, the larger the hail, the lower the correlation coefficient.
Differential Reflectivity (ZDR)
Differential reflectivity compares the reflectivity values returned in the horizontal and vertical directions, like comparing how much the image through your polarized sunglasses changes as you tilt your head. Targets that are wider than they are tall (like large raindrops) have higher differential reflectivity – they reflect more horizontally polarized energy than vertically polarized energy. Hailstones, on the other hand, are more spherical and tend to tumble as they fall, reflecting roughly equal amounts of horizontally and vertically polarized energy. Hail typically has low to near-zero ZDR values.
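For the quantitatively inclined, ZDR is conventionally expressed in decibels: ten times the base-10 log of the ratio of horizontal to vertical reflectivity. A minimal sketch in Python (the sample reflectivity values below are made up purely for illustration):

```python
import math

def zdr_db(z_h, z_v):
    """Differential reflectivity in dB, given the linear reflectivity
    factors measured in the horizontal and vertical channels."""
    return 10 * math.log10(z_h / z_v)

# An oblate raindrop returns more horizontal than vertical energy...
print(zdr_db(200, 100))  # ≈ +3 dB
# ...while a tumbling, near-spherical hailstone returns roughly equal amounts.
print(zdr_db(100, 100))  # 0 dB
```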
Specific Differential Phase (KDP)
Specific differential phase is a bit more complicated than correlation coefficient and differential reflectivity. Physically, KDP measures the difference in phase shift between the returned horizontal and vertical signals. In practice, this means that specific differential phase responds to both the shape and the density of liquid water targets. Frozen precipitation, like dry hail and snow, does not contribute to KDP – KDP “ignores” frozen precipitation and sees only liquid precipitation. Specific differential phase is therefore useful for determining rainfall rate.
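To see how these variables might combine in practice, here is a toy classifier in Python. The threshold values are invented for illustration only; operational hydrometeor classification algorithms use fuzzy logic over many more inputs than this:

```python
def classify_echo(cc, zdr_db):
    """Toy radar echo classifier from two dual-pol variables.

    cc     -- correlation coefficient (0 to 1)
    zdr_db -- differential reflectivity in dB

    All thresholds below are hypothetical, chosen only to
    illustrate how the variables complement one another.
    """
    if cc < 0.80:
        return "non-meteorological (birds, insects, debris)"
    if cc < 0.95:
        return "mixed precipitation (possible rain/hail mix)"
    # High CC means uniform targets; use ZDR to separate shapes.
    if zdr_db > 1.0:
        return "rain (oblate drops reflect more horizontally)"
    return "hail or small drops (near-spherical targets)"

print(classify_echo(0.98, 2.5))  # uniform oblate targets -> rain
print(classify_echo(0.90, 0.2))  # mixed targets -> rain/hail mix
```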
As part of the dual-polarization upgrade, National Weather Service weather radars now incorporate an algorithm that estimates precipitation type from the dual-pol variables discussed above. Numerous automated hail report websites use the National Weather Service algorithm or a custom one to identify regions of hail. While such algorithms provide a useful first pass at identifying regions within a storm where hail is likely being produced aloft, they do not indicate whether that hail is reaching the ground or at what size.
When Blue Skies Meteorological Services investigates the presence of hail for a forensic meteorology case, we don’t just run an algorithm and depend on the radar to “know” what was happening in the storm and to assume what was happening on the ground. We examine official storm reports, severe weather warnings and advisories, the atmospheric profile, and dual-polarization radar data at multiple heights and throughout the lifetime of the storm to reconstruct a comprehensive picture of the weather situation – both high in the storm and on the ground, where it matters.
Typically, forensic meteorology is applied to weather events a few years old at most – Did damaging hail really strike that commercial facility in April of last year? Was it a tornado or a microburst that ripped off roofs and uprooted trees last week? Did a lightning strike start that house fire a few months back?
Occasionally, though, forensic meteorologists look decades or even centuries into the past. Such has been the case with Tropical Cyclone (TC) Mahina, which struck Bathurst Bay, Australia, on 5 March 1899 as a Category 5 storm. Ever since a 1958 research paper by H.E. Whittingham, “The Bathurst Bay Hurricane and associated storm surge,” reported that TC Mahina produced a storm surge of 13 meters (over 42 feet), the storm has generally been credited with the largest storm surge ever recorded.
Contemporary accounts of the storm reported a waist-deep wall of water inundating a 40 ft tall ridge where several law enforcement officers were camped during the storm, dolphins being found stranded atop 15 m high cliffs after the storm had passed, and fragments of Aboriginal canoes being deposited 70 to 80 feet above the normal high tide.
TC Mahina was undoubtedly a monster storm. With sustained winds of over 175 mph, it sank 54 ships (mostly pearling vessels) and killed more than 300 people, sweeping devastation across the Bathurst Bay region of northeastern Australia.
Yet despite its impressive statistics and a number of contemporary (albeit generally third-person) accounts of storm-related inundation, meteorologists have long regarded the 13 m storm surge record skeptically. It just didn’t seem possible. The commonly reported central pressure of 27 inches of mercury (914 mb), while extremely low, just doesn’t support a storm surge as high as a four-story building.
Previous studies that used computer models to estimate storm surge given the most likely track and intensity of the cyclone over this topographically complex region had come up well short of 13 m, and field work in the region had not found evidence of debris deposits to the reported 13 m height.
Something was amiss – either the central pressure was lower than 27 inHg or the storm surge wasn’t actually 13 m high. Possibly both.
Despite the contradictory evidence, little research had been done to set the record straight, until recently. In the May 2014 issue of the Bulletin of the American Meteorological Society (BAMS), several Australian scientists revealed the results of their forensic analysis of TC Mahina’s storm surge.
In that analysis, they utilized methods that forensic meteorologists often use to evaluate much more recent weather events: they investigated historical records, examined the physical evidence, and modeled the event. What they learned is that, as is often the case, the devil is in the details.
Previous modeling studies had relied upon a thirdhand account of Mahina’s central pressure published in an anonymously authored report several months after the cyclone made landfall. That central pressure – 27 inches of mercury (914 mb) – was simply too high to produce a 42-foot storm surge.
By combing through the historical record, though, the authors of the May 2014 study found several references to the storm’s central pressure as an astonishing 26 inHg (880 mb). All of those accounts ultimately were traceable to the same man – a ship captain whose schooner was the only vessel to experience – and survive – the eye of the cyclone. That captain, William Field Porter, also happened to write a letter to his parents recounting his harrowing experience. In it, he stated plainly that “the barometer was down to 26 [inches of mercury].”
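The millibar equivalents quoted alongside these barometer readings follow from the standard conversion of 1 inHg ≈ 33.8639 hPa (millibars), which is easy to verify:

```python
# Convert the two historical barometer readings from inches of
# mercury to hectopascals (millibars). 1 inHg ≈ 33.8639 hPa.
INHG_TO_HPA = 33.8639

for inhg in (27, 26):
    print(f"{inhg} inHg ≈ {inhg * INHG_TO_HPA:.0f} mb")
```

The difference between the two readings – roughly 34 mb – is enormous in tropical cyclone terms, which is why the choice of source account matters so much to the surge modeling.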
So it would seem that the central pressure that had been used in previous modeling studies – studies that had failed to reproduce anything nearing a 13 m storm surge – had been too high. Perhaps the lower pressure of 26 inHg would create the reported record storm surge?
Before jumping straight into the modeling, though, the authors also re-examined the physical evidence – debris that was washed up and deposited by the storm. They found wave-deposited sandy sediments 6.6 meters above mean sea level at Ninian Bay, the location where law enforcement officers reported the 13 m storm surge, but they found no evidence of inundation above that.
This doesn’t necessarily rule out the 13 m water level, however. During tropical cyclones, the highest debris tends to consist of biological material that floats, like leaves, sea grasses, and small marine animals. This material also tends to biodegrade after a few years or decades, leaving no trace for forensic meteorologists peering 115 years into the past. Observations after more recent tropical cyclones suggest that sandy sediments can be deposited at only half the height of maximum inundation, so it is quite possible that the water did reach a height of 13 m above mean sea level at Ninian Bay, where those sandy deposits were found at 6.6 m.
Once the authors had concluded that there was a decent probability that the storm’s central pressure really was 26 inHg and that the waves at Ninian Bay really did reach 13 m above mean sea level, they set about to model the storm surge based on a range of storm forward speeds and storm tracks suggested by ships’ wind and pressure recordings as well as damage assessments after the storm passed.
What they discovered was that even in a worst-case scenario (Mahina approached Bathurst Bay from the northeast with a central pressure of 26 inHg), the storm surge would “only” be about 9 m.
And here is where the devil is in the details. You see, there’s a difference between storm surge and maximum inundation. Storm surge is the abnormal rise of water generated by a tropical cyclone, over and above the natural (astronomical) tides. Storm surge is influenced by the size of the storm, wind speeds within the storm, the forward speed of the storm as it approaches land, the angle of approach to the coast, the topography of the sea floor and coast, and the storm’s central pressure.
Maximum inundation, on the other hand, is just what it sounds like – the maximum height that water reaches above mean sea level. Maximum inundation is influenced by the height of the storm surge, the timing of astronomical tides, and various types of wave action like wave setup and wave run-up. It’s the high water mark.
And the high water mark is almost always higher than the storm surge. In fact, in severe tropical cyclones in northeastern Australia, wave and tidal effects have added approximately 25% to the height of maximum inundation.
What this means for Tropical Cyclone Mahina is that the 1899 accounts of a monster cyclone that brought the sea to the top of a 40 ft high cliff may in fact have been accurate. If the central pressure was actually 26 inHg and the storm approached from the northeast, it could have generated a storm surge of up to 9 meters (30 feet). Mahina struck during astronomical high tide, and that combined with wave setup and run-up could have added an additional 4 m (12 feet) of water on top of the storm surge.
So, the record for highest storm surge may have to be revised downward (Mahina’s storm surge was probably 9 meters or less), but its maximum inundation may still take the gold. It is entirely possible that on March 5th, 1899, men waded through seawater atop a 40 ft high cliff and dolphins swam through the tops of 50 ft tall trees.
Blue Skies Meteorological Services offers forensic meteorological analyses of a wide range of weather events — from hail storms to lightning strikes, from flooding to tornadoes, from fog to sun glare. We typically don’t look 115 years into the past, but we’re always up for unique and interesting challenges! Give us a call or send us an email to discuss your weather-impacted legal case, insurance claim, or investigation.
Having grown up in Oklahoma, in the heart of Tornado Alley where violent twisters are an annual part of the springtime scenery, even I was initially a bit surprised by a new report out of the Southeast Regional Climate Center (SRCC) at the University of North Carolina (UNC). According to research by Charles Konrad II and his team, the state in which tornadoes kill the most people per mile tracked on the ground is not Oklahoma or Kansas, not Texas or Arkansas or Mississippi – it’s Florida.
Now, Florida is no stranger to tornadoes. In fact, per square mile, Florida has more tornadoes than any other state in the country. But they’re usually not violent tornadoes – not like the EF5 monsters that ripped through Joplin, MO, in 2011 and through Moore, OK, in 1999 and 2013.
The vast majority of violent tornadoes are spawned by long-lived supercell thunderstorms, and weather patterns in Florida just don’t support those sorts of storms. Instead, Florida typically experiences weaker tornadoes, often spun up by interactions with the Gulf Coast and Atlantic sea breezes or by tropical cyclones. These tornadoes can cause substantial damage (e.g. roofs and siding removed, trees uprooted, cars flipped), but it’s not the sort of damage that one usually thinks of as causing widespread loss of life.
And therein lies the initial – but not necessarily warranted – surprise. When we think about risk, we tend to oversimplify the equation. We tend to assume that exposure = risk. We figure that the bigger, badder, and more frequent the hazard, the more people are likely to be harmed by it. By that reasoning, the southern Plains and the Deep South should have the deadliest tornadoes. Those are, after all, the regions of the country that experience the highest frequency of strong tornadoes. In other words, that’s where the greatest exposure per square mile is.
But that’s not where the highest density of tornado-related deaths occurs. According to Konrad and his team, that dubious honor – greatest number of deaths per mile along the track of a tornado – goes to Florida.
To understand why, we have to look at the real risk equation.
Risk = Exposure x Vulnerability
Exposure per square mile is only part of the story. Sure, you have to have tornadoes on the ground for people to be killed by them – but you also have to have people in the path of the tornado who lack the appropriate resources to protect themselves.
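As a cartoon of that equation (the numbers here are purely hypothetical, chosen only to make the point), two regions with identical exposure can carry very different risk:

```python
def risk(exposure, vulnerability):
    """Risk as the product of exposure and vulnerability.
    Units are arbitrary; the point is the multiplication."""
    return exposure * vulnerability

# Same hazard frequency, very different human landscapes
# (illustrative values, not real statistics):
plains_state = risk(exposure=1.0, vulnerability=1.0)   # sparse, sheltered
coastal_state = risk(exposure=1.0, vulnerability=5.0)  # dense, more vulnerable

print(coastal_state / plains_state)  # equal exposure, five times the risk
```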
To understand why Florida’s risk for tornado deaths is so high, we can compare it to another state with almost exactly the same average number of tornadoes per square mile: Kansas.
According to the SRCC study, the number of deaths per mile along tornado tracks is nearly five times higher in Florida than in Kansas. Yet, while Florida and Kansas experience almost the same number of total tornadoes per square mile, tornadoes in Kansas are, on average, stronger than in Florida.
So, why isn’t Kansas at the top of the list? The answer has to do with population density and population vulnerability.
The number of people in the path of a tornado is maximized when tornadoes form and track over populated areas. In Florida, tornadoes tend to cluster along the populous Atlantic coast and along the stretch of Interstate 4 from Tampa to Orlando.
The population density in these regions ranges from about 300 – 1000+ people per square mile. By contrast, only one county in Kansas has a population density above 1000 people per square mile, and the vast majority of the state has a population density below 50 people per square mile. In fact, the average population density of Florida is more than ten times greater than that of Kansas.
So, when a tornado touches down in Florida, it’s much more likely to encounter people along its path.
There are also a number of demographic factors that make Floridians more vulnerable to tornadoes than Kansans.
This study out of UNC reminds us that risk assessment often has more to do with human systems and the built environment than with the natural hazards themselves. Risk exists in that intersection of exposure and vulnerability – exposure is largely a matter of where we live, while vulnerability is largely a matter of how we live. Effective risk mitigation requires understanding and addressing both.
Blue Skies Meteorological Services can help businesses identify their exposure and vulnerability to weather and climate impacts so that risks can be effectively targeted and reduced while resiliency is simultaneously built into operations.
3150 days, give or take a few. That’s how long it’s been since a major hurricane, defined as a Category 3 or higher storm, has made landfall in the U.S. The previous record was about 2250 days, almost 2.5 years shorter. Although the time between major hurricane landfalls has varied significantly since 1900, it’s been about 500 days (or every 1-2 years) on average.
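That ~3150-day figure is easy to reproduce. The streak is conventionally counted from Hurricane Wilma’s landfall on October 24, 2005 – note that the storm name and both dates below are supplied here for illustration, not taken from the count in the text:

```python
from datetime import date

last_major_landfall = date(2005, 10, 24)  # Hurricane Wilma
as_of = date(2014, 6, 8)                  # roughly when this post was written

print((as_of - last_major_landfall).days, "days")  # 3149 -- "give or take a few"
```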
So, are we really due?
It’s hard not to think so. After all, if we flip a coin 8 times (for the last 8 years in which the US escaped a strike by a major hurricane) and all 8 come up heads, we start thinking, “It’s bound to come up tails next time.” But we’d be wrong (well, unless the coin was rigged). Statistics just don’t work that way. Each coin flip has the exact same 50/50 probability of heads/tails, regardless of the pattern of results that came before. So the fact that we did not suffer a major hurricane landfall last year or the year before does not in any way influence the probability of a landfall this year.
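If you’re skeptical, the independence of successive flips is easy to check with a quick simulation (a sketch in Python; the trial count and seed are arbitrary):

```python
import random

random.seed(42)  # reproducible run

# Among 9-flip sequences whose first 8 flips were all heads,
# what fraction of the ninth flips also come up heads?
streaks = 0
heads_after_streak = 0

for _ in range(1_000_000):
    flips = [random.random() < 0.5 for _ in range(9)]
    if all(flips[:8]):
        streaks += 1
        heads_after_streak += flips[8]

# The estimate hovers around 0.5: the streak tells us nothing
# about the next flip.
print(f"P(heads | 8 heads in a row) ≈ {heads_after_streak / streaks:.3f}")
```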
What does influence the probability of a landfall is the number of storms that form and the atmospheric steering flow that guides those storms toward or away from the US coastline. This year, an El Niño pattern is expected to form during the summer or early fall, bringing warmer waters to the eastern equatorial Pacific Ocean, and, among other things, stronger vertical wind shear, stronger trade winds, and greater atmospheric stability to the Caribbean and tropical Atlantic Ocean. Strong vertical wind shear inhibits tropical cyclone development, as it tends to rip nascent storms apart before they have the opportunity to organize and develop, and enhanced atmospheric stability does just what it sounds like – stabilizes the atmosphere and hinders storm formation. For these reasons, moderate to strong El Niño years are often associated with below-average hurricane activity in the Atlantic basin.
In addition to the predicted development of El Niño later this year, Atlantic sea surface temperatures (SSTs) in the main tropical cyclone development region are expected to remain slightly below average throughout the June 1 – November 30 hurricane season. Tropical storm systems draw their energy from the warm waters over which they develop – cooler water means less energy and, generally, fewer and less intense storms.
These two major factors – the expected development of El Niño and cooler sea surface temperatures in the tropical Atlantic – have led most hurricane forecasters, including NOAA, to predict an average to below-average hurricane season for 2014.
Given the lack of a major hurricane landfall in the US during the last 8 years and the below-average 2014 Atlantic hurricane season forecast, the most dangerous part of this year’s hurricane season may be complacency. We would do well to remember that it only takes one storm to create devastation (like Hurricane Andrew in 1992, which struck during an otherwise quiet season) and that even non-major hurricanes can bring widespread destruction (Hurricane Ike in 2008 and Sandy in 2012 come immediately to mind).
Both Ike and Sandy go to show that the sustained wind speed of a tropical cyclone (and therefore its Saffir-Simpson category) does not solely determine its destructive potential. The physical size of the storm is also a critical determinant of its storm surge, and water kills far more people and destroys far more property than wind.
The danger that storm surge poses to life and property is often poorly understood outside of the meteorological community (despite the well-publicized tragedy and horror brought by Hurricane Katrina’s storm surge in 2005). To address this common knowledge gap, the National Weather Service will begin issuing experimental Potential Storm Surge Flooding Maps for the U.S. East Coast and Gulf Coast during the 2014 hurricane season. For each hurricane that is forecast to make landfall, the storm surge maps will show the geographical areas where storm surge could occur as well as how high above ground level the water could reach in those areas. The maps are intended to provide a reasonable estimate of the worst case scenario for flooding in those areas that could be impacted by an approaching storm.
Last week’s National Hurricane Preparedness Week highlighted the many dangers associated with tropical cyclones, including storm surge, inland flooding, and wind. These hazards, especially inland flooding, wind, and severe thunderstorms, can affect locations hundreds of miles from the coast, so hurricane preparedness isn’t just for those folks lucky enough to live where most of us only vacation. Chances are, even if you don’t live near the coast, you have friends or family who do. Both the National Hurricane Center and Ready.gov offer excellent resources related to understanding and preparing for tropical cyclones.
Many coastal states, including Florida, Louisiana, and Virginia, offer sales tax holidays when you purchase hurricane preparedness supplies at the start of the hurricane season. For those of you in Florida, like us here at Blue Skies, that sales tax holiday runs through this upcoming Sunday, June 8.
We’re hoping for a season as quiet as the forecast, but even so, we’re gathering our storm supplies and reviewing our plan. We hope you’re doing the same!
All good things must come to an end. After almost four months of relatively quiescent weather, the 2014 tornado season kicked off quickly and tragically over the weekend.
On Friday evening, the year’s first intense tornado (defined as an EF3 or stronger) touched down in Chowan County, North Carolina, killing an 11-month-old child who was trapped beneath the debris of his home. That storm brought to an end two record-breaking streaks of benign weather, marking both the latest calendar date for a year’s first EF3 tornado and the latest calendar date for a year’s first tornado death.
Only two days later, on Sunday, April 27th, an outbreak of severe storms spawned multiple tornadoes that killed 16 people in Oklahoma and Arkansas. The most substantial damage occurred in central Arkansas, where an 80-mile-long path of destruction swept through northern Little Rock, leaving damage reportedly indicative of an EF3 or stronger tornado. The same slow-moving severe weather system hammered Mississippi, Alabama, and Tennessee on Monday and is expected to continue bringing dangerous weather, including the possibility of strong tornadoes, to the southeastern US through at least Wednesday.
Although intense tornadoes are relatively rare, accounting for approximately 5% of all tornadoes nationally, they are responsible for a disproportionate 75% of all tornado fatalities (statistics for North Carolina). While each tornado fatality is tragic, tornado deaths have generally been declining in the US since the 1920s, with an average of 80 people killed each year by tornado activity.
Although the majority of tornado damage and fatalities are attributable to rare intense tornadoes, even much more common weak tornadoes and severe straight-line winds can cause substantial damage to property, felling trees, removing shingles and siding from homes, and flinging debris into structures and vehicles. Most homeowners insurance covers storm damage, including damage caused by wind, hail, lightning, debris, and falling trees. One notable coverage exception found in almost all insurance policies, however, is storm-induced flooding, including street flooding, storm surge, and areal flooding due to rising rivers, streams, and creeks. For such coverage, a separate flood insurance policy is required.
In some cases, though, street flooding is caused not by an exceptional storm (i.e. an “act of God”) but rather by an insufficient storm water drainage system. In such instances, liability for damages may rest with the planning or maintenance authority responsible for the storm water system, rather than with the homeowner.
If a neighborhood or section of a neighborhood regularly floods, even during normal, everyday storms, the drainage system may be deficient. A forensic meteorological analysis (like this one from BSMS) of known storm events that led to street flooding, considered in the context of the local rainfall climatology, can reveal whether the drainage system was adequately designed and maintained to handle foreseeable events.
In addition to flood damage, homeowners insurance will not cover damage caused by a lack of proper maintenance. Occasionally, negligence may be suspected as a contributing factor to storm damage, leading to a denial of claim, even when it is not immediately clear whether damage would have still occurred with proper maintenance.
For instance, if a tree falls during a storm and is later found to be rotten, the insurer may deny the homeowner’s claim, insisting instead that negligence on the owner’s part (failing to remove a rotten tree) caused the tree to fall, rather than the storm. Such insurance disputes can lead to nasty legal battles. Investigation as to whether the homeowner knew or suspected that the tree was rotten (i.e. whether he or she was on notice), examination of other damage throughout the area (did healthy trees of a similar size fall nearby during the same storm?), as well as a forensic meteorological analysis (were wind speeds with the storm sufficient to fell a healthy tree of that size? did heavy rainfall and saturated soils reduce the root stability of the tree?) can greatly assist in determining the ultimate cause of the damage and thereby assist in settling such disputes.
So, the bottom line is this: storm season is here, and after a late start, it appears to be making up for lost time (at least at the moment). Extreme weather, which includes severe local storms as well as tropical cyclones, droughts, heat waves, areal flooding, wildfires, and winter storms, causes tens to hundreds of billions of dollars in damage annually in the US.
Severe local storms are, on average, responsible for more than 10% of all damages, with tropical cyclones and droughts/heat waves responsible for nearly 50% and 25%, respectively. While severe local storms are not responsible for the largest percentage of damage costs, they do represent the most common/frequent type of extreme weather experienced in the United States, and almost everyone, at some point, will experience storm damage. Make sure you understand your property insurance policy, including any exceptions, and take care of any nagging maintenance issues (like rotten trees or loose roof shingles) that could jeopardize a storm-related insurance claim.
In the event that you do find yourself in a weather-related insurance or legal dispute, whether as the insured or the insurer, the plaintiff or the defendant, do not hesitate to contact Blue Skies Meteorological Services. We will gladly provide a complimentary consultation to discuss how a forensic meteorological analysis could determine the role that the weather played in your case and how such an analysis could facilitate an advantageous resolution of the dispute.
Next time: Weather-impacted automobile accidents