What does a birthday mean? A major birthday – the type that warrants a card declaring your exact new age, possibly by spelling it out in macabre black balloons – what does it mean? Why do we care?
It’s not like you wake up on the morning of your birthday feeling dramatically older than when you went to bed. A decade’s worth of wrinkles don’t suddenly appear on your face. Yet you are older, and on your birthday, you are acutely aware of that fact.
A major birthday reminds you that life is short and you don’t have forever to act. It reminds you of all you’ve done and all you have left to do. Then it starts playing the Final Jeopardy countdown music in your ear. Time is ticking. Better get busy.
Reaching a global average carbon dioxide (CO2) concentration of 400 ppm is that type of milestone, and we passed it in March. To put 400 ppm in perspective, consider that maximum pre-industrial CO2 levels were 280 ppm and that 350 ppm is widely considered the upper limit to avoid truly dramatic climate change. Consider that CO2 levels haven’t been as high as 400 ppm in several million years, when the world was much hotter and the oceans much higher than they are today.
Yet, besides the climate scientists who marked the passing of 400 ppm with a mixture of dismay, anger, and sad resignation, few others seem to have noticed (well, besides the United States military, which considers climate change a national security risk, and key business and insurance leaders, who are already taking action to adapt). Nationally and internationally, we’re certainly not getting busy.
It’s as if we believe that if we don’t acknowledge what’s happening, it won’t happen. As if staying in bed with your eyes closed on your birthday somehow stays the hands of time.
But time doesn’t stand still just because we avoid clocks and mirrors – just as CO2 concentrations continue to increase whether we acknowledge it broadly and publicly or not. Of course, the critical difference between the inexorable forward march of time and the increasing concentration of greenhouse gases in Earth’s atmosphere is that we can actually do something about greenhouse gas concentrations.
We very likely can’t undo what we’ve already done (the technology just doesn’t exist to capture and indefinitely store vast quantities of atmospheric CO2). But we can slow down and eventually stop emitting new greenhouse gases, if only we muster the foresight to recognize and the willpower to address a large, costly, complex, global problem that will only get larger, more costly, and more complex with each year we procrastinate.
Failing to even acknowledge the passage of the 400 ppm milestone doesn’t bode well, though.
So what does 400 ppm mean? What is this new world we’ve created for ourselves and our progeny?
Well, for one thing, 400 ppm means we’ve committed to major climate change – to what we’re already experiencing and more. The average residence time of carbon dioxide in Earth’s atmosphere is hundreds to thousands of years, so even if we stopped emitting CO2 tomorrow, our climate would continue to warm toward a 400 ppm equilibrium.
Of course, we can’t put the brakes on instantaneously. If you’re traveling 100 mph down the highway and slam on the brakes, you keep traveling forward as you slow to a stop. A shift to renewable energy and carbon-neutral fuels, like stopping a speeding car, takes time, and the concentration of CO2 in the atmosphere will continue to increase during that shift.
Right now, though, we’re mashing on the accelerator rather than the brakes. With the exception of 1990-2000, each decade has seen an increase in the rate of CO2 emissions. Not only are we continuing to emit carbon dioxide – we’re emitting it faster and faster each year. If we continue along our current trajectory, we’re on pace for more than 3°C of warming, and that’s just the increase in average temperature. Extremes in both temperature and precipitation tend to increase more dramatically than their respective averages.
Such climatic changes would decrease crop yields and alter agricultural zones, decrease water availability while simultaneously increasing demand, inundate coastal areas with rising seas, extend the season and range of numerous pests and insect-borne diseases, increase heat stress and heat-related illness, and increase the frequency and intensity of flooding rainfall, among many other impacts.
400 ppm means that aspects of our environment that have been our touchstones for thousands of years – food and water availability, weather and climate – will shift in unprecedented ways. The ideal locations for cities, farmland, roads, factories, homes, and military assets will change. Processes and procedures that have long been reliable will become uncertain.
In short: the assumptions upon which we have built our societies may cease to be valid.
Although some progress toward mitigation (emissions reduction) and adaptation has been made on the local level both domestically and internationally, the sort of global-scale agreement and action required to alter our current emissions trajectory remains elusive. Emissions will therefore continue to rise, and the climate will continue to shift. Governments, industries, and individuals will be increasingly impacted by a variable and changing climate, and given the lack of coordinated effort to date, the unfortunate reality is that we must prepare to protect our own interests, assets, and welfare.
Businesses and insurers looking to take the long view of their investments, infrastructure, supply chains, and insured properties need to be aware of climatic changes that impact vulnerability. Blue Skies Meteorological Services is here to help these clients understand and mitigate their climate-related risk and exposure. Contact us at email@example.com for more information.
Weather radar works by emitting microwave radiation into the sky and then listening for the signal that’s reflected back. It’s a meteorological game of Marco Polo.
All sorts of targets reflect the microwaves – raindrops, snowflakes, hailstones, bats, airplanes, and even swarms of insects. How well a given target reflects microwaves depends on its composition, size, and shape. For instance, liquid water is a better reflector of radar energy than ice.
When a meteorologist looks at a radar display, she’s seeing the reflected signal from all those targets in a given slice of sky. The radar doesn’t “know” which piece of reflected energy came from a bird and which piece came from the hailstone that moments later cracked your car windshield. The radar simply aggregates the reflected signal. It’s up to the meteorologist to interpret the results.
Until just a few years ago, the National Weather Service’s network of weather radars collected information about only two quantities: the reflected energy from a given section of sky (reflectivity) and the velocity of the targets within that section (mean radial velocity and spectrum width). In complex meteorological situations like winter weather events or severe storms, these two pieces of information provide only an incomplete picture of the type of precipitation that’s falling. When you’re just looking at reflectivity and velocity data, for instance, it can be difficult to tell the difference between hail and heavy rain. Yet on the ground, knowing the difference can be critical.
Enter dual-polarization radar technology. If you’ve ever owned polarized sunglasses, you’re already familiar with the principle of polarization. The short-n-sweet version is that electromagnetic waves (like radio waves emitted by radar or visible light waves emitted by the sun) can be oriented along a certain axis.
Tilt your head from side to side while wearing polarized sunglasses, and you’ll notice that the image you see changes – the color of the sky darkens and lightens, glare off the pavement appears and disappears. As you tilt your head, you’re actually changing the polarization of the light that’s being let through your sunglasses, and that gives you additional information about the world around you.
The same is true with weather radar. Conventional radar sends out radio pulses polarized only in the horizontal direction, so the reflected signal carries only one-dimensional information. Dual-polarization (or “dual-pol”) radar, on the other hand, sends out both horizontally polarized pulses and vertically polarized pulses, so the reflected signal carries two-dimensional data.
This may seem rather trivial until you consider that precipitation types have characteristic shapes. Small raindrops are spherical, while big raindrops flatten out like a Frisbee. Hailstones are roughly spherical when they’re dry but can become oblong as their outer layers melt. The two-dimensional data provides invaluable insight into what types of precipitation are present within a storm.
Here in Florida, we don’t have to worry too much about winter weather, but hail is another matter. In the lightning capital of the United States, thunderstorms are part of the scenery for much of the year, and most thunderstorms that are strong enough and reach high enough into the atmosphere produce hail.
But that hail doesn’t always reach the ground. In warm, moist atmospheres, hail melts as it falls toward the ground. If the hail starts out small or if the freezing level is high in the atmosphere, hail can melt completely before reaching the ground. Dual-pol radar data can reveal whether a storm is producing hail aloft, and by examining radar data at different heights within the storm, meteorologists can determine whether and how much that hail is melting before it reaches the surface (and people’s cars and houses).
Dual-pol radar adds three more tools to the meteorologist’s kit. Each of these tools provides unique information about the size, shape, and mixture of precipitation types within a storm.
Correlation Coefficient (CC)
Correlation coefficient measures how similarly the returned horizontal and vertical pulses are behaving. It’s like looking at the world under a strobe light. From one flash to the next, how much does the image change? When the targets within a given region are of the same shape and type (for example, all medium-sized raindrops), one pulse will look much like the next, and the correlation coefficient will be high. If, on the other hand, precipitation types are mixed (like rain and hail swirling together), correlation coefficient values will be lower. Generally, the larger the hail, the lower the correlation coefficient.
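If it helps to see the idea in code, here’s a minimal sketch using simulated numbers (a toy illustration, not actual radar signal processing): the correlation coefficient is essentially the normalized pulse-to-pulse correlation between the complex horizontal and vertical returns.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 256  # pulse samples for a single radar gate

def correlation_coefficient(s_h, s_v):
    """Co-polar correlation coefficient from complex H and V samples."""
    num = np.abs(np.mean(s_h * np.conj(s_v)))
    den = np.sqrt(np.mean(np.abs(s_h) ** 2) * np.mean(np.abs(s_v) ** 2))
    return num / den

def noise(scale=1.0):
    return scale * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Uniform rain: H and V returns fluctuate together from pulse to pulse.
common = noise()
cc_rain = correlation_coefficient(common, 0.95 * common + noise(0.05))

# Rain mixed with tumbling hail: decorrelated signal drags CC down.
cc_mixed = correlation_coefficient(common + noise(0.4), 0.9 * common + noise(0.4))

print(f"CC, uniform rain: {cc_rain:.3f}")   # very close to 1
print(f"CC, rain + hail:  {cc_mixed:.3f}")  # noticeably lower
```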
Differential Reflectivity (ZDR)
Differential reflectivity compares the reflectivity values returned in the horizontal and vertical directions, like comparing how much the image through your polarized sunglasses changes as you tilt your head. Targets that are wider than they are tall (like large raindrops) have higher differential reflectivity – they reflect more horizontally polarized energy than vertically polarized energy. Hailstones, on the other hand, are more spherical and tend to tumble as they fall, reflecting roughly equal amounts of horizontally and vertically polarized energy. Hail typically has low to near-zero ZDR values.
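The underlying formula is simple: ZDR, in decibels, is ten times the log of the ratio of horizontal to vertical reflectivity. A quick sketch with made-up reflectivity values (illustrative only, not real observations):

```python
import math

def zdr_db(z_h, z_v):
    """Differential reflectivity in dB from linear H and V reflectivity factors."""
    return 10.0 * math.log10(z_h / z_v)

# Made-up linear reflectivity values for illustration:
print(f"Large, oblate raindrops: {zdr_db(2000.0, 1000.0):+.1f} dB")  # about +3 dB
print(f"Tumbling, dry hail:      {zdr_db(5000.0, 4900.0):+.1f} dB")  # near 0 dB
```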
Specific Differential Phase (KDP)
Specific differential phase is a bit more complicated than correlation coefficient and differential reflectivity. Physically, KDP measures how the phase difference between the returned horizontal and vertical signals changes with distance from the radar. In practice, this means that specific differential phase responds to both the shape and the density of liquid water targets. Frozen precipitation, like dry hail and snow, does not contribute to KDP – KDP “ignores” frozen precipitation and sees only liquid precipitation. Specific differential phase is therefore useful for determining rainfall rate.
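Concretely, KDP is conventionally defined as half the range derivative of the differential phase (Phi_DP). Here’s a minimal sketch with an invented range profile (the numbers are illustrative, not real data):

```python
import numpy as np

# Toy range profile: gates every 0.25 km, with differential phase (Phi_DP,
# in degrees) accumulating once the beam enters a band of heavy rain.
range_km = np.arange(0.0, 10.0, 0.25)
phi_dp = np.where(range_km < 4.0, 10.0, 10.0 + 6.0 * (range_km - 4.0))

# KDP = half the range derivative of Phi_DP, in degrees per kilometer.
kdp = 0.5 * np.gradient(phi_dp, range_km)

print(f"KDP before the rain band: {kdp[4]:.1f} deg/km")   # ~0: no liquid water
print(f"KDP inside the rain band: {kdp[24]:.1f} deg/km")  # ~3: heavy rain
```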
As part of the dual-polarization upgrade, National Weather Service weather radars now incorporate an algorithm that estimates precipitation type from the dual-pol variables discussed above. Numerous automated hail report websites use the National Weather Service algorithm or a custom one to identify regions of hail. While such algorithms provide a useful first pass at identifying regions within a storm where hail is likely being produced aloft, they do not provide information about whether that hail is reaching the ground or at what size.
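To give a flavor of how the dual-pol variables combine – and only a flavor, since the operational NWS classifier is a fuzzy-logic algorithm built on many more inputs – here is a deliberately crude, rule-of-thumb hail check. The threshold values are illustrative assumptions, not operational settings:

```python
def looks_like_hail(reflectivity_dbz, zdr_db, cc):
    """Crude illustrative hail heuristic -- NOT the operational NWS algorithm.

    Strong echoes with near-zero ZDR (spherical, tumbling targets) and
    reduced CC (mixed precipitation types) are a classic hail signature.
    """
    return reflectivity_dbz >= 55.0 and abs(zdr_db) <= 0.5 and cc <= 0.95

print(looks_like_hail(60.0, 0.2, 0.92))  # True: likely hail mixed with rain
print(looks_like_hail(60.0, 3.0, 0.99))  # False: likely heavy rain (big oblate drops)
```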
When Blue Skies Meteorological Services investigates the presence of hail for a forensic meteorology case, we don’t just run an algorithm and depend on the radar to “know” what was happening in the storm and to assume what was happening on the ground. We examine official storm reports, severe weather warnings and advisories, the atmospheric profile, and dual-polarization radar data at multiple heights and throughout the lifetime of the storm to reconstruct a comprehensive picture of the weather situation – both high in the storm and on the ground, where it matters.
Typically, forensic meteorology is applied to weather events a few years old at most – Did damaging hail really strike that commercial facility in April of last year? Was it a tornado or a microburst that ripped off roofs and uprooted trees last week? Did a lightning strike start that house fire a few months back?
Occasionally, though, forensic meteorologists look decades or even centuries into the past. Such has been the case with Tropical Cyclone (TC) Mahina, which struck Bathurst Bay, Australia, on 5 March 1899 as a Category 5 storm. Ever since a 1958 research paper by H.E. Whittingham, “The Bathurst Bay Hurricane and associated storm surge,” reported that TC Mahina produced a storm surge of 13 meters (over 42 feet), that storm has generally been credited with the largest storm surge ever recorded.
Contemporary accounts of the storm reported a waist-deep wall of water inundating a 40 ft tall ridge where several law enforcement officers were camped during the storm, dolphins being found stranded atop 15 m high cliffs after the storm had passed, and fragments of Aboriginal canoes being deposited 70 to 80 feet above the normal high tide.
TC Mahina was undoubtedly a monster storm. With sustained winds of over 175 mph, it sank 54 ships (mostly pearling vessels) and killed more than 300 people, sweeping devastation across the Bathurst Bay region of northeastern Australia.
Yet despite its impressive statistics and a number of contemporary (albeit generally third-person) accounts of storm-related inundation, meteorologists have long regarded the 13 m storm surge record skeptically. It just didn’t seem possible. The commonly reported central pressure of 27 inches of mercury (914 mb), while extremely low, just doesn’t support a storm surge as high as a four-story building.
Previous studies that used computer models to estimate storm surge given the most likely track and intensity of the cyclone over this topographically complex region had come up well short of 13 m, and field work in the region had not found evidence of debris deposits to the reported 13 m height.
Something was amiss – either the central pressure was lower than 27 inHg or the storm surge wasn’t actually 13 m high. Possibly both.
Despite the contradictory evidence, little research had been done to set the record straight, until recently. In the May 2014 issue of the Bulletin of the American Meteorological Society (BAMS), several Australian scientists revealed the results of their forensic analysis of TC Mahina’s storm surge.
In that analysis, they utilized methods that forensic meteorologists often use to evaluate much more recent weather events: they investigated historical records, examined the physical evidence, and modeled the event. What they learned is that, as is often the case, the devil is in the details.
Previous modeling studies had relied upon a thirdhand account of Mahina’s central pressure published in an anonymously authored report several months after the cyclone made landfall. That central pressure – 27 inches of mercury (914 mb) – was simply too high to produce a 42 foot storm surge.
By combing through the historical record, though, the authors of the May 2014 study found several references to the storm’s central pressure as an astonishing 26 inHg (880 mb). All of those accounts ultimately were traceable to the same man – a ship captain whose schooner was the only vessel to experience – and survive – the eye of the cyclone. That captain, William Field Porter, also happened to write a letter to his parents recounting his harrowing experience. In it, he stated plainly that “the barometer was down to 26 [inches of mercury].”
So it would seem that the central pressure that had been used in previous modeling studies – studies that had failed to reproduce anything nearing a 13 m storm surge – had been too high. Perhaps the lower pressure of 26 inHg would create the reported record storm surge?
Before jumping straight into the modeling, though, the authors also re-examined the physical evidence – debris that was washed up and deposited by the storm. They found wave-deposited sandy sediments 6.6 meters above mean sea level at Ninian Bay, the location where law enforcement officers reported the 13 m storm surge, but they found no evidence of inundation above that.
This doesn’t necessarily rule out the 13 m water level, however. During tropical cyclones, the highest debris tends to consist of biological material that floats, like leaves, sea grasses, and small marine animals. This material also tends to biodegrade after a few years or decades, leaving no trace for forensic meteorologists peering 115 years into the past. Observations after more recent tropical cyclones suggest that sandy sediments can be deposited at only half the height of maximum inundation, so it is quite possible that the water did reach a height of 13 m above mean sea level at Ninian Bay, where those sandy deposits were found at 6.6 m.
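The arithmetic behind that inference is worth making explicit. The factor of two is an assumption drawn from observations of more recent cyclones, as noted above:

```python
# Sandy sediments tend to be deposited at roughly half the height of
# maximum inundation (an assumption based on recent tropical cyclones).
sandy_deposit_height_m = 6.6  # found at Ninian Bay, above mean sea level
implied_inundation_m = 2.0 * sandy_deposit_height_m

print(f"Implied maximum inundation: {implied_inundation_m:.1f} m")  # ~13 m
```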
Once the authors had concluded that there was a decent probability that the storm’s central pressure really was 26 inHg and that the waves at Ninian Bay really did reach 13 m above mean sea level, they set about modeling the storm surge based on a range of storm forward speeds and storm tracks suggested by ships’ wind and pressure recordings as well as damage assessments after the storm passed.
What they discovered was that even in a worst-case scenario (Mahina approached Bathurst Bay from the northeast with a central pressure of 26 inHg), the storm surge would “only” be about 9 m.
And here is where the devil is in the details. You see, there’s a difference between storm surge and maximum inundation. Storm surge is the abnormal rise of water generated by a tropical cyclone, over and above the natural (astronomical) tides. Storm surge is influenced by the size of the storm, wind speeds within the storm, the forward speed of the storm as it approaches land, the angle of approach to the coast, the topography of the sea floor and coast, and the storm’s central pressure.
Maximum inundation, on the other hand, is just what it sounds like – the maximum height that water reaches above mean sea level. Maximum inundation is influenced by the height of the storm surge, the timing of astronomical tides, and various types of wave action like wave setup and wave run-up. It’s the high water mark.
And the high water mark is almost always higher than the storm surge. In fact, in severe tropical cyclones in northeastern Australia, wave and tidal effects have added approximately 25% to the height of maximum inundation.
What this means for Tropical Cyclone Mahina is that the 1899 accounts of a monster cyclone that brought the sea to the top of a 40 ft high cliff may in fact have been accurate. If the central pressure was actually 26 inHg and the storm approached from the northeast, it could have generated a storm surge of up to 9 meters (30 feet). Mahina struck during astronomical high tide, and that combined with wave setup and run-up could have added an additional 4 m (13 feet) of water on top of the storm surge.
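Putting the pieces together with the figures discussed above (back-of-the-envelope arithmetic, not a surge model):

```python
# Heights in meters above mean sea level, using the study's figures.
modeled_surge = 9.0    # worst case: 26 inHg central pressure, NE approach
tide_and_waves = 4.0   # astronomical high tide plus wave setup and run-up
max_inundation = modeled_surge + tide_and_waves

print(f"Plausible maximum inundation: {max_inundation:.0f} m "
      f"({max_inundation * 3.281:.0f} ft)")  # ~13 m, or ~43 ft
```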
So, the record for highest storm surge may have to be revised downward (Mahina’s storm surge was probably 9 meters or less), but its maximum inundation may still take the gold. It is entirely possible that on March 5th, 1899, men waded through seawater atop a 40 ft high cliff and dolphins swam through the tops of 50 ft tall trees.
Blue Skies Meteorological Services offers forensic meteorological analyses of a wide range of weather events — from hail storms to lightning strikes, from flooding to tornadoes, from fog to sun glare. We typically don’t look 115 years into the past, but we’re always up for unique and interesting challenges! Give us a call or send us an email to discuss your weather-impacted legal case, insurance claim, or investigation.
Having grown up in Oklahoma, in the heart of Tornado Alley, where violent twisters are just part of the springtime scenery, even I was initially a bit surprised by a new report out of the Southeast Regional Climate Center (SRCC) at the University of North Carolina (UNC). According to research by Charles Konrad II and his team, the state in which tornadoes kill the most people per mile tracked on the ground is not Oklahoma or Kansas, not Texas or Arkansas or Mississippi – it’s Florida.
Now, Florida is no stranger to tornadoes. In fact, per square mile, Florida has more tornadoes than any other state in the country. But they’re usually not violent tornadoes – not like the EF5 monsters that ripped through Joplin, MO, in 2011 and through Moore, OK, in 1999 and 2013.
The vast majority of violent tornadoes are spawned by long-lived supercell thunderstorms, and weather patterns in Florida just don’t support those sorts of storms. Instead, Florida typically experiences weaker tornadoes, often spun up by interactions with the Gulf Coast and Atlantic sea breezes or by tropical cyclones. These tornadoes can cause substantial damage (e.g. roofs and siding removed, trees uprooted, cars flipped), but it’s not the sort of damage that one usually thinks of as causing widespread loss of life.
And therein lies the initial – but not necessarily warranted – surprise. When we think about risk, we tend to oversimplify the equation. We tend to assume that exposure = risk. We figure that the bigger, badder, and more frequent the hazard, the more people are likely to be harmed by it. By that reasoning, the southern Plains and the Deep South should have the deadliest tornadoes. Those are, after all, the regions of the country that experience the highest frequency of strong tornadoes. In other words, that’s where the greatest exposure per square mile is.
But that’s not where the highest density of tornado-related deaths occurs. According to Konrad and his team, that dubious honor – greatest number of deaths per mile along the track of a tornado – goes to Florida.
To understand why, we have to look at the real risk equation.
Risk = Exposure x Vulnerability
Exposure per square mile is only part of the story. Sure, you have to have tornadoes on the ground for people to be killed by them – but you also have to have people in the path of the tornado who lack the appropriate resources to protect themselves.
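To illustrate the equation in code – with completely made-up index values, contrived so the ratio matches the study’s roughly fivefold difference, not data from the SRCC/UNC analysis:

```python
def risk(exposure, vulnerability):
    # Risk = Exposure x Vulnerability
    return exposure * vulnerability

# Hypothetical indices: identical tornado exposure, different vulnerability.
kansas_risk  = risk(exposure=1.0, vulnerability=0.2)
florida_risk = risk(exposure=1.0, vulnerability=1.0)

print(f"Florida-to-Kansas risk ratio: {florida_risk / kansas_risk:.0f}x")  # 5x
```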
To understand why Florida’s risk for tornado deaths is so high, we can compare it to another state with almost exactly the same average number of tornadoes per square mile: Kansas.
According to the SRCC study, the number of deaths per mile along tornado tracks is nearly five times higher in Florida than in Kansas. Yet, while Florida and Kansas experience almost the same number of total tornadoes per square mile, tornadoes in Kansas are, on average, stronger than in Florida.
So, why isn’t Kansas at the top of the list? The answer has to do with population density and population vulnerability.
The number of people in the path of the tornado is maximized when tornadoes form and track over populated areas. In Florida, tornadoes tend to cluster along the populous Atlantic coast and along the stretch of Interstate 4 from Tampa to Orlando.
The population density in these regions ranges from about 300 to more than 1,000 people per square mile. By contrast, only one county in Kansas has a population density above 1,000 people per square mile, and the vast majority of the state has a population density below 50 people per square mile. In fact, the average population density of Florida is more than ten times greater than that of Kansas.
So, when a tornado touches down in Florida, it’s much more likely to encounter people along its path.
There are also a number of demographic factors that make Floridians more vulnerable to tornadoes than Kansans.
This study out of UNC reminds us that risk assessment often has more to do with human systems and the built environment than with the natural hazards themselves. Risk exists in that intersection of exposure and vulnerability – exposure is largely a matter of where we live, while vulnerability is largely a matter of how we live. Effective risk mitigation requires understanding and addressing both.
Blue Skies Meteorological Services can help businesses identify their exposure and vulnerability to weather and climate impacts so that risks can be effectively targeted and reduced while resiliency is simultaneously built into operations.
3150 days, give or take a few. That’s how long it’s been since a major hurricane, defined as a Category 3 or higher storm, has made landfall in the U.S. The previous record was about 2250 days, almost 2.5 years shorter. Although the time between major hurricane landfalls has varied significantly since 1900, it’s been about 500 days (or every 1-2 years) on average.
So, are we really due?
It’s hard not to think so. After all, if we flip a coin 8 times (for the last 8 years in which the US escaped a strike by a major hurricane) and all 8 come up heads, we start thinking, “It’s bound to come up tails next time.” But we’d be wrong (well, unless the coin was rigged). Statistics just don’t work that way. Each coin flip has the exact same 50/50 probability of heads/tails, regardless of the pattern of results that came before. So the fact that we did not suffer a major hurricane landfall last year or the year before does not in any way influence the probability of a landfall this year.
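Don’t take my word for it – simulate it. The sketch below flips a fair virtual coin and checks what happens on the ninth flip after eight straight heads (a quick illustration of independence, nothing more):

```python
import random

random.seed(1)
trials = 1_000_000
streaks = heads_next = 0

for _ in range(trials):
    flips = [random.random() < 0.5 for _ in range(9)]
    if all(flips[:8]):          # first eight flips all came up heads...
        streaks += 1
        heads_next += flips[8]  # ...so what did the ninth do?

print(f"P(heads on flip 9 | 8 straight heads): {heads_next / streaks:.3f}")  # ~0.5
```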
What does influence the probability of a landfall is the number of storms that form and the atmospheric steering flow that guides those storms toward or away from the US coastline. This year, an El Niño pattern is expected to form during the summer or early fall, bringing warmer waters to the eastern equatorial Pacific Ocean and, among other things, stronger vertical wind shear, stronger trade winds, and greater atmospheric stability to the Caribbean and tropical Atlantic Ocean. Strong vertical wind shear inhibits tropical cyclone development, as it tends to rip nascent storms apart before they have the opportunity to organize and develop, and enhanced atmospheric stability does just what it sounds like – stabilizes the atmosphere and hinders storm formation. For these reasons, moderate to strong El Niño years are often associated with below-average hurricane activity in the Atlantic basin.
In addition to the predicted development of El Niño later this year, Atlantic sea surface temperatures (SSTs) in the main tropical cyclone development region are expected to remain slightly below average throughout the June 1 – November 30 hurricane season. Tropical storm systems draw their energy from the warm waters over which they develop – cooler water means less energy and, generally, fewer and less intense storms.
These two major factors – the expected development of El Niño and cooler sea surface temperatures in the tropical Atlantic – have led most hurricane forecasters, including NOAA, to predict an average to below-average hurricane season for 2014.
Given the lack of a major hurricane landfall in the US during the last 8 years and the below-average 2014 Atlantic hurricane season forecast, the most dangerous part of this year’s hurricane season may be complacency. We would do well to remember that it only takes one storm to create devastation (like Hurricane Andrew in 1992, which struck during an otherwise quiet season) and that even non-major hurricanes can bring widespread destruction (Hurricane Ike in 2008 and Sandy in 2012 come immediately to mind).
Both Ike and Sandy go to show that the sustained wind speed of a tropical cyclone (and therefore its Saffir-Simpson category) does not solely determine its destructive potential. The physical size of the storm is also a critical determinant of its storm surge, and water kills far more people and destroys far more property than wind.
The danger that storm surge poses to life and property is often poorly understood outside of the meteorological community (despite the well-publicized tragedy and horror brought by Hurricane Katrina’s storm surge in 2005). To address this common knowledge gap, the National Weather Service will begin issuing experimental Potential Storm Surge Flooding Maps for the U.S. East Coast and Gulf Coast during the 2014 hurricane season. For each hurricane that is forecast to make landfall, the storm surge maps will show the geographical areas where storm surge could occur as well as how high above ground level the water could reach in those areas. The maps are intended to provide a reasonable estimate of the worst case scenario for flooding in those areas that could be impacted by an approaching storm.
Last week’s National Hurricane Preparedness Week highlighted the many dangers associated with tropical cyclones, including storm surge, inland flooding, and wind. These hazards, especially inland flooding, wind, and severe thunderstorms, can affect locations hundreds of miles from the coast, so hurricane preparedness isn’t just for those folks lucky enough to live where most of us only vacation. Chances are, even if you don’t live near the coast, you have friends or family who do. Both the National Hurricane Center and Ready.gov offer excellent resources related to understanding and preparing for tropical cyclones.
Many coastal states, including Florida, Louisiana, and Virginia, offer sales tax holidays when you purchase hurricane preparedness supplies at the start of the hurricane season. For those of you in Florida, like us here at Blue Skies, that sales tax holiday runs through this upcoming Sunday, June 8.
We’re hoping for a season as quiet as the forecast, but even so, we’re gathering our storm supplies and reviewing our plan. We hope you’re doing the same!