Archive by Author | Carl Schreck

Featured Storm – Hurricane Cindy (2005)

Every hurricane season begins with an “A” storm and then moves through the alphabet. Even though the Atlantic Hurricane season is only a few weeks old, we’ve already seen Tropical Storms Andrea and Barry.

Cyclone Center is reliving the 2005 Hurricane Season. We started with Arlene and Bret, and now we’re working on Cindy. Eventually we’ll get to some infamous storms like Katrina, Rita, and Wilma before we wrap up with Zeta.

We didn’t pick these storms to bring back memories of these terrible disasters. They actually play a key role in our project. The Atlantic is the only part of the world with routine aircraft reconnaissance. Hurricane Hunters from the U.S. Air Force and NOAA fly specially designed airplanes into these storms to find their centers and measure their strengths. We’ll be using their observations to calibrate the data we collect from your classifications. 2005 was a record-breaking year with 31 tropical cyclones, so we have a wealth of data!

Courtesy of the National Hurricane Center.

Hurricane Cindy formed in the western Caribbean and moved northward across the Yucatan and the Gulf of Mexico on its way to Louisiana. Forecasters classified it as a tropical storm at the time, but it was later upgraded to a hurricane in the “best track” data. Cindy passed just to the east of New Orleans, causing minor flooding and widespread power outages. Little did anyone know that it was just a trial run for Katrina eight weeks later.

Hurricanes sometimes spawn tornadoes as they move inland. Cindy was associated with 33 tornadoes across the Southeastern United States. The strongest of these, an F2 on the Fujita scale, damaged the Atlanta Motor Speedway.

Courtesy of the National Weather Service, Peachtree, GA.

Log on to Cyclone Center today and classify Cindy.

Is it an Eye?

Identifying eye hurricanes is a main focus for the Cyclone Center team, but it is presenting some challenges for our citizen scientists. Storms with real eyes are being categorized very well, with 80% to 90% accuracy in storm type. However, storms that do not have eyes but look like they might are proving more difficult. Several factors, such as blurry images or centers covered by dark blue or white clouds, have been shown to cause these mistakes.

A bizarre and misleading example of a Cyclone Center image is the storm CYC1981. The small white circle at the very center of the storm, which at first glance looks like an eye, is actually the island of Niue in the South Pacific. The satellite image was taken as the hurricane passed over the island, and the white land boundary lines look like an eyewall.

Satellite image of CYC1981 (HURSAT file 1981047S18181.CYC1981.1981.02.17.0900.47.GOE-3.030.hursat-b1.v05).

Unfortunately, with the size of the images on Cyclone Center, it is hard to determine if this storm has an eye, and since having an island in the middle of a storm image is a rare phenomenon, it would be easy to assume that it did.  This image of CYC1981 even stumped most of our science team, so don’t feel bad if it tricked you too.

So how can we tell if a storm is or is not an eye storm?

Many storms look very similar in size and shape to eye hurricanes, but they lack an actual eye. There are a few things you can look for, however, to determine whether or not you are looking at an eye storm.

  1. Are the clouds surrounding the eye cold? You can tell this by the color of the clouds: shades of red, orange, and grey signify warmer clouds, while blue and white areas represent cold clouds.
  2. Is the eye itself warm? The eye should be made up of warm clouds, usually grey or pink in color. White and grey clouds are not one and the same; white clouds are very cold and grey clouds are very warm.

If we applied these steps to CYC1981, we would find that it does have cold clouds at the center, but there are no warm clouds where the eye should be, and the band of clouds around the storm is very weak.
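If you like to think about these checks step by step, here is a small sketch in Python. The temperature thresholds are illustrative assumptions only, loosely based on the color scale described in “Introduction to Cyclone Center Images” below; they are not the values Cyclone Center actually uses.

```python
def looks_like_eye_storm(eye_temp_c, ring_temp_c):
    """Rough, illustrative eye-storm check.

    eye_temp_c  -- cloud-top temperature at the candidate eye (deg C)
    ring_temp_c -- typical cloud-top temperature in the ring of clouds
                   surrounding the candidate eye (deg C)

    The -30 C threshold is an assumption for illustration: gray and pink
    shades (warm clouds) are warmer than about -30 C, while the "cold"
    colors (red through blue and white) are colder than that.
    """
    ring_is_cold = ring_temp_c < -30.0   # step 1: cold clouds surround the eye
    eye_is_warm = eye_temp_c > -30.0     # step 2: the eye itself is warm
    return ring_is_cold and eye_is_warm

# CYC1981: cold clouds at the center, but no warm spot where an eye would be
print(looks_like_eye_storm(eye_temp_c=-55.0, ring_temp_c=-60.0))  # False
```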

For more information, visit another recent post: How do I classify this? False eyes.

This post was contributed by Brady Blackburn, an intern with the Cyclone Center team from Asheville High School in Asheville, NC.

Tales from the road… AGU Fall Meeting

CycloneCenter.org was presented for the first time today at a major scientific meeting. This was the Fall Meeting for the American Geophysical Union (AGU), a gathering of over 20,000 scientists in San Francisco. The talk was in a session devoted to research by Citizen Scientists like you!

A lot of other scientists were excited about the potential Cyclone Center has. Together we’ll be able to answer some important questions about the climatology of tropical cyclones. And along the way, we get to interact with you! We also got some great suggestions on how we, the Science Team, can make it easier for you to interact with us. So keep a lookout for some big things we’ll be trying in the coming weeks!

Thank you for all your hard work on Cyclone Center and keep it up!

http://www.cyclonecenter.org

Cyclone Center’s Satellite Color Scheme

The Dvorak technique was developed in the 1970s and early 1980s. At that time, most satellite images were viewed on paper using black and white printers. To accommodate this medium, Dvorak developed the “BD Curve”. This curve assigned each satellite brightness temperature value to a specific shade of black, white, or gray.

The Dvorak technique relies on the analyst’s ability to identify each of these shades. Trained experts can usually do this relatively quickly. The BD Curve can be confusing, however, especially to newer analysts. Some colors are repeated, and it can be difficult to discern one shade of gray from another. We have developed a new full-color satellite enhancement for the Dvorak technique to address these issues. In addition to using this new color scheme for Cyclone Center, we plan to share it with tropical analysts around the globe.

The image above compares our color scheme with the BD Curve. Both schemes use gray shading to highlight clouds warmer than 9°C (48°F). For temperatures between 9°C and -30°C (-22°F), the BD Curve uses a second series of grays, while we give that range a pink tint to help differentiate it from the warmer values.

Both color schemes use solid shades at varying intervals for temperatures colder than -30°C (-22°F). In our scheme, this begins with a dark red (which flows naturally from the pink). The colors become progressively less warm (orange, yellow, then shades of blue). Where the BD Curve is forced to repeat Medium Gray and Dark Gray shades, our colorized scheme is able to use unique colors throughout.

Note that the BD Curve uses black for temperatures from -63°C (-81°F) to -69°C (-92°F). This bold color marks a transition from moderate to tall clouds. This same transition is marked by the change from warm to cool colors in our scheme.

We have also included an additional color (white) for temperatures colder than -85°C (-121°F). This color is never used by the Dvorak technique, but it provides us additional information about the coldest clouds.
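To make the breakpoints above concrete, here is a minimal sketch that assigns a brightness temperature to one of the color bands just described. Only the 9°C, -30°C, and -85°C breakpoints come from the text; the boundaries between dark red, orange, yellow, and the blues are assumptions chosen purely for illustration.

```python
def color_band(temp_c):
    """Assign a cloud-top brightness temperature (deg C) to a color band.

    The 9 C, -30 C, and -85 C breakpoints follow the description above;
    the intermediate boundaries are illustrative assumptions only.
    """
    if temp_c >= 9.0:
        return "gray"          # warm: surface or very low clouds
    elif temp_c >= -30.0:
        return "pink"          # low clouds
    elif temp_c >= -45.0:      # assumed boundary
        return "dark red"      # warm-colored bands begin
    elif temp_c >= -55.0:      # assumed boundary
        return "orange"
    elif temp_c >= -65.0:      # assumed boundary, near the BD Curve's black band
        return "yellow"
    elif temp_c >= -85.0:
        return "blue shades"   # tall, cold clouds
    else:
        return "white"         # the very coldest cloud tops

for t in (15, -10, -50, -70, -90):
    print(t, "->", color_band(t))
```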

The images above show two views of Super Typhoon Gay (1992). The one on the left uses the BD curve; the one on the right is our color scheme. All features are identical in both color schemes, but we believe the colorized scheme makes them easier to identify.

We also wanted to ensure that our imagery could be easily interpreted by everyone, including people with color vision deficiencies. We were guided by the principles laid out by Light and Bartlein (2004). Specifically, we avoided any color scale that included both red and green. We also sought a scheme that varied in both hue and intensity. Our ultimate selection was inspired by the “RdYlBu” scheme from colorbrewer2.org.

The images above simulate how Super Typhoon Gay would appear to these users. These simulations were performed using vischeck.com. The one on the left simulates Deuteranopia, and the middle simulates Protanopia. These are both common forms of red/green deficiency. The image on the right simulates Tritanopia, a rare form of blue/yellow deficiency. These simulations suggest that any analyst, regardless of color deficiencies, would be able to identify the same features in our imagery.

How did we pick the images for each cyclone type?

In Cyclone Center, one of your first tasks is to “Pick the cyclone type, then choose the closest match.” You may be wondering how we found the images that you’re matching against.

One of the first steps in the Dvorak technique is to determine the storm’s “Pattern” strength. It’s an initial estimate of the storm’s strength based on how the clouds are organized. Here are Dvorak’s original patterns:

Each of these patterns gets stronger as we move from left to right, just as they do in Cyclone Center. We could have used these patterns in Cyclone Center. However, the strengths are irregularly spaced, and there are only two levels of strength for Eye storms. We chose instead to use real satellite images to identify each pattern.

Some of the most highly trained Dvorak analysts in the world work in the Tropical Analysis and Forecast Branch (TAFB) at the National Hurricane Center. To take advantage of this expertise, we sorted the satellite imagery from the Atlantic in 2003–2006 by the strengths and cyclone types that they assigned. We then chose representatives from each category based on these criteria:

  • Image quality
  • Similarity to the original Dvorak patterns
  • Representativeness of that image compared with others of the same strength and cyclone type
  • Continuum of strengths for a given cyclone type

The last criterion was particularly important since we wanted to show a clear progression from weakest to strongest in each cyclone type. So if you are ever debating between two images to select, remember that they go from weakest to strongest and see if that helps.

How do I know which storm appears stronger?

When Dvorak developed his method, he knew that you could tell something about a storm’s strength by looking at its lifecycle. If a storm looks stronger than it did yesterday, odds are that it probably is! That’s why the first step of most classifications is to ask which of two images looks stronger. These are actually two images of the same storm taken about 24 hours apart. If the image you see is from the first 24 hours of the storm (or the image 24 hours prior is missing), then you’ll skip this step.

We’ll use your answer to calculate something called the Model Expected strength. It starts with the storm’s strength from 24 hours ago. If you say the newer one looks stronger, then we’ll bump it up a notch. If the older one looks stronger, then we’ll bump it down. And if they’re about the same, then we just hold it constant. This isn’t as sophisticated as some of the other ways we estimate strength (see the upcoming posts on the Detailed Classifications), but it gives us a good first guess.
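Here is a minimal sketch of that bookkeeping, assuming strength is tracked in discrete steps on whatever scale we use internally (the real calculation may differ in its details):

```python
def model_expected_strength(strength_24h_ago, answer):
    """First-guess ("Model Expected") strength from the 24-hour comparison.

    strength_24h_ago -- the storm's strength 24 hours ago, in discrete steps
    answer           -- "newer_stronger", "older_stronger", or "about_the_same"

    This is only a sketch of the idea described above, not Cyclone Center's
    actual implementation.
    """
    if answer == "newer_stronger":
        return strength_24h_ago + 1   # bump it up a notch
    if answer == "older_stronger":
        return strength_24h_ago - 1   # bump it down a notch
    return strength_24h_ago           # about the same: hold it constant

print(model_expected_strength(4, "newer_stronger"))  # 5
```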

A number of characteristics determine whether a storm is stronger, weaker or about the same. There are two main measures of strength to look for:

1. How cold are the clouds?

Colder colors in infrared imagery indicate taller clouds that release more energy into a storm. Stronger tropical cyclones tend to have taller clouds and more of them. For example:

  • The presence of more cold-colored clouds in an embedded center suggests a stronger storm.
  • Colder clouds surrounding an eye suggest a stronger storm.
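One way to picture the "how cold" comparison with numbers is to count how much of each image is colder than some threshold. This is only an illustration, not how Cyclone Center scores your answers; the -60°C threshold, the function name, and the toy arrays are assumptions.

```python
import numpy as np

def cold_fraction(cloud_top_temps_c, threshold_c=-60.0):
    """Fraction of pixels colder than an (illustrative) threshold.

    cloud_top_temps_c -- 2-D array of infrared cloud-top temperatures (deg C)
    """
    temps = np.asarray(cloud_top_temps_c, dtype=float)
    return float(np.mean(temps < threshold_c))

# Two toy "images": the newer one has more very cold (tall) cloud tops.
older = np.array([[-40.0, -55.0], [-62.0, -20.0]])
newer = np.array([[-65.0, -70.0], [-62.0, -45.0]])

if cold_fraction(newer) > cold_fraction(older):
    print("The newer image looks stronger.")
```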

2. How organized are the clouds?

This question is a bit more subjective, so just give it your best shot. Some features that might indicate which storm image is stronger:

  • Stronger storms have spirals that wrap farther around the storm.
  • The cold clouds near the center become more circular as a storm strengthens.
  • Typically Shear and Curved Band storms are weaker than those with an Embedded Center.
  • Storms with an eye are almost always stronger than storms without one.
  • For storms with an eye, consider the shape, size and color of the eye. Eyes that are more circular, smaller and/or warmer tend to be associated with stronger tropical cyclones.

In some cases, the storm on the left may appear to have some of these characteristics, while the storm on the right may appear to have others. If this is the case, they can actually cancel out, in which case we would say that they are about the same. For example: If the storm on the left appears better organized and more tightly-wrapped, but the storm on the right has more cold colors, you would say that they are about the same.

You can use the images below to help you gauge a storm’s relative strength.

Introduction to Cyclone Center Images

The images you see on Cyclone Center were observed by infrared sensors on weather satellites. These sensors provide an estimate of the temperature at the tops of clouds. Cloud top temperatures are very important because they give us an idea of how tall the clouds are. Temperature decreases with height in the lower atmosphere (up to 10 miles), so cold clouds are taller than warm clouds. Taller clouds are responsible for the heavy rain and thunderstorms that drive tropical cyclones.

In the Cyclone Center images, the cloud top temperatures are represented by a range of colors. The scale on the image above shows the temperatures in degrees Celsius that correspond with each color.

Black and gray are the warmest, indicating temperatures from 9°C (48°F) to 30°C (86°F). Often these will be the temperatures we experience at the land or ocean surface, but they can also be associated with very low clouds. Shades of pink go down to -30°C (-22°F). In our images, these are almost always associated with low clouds. Red, orange, and yellow come next, and they indicate medium-level clouds.

In most images, the coldest clouds you see will be shades of blue. Sometimes you’ll even see a cloud that’s so cold it shows up as white. These clouds are colder than -85°C (-121°F). Coastlines and political borders are also drawn in white, so make sure the white clouds are surrounded by dark blue. Otherwise, you might just be looking at a small island.

Sometimes there is a problem with parts of the satellite data. These missing data will show up as black lines in the images. Just ignore them and carry on with the analysis when you see them.