Tuesday, July 31, 2007

Status July 31

Several validation runs have started, and results are only just becoming available.

For 2004, two experiments have begun: one starting in January 2004 and the other starting in July 2004. Each will run forward for six months to provide a full year of analysis. The first months (January and July) have recently completed, and results for those months should be available soon.

The January 2006 experiment was run to completion and evaluation had begun. However, between its start and the evaluation, we determined that the initialization of the satellite bias corrections was a problem. A new experiment has begun and should be ready for evaluation soon. This will also serve as a test of our bias initialization procedures, which must be applied each time a new satellite becomes available.

The problems with initialization of the satellite bias corrections became apparent in evaluating the SSMI experiment, discussed in the July 10 status update. The NOAA 10 data began using a bias correction determined from a previous experiment with different physics and, hence, different biases. Large parts of the SSMI experiment are being re-run, including some half-degree experiments and data-withholding experiments. In addition, new coefficients for the historical NOAA radiances are available and are presently being tested. It is worth noting that the science in the system has been frozen; these tests relate to the input data for the 1980s rather than to the system itself.

File Spec: Thanks to those who have taken time to look at the document! Reviews and intercomparisons of the document with the output data have identified some inconsistencies, and the developers are well into resolving them. A new file spec document should be posted soon.

The initial conditions and input data for spin-up runs for the MERRA production streams are being developed. When operations personnel have some time, these will be started so as not to lose time while we validate; this assumes a favorable result from the validation of the system. The spin-up and streams for MERRA production will be posted separately.

AMS abstracts are due August 10, and the WCRP reanalysis conference abstracts are due today. There will be several GMAO presentations at each of these on MERRA validation.

Thursday, July 19, 2007

Pressure levels intersecting the surface

Many modern atmospheric numerical models use terrain-following vertical coordinates, meaning that the pressure of the lowest model level tracks the topography and does not intersect the surface. The ERA40 and NCEP reanalyses have produced pressure-level data extrapolated downward beneath the Earth's surface, so continuous grids are available at the 850, 925, and 1000 mb levels, among others. Previous versions of GEOS models and assimilation systems have not extrapolated data beneath the surface, instead providing undefined values wherever the surface pressure is lower than a given pressure level.
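As a rough illustration of this convention (not the actual GEOS code), here is a minimal sketch of marking pressure-level data as undefined below the surface, assuming hypothetical NumPy arrays for the field, the pressure levels, and the surface pressure, and an arbitrary fill value:

```python
import numpy as np

UNDEF = 1.0e15  # hypothetical fill value for undefined data


def mask_below_surface(field, plevs_hpa, ps_hpa):
    """Mark pressure-level values as undefined wherever the pressure level
    lies beneath the surface (level pressure greater than surface pressure).

    field:     (nlev, nlat, nlon) field on pressure levels
    plevs_hpa: (nlev,) pressure levels in hPa
    ps_hpa:    (nlat, nlon) surface pressure in hPa
    """
    out = field.copy()
    below_ground = plevs_hpa[:, None, None] > ps_hpa[None, :, :]
    out[below_ground] = UNDEF
    return out


# Example: 1000 mb values become undefined where the surface pressure is 950 hPa
temps = np.full((3, 4, 5), 280.0)
plevs = np.array([1000.0, 925.0, 850.0])
ps = np.full((4, 5), 950.0)
masked = mask_below_surface(temps, plevs, ps)
```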

For instantaneous analyses, comparing GEOS5 pressure levels to other reanalyses would be straightforward once the undefined value is accounted for. Monthly averages, however, pose a problem. In some regions and at some pressure levels, valid values are available for only a fraction of the times. If all valid GEOS5 values are averaged and reported, the average would not be representative of, or comparable to, the NCEP or ERA40 reanalyses, which average over all times.

Figure 1 850 mb temperature RMS error between GEOS5 and NCEP analyses for different criteria on the sampling of missing data in the GEOS5 time series. At the left of the graphs, lower criteria allow undersampled monthly time series to be compared with the complete NCEP monthly mean. The far right rejects points that have any missing data in the time series, so there are fewer data points, but the comparisons to NCEP are more completely sampled.

This temporal sampling at the edges of topography can lead to an increase in the squared error and systematic bias between GEOS5 and other reanalyses, and it is also noticeable in global and regional map comparisons. We computed global monthly averages while testing a range of criteria for rejecting a monthly average. The criteria are applied at each grid point and are based on the percentage of valid data over the month. In Figure 1, at the far left, if data are valid only 1% of the time during a month, a valid monthly mean value is saved. Moving right, at 20%, a grid point with valid data for 20% of the month produces a monthly mean (points with fewer valid data are reported as undefined). At the far right, the strictest criterion requires a grid point to have valid data 100% of the time to produce a monthly average. The two panels are global land only and North America (20N-70N, 170W-60W). At higher pressures, more points are affected by sub-sampling, and the errors are most noticeable in these large-area averages. At higher altitudes, the large-scale error drops slowly for criteria greater than 20% (as more points are valid 100% of the time).
Figure 2 Comparison between GEOS5 and NCEP for different criteria, and a map of the sampling percentage. With a 20% criterion (data valid for only 20% of the month), large differences are apparent. These are reduced at 80%. At 100%, the figures show only differences between fully sampled monthly averages, with no effect of sampling. There are some artifacts because these figures interpolate NCEP to the GEOS5 1/2 degree resolution. Differences near topography can be significant and misleading to a user unfamiliar with the character of the data.
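For reference, a minimal sketch of the kind of criterion sweep described above (not the MERRA evaluation code itself): it assumes a hypothetical GEOS5 time series with undefined values stored as NaN and a fully sampled reference monthly mean, and reports the RMS difference and the number of points retained at each threshold.

```python
import numpy as np


def rms_by_criterion(geos5_tseries, ref_monthly_mean,
                     thresholds=(0.01, 0.2, 0.5, 0.8, 1.0)):
    """RMS difference from a reference monthly mean as the valid-data
    criterion is tightened.

    geos5_tseries:    (ntime, nlat, nlon), undefined values as NaN
    ref_monthly_mean: (nlat, nlon), fully sampled (e.g., NCEP)
    """
    finite = np.isfinite(geos5_tseries)
    valid_frac = finite.mean(axis=0)
    counts = finite.sum(axis=0)
    sums = np.where(finite, geos5_tseries, 0.0).sum(axis=0)
    # Mean over valid times only; NaN where a point has no valid data at all.
    mean = np.divide(sums, counts, out=np.full(sums.shape, np.nan),
                     where=counts > 0)

    results = []
    for thr in thresholds:
        keep = valid_frac >= thr
        if not keep.any():  # no points survive this criterion
            continue
        diff = mean[keep] - ref_monthly_mean[keep]
        rms = float(np.sqrt(np.mean(diff ** 2)))
        results.append((thr, rms, int(keep.sum())))
    return results
```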

To address this issue in the monthly mean MERRA products, a monthly mean is reported at a grid point only when the count of valid data exceeds a threshold of 20%; otherwise, the monthly mean value is reported as undefined. This low threshold was chosen to provide as much information as possible. The monthly mean 3D pressure-level files will also include a variable that counts the valid data at each pressure level, so data users can screen the data to suit their needs. The counts can also be used to screen other data sets for comparison purposes, and for zonal averaging.
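A minimal sketch of this screening, assuming the undefined values use an arbitrary fill value (the variable names here are illustrative, not the actual MERRA file variables):

```python
import numpy as np

UNDEF = 1.0e15             # hypothetical fill value for undefined data
MIN_VALID_FRACTION = 0.20  # the 20% threshold described above


def monthly_mean_and_counts(tseries):
    """Monthly mean and valid-data counts for one pressure level.

    tseries: (ntime, nlat, nlon) with undefined values set to UNDEF.
    Returns the mean over valid times at points where more than 20% of the
    times are valid (UNDEF elsewhere), plus the count of valid times.
    """
    valid = tseries != UNDEF
    counts = valid.sum(axis=0)
    sums = np.where(valid, tseries, 0.0).sum(axis=0)

    mean = np.full(counts.shape, UNDEF, dtype=float)
    keep = counts > MIN_VALID_FRACTION * tseries.shape[0]
    mean[keep] = sums[keep] / counts[keep]
    return mean, counts
```

A user could then apply a stricter screen of their own, for example requiring counts equivalent to 80% or 100% of the month before using a grid point, as in Figure 2.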

One difficulty that may arise is the lack of a 1000-500 mb thickness diagnostic, which was produced in some previous versions of GEOS5. In revising the pressure-level interpolation code for MERRA, the calculation of 1000 mb height was left out, so 1000-500 mb thickness is not available. Note also that 1000 mb analyses would have undefined data over large areas of the globe (land and ocean). Lowest-model-level data are also available and may be suitable for some purposes instead of the 1000 mb level.

Tuesday, July 10, 2007

Status July 10

The January 2006 validation experiment is underway. This period has been rerun enough in recent weeks that much of the preparatory work had already been done. The primary validation experiment, for 2004, is still being prepared; scripts, code, and data need to be in place and working together, which should happen soon according to the latest update.

There has been much work on the mid- to late-80s experiment (referred to as the SSMI experiment). Originally, this was established to investigate the impact of the availability of SSMI observations in July 1987 on the time series. The experiment was initialized in mid-December 1983 and run almost through 1990. The spatial resolution is coarse (2 x 2.5 degrees), and the version of the system is slightly behind the expected version for MERRA validation (a subsequent test shows that the physics/statistics differences do not change the main results of the SSMI experiment).

The main points to be discussed for the SSMI experiment are:

1. Global time mean precipitation bias
2. Water cycle time series
3. Impact of SSMI

1.) As stated in the previous post, global time-mean precipitation for the SSMI experiment (version b10p9, read as beta 10 patch 9) is 2.2 mm/day, compared to 2.6 mm/day for GPCP and ~3 mm/day for JRA25 and ERA40. An experiment at full resolution with the fallback MERRA version of GEOS5 (b10p14) shows precipitation ~0.2 mm/day higher than in the coarse-resolution experiment; most of this increase is a result of the spatial resolution. Much of the difference in precipitation is over the tropical oceans, where reanalyses are typically much too high. As the system stands now, global mean precipitation is lower than GPCP in the mid to late 80s.

2.) The time series of the SSMI experiment also showed some features that are currently being investigated more closely. Figure 1 shows the time series of precipitation anomalies (mean annual cycle from 1984-1987 removed) for the GEOS5 SSMI experiment, JRA25, ERA40 and GPCP.
Figure 1

Both GEOS5 and ERA40 show a decreasing tendency in precipitation from 1984 through the end of 1985. In January 1986, the ERA40 tendency reverses and starts increasing. In November 1986, GEOS5 drops sharply but then stabilizes. NOAA10 data begin in November 1986, SSMI begins in July 1987, and NOAA11 begins in January 1989. The two issues being investigated are the sudden downward jump of precipitation with NOAA10 and the slight downward tendency early in the experiment. The introduction of SSMI is also noticeable in the ocean-only average (an increase for GEOS5 and a decrease for JRA25).
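Since Figures 1 and 3 show anomalies with the 1984-1987 mean annual cycle removed, here is a minimal sketch of that computation, assuming simple one-dimensional arrays of monthly values with their calendar years and months (illustrative names only, not the actual processing code):

```python
import numpy as np


def monthly_anomalies(values, years, months, base_years=(1984, 1987)):
    """Subtract the mean annual cycle computed over base_years (inclusive).

    values: (nmonths,) monthly means (e.g., global precipitation)
    years:  (nmonths,) calendar year of each value
    months: (nmonths,) calendar month of each value, 1-12
    """
    values = np.asarray(values, dtype=float)
    years = np.asarray(years)
    months = np.asarray(months)

    in_base = (years >= base_years[0]) & (years <= base_years[1])
    # Mean annual cycle: average of each calendar month over the base period.
    cycle = np.array([values[in_base & (months == m)].mean()
                      for m in range(1, 13)])
    return values - cycle[months - 1]
```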

Figure 2

Figure 2 shows the time series of GEOS5 monthly analysis increments of water vapor (the incremental analysis updates that drive the diagnostic output, such as precipitation) at four levels in the lower troposphere for latitudes 60S-60N. At the lowest model level (not shown), the shipborne observations of moisture lead to positive increments almost everywhere and at almost all times. Above the surface layer, the lower-tropospheric increments are largely negative, acting to dry the atmosphere. The negative increments appear correlated with the precipitation anomalies, though there is also a period between July 1985 and January 1988 when TPW increases (see Figure 3). A large jump in the increments and precipitation (Figure 1) is associated with the introduction of NOAA10 (and the shutdown of NOAA6 MSU). These are the focus of some ongoing evaluations.

First, an experiment with the latest MERRA system has been run over the start of NOAA10, and it shows less sensitivity in the increments than Figure 2; however, the precipitation in these experiments is similar. To test strictly the sensitivity of the system to NOAA10, a new experiment is being run forward with the NOAA10 observations constrained to passive mode (an analysis is made, but the increments do not feed back into the system). Second, NOAA NESDIS is generating new coefficients for the historical-period polar-orbiting satellites (for which we are grateful). It is not clear at this point what the impact will be, but it will be thoroughly tested prior to MERRA production. Lastly, bias corrections are being made in the system (e.g., for view angle), and the procedures for initializing and carrying these bias corrections are being reviewed. This is one possible source of error, but it is not yet clear that a problem exists.

In summary for point 2, it seems we have some sensitivity of precipitation to the observing system on the same order of magnitude as previous reanalyses in the 80s. We are using this opportunity to flush out any problems in the system that may be exacerbating the discontinuity of the analysis during observing-system changes. These tests are intended as checks on the system before production, though we expect that there will be noticeable changes in the MERRA time series as a function of the observing system.

Figure 3 Monthly anomalies from the Jan84-Dec87 mean annual cycle.

3.) The impact of SSMI is not immediately apparent in GEOS5 global precipitation (Figure 1), though the ocean-only average precipitation increases. The SSMI impact on evaporation and surface wind speed over the ocean is also apparent (Figure 3), likely related to the SSMI wind speeds. The GEOS5 and JRA25 anomalies seem to track each other closely; in the mean, the JRA wind speed is ~0.2 m/s higher than GEOS5 (which is 4.4 m/s). So it seems that in some ways GEOS5 is sensitive to SSMI, like JRA. However, in ocean precipitation, GEOS5 is more sensitive to the NOAA transitions than JRA (Figure 1).

A caveat regarding these results: in comparisons of the 2-degree resolution with 1/2 degree, the monthly precipitation increases at the finer resolution, especially in mid-latitudes. We have not yet run multiple years of the 1/2-degree system and do not yet have a grasp on its interannual variations. It should be interesting to see how similar the coarse- and fine-resolution analyses are over long periods.


Figure 4 Zonal time series of precipitation anomalies (after ENSO removal) for JRA and the GEOS5 SSMI experiment. The impact of SSMI on JRA is apparent in the southern hemisphere (30-60S). The GEOS5 low-frequency decrease in precipitation is focused in the tropics.