The fasDB Database
Description, Installation Guide, and API Reference

Jim Mundy (CSC)
Ann Schrader (SAIC)
Buck Sampson and Mike Frost (NRL MMD)
for Naval Research Laboratory, Marine Meteorology Division

16 August 2010


Table of Contents

1  Introduction

2  The fasDB Database

2.1  How fasDB Works

2.1.1  BUFR Data Flow

2.1.2  GRIB Data Flow

3  Installing the fasDB Database

4  Accessing Data within the fasDB Database

4.1  The getGRIBCatalog API

4.2  The GRIBCATALOG Structure

4.3   getGRIBCatalog Usage Example

5   Referenced Documents

6  Notes

6.1   Glossary of Acronyms

1  Introduction

This document describes the fasDB database, a flat-file database system for meteorological and oceanographic (METOC) data designed by the Naval Research Laboratory, Marine Meteorology Division (NRL MMD) in Monterey, CA. It also provides instructions for installing the database and describes the Application Program Interface (API) that allows applications to find data within it.

This document is organized as follows: Section 2 describes the database design and its BUFR and GRIB data flows, Section 3 covers installation, Section 4 describes the API for accessing data, Section 5 lists referenced documents, and Section 6 contains notes, including a glossary of acronyms.

2  The fasDB Database

During the design phase for fasDB, care was taken to ensure that the end result was easy to implement, support, and expand. It requires no third party software, runs on Linux, and is so generic in its design that it should be useful for applications that require large amounts of METOC data.

The database can ingest and decode point data in WMO Binary Universal Form for the Representation of meteorological data (BUFR) format and gridded data in Gridded Binary (GRIB) format, storing the data in a form that is more readily accessible by client applications. It includes tables used in decoding the BUFR and GRIB inputs. Decoded data are stored in directories by date. Within each directory, individual files contain data categorized by data type, level, and date. Procedures are provided to automatically purge old data from the database after a time of residence that can be altered by the user, and to automatically archive data if desired.

2.1  How fasDB Works

The data flow within fasDB depends on the type of data being processed; there are separate data flows for BUFR and GRIB data. The overall flow is similar for each data type, however. A cron script is invoked periodically to scan an “incoming” directory for new files and route them to the appropriate processor. The script then checks to see if the preset time of day to conduct a purge has arrived. If not, it exits. If so, then it checks to see whether archiving is to be done, and if so, archives the data currently in the database prior to purging old data. The final step is to purge any data that have been stored longer than the set time to purge.

2.1.1  BUFR Data Flow

The cron script for BUFR data is processBUFR.ksh. Input parameters to this script are:

  1. baseDir – the path to the base directory for ATCF BUFR processing. The polling directory is then baseDir/incoming, the storage directory is baseDir/dataStor, and the log directory is baseDir/logs.

  2. logFileDuration – the number of days that log files will be retained. Defaults to 7 days.

  3. purgeHour – the clock time (Z) at which purging of old data from the database will take place. Defaults to 23 (2300Z).

  4. purgeTime – the maximum time in hours that data will reside in the database before being purged. Defaults to 96 hours.

  5. archiveFlag – tells the processor whether or not to archive old data before conducting a purge. Defaults to “1”, indicating archiving is normally to be performed. The archival capability is not completely implemented at this time.

Data may be received via any available feed. It is the responsibility of each ATCF installation to move BUFR data from the receipt directory for the feed to the incoming directory for the BUFR processor. The latter is normally the “incoming” directory beneath the baseDir in the processBUFR.ksh script.

The processBUFR.ksh script performs the following processing:

  1. Sets defaults, then parses the command line arguments, if any.

  2. Sets the polling, storage, and log directories, and the path to MEL BUFR tables used in decoding the BUFR data.

  3. Validates the command line arguments.

  4. Runs the ingester, a C program called processBUFR.exe. This takes as arguments:

    1. InDir – the polling directory,

    2. OutDir – the storage directory,

    3. LogDir – the log directory, and

    4. Debug – a debug flag.

  5. Establishes symbolic links to point to the “current” (latest) and “previous” storage directories.

  6. Checks whether the current clock hour matches the purgeHour. If so:

    1. Gets a list of all directories in storeDir, excluding those in the current, previous, and Archive directories. For each directory in the list,

      1. Determines how old the directory is and whether it’s past the purgeTime. If so,

        1. Checks whether the Archive Flag is set. If so,

          1. Makes an Archive directory if one doesn’t already exist.

          2. Moves the directory into the Archive directory.

          3. Tars and gzips the directory.

          4. Removes the directory from the Archive directory.

  7. Purges log files based on logFileDuration. Removes each file in the logs directory with a time less than the current time minus the logFileDuration.
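The directory-age test in step 6 can be sketched in C as follows. This is an illustrative fragment, not code from processBUFR.ksh; it assumes storage directories are named YYYYMMDD and that the labels are Z times (timegm is a Linux/glibc extension, acceptable here since fasDB runs on Linux).

```c
#define _DEFAULT_SOURCE   /* for timegm on glibc */
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Hypothetical helper: decide whether a storage directory named
 * YYYYMMDD has exceeded purgeTime hours relative to "now".
 * Returns 1 to purge, 0 to keep, -1 for a malformed name. */
int isPastPurgeTime ( const char *pszDirName, time_t now, int nPurgeTimeHours ) {
   struct tm tmDir;
   time_t dirTime;

   memset ( &tmDir, 0, sizeof ( tmDir ) );
   if ( sscanf ( pszDirName, "%4d%2d%2d", &tmDir.tm_year,
                 &tmDir.tm_mon, &tmDir.tm_mday ) != 3 )
      return -1;
   tmDir.tm_year -= 1900;           /* struct tm counts years from 1900 */
   tmDir.tm_mon  -= 1;              /* and months from 0 */
   dirTime = timegm ( &tmDir );     /* directory labels are Z times */
   if ( dirTime == (time_t) -1 )
      return -1;
   return ( difftime ( now, dirTime ) > nPurgeTimeHours * 3600.0 ) ? 1 : 0;
}
```

With the default purgeTime of 96 hours, a directory labeled 20091102 would be purged once the current time passes 0000Z on 6 November 2009.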

2.1.1.1  Ingester Processing

As previously mentioned, processBUFR.exe is responsible for decoding BUFR data and storing the decoded data in the database.

2.1.1.2  Data Types

The following paragraphs describe the BUFR data types stored in the database. In these listings, the suffix <DATE> is replaced in the actual file names with the date of the data, in the format YYYYMMDD, where YYYY is the year, MM the month (01-12), and DD the day of the month (01-31); for example 20091102 denotes 2 November 2009. The database contains a separate file for each type of data, group of levels, and date. The sections below describe the file naming conventions and file and record structure for each data type. Each data file contains a header identifying the data type, levels, and date of the data it contains, plus a series of data records.
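For example, an application could construct one of these file names as follows (buildFileName is a hypothetical helper, not part of the fasDB API; the prefix strings come from the tables in the sections below):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build a database file name of the form <prefix>_<DATE>,
 * where <DATE> is YYYYMMDD. */
void buildFileName ( const char *pszPrefix, int nYear, int nMonth, int nDay,
                     char *pszOut, size_t nOutLen ) {
   snprintf ( pszOut, nOutLen, "%s_%04d%02d%02d",
              pszPrefix, nYear, nMonth, nDay );
}
```

For 2 November 2009 and the synoptic prefix, this yields SYN_obs_20091102.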

2.1.1.2.1  Aircraft Observations

File Name

Description

AC_obs_0ft_5000ft_<DATE>

Aircraft observations at levels between 0 and 5,000 feet for date <DATE>

AC_obs_5001ft_11000ft_<DATE>

Aircraft observations at levels between 5,001 and 11,000 feet for <DATE>

AC_obs_11001ft_21000ft_<DATE>

Aircraft observations at levels between 11,001 and 21,000 feet for <DATE>

AC_obs_21001ft_31000ft_<DATE>

Aircraft observations at levels between 21,001 and 31,000 feet for <DATE>

AC_obs_31001ft_41000ft_<DATE>

Aircraft observations at levels between 31,001 feet and 41,000 feet for <DATE>

Example of file structure:

METXAC5 200911020000 (0-5000 ft) File Header – data type, date, altitude range

GCRCTPZA 25.8 -80.3 008/079/001/10008 200911020408 Observation

Structure of a single observation:

Field 1: Aircraft ID (in the example, GCRCTPZA)

Field 2: Latitude (-90 to 90) (example: 25.8, i.e. 25.8N)

Field 3: Longitude (-180 to 180) (example: -80.3, i.e. 80.3W)

Field 4: Pressure in hPa with the thousands digit dropped (925 is represented as 925, 1012 as 012) (example: 008, i.e. 1008 hPa)

Field 5: Temperature in degrees Fahrenheit (example: 79F)

Field 6: Altitude in 100s of feet (example: 100 ft)

Fields 7-8: Wind direction from true north divided by 10 (00-35) and wind speed in knots (example: wind from 100 degrees at 8 knots)

Field 9: Date-time group of the observation, in format YYYYMMDDHHMM (example: 2 November 2009 at 0408Z).
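The slash-separated group in an aircraft observation can be unpacked as in the sketch below. The structure, function name, and the threshold used to restore the dropped thousands digit of the pressure are assumptions for illustration, not taken from the fasDB source.

```c
#include <assert.h>
#include <stdio.h>

/* Decoded values from an aircraft observation group such as
 * "008/079/001/10008" (pressure/temperature/altitude/wind). */
typedef struct {
   int nPressureHpa;   /* 008 -> 1008 hPa                        */
   int nTempF;         /* 079 -> 79 F                            */
   int nAltitudeFt;    /* 001 -> 100 ft (hundreds of feet)       */
   int nWindDirDeg;    /* 10  -> 100 degrees (tens of degrees)   */
   int nWindSpeedKt;   /* 008 -> 8 knots                         */
} ACGROUP;

int parseACGroup ( const char *pszGroup, ACGROUP *pOut ) {
   int nPres, nTemp, nAlt, nWind;

   if ( sscanf ( pszGroup, "%d/%d/%d/%d",
                 &nPres, &nTemp, &nAlt, &nWind ) != 4 )
      return 1;   /* malformed group */

   /* Assumed convention: encoded pressures below 500 have had a
    * leading 1 dropped (012 -> 1012); others are literal (925). */
   pOut->nPressureHpa = ( nPres < 500 ) ? nPres + 1000 : nPres;
   pOut->nTempF       = nTemp;
   pOut->nAltitudeFt  = nAlt * 100;
   pOut->nWindDirDeg  = ( nWind / 1000 ) * 10;   /* leading digits x 10 */
   pOut->nWindSpeedKt = nWind % 1000;            /* trailing 3 digits   */
   return 0;
}
```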

2.1.1.2.2  Synoptic Observations

As these are surface observations, there is only a single file name: SYN_obs_<DATE>.

Example showing file structure:

METXSYN 200911020000 (12 HOURS WORTH) File Header – data type, date

SARF -26.2 -58.2 ** -9 -9 ** 003 084 069 0907 *** -9 **** 200911020000 Observation

Structure of a single observation:

Field 1: Call sign or WMO station ID (example: SARF)

Field 2: Latitude (-90 to 90) (example: -26.2, or 26.2S)

Field 3: Longitude (-180 to 180) (example: -58.2, or 58.2W)

Field 4: 2 stars

Field 5: Cloud coverage from code table 2700 in WMO 306 (example: -9 ; missing)

Field 6: Visibility (always set to -9, missing)

Field 7: 2 stars

Field 8: Pressure in hPa with the thousands digit dropped (925 is represented as 925, 1012 as 012) (example: 003, i.e. 1003 hPa)

Field 9: Temperature in degrees Fahrenheit (example: 84F)

Field 10: Dew point temperature in degrees F (example: 69F)

Fields 11-12: Wind direction from true north (000) divided by 10 (00-35) and wind speed in knots (example: winds from 090 degrees (East) at 7 knots).

Field 13: 3 stars

Field 14: Past weather code from WMO 306 code table 4561 (example: -9; missing)

Field 15: 4 stars

Field 16: Date-time group of the observation, in format YYYYMMDDHHMM (example: 2 November 2009 at 0000Z).
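When reading these records, the star groups and -9 values must be treated as placeholders and missing data respectively. The sketch below classifies a single token; the names are illustrative, and because the format itself cannot distinguish a genuine value of -9 from a missing-value flag, this test should be applied only to the coded fields (cloud coverage, visibility, past weather) where -9 is documented as missing.

```c
#include <assert.h>
#include <stdlib.h>

enum { TOKEN_VALUE, TOKEN_MISSING, TOKEN_PLACEHOLDER };

/* Classify one whitespace-delimited numeric or star token of a
 * synoptic record.  Star groups ("**", "***", "****") are
 * placeholders; -9 marks a missing value in the coded fields. */
int classifyToken ( const char *pszToken, double *pdValue ) {
   char *pszEnd;

   if ( pszToken[0] == '*' )
      return TOKEN_PLACEHOLDER;
   *pdValue = strtod ( pszToken, &pszEnd );
   if ( pszEnd == pszToken )
      return TOKEN_PLACEHOLDER;   /* not numeric either */
   if ( *pdValue == -9.0 )
      return TOKEN_MISSING;
   return TOKEN_VALUE;
}
```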

2.1.1.2.3  Altimeter-derived Significant Wave Height

These are measurements of significant ocean wave height derived from satellite altimeter observations. There is one file for each day. The file name is ALT-sig_wave_ht_<DATE>.

Example showing file structure:

METXALT 200911020000 ASC (12 HOURS WORTH) File Header – data type, date

NOTE: Ignore the (12 HOURS WORTH) in the header – each date-labeled file contains 24 hours of data.

261 46.48 -106.56 55 22 200911020008 Observation

Structure of a single observation:

Field 1: Satellite identifier (example: 261)

Field 2: Latitude (-90 to 90) (example: 46.48, or 46.48N)

Field 3: Longitude (example: -106.56, or 106.56W)

Field 4: Wind speed in knots (example: 55 knots)

Field 5: Significant wave height in feet (example: 22 feet)

Field 6: Date-time group of the observation, in format YYYYMMDDHHMM (example: 2 November 2009 at 0008Z).

2.1.1.2.4  Scatterometer-derived Surface Winds

These are surface winds over water derived from satellite scatterometer observations. There is one file per day. The file name is SCT_surface_winds_<DATE>.

Example showing file structure:

METXSCT 200911020000 ASC (12 HOURS WORTH) File Header – data type, date

NOTE: Ignore the (12 HOURS WORTH) in the header – each date-labeled file contains 24 hours of data.

SCT -43.8 -109.1 221 10 200911020014 Observation

Structure of a single observation:

Field 1: Observation type (hardwired to SCT)

Field 2: Latitude (-90 to 90) (example: -43.8, or 43.8S)

Field 3: Longitude (example: -109.1, or 109.1W)

Field 4: Wind direction in degrees from true north (000) (example: wind from 221 degrees (southwest))

Field 5: Wind speed in knots (example: 10 knots)

Field 6: Date-time group of the observation, in format YYYYMMDDHHMM (example: 2 November 2009 at 0014Z).

2.1.1.2.5  Upper Air Observations

Each of the upper air observations files contains observations at a single pressure level on a given day. The database contains observations at 925, 850, 300, 250, and 200 millibars.

File Name

Description

UA_obs_925mb_<DATE>

Upper air observations at 925 mb level for date <DATE>

UA_obs_850mb_<DATE>

Upper air observations at 850 mb level for <DATE>

UA_obs_300mb_<DATE>

Upper air observations at 300 mb level for <DATE>

UA_obs_250mb_<DATE>

Upper air observations at 250 mb level for <DATE>

UA_obs_200mb_<DATE>

Upper air observations for 200 mb level for <DATE>



Example showing file structure:

METXUA1 200911020000 (200 mb) Header with data type, date, level

ASFR1 22.6 -54.7 4019 200/-70/-9/22077 200911020000 Observation

Structure of a single observation:

Field 1: Station ID (example: ASFR1)

Field 2: Latitude (-90 to 90) (example: 22.6, or 22.6N)

Field 3: Longitude (-180 to 180) (example: -54.7, or 54.7W)

Field 4: Height in feet divided by 10 (example: 40190 feet)

Field 5: Pressure in millibars (example: 200 mb)

Field 6: Temperature in degrees Fahrenheit (example: -70F)

Field 7: Dew point depression in degrees Fahrenheit (example: -9F)

Fields 8-9: Wind direction from true north divided by 10 (00-35) and wind speed in knots (example: wind from 220 degrees (southwest) at 77 knots)

Field 10: Date-time group in format YYYYMMDDHHMM (example: 2 November 2009 at 0000Z)

2.1.1.2.6  Infrared, Visible, and Water Vapor Winds

These files contain wind data derived from infrared, visible, and water vapor channel observations from satellites. The file structures are similar for all three types.

File Name

Description

IR_winds_100mb_200mb_<DATE>

Infrared channel-derived winds at levels between 100 and 200 mb for date <DATE>

IR_winds_201mb_400mb_<DATE>

Infrared channel-derived winds at levels between 201 and 400 mb for <DATE>

IR_winds_401mb_600mb_<DATE>

Infrared channel-derived winds at levels between 401 and 600 mb for <DATE>

IR_winds_601mb_800mb_<DATE>

Infrared channel-derived winds at levels between 601mb and 800 mb for <DATE>

IR_winds_801mb-1000mb_<DATE>

Infrared channel-derived winds at levels between 801 mb and 1000 mb for <DATE>

VIS_winds_100mb_200mb_<DATE>

Visual channel-derived winds at levels between 100 and 200 mb for date <DATE>

VIS_winds_201mb_400mb_<DATE>

Visual channel-derived winds at levels between 201 and 400 mb for <DATE>

VIS_winds_401mb_600mb_<DATE>

Visual channel-derived winds at levels between 401 and 600 mb for <DATE>

VIS_winds_601mb_800mb_<DATE>

Visual channel-derived winds at levels between 601 mb and 800 mb for <DATE>

VIS_winds_801mb-1000mb_<DATE>

Visual channel-derived winds at levels between 801 and 1000 mb for <DATE>

WV_winds_100mb_200mb_<DATE>

Water vapor channel-derived winds at levels between 100 and 200 mb for date <DATE>

WV_winds_201mb_400mb_<DATE>

Water vapor channel-derived winds at levels between 201 and 400 mb for <DATE>

WV_winds_401mb_600mb_<DATE>

Water vapor channel-derived winds at levels between 401 and 600 mb for <DATE>

WV_winds_601mb_800mb_<DATE>

Water vapor channel-derived winds at levels between 601 mb and 800 mb for <DATE>

WV_winds_801mb-1000mb_<DATE>

Water vapor channel-derived winds at levels between 801 and 1000 mb for <DATE>

Example showing file structure:

METXIT1 200911020000 (12 HOURS WORTH) Header with data type and date

NOTE: Ignore the (12 HOURS WORTH) in the header – each date-labeled file contains 24 hours of data.

IT1 24.1 46.0 1 259 83 **** **** 200911020030 Observation

Record Structure:

Field 1: Data type and layer. The first two columns are either IT (for IR), VT (for Vis) or WT (for water vapor channel). The third column denotes the layer:
1: 100 to 200 mb
2: 201 to 400 mb
3: 401 to 600 mb
4: 601 to 800 mb
5: 801 to 1000 mb
The example record is for IR-derived data for the 100-200 millibar layer.

Field 2: Latitude (-90 to 90) (example: 24.1, or 24.1N)

Field 3: Longitude (-180 to 180) (example: 46.0, or 46.0E)

Field 4: Layer indicator. Same as Field 1, column 3 (example: 1 = 100-200 mb layer)

Field 5: Wind direction in degrees from true north (000) (example: 259 degrees (WSW))

Field 6: Wind speed in knots (example: 83 knots)

Field 7: 4 stars

Field 8: 4 stars

Field 9: Date-time group in format YYYYMMDDHHMM (example: 2 November 2009 at 0030Z)
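The layer digit used in Fields 1 and 4 can be mapped back to its pressure bounds with a small lookup, sketched below (a hypothetical helper; names are assumed):

```c
#include <assert.h>

/* Map the layer code (1-5) to its top and bottom pressures in mb,
 * per the table above.  Returns 0 on success, 1 for an unknown code. */
int layerBounds ( int nLayer, int *pnTopMb, int *pnBottomMb ) {
   switch ( nLayer ) {
      case 1: *pnTopMb = 100; *pnBottomMb = 200;  return 0;
      case 2: *pnTopMb = 201; *pnBottomMb = 400;  return 0;
      case 3: *pnTopMb = 401; *pnBottomMb = 600;  return 0;
      case 4: *pnTopMb = 601; *pnBottomMb = 800;  return 0;
      case 5: *pnTopMb = 801; *pnBottomMb = 1000; return 0;
      default: return 1;
   }
}
```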

2.1.2  GRIB Data Flow

The process for GRIB files, which contain gridded data output by meteorological and oceanographic analysis and forecast models, is fundamentally different from that for BUFR files. First, not all GRIB files received are processed; there is a filter list called acceptList.txt that identifies all of the fields that may be used by a client application (basically, the fields that the application should accept as input). A field, in the sense used here, is a grid containing at each grid point a value for a particular parameter (e.g. temperature, pressure, geopotential height, etc), at a particular level or within a particular layer, for a particular base time and forecast hour (tau). Grids may be global or regional; each grid geometry has an identifier. The acceptList.txt file allows the processGRIB.exe process to determine quickly which fields it needs to process and which to discard.

The processing performed on GRIB files is minimal by comparison with the BUFR files. Basically, processGRIB.exe looks at each acceptable file and extracts the parameters identifying the originating center and subcenter, the grid ID, the WMO processing ID, the parameter ID, the level type, the actual level value, and the level 2 value (for fields representing values within a layer, the level and level2 values represent the upper and lower bounds of the layer). It also extracts the base time and forecast hour for the field. All of this information is then written into the metaData.txt file, which is a “catalog” of grid fields contained in the database. Having the relevant data pre-extracted and stored in the metaData.txt file facilitates finding the correct file quickly when a client application requires data.

The acceptList.txt file and the metaData.txt file should be modified only occasionally, and then only by a qualified System Administrator. The metaData.txt file is the catalog of files contained in the database; manual modification of this file could affect the integrity of the database itself, so it is best to let the database manage this file.

GRIB files are stored in directories labeled with the base time of the files they contain. The actual GRIB file names are those assigned by the originating agency. These are not changed when the files are inserted into the database. What does matter to a client application is that each file stored has a corresponding entry in the metaData.txt file. The information in the metaData.txt file is what a client application can use to determine whether the database contains the fields it needs, and where to find those fields.

The subparagraphs below describe the formats of the acceptList.txt and metaData.txt files.

2.1.2.1  Format of the acceptList.txt File

This file lists the parameters that the GRIB ingester will process. Any field not on the list will not be stored in the database. The acceptList.txt file also maps the WMO GRIB identifiers to descriptions and display parameters (e.g. display type (wind barbs, contours) and, if applicable, contouring information) that tell an application how to display the information in the field.

The WMO-specified fields in this file come from WMO TD-No. 611, Guide to WMO Binary Code Forms, Part 2, A Guide to the Code Form FM 92-IX Ext. GRIB, Edition 1. Where appropriate, the applicable code table within this publication is cited for each field.

Each line of this ASCII file represents a single parameter. Each line is broken into 17 fields, delimited by the pipe (|).

Field 1: WMO Center ID (nCenterID) (Table 0, Part 1)

Field 2: WMO Sub Center ID (nSubCenterID) (Table 0, Part 2)

Field 3: WMO Grid ID (nGridID) (Table B, although there are additional values supplied by the Centers)

Field 4: WMO Processing ID (nProcID) (Table A says these are supplied by the Centers)

Field 5: WMO Parameter ID (nParamID) (Table 2)

Field 6: WMO Level Type (nLevelType) (Table 3)

Field 7: Actual Level Value (nLevel1) (Table 3)

Field 8: Level 2 Value, used for layered products like “850mb-500mb mean wind U” (nLevel2) (Table 3)

Field 9: Unique ID, unique per product (nUniqueID)

Field 10: ProductName (pszProductName)

Field 11: Display Type (nDisplayType) (0=contour, 1=color fill)

Field 12: Contouring start value (nContourStart)

Field 13: Contouring end value (nContourEnd)

Field 14: Contouring interval (nContourInterval)

Field 15: Label length code, determines the part of the value to use for labeling contours (nLabelLengthCode) (0=all (4 characters), 1=middle 2 characters, 2=right 2 characters, 3=right 3 characters, 4=left 2 characters, 5=left 3 characters, 6=4th character, 7=3rd character, 8=2nd character, 9=1st character)

Field 16: High/Low label flag (nHiLoLabelFlag)

Field 17: Process code type (nProcessCodeType) (0=contour, 1=wind barbs, 2=direction arrow with feathers on tail, 3=direction arrow with feathers on tip, 4=reverse direction arrow with feathers on tip, 5=boundary data, 6=text data, 7=shading (color fill), 8=grid point data, 9=ocean cross sections and profiles, A=fronts, B=satellites, C=ocean currents).
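As an illustration, a client could split such a line with sscanf as sketched below. Only the first ten fields are extracted here; the structure, function name, and the sample line shown in testing are hypothetical, not taken from an actual acceptList.txt.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* First ten fields of an acceptList.txt line. */
typedef struct {
   int  nCenterID, nSubCenterID, nGridID, nProcID;
   int  nParamID, nLevelType, nLevel1, nLevel2, nUniqueID;
   char szProductName[64];
} ACCEPTENTRY;

int parseAcceptLine ( const char *pszLine, ACCEPTENTRY *pOut ) {
   if ( sscanf ( pszLine,
                 "%d|%d|%d|%d|%d|%d|%d|%d|%d|%63[^|\n]",
                 &pOut->nCenterID, &pOut->nSubCenterID, &pOut->nGridID,
                 &pOut->nProcID, &pOut->nParamID, &pOut->nLevelType,
                 &pOut->nLevel1, &pOut->nLevel2, &pOut->nUniqueID,
                 pOut->szProductName ) != 10 )
      return 1;   /* malformed line */
   return 0;
}
```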

2.1.2.2  Format of the metaData.txt file

This file is the catalog file for the GRIB portion of the database. It identifies each file stored in the database, together with metadata extracted from the file including the Center and processing IDs, grid and process IDs, parameter, level type, level, and level 2 values, base time, and forecast hour for the field contained in the file. Having all of this information pre-extracted from the field files and stored in metaData.txt allows ATCF to quickly check whether the fields it needs have been ingested, and to quickly determine which files to access to use those fields. This file is created and modified by processGRIB.exe.

Each line of the file corresponds to a single GRIB file stored in the database. There are 11 fields in each line, delimited by the pipe (|). Again, the WMO identifiers come from WMO TD-No. 611, Guide to WMO Binary Code Forms, Part 2, A Guide to the Code Form FM 92-IX Ext. GRIB, Edition 1. Where appropriate, the applicable code table within this publication is cited for each field.

Field 1: WMO Center ID (nCenterID) (Table 0, Part 1)

Field 2: WMO Sub Center ID (nSubCenterID) (Table 0, Part 2)

Field 3: WMO Grid ID (nGridID) (Table B, although there are additional values supplied by the Centers)

Field 4: WMO Processing ID (nProcID) (Table A says these are supplied by the Centers)

Field 5: WMO Parameter ID (nParamID) (Table 2)

Field 6: WMO Level Type (nLevelType) (Table 3)

Field 7: Actual Level Value (nLevel1) (Table 3)

Field 8: Level 2 Value, used for layered products like “850mb-500mb mean wind U” (nLevel2) (Table 3)

Field 9: Base time of the field (also the label for the directory under which the file is stored)

Field 10: Forecast hour (tau) of the field (time in hours after the base time for which the forecast is valid)

Field 11: File name of the GRIB file. These file names are assigned by the data source, and are not significant in ATCF processing.
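A client that prefers not to use the getGRIBCatalog API (Section 4) could read metaData.txt directly; the sketch below parses one catalog line. The structure, function name, and the sample file name used in testing are hypothetical.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The eleven fields of a metaData.txt catalog line. */
typedef struct {
   int  nCenterID, nSubCenterID, nGridID, nProcID;
   int  nParamID, nLevelType, nLevel1, nLevel2;
   char szBaseTime[16];
   int  nTau;
   char szFileName[128];
} METAENTRY;

int parseMetaLine ( const char *pszLine, METAENTRY *pOut ) {
   if ( sscanf ( pszLine,
                 "%d|%d|%d|%d|%d|%d|%d|%d|%15[^|]|%d|%127[^|\n]",
                 &pOut->nCenterID, &pOut->nSubCenterID, &pOut->nGridID,
                 &pOut->nProcID, &pOut->nParamID, &pOut->nLevelType,
                 &pOut->nLevel1, &pOut->nLevel2, pOut->szBaseTime,
                 &pOut->nTau, pOut->szFileName ) != 11 )
      return 1;   /* malformed line */
   return 0;
}
```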

3  Installing the fasDB Database

The tar file containing the fasDB setup can be obtained at: ftp.nrlmry.navy.mil/ftp/pub/receive/sampson/fasDB.tgz.

Expand the tar file in the directory in which you wish to run the database. For this example, we will put the database directory in the user's home directory:

1) cd ~

2) tar -xvzf fasDB.tgz

Install the cron entries in the fasDB directory (fasDB/cron_entries) in the database owner’s crontab. The directories in the two crontab entries should be modified to match the directories in the database. The example does not necessarily have the correct entries for your installation.

3) crontab -e

4) insert/modify lines

5) :x

The crontab should be ready for action. You can now test the database by copying sample GRIB data into the fasDB/GribData/incoming directory.

6) cd ~/fasDB

7) cp sample_ukmet_2010081112.tar.gz GribData/incoming

8) cd ~/fasDB/GribData/incoming

9) tar -xvzf sample_ukmet_2010081112.tar.gz

If the cron entries are working correctly, the data should disappear from the incoming directory and reappear in a ~/fasDB/GribData/dataStore/2010081112 directory. This indicates that the scripts in the crontab are working correctly and that the GRIB data was ingested. If the GRIB data files don't disappear from the incoming directory, your cron entries are not working. If the GRIB files disappear and don't show up in ~/fasDB/GribData/dataStore/2010081112, the ingest log (~/fasDB/GribData/logs/*.log) should indicate the problem.

Now check the BUFR data:

10) cd ~/fasDB

11) cp sample_bufr_20100813.tgz BufrData/incoming

12) cd ~/fasDB/BufrData/incoming

13) tar -xzvf sample_bufr_20100813.tgz

If the BUFR data is ingested and decoded properly, the files disappear from the incoming directory and reappear in the ~/fasDB/BufrData/dataStore/20100813 directory. As with the GRIB data, logs are written to ~/fasDB/BufrData/logs.

4  Accessing Data within the fasDB Database

An application can access data within the fasDB database by finding the file that contains the data required and opening it to read the data. In the case of point (BUFR) data, the filenames are consistent and standardized to reflect the data contained in each file. For gridded data, the situation is less clear as the filenames do not contain all of the descriptive data needed; therefore an API is provided to return a catalog of the files within the database and metadata describing the contents of each file. The application can then loop through the catalog entries until it finds the data of interest, and the catalog will provide the name of the file that contains the data.

4.1  The getGRIBCatalog API

The API getGRIBCatalog performs the function of obtaining the catalog of GRIB data within the database. It returns a pointer to a GRIBCATALOG structure and a pointer to the number of entries in the catalog. An example showing a call to this API is shown below. The main routine in this example calls getGRIBCatalog to get the catalog data, then calls a couple of subroutines to print the catalog data and free the memory used by the catalog.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#include "GRIBAtcf.h"

void dumpCatalog ( GRIBCATALOG *pGRIBCat, int nNumFound );
void freeCatalog ( GRIBCATALOG **ppGRIBCat, int nNumFound );

int main ( int argc, char **argv ) {
   GRIBCATALOG   *pGRIBCat = 0;
   int           nNumFound = 0, nStatus = 0;

   nStatus = getGRIBCatalog ( &pGRIBCat, &nNumFound ); /* get the catalog */

   printf ( "Status after getGRIBCatalog:  %d\n", nStatus );
   printf ( "%d entries in the GRIBCatalog\n", nNumFound );

   dumpCatalog ( pGRIBCat, nNumFound );  /* print the catalog */

   freeCatalog ( &pGRIBCat, nNumFound ); /* free the memory used */

   exit ( 0 );
}  /*  End of main  */

void dumpCatalog ( GRIBCATALOG *pGRIBCat, int nNumFound ) {
   int i = 0, j = 0, k = 0;

   for ( i = 0; i < nNumFound; i++ ) {
      printf ( "Product #%d\n", i + 1 );
      printf ( "\t %s\n", pGRIBCat[i].pszProductName );
      printf ( "\t\t  CenterID = %d, SubCenterID = %d, GridID = %d, ProcID = %d\n",
               pGRIBCat[i].nCenterID, pGRIBCat[i].nSubCenterID,
               pGRIBCat[i].nGridID, pGRIBCat[i].nProcID );
      printf ( "\t\t  ParamID = %d, levelType = %d, level1 = %d, level2 = %d, uniqueID = %d\n",
               pGRIBCat[i].nParamID, pGRIBCat[i].nLevelType,
               pGRIBCat[i].nLevel1, pGRIBCat[i].nLevel2,
               pGRIBCat[i].nUniqueID );
      printf ( "\t\t  DisplayType = %d, ContourStart = %d, ContourEnd = %d, ContourInterval = %d\n",
               pGRIBCat[i].nDisplayType, pGRIBCat[i].nContourStart,
               pGRIBCat[i].nContourEnd, pGRIBCat[i].nContourInterval );
      printf ( "\t\t  LabelLengthCode = %d, HiLoLabelFlag = %d, ProcessCodeType = %d\n",
               pGRIBCat[i].nLabelLengthCode, pGRIBCat[i].nHiLoLabelFlag,
               pGRIBCat[i].nProcessCodeType );
      for ( j = 0; j < pGRIBCat[i].nNumBT; j++ ) {
         printf ( "\t\t  Basetime (%s:%ld)\n",
                  pGRIBCat[i].pBT[j].pszBasetime,
                  pGRIBCat[i].pBT[j].lEpochTime );
         printf ( "\t\t  Taus:\n" );
         for ( k = 0; k < pGRIBCat[i].pBT[j].nNumTaus; k++ ) {
            printf ( "\t\t\t%d:(%s)  (%s)\n", pGRIBCat[i].pBT[j].pnTaus[k],
                     pGRIBCat[i].pBT[j].pValidTime[k],
                     pGRIBCat[i].pBT[j].pszFile[k] );
         }
         printf ( "\n" );
      }
   }
}  /*  End of subroutine dumpCatalog  */

4.2  The GRIBCATALOG Structure

This is the structure in which the catalog data are returned by getGRIBCatalog.

typedef struct tagGRIBCatalog {
     int     nCenterID, nSubCenterID, nProcID, nGridID;
     int     nParamID, nLevelType, nLevel1, nLevel2;
     int     nUniqueID;
     int     nDisplayType, nContourStart, nContourEnd, nContourInterval;
     int     nLabelLengthCode, nHiLoLabelFlag, nProcessCodeType;
     char   *pszProductName;
     int     nNumBT;
     BT     *pBT;
} GRIBCATALOG, *PGRIBCATALOG;

The data elements of this structure are as follows. The WMO-specified fields in this file come from WMO TD-No. 611, Guide to WMO Binary Code Forms, Part 2, A Guide to the Code Form FM 92-IX Ext. GRIB, Edition 1. Where appropriate, the applicable code table within this publication is cited for each field.

nCenterID: WMO Center ID (Table 0, Part 1)

nSubCenterID: WMO Sub Center ID (Table 0, Part 2)

nProcID: WMO Processing ID (Table A says these are supplied by the Centers)

nGridID: WMO Grid ID (Table B, although there are additional values supplied by the Centers)

nParamID: WMO Parameter ID (Table 2)

nLevelType: WMO Level Type (Table 3)

nLevel1: Actual Level Value (Table 3)

nLevel2: Level 2 Value, used for layered products like “850mb-500mb mean wind U” (Table 3)

nUniqueID: Unique ID, unique per product

nDisplayType: Display Type (0=contour, 1=color fill)

nContourStart: Contouring start value

nContourEnd: Contouring end value

nContourInterval: Contouring interval

nLabelLengthCode: Label length code, determines the part of the value to use for labeling contours (0=all (4 characters), 1=middle 2 characters, 2=right 2 characters, 3=right 3 characters, 4=left 2 characters, 5=left 3 characters, 6=4th character, 7=3rd character, 8=2nd character, 9=1st character)

nHiLoLabelFlag: High/Low label flag

nProcessCodeType: Process code type (0=contour, 1=wind barbs, 2=direction arrow with feathers on tail, 3=direction arrow with feathers on tip, 4=reverse direction arrow with feathers on tip, 5=boundary data, 6=text data, 7=shading (color fill), 8=grid point data, 9=ocean cross sections and profiles, A=fronts, B=satellites, C=ocean currents).

pszProductName: ProductName (filename)

nNumBT: Number of base times

pBT: Pointer to the base times structure, defined as follows:

typedef char timestring[16];
typedef char namestring[128];

typedef struct tagBT {
     char       *pszBasetime;
     long        lEpochTime;
     int         nNumTaus;
     int        *pnTaus;
     namestring *pszFile;
     timestring *pValidTime;
} BT, *PBT;

4.3  getGRIBCatalog Usage Example

The following example shows snippets of the code used to extract model data for the “bams” aid in the Automated Tropical Cyclone Forecasting (ATCF) application, and demonstrates the use of getGRIBCatalog by an application to find the filename of the file containing data matching a set of input parameters.

  if ( !pGRIBCat ) {
       nStatus = getGRIBCatalog ( &pGRIBCat, &nNumFound );
  }
	.
	.
  .

       nStatus = getFName ( pGRIBCat, nNumFound, 58, 0, 240, 83, szBaseTime,
                            *itau, *parm, 101, *presl, *presu, &pszFileName );

     /*  Sub getFName, referenced above.  */

     /* This subroutine gets the filename of the file containing information */
     /* that matches the input criteria. Inputs are the catalog itself, the */
     /* number of items in the catalog, and the metadata parameters identifying */
     /* the desired field. */

     int getFName ( GRIBCATALOG *pGRIBCat, int nNumFound, int nCenterID,
                    int nSubCenterID, int nGridID, int nProcID, char *szBaseTime,
                    int nTau, int nParam, int nLevelType, int nLevel1,
                    int nLevel2, char **ppszFileName ) {
        int i = 0, j = 0, k = 0, nStatus = 0;
        char *pszFileName = 0;

        /* find entries matching center, grid, parameter, level inputs */

       for ( i = 0; i < nNumFound; i++ ) {
          if ( ( pGRIBCat[i].nCenterID == nCenterID ) &&
               ( pGRIBCat[i].nSubCenterID == nSubCenterID ) &&
               ( pGRIBCat[i].nGridID == nGridID ) &&
               ( pGRIBCat[i].nProcID == nProcID ) &&
               ( pGRIBCat[i].nParamID == nParam ) &&
               ( pGRIBCat[i].nLevelType == nLevelType ) && 
               ( pGRIBCat[i].nLevel1 ==  nLevel1 ) &&
               ( pGRIBCat[i].nLevel2 ==  nLevel2 ) )
          {
            /* find the desired base time */
            for ( j = 0; j < pGRIBCat[i].nNumBT; j++ ) {
              if ( ! strncmp ( pGRIBCat[i].pBT[j].pszBasetime,
                   szBaseTime, 10 ) ) {
                       /* find the desired forecast hour (tau) */

                       for ( k = 0; k < pGRIBCat[i].pBT[j].nNumTaus; k++ ) {
                           if ( pGRIBCat[i].pBT[j].pnTaus[k] == nTau ){
                               printf ( "Got a match:  %s\n",
                                        pGRIBCat[i].pBT[j].pszFile[k] );
                      
                               /* get the filename */

                               pszFileName = ( char * ) calloc ( 1,
                                              strlen ( pGRIBCat[i].pBT[j].pszFile[k] ) +
                                              8 );
                               strncpy ( pszFileName, pGRIBCat[i].pBT[j].pszFile[k],
                                         strlen ( pGRIBCat[i].pBT[j].pszFile[k] ) );
                               *ppszFileName = ( char * ) pszFileName;
                               return ( 0 );
                       }  /*  End of tau match  */
                 }  /*  End of for each tau  */
              }  /*  End of if basetime match  */
           }  /*  End of for each basetime  */
        }  /*  End of product match  */
     }  /*  End of for each product  */

return ( 1 );
}  /*  End of subroutine getFName  */

5  Referenced Documents

WMO TD-No. 611
May, 1994

World Weather Watch Technical Report No. 17, Guide to WMO Binary Code Forms

6  Notes

6.1   Glossary of Acronyms

ATCF

Automated Tropical Cyclone Forecasting

WMO

World Meteorological Organization

METOC

Meteorological and oceanographic

API

Application Program Interface

NRL

Naval Research Laboratory

MMD

Marine Meteorology Division

BUFR

Binary Universal Form for the Representation of meteorological data

GRIB

Gridded Binary format