Welcome to the GAMIT and GLOBK FAQ Page

1. GLOBK will not run when compiled with g77. How do I configure gcc and g77 for GAMIT and GLOBK?

2. When using sh_gamit my bad a priori coordinates seem to keep coming back even when I use the lfile update option.

3. Why are there differences between GLOBK and GLRED/ENSUM velocity results (in a general sense)? Related question: in which results do you put the most confidence?

4. What are the rules that control renames with position changes and earthquakes in the earthquake file? For example: what happens if I am moving a site and changing its name over an interval that straddles an earthquake?

5. How should I interpret the uncertainties given by the output files of GAMIT and GLOBK?

FAQ 1. GLOBK will not run when compiled with g77. How do I configure gcc and g77 for GAMIT and GLOBK? To compile and run GAMIT and GLOBK using the GNU gcc/g77 compiler you will need to build a custom version of the compiler. This is necessary because the default version of gcc/g77 allows a maximum unit number (MXUNIT) of only 99, while GLOBK requires an MXUNIT of 9999. Download the source code for the gcc/g77 compilers, unpack the tar files, and read the installation instructions carefully. Take special note of the g77 customization section, which contains the following information:

Larger File Unit Numbers

As distributed, whether as part of `f2c' or `g77', `libf2c' accepts file unit numbers only in the range 0 through 99. For example, a statement such as `WRITE (UNIT=100)' causes a run-time crash in `libf2c', because the unit number, 100, is out of range.

If you know that Fortran programs at your installation require the use of unit numbers higher than 99, you can change the value of the `MXUNIT' macro, which represents the maximum unit number, to an appropriately higher value.

To do this, edit the file `gcc-2.95.2/libf2c/libI77/fio.h' in your `gcc' source tree, changing the following line:

#define MXUNIT 100

Change the line so that the value of `MXUNIT' is defined to be at least one *greater* than the maximum unit number used by the Fortran programs on your system.

(For example, a program that does `WRITE (UNIT=255)' would require `MXUNIT' set to at least 256 to avoid crashing.)

Then build or rebuild `gcc' as appropriate.
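As a minimal sketch of the one-line edit described above: the real file lives at gcc-2.95.2/libf2c/libI77/fio.h in the unpacked source tree, but here the change is demonstrated on a local copy (the demo/ path is illustrative only):

```shell
# Illustrative only: make a local stand-in for libI77/fio.h.
mkdir -p demo/libI77
printf '#define MXUNIT 100\n' > demo/libI77/fio.h

# Raise the maximum unit number to 10000, one greater than the
# largest unit number GLOBK uses (9999). sed keeps a .bak backup.
sed -i.bak 's/#define MXUNIT 100/#define MXUNIT 10000/' demo/libI77/fio.h
cat demo/libI77/fio.h
```

In a real installation you would apply the same substitution to gcc-2.95.2/libf2c/libI77/fio.h and then rebuild gcc as described in its INSTALL document.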

For further details, read the complete gcc INSTALL document and an example installation.

FAQ 2. When using sh_gamit my bad a priori coordinates seem to keep coming back even when I use the lfile update option The most likely reason for this is bad coordinates in the apr file (aprf variable set in process.defaults). Each time sh_gamit is run, the coordinates in the apr file are used to generate the lfile. Any sites whose positions are in an existing lfile in the tables directory, but not in the apr file, are appended to the lfile generated from the apr file. (This approach allows the lfile coordinates to change with time for sites that are in the apr file.) So, if a site does not appear in the apr file, its coordinates will be copied from the updated lfile. But if the bad coordinates are in the apr file, then the bad coordinates will continue to be used.

The solution here is to ensure that only sites with well known coordinates and velocities appear in the apr file. Other site coordinates will then be automatically generated and updated by sh_gamit. If a long series of data is being processed, then at some point globk should be run to determine positions and velocities, and these new estimates added to the apr file.
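For reference, apr-file entries are free-format, one line per site: the 8-character site name, Cartesian position (m), velocity (m/yr), and a reference epoch in decimal years. The entries below are purely hypothetical (made-up site names, coordinates, and velocities), shown only to illustrate the layout:

```
* site         X (m)          Y (m)          Z (m)      Xdot    Ydot    Zdot  epoch
 EXM1_GPS   918129.5800 -4346071.2500  4561977.8300 -0.0160 -0.0043  0.0042 1997.00
 EXM2_GPS -2059164.5700 -3621108.4000  4814432.3500 -0.0178  0.0041 -0.0043 1997.00
```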

FAQ 3. Why are there differences between GLOBK and GLRED/ENSUM velocity results (in a general sense)? Related question: in which results do you put the most confidence? What many users don't appreciate is that GLRED is simply a front-end program to drive GLOBK. Its use is to conveniently allow many runs of GLOBK using a single command. The real distinction in generating a velocity field is that when GLOBK is used directly with velocities estimated, all the correlations between the sites and any temporal correlations are accounted for. When GLRED is used, usually each day of data is processed separately and then a simpler program (such as ENSUM) is used to make linear fits to the time series one component (i.e., north, east, and up) and one station at a time. Whether the two approaches generate the same result depends on the nature of the data being analyzed and on how well the reference frame for the time-series analysis is known.

The original "design/use" of GLOBK was for piecing together networks that at times might not have overlapping reference stations. Since all the results are combined in a single analysis, the reference frame can be built during the analysis. In fact, our preferred style of analysis is to keep all stations and velocities loosely constrained during the initial combination of all of the data with GLOBK. When this combination is completed, the program GLORG is used to determine the rotations (and possibly translations and scale) and their rates of change that best align the loose combination with some reference frame (such as ITRF97, or stations on the stable part of a plate where the initial assumption would be zero motion). In this type of analysis only the station velocities, not the coordinates, need to be constrained (see, e.g., Herring et al., Geophys. Res. Lett., 18, 1893-1896, 1991, or Feigl et al., J. Geophys. Res., 98, 21,677-21,712, 1993).

When GLRED is used, the reference frame needs to be realized for each day of data. Here again we usually do a loose combination and then apply rotations (and possibly translation/scale) to realize the frame. So the use of GLRED assumes you know the reference frame beforehand. GLOBK, on the other hand, can be used to define the reference frame while obtaining the velocity solution.
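As a sketch, the alignment step described above is driven by a small glorg command file. The site names and parameter choices below are illustrative assumptions, not a recommended setup:

```
* Hypothetical glorg command-file fragment for frame realization
 stab_site algo drao nlib wes2   * frame-defining (stabilization) sites
 pos_org  xrot yrot zrot         * estimate rotations in the alignment
 rate_org xrot yrot zrot         * and their rates of change
```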

The pros and cons of each approach:

(a) The GLOBK approach usually takes longer to run because the Kalman filter (KF) state vector contains positions and velocities for all sites used over the duration of all of the surveys. (The use_site/use_pos/use_num commands can be used to limit the number of stations actually used in the analysis.) GLRED, on the other hand, only needs a state vector of the positions of the sites used in each observation session. (Run time goes as the cube of the length of the state vector.) In theory, with the correct process noise models used, the GLOBK analysis is the more rigorous approach.

(b) The GLOBK approach can be sensitive to bad position determinations. For example, a bad antenna height measurement at one station will affect the height velocity for that station. Because GLOBK accounts for correlations, this height-rate error will affect the horizontal velocity estimates, and because of the correlations between stations, it will also affect velocities at other stations. With GLRED, because the inter-site and inter-component correlations are ignored in the fitting to the time series, only the height rate at the bad station is affected.

(c) Since GLRED resolves the reference frame for each observation session, the rotations and translations can vary randomly between sessions. In GLOBK, depending on how the analysis is set up, the translation, for example, is constrained to move along a linear trend. Using the mar_tran command in globk to set the process noise on the translations effectively allows GLOBK to emulate the GLRED behavior. But if mar_tran is not used, then this can contribute to differences between the two approaches.

(d) GLOBK allows temporal process noise to be used in the analysis. We will often assign random-walk process noise to stations to account for temporal correlations in the results. This could also be done in the time-series analysis from GLRED, although none of the fitting programs currently does this.
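The process-noise behavior in (c) and (d) is set with globk markov commands. The values below are illustrative orders of magnitude only, not recommended settings:

```
* Hypothetical globk command-file fragment
* Random-walk position noise (m^2/yr in N, E, U) for all sites:
 mar_neu all 4e-6 4e-6 8e-6 0 0 0
* Allow the network translation to vary between sessions:
 mar_tran 1e-4 1e-4 1e-4 0 0 0
```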

(e) GLOBK is more sensitive to numerical problems when many surveys are combined. Since all the results are "stacked" into a single covariance matrix that is propagated forward in time, a single numerically unstable observation session can corrupt the matrix, and this instability will tend to grow as more data are added. We have had problems with the early IGS SINEX files, which were unstable when deconstrained and in some cases were bad from the start due to bugs in the analysis centers' software. In GLRED, these bad matrices only affect the specific observation session, whereas in GLOBK the effect will propagate into all results.

(f) Of course, to use GLRED you need to be able to define the reference frame (coordinates) for each observation session. For a globally defined frame, this is now easy using the IGS stations and ITRF coordinates and velocities. For a regionally defined frame, you can first perform a GLOBK solution with the frame defined globally, and then use the estimated coordinates and velocities to define the regional frame for each observation session.

Our usual approach is to use GLRED to check the quality of the data and to remove and/or fix problematic stations and sessions. We then use GLOBK to generate the final solution. For early surveys (1980s and early 1990s) we often iterate this whole approach (i.e., we use the coordinates and velocities from the GLOBK run to better define the reference frame for the GLRED runs). We also use the GLRED run to define the process noise that we use in the GLOBK run.

Any large differences between the GLOBK and GLRED runs do need to be investigated to see what is causing them.

FAQ 4. What are the rules that control renames with position changes and earthquakes in the earthquake file? For example: what happens if I am moving a site and changing its name over an interval that straddles an earthquake?

The logic in globk can get very complicated in these cases, and the best solution is to be as explicit as possible in specifying to globk what you want to do. As a general rule: the first name in the rename must be the name of the site in the input hfiles. The second name should be the name you want the site renamed to. So if you are moving a site and changing its name to map between nearby sites (i.e., generating one time series for two different sites), and the time range of the move straddles an earthquake for which you are renaming the sites, then the safest solution is to have two rename entries, one for before the earthquake and one for after, with the latter containing as its second name the post-earthquake name of the site.

Also, if you have a series of moves you want to make to a site (small incremental changes over time), the safest method is to have no overlap in the times of the renames with moves. These should also be broken at the time of the earthquake. If the site name is not being changed except by the earthquake, it should work. In your output, you should see new rename entries that have been generated automatically (broken at the time of the hfile nearest the earthquake).

There are additional cautions for renames with position changes. The renames with position changes should be direct, i.e., a name change from the original name to the name wanted. Specifically, the following case will not work: sio1 -> sio2 -> sio3 for files that contain sio1 only. In this case there should be two rename entries, one for sio1 -> sio3 and the other for sio2 -> sio3. NOTE: This feature with position changes only works correctly for overlapping time intervals in globk 5.05 and later. There can be problems with these types of renames when multiple hfiles are combined (as opposed to glred runs using one hfile at a time). As of 10/04/2000 we are still investigating whether the latter works correctly.
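For example, the direct renames described above might appear in the earthquake file as two entries mapping both old names to the final name. The site names, dates, and field layout below are hypothetical, shown only to illustrate the pattern:

```
* Map both old names directly to the final name (dates illustrative)
 rename sio1_gps sio3_gps 1992 01 01 00 00 1996 06 30 23 59
 rename sio2_gps sio3_gps 1996 07 01 00 00 2000 01 01 00 00
```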

For small station position changes, due for example to adding a radome, for which you accurately know the position change, the current recommended procedure is to update the rename entry to reflect the position change you want to apply, and to use hfupd to update your hfiles.

For generating merged time series, it is recommended that you use the ensum feature, which will automatically generate merged time series if you have solutions with both sites observed in the same hfiles.

FAQ 5. How should I interpret the uncertainties given by the output files of GAMIT and GLOBK?

The velocities given by GLOBK are formal propagations (with no scaling or resetting of confidence levels) of the uncertainties implicit in the h-files from the GAMIT output, which are themselves unscaled propagations of the a priori error assigned to the phase observations (10 mm L1 one-way, 64 mm LC double differences). In a purely formal sense, the GLOBK uncertainties are one-sigma, but since the a priori assigned errors are arbitrary, as is the sampling interval, you can attach no statistical significance to the GAMIT or GLOBK uncertainties. (They happen to be within a factor of 2, usually, of realistic one-sigma uncertainties, which tempts users to take them seriously.)

Deriving realistic uncertainties for position or velocity estimates necessarily involves an assessment of the parameter estimates at some later stage; e.g., repeatabilities of daily, monthly, and/or yearly positions, consistency of velocity estimates, etc. You can get an idea of current thinking about this by looking at Mao et al. [JGR 104, 2797, 1999] and McClusky et al. [JGR 105, 5695, 2000]. To implement these strategies in our GAMIT runs, we rescale the phase uncertainties using the autcln output from postfit editing ('Use N-file' in sestbl.); and in GLOBK, we rescale the h-files (in the gdl list) and/or apply random walk noise (mar_neu) and/or reweight individual stations (sig_neu).
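In practice, these rescalings show up in two places: a variance rescale factor on each h-file line of the .gdl list, and noise commands in the globk command file. The fragment below is illustrative only (file names, factors, and sigma values are hypothetical):

```
* .gdl list: h-file name followed by an optional variance rescale factor
 ../glbf/example1.glx 2.0
 ../glbf/example2.glx 2.0

* globk command file: add white noise (m) to N, E, U sigmas per station
 sig_neu all .001 .001 .003
```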