9.1.5 Introduction to GPS Network Processing

OUTLIER TESTING AND RESIDUALS

 


The following is mostly taken from HARVEY (1994), although any textbook on Least Squares adjustment or statistics would give similar information and advice. The residuals are given by:

$v = A\hat{x} - b$        (9.1-13)

where $\hat{x}$ are the adjusted parameters, $A$ is the design matrix and $b$ is the vector of observations.
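To make eqn (9.1-13) concrete, the following is a minimal Python sketch of a parametric Least Squares adjustment; the design matrix, observations and weights are invented for illustration only:

    import numpy as np

    # Minimal parametric Least Squares adjustment (all values illustrative).
    # Model: b + v = A @ x_hat, hence the residuals v = A @ x_hat - b (9.1-13).
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])            # design matrix: 3 observations, 2 parameters
    b = np.array([1.02, 2.01, 2.98])      # observation vector
    P = np.eye(3)                         # weight matrix (equal weights assumed)

    N = A.T @ P @ A                       # normal matrix
    x_hat = np.linalg.solve(N, A.T @ P @ b)   # adjusted parameters
    v = A @ x_hat - b                     # residuals, eqn (9.1-13)
    print(x_hat, v)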


Residual testing in general assumes that the errors in the observations, and hence the residuals, are normally distributed. Therefore, before statistical tests can be applied it may be necessary to check (or formally test) that the residuals are indeed normally distributed.

The familiar bell-shape of the Normal Distribution frequency curve indicates that relatively large residuals can be expected, although these should occur much less frequently than relatively small residuals. For example, 99.7% of all residuals should be less than ±3 times the "root-mean-square" value of the residuals (= the square root of the sum of squares of the residuals divided by the number of residuals), which can be considered an estimate of the standard deviation σ (the square root of the variance) of the observations. Thus the chance of a residual exceeding 3σ is very small. (This is the basis of the oft-used "rule-of-thumb" that rejects any observation with a residual exceeding 3 times the standard deviation of the observations.)
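A short Python sketch of this rule-of-thumb, using the RMS of the residual set as the estimate of σ (the residual values are invented; the last one simulates a gross error):

    import numpy as np

    v = np.array([0.010, -0.020, 0.015, -0.012, 0.008,
                  -0.011, 0.009, -0.014, 0.013, 0.900])   # last value: gross error

    sigma = np.sqrt(np.sum(v**2) / v.size)   # RMS of the residuals
    flags = np.abs(v) > 3.0 * sigma          # rule-of-thumb: reject beyond 3*sigma
    print(np.nonzero(flags)[0])              # -> [9]: only the gross error is flagged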

If one or more of the residuals are significantly larger than either the other residuals in the set, or the residuals obtained from similar adjustments in the past, then it must be decided whether the large residual is merely an infrequent (but legitimate) occurrence of a normally distributed error, or whether the observation contains a gross error and should be rejected.

There is no clear-cut boundary between a "small" error (expected in any observation, a "normal" occurrence!), and a "large" error which can be considered "unnatural". At what cutoff point is an error assumed to belong to a Normal Distribution (ND), or to an "Alternative" (unknown) Distribution (AD)? This cutoff point is known as the critical value (CV): errors below the CV are assumed to belong to the ND, and errors above the CV to the AD.

The CV is based on the standard deviation, hence figures such as 1.96, 2.58 and 3.29 correspond to probabilities of 95%, 99% and 99.9% respectively. The figure chosen for the CV determines what percentage of good observations will be incorrectly rejected. If 2.58 times the standard deviation (99% confidence level) is selected as the CV, it is expected that 1% of good data is rejected (together with any observations with "true" gross errors) -- this is a so-called Type I error. The CV figure defines the level of significance α of the test (α=0.05, 0.01 and 0.001, corresponding to the confidence levels 95%, 99% and 99.9% respectively), and the probability of making a Type I error is therefore a function of the CV (5%, 1% and 0.1%, corresponding to α=0.05, 0.01 and 0.001 respectively).
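These critical values are simply percentiles of the standard Normal Distribution, and can be reproduced with scipy (assumed available):

    from scipy.stats import norm

    # Two-sided critical values for the significance levels quoted above.
    for alpha in (0.05, 0.01, 0.001):
        cv = norm.ppf(1.0 - alpha / 2.0)     # 1.96, 2.58, 3.29
        print(f"alpha = {alpha:5.3f}  ->  CV = {cv:.2f}")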

The second type of false outcome of observation/residual testing is to accept a bad observation (that is, assume it belongs to a Normal Distribution) when it should be rejected (that is, it belongs to an Alternative Distribution) -- in the statistics literature this is referred to as a Type II error. The probability of making a Type II error is denoted β (β=0.30, 0.20 and 0.10, corresponding to probabilities 30%, 20% and 10% respectively), and 1−β is referred to as the power of the test. Hence, if β is set to 20%, there is a 20% chance of incorrectly accepting an observation that should have been rejected (or, equivalently, the power of the test is 80% -- an 80% chance of correctly detecting an outlier when one occurs).
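The following sketch makes β concrete for the two-sided test above: if a bias shifts the mean of the normalised residual by some amount (the shift of 3.42σ below is an assumed value, chosen to match the worked example later in this section), β is the probability that the residual still falls inside the acceptance band:

    from scipy.stats import norm

    alpha = 0.01
    cv = norm.ppf(1.0 - alpha / 2.0)      # acceptance band is [-cv, +cv], cv = 2.58
    shift = 3.42                          # assumed bias, in units of sigma
    beta = norm.cdf(cv - shift) - norm.cdf(-cv - shift)
    print(f"beta = {beta:.2f}, power = {1.0 - beta:.2f}")   # ~0.20 and ~0.80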

The Alternative Distribution may also be a Normal Distribution, but with a different mean and standard deviation -- see Figure below. This would be the situation if the observations are systematically biased in some way. These observations may still be considered outliers.




[Figure: Residuals may belong to either a biased (RH) or unbiased (LH) Normal Distribution.]


Residual testing is usually carried out not on the residual itself, but on a dimensionless quantity known as the "normalised residual" $u_i = v_i/\sigma_{v_i}$, where $\sigma_{v_i}$ is the square root of the $i$-th diagonal element of the cofactor matrix of the residuals $Q_v$ (eqn (7.1-13), in the case of the Least Squares parametric method):

$u_i = v_i / \sqrt{(Q_v)_{ii}}$        (9.1-14)
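Combining eqns (9.1-13) and (9.1-14), a Python sketch of the computation, assuming the standard parametric-case cofactor matrix Qv = P⁻¹ − A N⁻¹ Aᵀ (with N = AᵀPA) and an a priori variance factor of 1; all numerical values are invented:

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0],
                  [1.0, -1.0]])                  # design matrix (illustrative)
    b = np.array([1.02, 2.01, 2.98, -0.95])      # observations (illustrative)
    P = np.eye(4)                                # weight matrix

    N = A.T @ P @ A
    x_hat = np.linalg.solve(N, A.T @ P @ b)      # adjusted parameters
    v = A @ x_hat - b                            # residuals, eqn (9.1-13)
    Qv = np.linalg.inv(P) - A @ np.linalg.solve(N, A.T)   # cofactor matrix of v
    u = v / np.sqrt(np.diag(Qv))                 # normalised residuals, eqn (9.1-14)
    print(u)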

When observations are unbiased (that is, contain no gross error), the normalised residuals are centred around the left-hand ND (see Figure above). An observation is accepted if its normalised residual falls within the band set by the choice of the level of significance (here α=1%, or 0.5% either side of the mean). However, if the observation is biased then the normalised residual will also be biased, and its distribution will be centred around another mean (right-hand ND). There is still a chance that the value of the anomalous normalised residual will fall within the band between -2.58 and +2.58 standard deviations of the mean of the unbiased residuals, and would be incorrectly accepted as an unbiased residual (and therefore an unbiased observation). The probability of this happening is β (here 20%). The separation between the two means is referred to as the upper bound (UB), and its magnitude is the sum of a and b, where a is a function of the parameter α, and b is a function of the parameter β. For example, if α=0.01 and β=0.20, then a=2.58 and b=0.84, resulting in UB=3.42.
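The worked example follows directly from the inverse of the standard Normal Distribution, with a the two-sided value for α and b the one-sided value for β:

    from scipy.stats import norm

    alpha, beta = 0.01, 0.20
    a = norm.ppf(1.0 - alpha / 2.0)   # 2.58
    b = norm.ppf(1.0 - beta)          # 0.84
    print(f"UB = {a + b:.2f}")        # -> 3.42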

How to detect an outlier? Apart from using the above-mentioned "rule-of-thumb", there are several statistical tests that can be applied to the residuals which do not require a modification of the secondary adjustment process. The two most common outlier detection techniques are (CROSS, 1983; HARVEY, 1994):

Baarda's Data Snooping method: each normalised residual is tested against a critical value taken from the standard Normal Distribution (for example 3.29, corresponding to α=0.001), on the assumption that the a priori variance factor is known.

Pope's Tau Test: the residual is standardised using the a posteriori variance factor and tested against a critical value from the Tau distribution, which is a function of the degrees of freedom of the adjustment; this is appropriate when the a priori variance factor is not reliably known.
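A sketch of the two critical values in Python. The w-test critical value comes from the standard Normal Distribution; the Tau critical value is computed here from Student's t using one commonly quoted conversion (the significance level and degrees of freedom are illustrative):

    import numpy as np
    from scipy.stats import norm, t

    alpha, f = 0.001, 10                      # significance level, degrees of freedom

    w_crit = norm.ppf(1.0 - alpha / 2.0)      # Baarda's w-test: ~3.29
    t_val = t.ppf(1.0 - alpha / 2.0, f - 1)   # Student's t with f-1 degrees of freedom
    tau_crit = np.sqrt(f) * t_val / np.sqrt(f - 1.0 + t_val**2)   # Pope's Tau
    print(f"w-test CV = {w_crit:.2f}, Tau CV = {tau_crit:.2f}")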


The following comments can be made regarding the detection, and subsequent elimination, of observation outliers in GPS Least Squares secondary adjustments:


Not mentioned here are procedures based on modifying the Least Squares adjustment process itself, to either make it easier to detect outliers or to make the adjustment procedure less sensitive to the presence of outliers, as in the case of "robust" Least Squares.

	


© Chris Rizos, SNAP-UNSW, 1999