Level Circuit Adjustment

Level Circuit Adjustment
Permissible misclosures in leveling are usually based on the lengths of the level lines, for example:
Geodetic levelling: ±2√K mm
Secondary levelling: ±5√K mm
Tertiary levelling: ±12√K mm
Engineering purposes: ±6√K mm
Hilly regions: ±24√K mm
where K is the length of the line levelled in kilometers.

Level Circuit Adjustment
Permissible misclosures in leveling are also sometimes based on the number of instrument set-ups, for example:
Tertiary levelling: ±3√(2n) mm
Hilly regions: ±6√(2n) mm
where n is the number of set-ups.
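
As a rough illustration of both rules, the short sketch below evaluates the two tolerance formulas. The function names and the example figures (a 3 km tertiary line run with 24 set-ups) are assumptions for illustration only, not values from the slides.

```python
import math

def allowable_misclosure_by_length(k_km, coeff):
    """Permissible misclosure in mm for a line k_km kilometres long,
    using a rule of the form coeff * sqrt(K) mm."""
    return coeff * math.sqrt(k_km)

def allowable_misclosure_by_setups(n, coeff):
    """Permissible misclosure in mm for n instrument set-ups,
    using a rule of the form coeff * sqrt(2 * n) mm."""
    return coeff * math.sqrt(2 * n)

# Example (assumed): a 3 km tertiary line levelled with 24 set-ups
print(allowable_misclosure_by_length(3.0, 12))   # 12 * sqrt(3)  ~= 20.8 mm
print(allowable_misclosure_by_setups(24, 3))     # 3 * sqrt(48)  ~= 20.8 mm
```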

Level Circuit Adjustment
Circuit adjustments are therefore normally based on the lengths of the lines or on the number of set-ups.
Example: a level loop runs A → B → C → D → A. The elevation of A is known to be 100.00 and the observed elevation differences are +10.60 (A to B), +5.42 (B to C), -8.47 (C to D) and -7.31 (D to A).
Misclosure = +10.60 + 5.42 - 8.47 - 7.31 = +0.24
[Figure: the level loop A-B-C-D-A with the observed difference marked on each line]

Level Circuit Adjustment
To distribute the error in proportion to distance (line lengths AB = 1.0, BC = 0.7, CD = 0.8 and DA = 0.5, total 3.0 km):
Correction rate = misclosure / total distance = 0.24 / 3.0 = 0.08 per km
Correction to each line = correction rate × length of line = -0.08, -0.06, -0.06, -0.04
giving adjusted heights of stations B, C and D of 110.52, 115.88 and 107.35.
[Figure: the level loop with the length of each line marked]
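
A minimal sketch of the distance-proportional adjustment, assuming the line lengths read from the figure (1.0, 0.7, 0.8 and 0.5 km) and rounding each correction to 0.01 as in the hand computation:

```python
# Distance-proportional adjustment of the loop A-B-C-D-A
obs   = [10.60, 5.42, -8.47, -7.31]   # observed differences A->B, B->C, C->D, D->A
dist  = [1.0, 0.7, 0.8, 0.5]          # line lengths in km (as read from the figure)
start = 100.00                        # known elevation of A

misclosure = sum(obs)                 # +0.24
rate = misclosure / sum(dist)         # 0.08 per km

elev = start
for diff, length in zip(obs, dist):
    correction = round(rate * length, 2)
    elev = round(elev + diff - correction, 2)
    print(elev)                       # 110.52, 115.88, 107.35 and back to 100.00
```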

Level Circuit Adjustment
To adjust by number of set-ups, assuming 1 set-up per line:
Correction per set-up = misclosure / number of set-ups = 0.24 / 4 = 0.06
Adjusted elevation differences = +10.54, +5.36, -8.53, -7.37
Adjusted elevations of stations B, C and D = 110.54, 115.90, 107.37
[Figure: the level loop with the same observed differences, each corrected by -0.06]
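
The same loop adjusted by set-ups; a small sketch assuming, as the slide does, one set-up per line:

```python
# Equal correction per set-up, assuming one set-up per line
obs   = [10.60, 5.42, -8.47, -7.31]   # observed differences A->B, B->C, C->D, D->A
start = 100.00                        # known elevation of A
correction = sum(obs) / len(obs)      # 0.24 / 4 = 0.06

adjusted = [round(d - correction, 2) for d in obs]
print(adjusted)                       # [10.54, 5.36, -8.53, -7.37]

elev = start
for d in adjusted:
    elev = round(elev + d, 2)
    print(elev)                       # 110.54, 115.90, 107.37 and back to 100.00
```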

Level Network Adjustment
Least Squares Applied to Level Networks

Least Squares Applied to Level Networks (see Wolf and Ghilani, 1997)
Least squares may be used to determine the best estimates of the heights of the unknown stations in a level network in which heights can be obtained from several loops.
The mathematical model for deriving the height of an unknown station j from a known height at station i and an observed difference from differential leveling is:
ΔElev_ij + v_ij = E_j - E_i
where E_i and E_j are the elevations of stations i and j above the datum and v_ij is the residual of the observation.
[Figure: stations i and j shown with elevations E_i and E_j above the datum]

Least Squares Applied to Level Nets
For a leveling network there are redundant ways of determining the heights of the unknown stations. In this example two benchmarks, BM X = 100.00 and BM Y = 107.50, and three unknown stations A, B and C are connected by seven levelling lines:
Line  Observed difference
1     +5.10
2     +2.34
3     -1.25
4     -6.13
5     -0.68
6     -3.00
7     +1.70
[Figure: the level net, with BM X, BM Y and stations A, B and C joined by lines 1 to 7]

Least Squares Applied to Level Nets
Observation equations can therefore be formed to account for the differences in the values of the unknown heights that would be obtained from the different observations. From the 7 lines, 7 equations can be formed relating the observations to the unknowns. The heights of the 3 stations A, B and C are the 3 unknowns, so there are 4 redundant observations.

Least Squares Applied to Level Nets
The 7 observation equations (taking BM X = 100.00 and BM Y = 107.50 as fixed) are:
Line 1: A - BM X = 5.10 + v1
Line 2: BM Y - A = 2.34 + v2
Line 3: C - BM Y = -1.25 + v3
Line 4: BM X - C = -6.13 + v4
Line 5: B - A = -0.68 + v5
Line 6: B - BM Y = -3.00 + v6
Line 7: C - B = 1.70 + v7
[Figure: the level net with the observed difference written beside each line]
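
For later computation it helps to hold the observations as simple (from, to, difference) records. The sketch below does this in Python; the line directions are my reading of the network figure, so treat them as assumptions.

```python
# Each observation as (from_station, to_station, observed elevation difference);
# the directions follow the observation equations above.
known = {"X": 100.00, "Y": 107.50}                 # benchmark elevations
unknowns = ["A", "B", "C"]

observations = [
    ("X", "A",  5.10),   # line 1
    ("A", "Y",  2.34),   # line 2
    ("Y", "C", -1.25),   # line 3
    ("C", "X", -6.13),   # line 4
    ("A", "B", -0.68),   # line 5
    ("Y", "B", -3.00),   # line 6
    ("B", "C",  1.70),   # line 7
]

redundancy = len(observations) - len(unknowns)     # 7 observations, 3 unknowns
print(redundancy)                                  # 4 redundant observations
```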

Least Squares Applied to Level Nets
[Figure: the level net with BM X = 100.00, BM Y = 107.50 and stations A, B and C joined by lines 1 to 7]

Least Squares Applied to Level Nets The least squares process requires that the residuals be squared and summed, and the sum minimized.

Least Squares Applied to Level Nets

Least Squares Applied to Level Nets These are the Normal Equations
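
One way to reproduce this step is to write the sum of squared residuals symbolically and set its partial derivatives with respect to A, B and C to zero. The sympy sketch below does that using the observation equations as reconstructed above, so its coefficients rest on that assumed reading of the figure.

```python
import sympy as sp

A, B, C = sp.symbols("A B C")
X, Y = 100.00, 107.50                       # benchmark elevations

# Residual of each observation equation (residual = computed - observed)
v = [
    (A - X) - 5.10,    # line 1
    (Y - A) - 2.34,    # line 2
    (C - Y) + 1.25,    # line 3
    (X - C) + 6.13,    # line 4
    (B - A) + 0.68,    # line 5
    (B - Y) + 3.00,    # line 6
    (C - B) - 1.70,    # line 7
]

S = sum(vi**2 for vi in v)                  # sum of squared residuals

# Setting dS/dA = dS/dB = dS/dC = 0 yields the three normal equations
normal_equations = [sp.Eq(sp.expand(sp.diff(S, s) / 2), 0) for s in (A, B, C)]
for eq in normal_equations:
    print(eq)        # 3A - B = 210.94,  -A + 3B - C = 102.12,  -B + 3C = 214.08

print(sp.solve(normal_equations, (A, B, C)))   # ~ {A: 105.14, B: 104.48, C: 106.19}
```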

Least Squares Applied to Level Nets Reducing the normal equations gives For A:

Least Squares Applied to Level Nets For B:

Least Squares Applied to Level Nets For C:

Least Squares Applied to Level Nets The normal equations are then solved simultaneously to yield the values of the unknowns. From (1), A can be expressed in terms of B, giving (4); from (3), B can be expressed in terms of C, giving (5).

Least Squares Applied to Level Nets Substitute (4) and (5) into (2):

Least Squares Applied to Level Nets Substitute the value of C into (5) to obtain B; then substitute the value of B into (1) to obtain A.

Least Squares Applied to Level Nets Go back to the observation equations to compute the residuals.
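
A short numerical check of the elimination described above and of the residuals; the normal-equation coefficients are the ones derived from the (assumed) observation equations, so the adjusted heights and residuals below should be read as a sketch rather than as the slide's own figures.

```python
# Normal equations derived above (coefficients follow the observation
# equations as reconstructed earlier):
#   (1)  3A -  B      = 210.94
#   (2)  -A + 3B -  C = 102.12
#   (3)       -B + 3C = 214.08
#
# (4): from (1), A = (210.94 + B) / 3
# (5): from (3), B = 3C - 214.08
# Multiplying (2) by 3 and substituting (4) gives   8B - 3C = 3(102.12) + 210.94;
# substituting (5) then gives                      21C = 3(102.12) + 210.94 + 8(214.08).
C = (3 * 102.12 + 210.94 + 8 * 214.08) / 21.0
B = 3 * C - 214.08                # back-substitute C into (5)
A = (210.94 + B) / 3.0            # back-substitute B into (1)
print(round(A, 3), round(B, 3), round(C, 3))   # 105.141 104.483 106.188

# Residuals from the observation equations (residual = computed - observed)
X, Y = 100.00, 107.50
v = [
    (A - X) - 5.10,      # line 1
    (Y - A) - 2.34,      # line 2
    (C - Y) + 1.25,      # line 3
    (X - C) + 6.13,      # line 4
    (B - A) + 0.68,      # line 5
    (B - Y) + 3.00,      # line 6
    (C - B) - 1.70,      # line 7
]
print([round(r, 3) for r in v])  # ~ [0.041, 0.019, -0.062, -0.058, 0.022, -0.017, 0.005]
```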

Least Squares Applied to Level Nets The problem can be solved using matrices. First form the observation equations:

Least Squares Applied to Level Nets The X matrix is simple to construct: Next construct the A matrix:

Least Squares Applied to Level Nets The V matrix: The L matrix:

Least Squares Applied to Level Nets The normal equations are formed directly:
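
The matrix formation can be checked numerically. In the sketch below the rows of A and the entries of L follow the observation equations reconstructed earlier (so the signs are an assumption), and the normal-equation matrix AᵀA and the vector AᵀL are formed directly with numpy:

```python
import numpy as np

# Coefficient matrix A (columns: A, B, C) and observation vector L,
# one row per levelling line, following the observation equations above.
A = np.array([
    [ 1,  0,  0],   # line 1:  A          =  105.10
    [-1,  0,  0],   # line 2: -A          = -105.16
    [ 0,  0,  1],   # line 3:          C  =  106.25
    [ 0,  0, -1],   # line 4:         -C  = -106.13
    [-1,  1,  0],   # line 5: -A + B      =   -0.68
    [ 0,  1,  0],   # line 6:       B     =  104.50
    [ 0, -1,  1],   # line 7:      -B + C =    1.70
], dtype=float)

L = np.array([105.10, -105.16, 106.25, -106.13, -0.68, 104.50, 1.70])

N = A.T @ A                    # normal-equation matrix (A^T A)
print(N)                       # [[ 3. -1.  0.] [-1.  3. -1.] [ 0. -1.  3.]]
print(np.round(A.T @ L, 2))    # [210.94 102.12 214.08]
```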

Least Squares Applied to Level Nets The inverse of AᵀA is:

Least Squares Applied to Level Nets

Least Squares Applied to Level Nets

Least Squares Applied to Level Nets

Least Squares Applied to Level Nets To obtain the cofactor matrix:

Least Squares Applied to Level Nets The determinant of (AᵀA) is: The inverse of (AᵀA) is:
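
A small check of the determinant and inverse of AᵀA for this network, computed both through the matrix of cofactors and with numpy's built-in inverse; the N matrix is the one formed in the previous sketch, so it carries the same assumptions.

```python
import numpy as np

N = np.array([[ 3., -1.,  0.],
              [-1.,  3., -1.],
              [ 0., -1.,  3.]])    # A^T A from the previous sketch

print(np.round(np.linalg.det(N)))  # 21.0

# Inverse via the matrix of cofactors: inverse = adjugate / determinant
cof = np.array([[(-1) ** (i + j) * np.linalg.det(np.delete(np.delete(N, i, 0), j, 1))
                 for j in range(3)] for i in range(3)])
print(cof.T / np.linalg.det(N))    # 1/21 * [[8 3 1], [3 9 3], [1 3 8]]
print(np.linalg.inv(N))            # same result from numpy directly
```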

Least Squares Applied to Level Nets

Least Squares Applied to Level Nets

Least Squares Applied to Level Nets To compute the residuals use: V = AX - L
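
Putting the matrix steps together, the sketch below solves for X and evaluates V = AX - L; it reuses the same assumed A and L matrices, so the numbers are illustrative rather than taken from the slides.

```python
import numpy as np

# Same (assumed) A and L as in the earlier sketch
A = np.array([[ 1,  0,  0], [-1,  0,  0], [ 0,  0,  1], [ 0,  0, -1],
              [-1,  1,  0], [ 0,  1,  0], [ 0, -1,  1]], dtype=float)
L = np.array([105.10, -105.16, 106.25, -106.13, -0.68, 104.50, 1.70])

# Least-squares solution X = (A^T A)^-1 (A^T L)
X = np.linalg.solve(A.T @ A, A.T @ L)
print(np.round(X, 3))      # adjusted heights of A, B, C: [105.141 104.483 106.188]

# Residuals
V = A @ X - L
print(np.round(V, 3))      # [ 0.041  0.019 -0.062 -0.058  0.022 -0.017  0.005]
```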