How Does GPS Work?
Eric Olson | October 30, 2018

The Global Positioning System (GPS) is a global navigation satellite system continuously broadcasting signals that allow receivers anywhere in the world to determine their location. Launched as a military project by the U.S. Department of Defense in 1973, the system became fully operational in 1995. GPS is now also used for myriad civilian applications, including navigation, surveying and clock synchronization.
At a basic level, GPS enables position determination through a process of trilateration. This method establishes an unknown position geometrically from measured distances to three known locations. For GPS, those known locations are the precise positions of satellites in a constellation of about 30 spacecraft circling the planet in twenty-thousand-kilometer-high medium-Earth orbits. A GPS receiver – a smartphone, for example – requires, at a minimum, a lock on four satellites: the position solution must resolve not only the receiver's three-dimensional coordinates but also a fourth unknown, the offset of the receiver's imperfect clock from satellite time.
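To make the geometry concrete, here is a minimal sketch of trilateration, assuming perfectly known satellite positions and exact ranges (no clock error yet; that term is added in a later sketch). The coordinates are made up for illustration, and the nonlinear range equations are solved with a few Gauss-Newton iterations:

```python
# Minimal trilateration sketch: recover a 3D position from exact ranges to
# satellites at known positions. All numbers are illustrative, not real
# ephemeris data.
import numpy as np

def trilaterate(sats, ranges, iters=10):
    """Gauss-Newton solution of ||x - s_i|| = range_i for x."""
    x = np.zeros(3)
    for _ in range(iters):
        diffs = x - sats                        # satellite-to-receiver vectors
        dists = np.linalg.norm(diffs, axis=1)   # predicted ranges
        residuals = ranges - dists              # measured minus predicted
        H = diffs / dists[:, None]              # Jacobian: unit line-of-sight vectors
        x += np.linalg.lstsq(H, residuals, rcond=None)[0]
    return x

# Four satellites at roughly GPS orbital radius (coordinates in meters)
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,  6_100e3, 18_390e3],
])
truth = np.array([-40e3, -17e3, 6_370e3])       # a point near Earth's surface
ranges = np.linalg.norm(sats - truth, axis=1)
print(trilaterate(sats, ranges))                # ~truth
```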
Each satellite broadcasts a signal containing its precise orbital position and the exact time of the signal’s transmission. Receivers on the ground (or in the sky on aircraft) pick up this signal and calculate their distance to the satellite using a simple relationship: distance = velocity x time. Although this equation is straightforward, there are a number of subtleties involved in the velocity and time components that make the solution complex. We will delve into these details later, but for now, let’s take a look at the structure of a GPS signal.
GPS Signals
The signal broadcast by each GPS satellite has its own complexities. To start, the satellites broadcast at multiple frequencies. The oldest generation of GPS satellites in operation, known as Block IIA, broadcasts at two frequencies: L1 (1575.42 MHz) and L2 (1227.60 MHz). Over time, the GPS system has evolved with the launch of new generations of satellites. Beginning in 2005, Block IIR-M satellites introduced a second civil signal, known as L2C, broadcast at the same frequency as L2. Block IIF satellites launched from 2010 to 2016 added a third civil signal, L5 (1176.45 MHz). Block III satellites, scheduled to begin launching in December 2018, will add a fourth civil signal, L1C, on the L1 frequency. The new signals promise enhanced accuracy, reliability and integrity, and each new generation of satellites remains backward compatible, broadcasting all legacy signals. In addition, GPS satellites broadcast secure positioning signals for military use as well as a dedicated L3 signal at 1381.05 MHz that carries data from satellite sensors used to detect and locate nuclear detonations in and above Earth's atmosphere.
Embedded in the GPS positioning signals are modulated data containing the satellite orbital position and clock information receivers need to compute their location on the ground. For example, the GPS L1 Coarse/Acquisition (C/A) signal consists of a sinusoidal carrier wave oscillating at 1575.42 MHz, phase modulated with a Pseudo-Random Noise (PRN) code at a chipping rate of 1.023 MHz and a navigation message at 50 bits per second. The PRN code enables receivers to measure the signal's travel time between satellite and receiver, while the navigation message carries the satellite's orbital position and velocity, its operational status, and other information.
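As a concrete illustration of the PRN component: each satellite's C/A code is a 1,023-chip Gold code produced by two 10-stage shift registers, with a satellite-specific pair of taps selecting the code, per the published IS-GPS-200 scheme. The sketch below follows that scheme; the tap table covers only PRNs 1 through 5, and this is an illustrative sketch rather than production receiver code:

```python
# Sketch of GPS C/A code (Gold code) generation: two 10-stage linear
# feedback shift registers, G1 and G2; each C/A chip is G1's output XORed
# with two selected stages of G2. Tap pairs here cover PRNs 1-5 only.
G2_TAPS = {1: (2, 6), 2: (3, 7), 3: (4, 8), 4: (5, 9), 5: (1, 9)}

def ca_code(prn):
    """Return the 1023-chip C/A code for the given PRN as a list of 0/1."""
    g1 = [1] * 10                      # both registers start as all ones
    g2 = [1] * 10
    s1, s2 = G2_TAPS[prn]
    chips = []
    for _ in range(1023):
        chips.append(g1[9] ^ g2[s1 - 1] ^ g2[s2 - 1])
        # Feedback: G1 from stages 3 and 10; G2 from stages 2,3,6,8,9,10.
        f1 = g1[2] ^ g1[9]
        f2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]
        g1 = [f1] + g1[:9]             # shift right, feedback enters stage 1
        g2 = [f2] + g2[:9]
    return chips

code = ca_code(1)
print(len(code), code[:10])
# PRN 1 begins 1 1 0 0 1 0 0 0 0 0 (octal 1440 in the interface spec).
```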
Recall the simple equation used to determine the distance between a GPS receiver and a GPS satellite: distance = velocity x time. To find the time component, receivers generate a local replica of the satellite's PRN code and slide it in time against the code actually received. The offset at which the two align reveals how long the signal took to travel from the satellite to the receiver.
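A toy version of that comparison is sketched below, with a random ±1 sequence standing in for a real C/A code and the delay and noise level made up: the receiver correlates the incoming samples against every possible lag of its replica, and the lag with the peak correlation is the code phase.

```python
# Sketch of code-phase measurement by correlation: the lag that maximizes
# the correlation between received samples and the local replica gives the
# signal's travel time, modulo one code period.
import numpy as np

rng = np.random.default_rng(0)
replica = rng.choice([-1.0, 1.0], size=1023)        # local PRN replica
true_shift = 317                                    # unknown delay, in chips
received = np.roll(replica, true_shift)             # delayed satellite code
received += 0.5 * rng.standard_normal(1023)         # receiver noise

# Test every possible lag (real receivers do this in parallel hardware).
corr = [np.dot(received, np.roll(replica, lag)) for lag in range(1023)]
print(int(np.argmax(corr)))                         # 317

# At 1.023 Mchips/s one chip lasts ~977.5 ns, i.e. ~293 m of range; whole
# code periods of delay are resolved elsewhere in the receiver.
```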
Sources of Error
It might be assumed that because GPS signals are simply electromagnetic waves, the velocity component must equal the speed of light. In reality, the signal is delayed as it passes through two layers of the atmosphere: the ionosphere and the troposphere. In the ionosphere, free electrons produced by ionizing solar radiation slow the signal by an amount that depends on its frequency. In the troposphere, varying pressure, temperature and humidity affect the signal. On top of this, both of these atmospheric delays are constantly changing, with conditions in the ionosphere fluctuating with solar activity and conditions in the troposphere subject to shifting weather.
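The frequency dependence of the ionospheric delay is, incidentally, a key reason the satellites broadcast on more than one frequency: to first order the delay scales as 1/f², so a receiver tracking two frequencies can combine its measurements to cancel it. Here is a minimal sketch of that standard "ionosphere-free" combination, with made-up range and delay numbers:

```python
# Illustrative sketch of dual-frequency ionospheric correction: the
# first-order group delay goes as 1/f^2, so two pseudoranges on different
# frequencies can be combined to remove it.
F_L1 = 1575.42e6   # Hz
F_L5 = 1176.45e6   # Hz

def iono_free(p1, p5, f1=F_L1, f5=F_L5):
    """First-order ionosphere-free pseudorange from two frequencies."""
    g = (f1 / f5) ** 2
    return (g * p1 - p5) / (g - 1)

true_range = 21_000_000.0                 # meters, illustrative
iono_l1 = 5.0                             # meters of delay at L1
iono_l5 = iono_l1 * (F_L1 / F_L5) ** 2    # same ionosphere, lower frequency
p1, p5 = true_range + iono_l1, true_range + iono_l5
print(iono_free(p1, p5))                  # ~21,000,000.0
```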
The time component in the distance calculation is similarly affected by uncertainty. In an ideal world, the signal's travel time would simply equal the time stamp generated by the receiver at the moment of reception minus the time stamp generated by the satellite at the moment of transmission. The problem is that the time stamps contain errors due to imperfect internal clocks, and because the signal travels at nearly the speed of light, even a one-microsecond clock error corresponds to roughly 300 meters of range error. GPS satellites are equipped with highly stable rubidium and cesium oscillators, but GPS receivers typically make do with cheaper and less precise quartz oscillators.
Another source of error lies in the satellite orbits themselves, which differ ever so slightly from their reported orbits due to the varying pressure of solar radiation, the less-than-perfect uniformity of Earth's gravitational field, and tidal forces from the sun and the moon.
To correct for satellite clock and orbit errors, GPS relies on a ground-based control segment: a network of monitoring, control and upload stations. The monitoring stations continuously track satellite signals and forward the data to the master control station for analysis. The master control station calculates corrections to the satellite clock parameters by comparing the time stamps from the satellite clocks to an extremely accurate cesium atomic clock, and it computes corrected orbital values when a satellite's reported orbital position varies from its true position. The corrected data is then uplinked to the satellites by the upload stations so that future GPS signals carry updated orbital position data along with estimates of the satellite clock offset.
Despite these painstaking steps to provide highly accurate satellite clock and orbit data in GPS signals, a significant amount of uncertainty remains baked into the clock offset and orbital position data, resulting in about ±5 meters of position error.
There are additional sources of error that creep into the equation as well. The electronics of the receiver are inherently noisy, like static on a car radio. And GPS signals, originating from satellites far overhead, can bounce off buildings before reaching a receiver, leading to “multipath” effects.
So the simple equation calculating the range between a GPS receiver and a GPS satellite (distance = velocity x time) must be modified to correct for all of these sources of error. The result is a “pseudorange” – not quite the actual distance between receiver and satellite, but as close as we can get:
p = ρ + dρ + c(dt − dT) + d_ion + d_trop + ε_mp + ε_p
where:
p = the pseudorange measurement
ρ = the true range between receiver and satellite
dρ = satellite orbital errors
c = the speed of light
dt = satellite clock offset from GPS time
dT = receiver clock offset from GPS time
d_ion = ionospheric delay
d_trop = tropospheric delay
ε_mp = multipath effects
ε_p = receiver noise
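Here is a minimal sketch of the position fix this equation implies, assuming the atmospheric, multipath and noise terms have been modeled away and the satellite clock offset dt already corrected using the broadcast data. That leaves four unknowns – the receiver's three coordinates plus its clock term – which is why four satellites are the minimum. The geometry reuses the illustrative numbers from the trilateration sketch above:

```python
# Minimal position-fix sketch: each cleaned-up pseudorange reduces to
# p_i = ||x - s_i|| + b, where b collapses the c(dt - dT) clock term into
# a single receiver-side bias in meters. Solve for (x, y, z, b) by
# Gauss-Newton; all numbers are illustrative.
import numpy as np

def solve_fix(sats, pseudoranges, iters=10):
    """Gauss-Newton solve for receiver position (m) and clock bias (m)."""
    state = np.zeros(4)                              # x, y, z, b
    for _ in range(iters):
        diffs = state[:3] - sats                     # satellite-to-receiver vectors
        dists = np.linalg.norm(diffs, axis=1)        # geometric ranges
        residuals = pseudoranges - (dists + state[3])
        # Jacobian: unit line-of-sight vectors for x,y,z; 1 for the bias.
        H = np.hstack([diffs / dists[:, None], np.ones((len(sats), 1))])
        state += np.linalg.lstsq(H, residuals, rcond=None)[0]
    return state[:3], state[3]

sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,  6_100e3, 18_390e3],
])
truth, bias = np.array([-40e3, -17e3, 6_370e3]), 30_000.0  # ~100 us of clock error
pr = np.linalg.norm(sats - truth, axis=1) + bias
pos, b = solve_fix(sats, pr)
print(pos, b)                                        # ~truth, ~30000.0
```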
Various approaches exist to minimize these errors, in some cases achieving sub-centimeter levels of position accuracy. These enhanced precision techniques are discussed in Part 2 of this series.
Further Reading
GPS and GNSS for Geospatial Professionals | Penn State's College of Earth and Mineral Sciences
An Introduction to GNSS | NovAtel Inc.
NAVIPEDIA | European Space Agency