You Are Here: How Google Improves Mobile Location Accuracy

Eric van Rees


Where does the blue dot come from? Ed Parsons explains how Google continues to innovate in the location technology sector.

Ed Parsons is Google’s geospatial technologist.

In this article, he explains how a mobile phone calculates its location, and how Google combines smartphone capabilities with its own location-based services and data to offer users a better experience, whether that’s indoor positioning or pedestrian navigation in a big city.

Ed Parsons, Google’s geospatial technologist

For the last fifteen years, Ed Parsons has been Google’s geospatial technologist.



Originally an academic, he taught GIS courses at Kingston University. He swapped the public sector for the private sector when he joined Google after working for Ordnance Survey, the UK’s national mapping agency.

Over the years, he has watched Google grow and, with that growth, the need for more controls to manage it. However, he stresses that there’s still an element of the startup ethos within Google today.

Here, he shares some of the innovations Google has been working on in the location technology space to provide a better location and mapping experience for users, both indoors and outdoors.

The idea of the fused location service

While everyone is used to being able to position themselves using their mobile phone today, there was a time when GPS-enabled smartphones were rare.

A spray-painted white sign with a red cross and blue lettering that says, "You Are Here".

Parsons remembers that the first GPS-enabled smartphones created a paradigm shift and literally put you on the map, showing your location in the center of a map on a mobile phone, doing away with having to work out where you were.

Parsons explains: “In a sense, this paradigm shift is what the ‘blue dot’ represents: mass access to online mapping capabilities through GPS-enabled smartphones”. 

Today, many people still think the ‘blue dot’ that represents one’s location on a map comes from GPS.

Screenshot of Google Maps on an Android phone showing the blue dot that indicates the phone holder’s location. The lighter blue circle around the darker blue dot indicates the spatial range within which your position is actually located. Acquired 16-November-2021.

However, in reality, today’s ‘blue dot’ embodies a plethora of complex technologies and services that are running on both the network and the mobile device to help define one’s location, which Parsons describes as the ‘fused location service’.

He states that what one would call GPS is actually a lot more complicated than the simple label makes it appear: “What we’re really talking about is multiple GNSS (global navigation satellite systems), whose signals can be received by a chipset in your mobile phone. They all use the same underlying technology, so a single phone can receive the signals of multiple constellations.”
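As a concrete illustration of that multi-constellation reception, Android exposes the satellites a phone’s chipset is currently tracking through the platform’s GnssStatus API. The sketch below (assuming the app already holds the fine-location permission) simply counts satellites per constellation; it is illustrative rather than anything Google Maps itself does:

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.location.GnssStatus
import android.location.LocationManager
import android.util.Log

// Minimal sketch: log how many satellites from each GNSS constellation
// (GPS, GLONASS, Galileo, BeiDou, ...) the phone's chipset currently sees.
// Assumes ACCESS_FINE_LOCATION has already been granted by the user.
@SuppressLint("MissingPermission")
fun logVisibleConstellations(context: Context) {
    val locationManager =
        context.getSystemService(Context.LOCATION_SERVICE) as LocationManager

    val callback = object : GnssStatus.Callback() {
        override fun onSatelliteStatusChanged(status: GnssStatus) {
            // Count satellites per constellation type.
            val counts = mutableMapOf<String, Int>()
            for (i in 0 until status.satelliteCount) {
                val name = when (status.getConstellationType(i)) {
                    GnssStatus.CONSTELLATION_GPS -> "GPS"
                    GnssStatus.CONSTELLATION_GLONASS -> "GLONASS"
                    GnssStatus.CONSTELLATION_GALILEO -> "Galileo"
                    GnssStatus.CONSTELLATION_BEIDOU -> "BeiDou"
                    else -> "Other"
                }
                counts[name] = (counts[name] ?: 0) + 1
            }
            Log.d("GnssDemo", "Satellites in view: $counts")
        }
    }
    locationManager.registerGnssStatusCallback(callback, /* handler = */ null)
}
```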

But there’s more to determining one’s location using mobile technology than GNSS.

Another element is the closest cell tower to your location, which the network has to know in order to route a call to your device when someone calls you.

“This means that every mobile phone we use today has to know where it is. You can build on that coarsest resolution given by a cell tower with GPS. However, in many circumstances GPS isn’t the best technology to use, as the signal is often bounced or impossible to receive, for example when your immediate environment blocks radio signals”.
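The fallback idea Parsons describes, building from the coarse cell-tower fix up to better sources as they become available, can be pictured with a toy sketch like the following. The data types and numbers are invented for illustration; this is not Google’s actual fusion logic:

```kotlin
// Toy illustration (not Google's actual algorithm): a "fused" location simply
// reports the most accurate estimate currently available, from the coarse
// cell-tower fix up to a full GNSS fix. Further sources, such as the Wi-Fi
// positioning described below, slot in the same way.
data class LocationEstimate(
    val source: String,
    val lat: Double,
    val lon: Double,
    val accuracyMeters: Double,  // radius of uncertainty
)

fun fuse(estimates: List<LocationEstimate>): LocationEstimate? =
    estimates.minByOrNull { it.accuracyMeters }  // smallest uncertainty wins

fun main() {
    val available = listOf(
        LocationEstimate("cell tower", 51.5005, -0.1250, accuracyMeters = 1500.0),
        LocationEstimate("gnss", 51.50085, -0.12455, accuracyMeters = 8.0),
    )
    println("Best available fix: ${fuse(available)}")
}
```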

Using Wi-Fi hotspots to identify one’s location

These shortcomings led Google to search for alternative technologies, such as identifying one’s location based on proximity to Wi-Fi hotspots.

Today’s world is full of connected Wi-Fi devices, both at home and in public spaces. Using a Wi-Fi hotspot’s unique identifier, its MAC address, it becomes possible to map a hotspot to a particular location, build a database of those hotspots and their locations, and use that database to provide the device’s location when it is connected to a particular hotspot.

“Not only is this a robust mechanism, it’s also efficient in terms of power consumption. So in this case it’s not GPS that provides the location, but Wi-Fi technology”, adds Parsons.

The phone knows where the Wi-Fi is by using its own database of Wi-Fi hotspots and their locations.

This means there’s no need to use a network service to do that translation, because it has in effect been cached on the device itself. However, there is an element of error correction happening behind the scenes if there’s a mismatch between a cell tower and a Wi-Fi hotspot, for example when a stale database entry refers to a hotspot that has since moved.
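A toy sketch of that lookup might look like the following. The MAC addresses, coordinates and plausibility check are all invented for illustration; Google’s real database and error correction are far more sophisticated:

```kotlin
// Toy illustration (hypothetical data, not Google's database): resolve a
// location from the MAC address (BSSID) of a nearby Wi-Fi hotspot using a
// cache held on the device, and sanity-check it against the coarse
// cell-tower fix to catch hotspots that have moved since they were mapped.
import kotlin.math.abs

data class LatLon(val lat: Double, val lon: Double)

val cachedHotspots = mapOf(           // BSSID -> last known position
    "3c:22:fb:12:34:56" to LatLon(51.5008, -0.1246),
    "a4:91:b1:ab:cd:ef" to LatLon(48.8584, 2.2945),
)

fun locateFromWifi(bssid: String, cellTowerFix: LatLon): LatLon? {
    val hotspot = cachedHotspots[bssid] ?: return null
    // Very rough plausibility check: if the cached hotspot position is far
    // from the cell the phone is connected to, treat the entry as stale.
    val plausible = abs(hotspot.lat - cellTowerFix.lat) < 0.5 &&
                    abs(hotspot.lon - cellTowerFix.lon) < 0.5
    return if (plausible) hotspot else null
}

fun main() {
    val cellFix = LatLon(51.50, -0.12)   // coarse position from the cell tower
    println(locateFromWifi("3c:22:fb:12:34:56", cellFix))  // plausible -> position
    println(locateFromWifi("a4:91:b1:ab:cd:ef", cellFix))  // stale entry -> null
}
```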

Google’s visual positioning capability

But even with both Wi-Fi hotspots and GNSS technology, there remain other problems to resolve when determining one’s precise location.

This happens, for example, when travelling through a subway system in a large city. Down in a tunnel, a phone is hidden away from GNSS, while its accelerometers and digital compass are affected by radio-frequency interference.

“The end result of this is that when you come out of a metro station and you pull out your phone, it really struggles to identify where it is. And if it does identify where it is, the next issue is pose: having been out of signal, your phone is confused about which way it is facing and gives you the wrong directions as a result”, says Parsons.

Google developed a solution for this problem called visual positioning.

Screenshot showing Google's visual positioning feature on Google Maps.
Google Maps can use your phone’s camera to analyze nearby features to improve the spatial accuracy of your location. Screenshot acquired 16-November-2021.

This capability, available in Google Maps for Android and recently rolled out on iOS, lets a user compare the picture the phone’s camera is seeing with a database of Google’s Street View imagery, in order to orient the device for pedestrian navigation.

“This capability supplements GNSS and other technologies to not only give you the ‘blue dot’ to identify where you are, but also to identify which direction you’re heading, compare that to the environment around you and overlay simple augmented reality directions on top of that”. 
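Greatly simplified, the underlying idea can be sketched as matching a descriptor of what the camera currently sees against descriptors of imagery whose capture heading is already known, and adopting the heading of the best match. Everything below is made up for illustration and is not Google’s implementation:

```kotlin
// Greatly simplified sketch of the idea behind visual positioning: compare a
// descriptor of the camera image against descriptors of geotagged imagery
// whose capture heading is known, and take the heading of the closest match.
// All data here is invented for illustration.
data class ReferenceView(val headingDegrees: Double, val descriptor: FloatArray)

fun cosineSimilarity(a: FloatArray, b: FloatArray): Double {
    var dot = 0.0; var na = 0.0; var nb = 0.0
    for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
    return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

// Returns the heading (degrees) of the reference view most similar to the query.
// Assumes the reference list is non-empty.
fun estimateHeading(query: FloatArray, references: List<ReferenceView>): Double =
    references.maxByOrNull { cosineSimilarity(query, it.descriptor) }!!.headingDegrees

fun main() {
    val references = listOf(
        ReferenceView(0.0,   floatArrayOf(0.9f, 0.1f, 0.0f)),
        ReferenceView(90.0,  floatArrayOf(0.1f, 0.9f, 0.1f)),
        ReferenceView(180.0, floatArrayOf(0.0f, 0.1f, 0.9f)),
    )
    val cameraDescriptor = floatArrayOf(0.15f, 0.85f, 0.1f)
    println("Estimated heading: ${estimateHeading(cameraDescriptor, references)} degrees")  // ~90
}
```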

Screenshot showing the visual guidance capabilities in Google Maps.
The visual guidance system in Google Maps uses augmented reality to show which way to navigate. Screenshot acquired 16-November-2021.

Solving multipath problems with 3D city models

Another innovation Google has been looking into is solving multipath problems, which occur when the signal from a GNSS satellite bounces off buildings and other structures, extending the time the signal takes to reach the receiver and introducing errors in the computed location.

Using its detailed model of urban environments, Google is able to model the expected multipath errors of a given place beforehand and cancel them out.

“This is a mechanism that we’ve started rolling out: if you bought a Pixel phone in the last year or so and you’re in one of the major cities in the world, your location will be corrected for multipath using this city morphology model. It works to the extent that we will get you on the right side of the road relative to an actual location with a much higher level of likelihood.

To be able to do this, we try to cache as much of that content on the device as possible, so just the local area you’re likely to be in rather than the whole world in 3D on a phone”, adds Parsons.
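A heavily simplified way to picture the correction: if the 3D city model predicts that a satellite’s signal reaches the phone via a reflection rather than a direct line of sight, the predicted extra path length can be subtracted from that satellite’s measured range before the position is computed. The sketch below uses invented numbers and is not Google’s actual model:

```kotlin
// Heavily simplified illustration of multipath correction using a city model.
// If the 3D model predicts a reflected (non-line-of-sight) path for a
// satellite, the predicted excess path length is removed from the measured
// pseudorange before positioning. Numbers are invented for illustration.
data class SatelliteMeasurement(
    val satelliteId: String,
    val measuredRangeMeters: Double,
    val predictedExcessPathMeters: Double,  // from the 3D city model; 0.0 = direct line of sight
)

fun correctForMultipath(measurements: List<SatelliteMeasurement>): Map<String, Double> =
    measurements.associate { m ->
        m.satelliteId to (m.measuredRangeMeters - m.predictedExcessPathMeters)
    }

fun main() {
    val measurements = listOf(
        SatelliteMeasurement("G05", 22_134_250.0, 0.0),   // direct view of the sky
        SatelliteMeasurement("E11", 23_872_310.0, 35.0),  // reflected off a glass tower
    )
    println(correctForMultipath(measurements))
}
```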

Precise positioning versus approximate positioning

Discussing the different innovations created by Google in the location technology space, Parsons distinguishes different use cases and user needs.

“We probably think too much about precise positioning, when for most use cases we don’t need it. Particularly in IoT applications such as the smart home, it is really just about proximity.

If you ask Alexa to switch on the lights in the bedroom, the device can work out where you are because it’s picked up your voice in that same room. All it needs to know is how the different rooms relate to each other, so it can work out where to switch the lights on if you mention another room.

This is a different approach to location than storing x, y and z values to many decimal places.”
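That relative notion of location can be illustrated with nothing more than a record of which device sits in which room, rather than coordinates. The example below is made up and is not how any particular assistant is implemented:

```kotlin
// Toy illustration of "relative" location for a smart home: the system only
// needs to know which rooms exist and which device sits in each, not
// coordinates. The room a command came from is inferred from which speaker
// heard it; a named room overrides that default.
val roomOfSpeaker = mapOf(
    "speaker-kitchen" to "kitchen",
    "speaker-bedroom" to "bedroom",
)

// Resolve which room a lighting command targets.
fun resolveTargetRoom(heardBy: String, mentionedRoom: String?): String? =
    mentionedRoom ?: roomOfSpeaker[heardBy]

fun main() {
    println(resolveTargetRoom("speaker-bedroom", null))       // "bedroom": lights where you are
    println(resolveTargetRoom("speaker-bedroom", "kitchen"))  // "kitchen": a room you named
}
```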

Getting back to the concept of a fused location service, Google’s intent is to be able to give the user the most precise location that can be managed based on the circumstances and technology available.

However, from a developer’s point of view there is a level of control here, explains Parsons: “An application developer has to make a decision whether or not to use resources that might drain the battery more quickly in return for more precise positioning. At Google we try to make this process as transparent as possible.”
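On Android, for example, that trade-off is exposed explicitly through the priority of each location request made against the Play Services fused location provider. A rough sketch, using the LocationRequest.Builder from recent versions of play-services-location and assuming the user has already granted a location permission:

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.os.Looper
import com.google.android.gms.location.LocationCallback
import com.google.android.gms.location.LocationRequest
import com.google.android.gms.location.LocationResult
import com.google.android.gms.location.LocationServices
import com.google.android.gms.location.Priority

// Rough sketch: request updates from the fused location provider, explicitly
// choosing balanced power over highest accuracy. Assumes the app already
// holds a location permission granted by the user.
@SuppressLint("MissingPermission")
fun startLocationUpdates(context: Context) {
    val client = LocationServices.getFusedLocationProviderClient(context)

    // PRIORITY_HIGH_ACCURACY would lean on GNSS and drain more battery;
    // PRIORITY_BALANCED_POWER_ACCURACY favours Wi-Fi and cell positioning.
    val request = LocationRequest.Builder(
        Priority.PRIORITY_BALANCED_POWER_ACCURACY,
        /* intervalMillis = */ 60_000L
    ).build()

    val callback = object : LocationCallback() {
        override fun onLocationResult(result: LocationResult) {
            val location = result.lastLocation ?: return
            println("Blue dot: ${location.latitude}, ${location.longitude} " +
                    "(accuracy ${location.accuracy} m)")
        }
    }

    client.requestLocationUpdates(request, callback, Looper.getMainLooper())
}
```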

The power-consumption decision leads naturally to decisions about privacy, explains Parsons:

“While you don’t need to share your location when you’re using your flashlight on a mobile phone, things are different if you’re building an application for urban mobility and you’re navigating a city. But even then, you need to minimize the location data you collect, both in terms of how precise the location is and how frequently you collect it.

Google doesn’t share location with any third party because of its sensitive nature, and in almost all cases it doesn’t store location data associated with users unless they’ve specifically asked it to do so.”
