For blind and visually impaired individuals, conventional GPS and navigation devices are often of limited value, since entering data and interpreting results have generally required sight. Over the last 20 years, however, voice-command GPS technologies have steadily improved. Common GPS devices have long offered simple voice commands, but an early, relatively advanced tool was Drishti, a voice-activated navigation system that combined wearable technology, GPS, GIS, and wireless networking. In addition to voice synthesis and recognition, the system helped users anticipate where they might want to go based on memory and repeated use of the software.[1]
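The destination anticipation described above can be pictured as a simple frequency count over past trips. The sketch below is only a minimal illustration of that general idea, not Drishti's actual algorithm; the class, method names, and example destinations are hypothetical.

```python
from collections import Counter

class DestinationPredictor:
    """Hypothetical frequency-based destination predictor.

    Illustrates suggesting destinations from repeated use; it is not
    a reconstruction of Drishti's actual method.
    """

    def __init__(self):
        self.history = Counter()

    def record_trip(self, destination: str) -> None:
        # Each completed trip increments the count for that destination.
        self.history[destination] += 1

    def suggest(self, n: int = 3) -> list[str]:
        # Return the n most frequently visited destinations.
        return [dest for dest, _ in self.history.most_common(n)]


predictor = DestinationPredictor()
for trip in ["campus library", "grocery store", "campus library"]:
    predictor.record_trip(trip)
print(predictor.suggest())  # ['campus library', 'grocery store']
```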
Translating Natural Language into GIS and GPS Commands
The major challenge, even for relatively advanced tools, has been to develop devices that can turn natural language into information a system can act on, whether for GIS calculations or GPS navigation.[2] Voice commands drawn from a fixed vocabulary, such as “zoom in,” have existed for some time and work well when the user speaks the command clearly. Interpreting information that is less easily stated has remained a challenge, particularly for complex commands or queries, such as locations without fixed addresses. Furthermore, fixed-command, voice-based tools still require visual interpretation of the results, which makes them useful for some hands-free operations but not for visually impaired individuals.[3]
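A fixed-vocabulary command handler of the kind described above can be sketched as a simple lookup table. The sketch below is illustrative only; the command phrases, scale handling, and function names are assumptions rather than the behavior of any specific GIS product.

```python
# Minimal sketch of fixed-vocabulary voice control: each recognized phrase
# maps directly to one map operation. Anything outside the vocabulary fails.

def zoom(map_scale: float, factor: float) -> float:
    return map_scale * factor

COMMANDS = {
    "zoom in": lambda scale: zoom(scale, 0.5),   # halve the scale denominator
    "zoom out": lambda scale: zoom(scale, 2.0),  # double it
}

def handle_utterance(utterance: str, scale: float) -> float:
    # Works only if the recognized text exactly matches a known command.
    action = COMMANDS.get(utterance.strip().lower())
    if action is None:
        raise ValueError(f"Unrecognized command: {utterance!r}")
    return action(scale)

print(handle_utterance("Zoom in", 24000.0))  # 12000.0
```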

Later, a navigation system called BLI-NAV was developed exclusively for blind users.[4] Destinations are given by voice command, searched, and a least-cost pathway is computed from several factors. In some ways this tool is simpler than Drishti, but it was intended to be more user friendly: the user provides simple commands, and the tool then finds the desired locations and pathways.
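Least-cost routing of this kind is typically computed as a shortest-path search over a weighted network. The sketch below uses Dijkstra's algorithm on a toy sidewalk graph; the graph, node names, and cost values are assumptions for illustration, not BLI-NAV's actual data or weighting.

```python
import heapq

def least_cost_path(graph, start, goal):
    """Dijkstra search over a weighted graph.

    `graph` maps node -> list of (neighbor, cost) pairs, where the cost
    can already blend several factors (distance, crossings, obstacles).
    """
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + step_cost, neighbor, path + [neighbor]))
    return None

# Toy sidewalk network with costs that penalize busier segments.
graph = {
    "home": [("corner", 1.0), ("park", 4.0)],
    "corner": [("park", 1.5), ("clinic", 3.0)],
    "park": [("clinic", 1.0)],
}
print(least_cost_path(graph, "home", "clinic"))
# (3.5, ['home', 'corner', 'park', 'clinic'])
```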
Voice Control Commands in GIS and GPS
In general, more recent systems break natural language down into relevant command syntax and then translate it into viable navigation commands that the system can execute. This works much like SQL or other database queries, except that here the voice command is what gets translated into machine-readable syntax.[5]
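One way to picture this translation step is a parser that reduces a recognized utterance to a parameterized query. The sketch below is a simplified, hypothetical example, assuming a single regular-expression pattern and a "places" table; real systems use richer grammars or statistical parsers and their own database schemas.

```python
import re

# Illustrative pattern for one utterance type; a production system would
# use a grammar or statistical parser rather than one regular expression.
NAVIGATE_PATTERN = re.compile(r"(?:take me|navigate|go) to (?P<place>.+)", re.IGNORECASE)

def utterance_to_query(utterance: str):
    """Translate a recognized utterance into an SQL-like query string
    plus parameters. Table and column names are hypothetical."""
    match = NAVIGATE_PATTERN.search(utterance)
    if match is None:
        return None
    place = match.group("place").strip().rstrip(".?!")
    sql = "SELECT name, latitude, longitude FROM places WHERE name LIKE ?"
    return sql, (f"%{place}%",)

print(utterance_to_query("Navigate to the central railway station"))
# ('SELECT name, latitude, longitude FROM places WHERE name LIKE ?',
#  ('%the central railway station%',))
```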

Additionally, while voice-command GPS and GIS were initially planned as tools to assist visually impaired people, recent legislation against distracted driving has spurred greater interest in developing GPS devices that are more capable under voice command. In more recent GPS devices, simple functions such as looking up an address are preceded by a command such as “search.”[6] Even for drivers using these hands-free tools, the main challenge continues to be handling natural language, including long sentences and descriptions, and recognizing speech from different kinds of voices. Tools also need to handle searches based on partial information, returning the results that most closely match the desired input. For example, a user may know of a desired restaurant but be unclear about its location. In effect, voice-activated GPS and GIS tools still have some way to go before they become sophisticated at location finding and spatial analysis.
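Matching an imperfectly recognized place name against stored locations can be approximated with simple string similarity. The sketch below uses Python's difflib as a stand-in for the ranking a production geocoder or search index would use; the gazetteer entries and threshold are assumptions.

```python
import difflib

# Small hypothetical gazetteer of restaurant names; a real system would
# query a much larger point-of-interest database.
KNOWN_PLACES = [
    "Luigi's Pizzeria",
    "Lucky Dragon Restaurant",
    "Blue Lagoon Cafe",
]

def closest_places(spoken: str, n: int = 3, cutoff: float = 0.4):
    # Return the stored names most similar to what the recognizer heard.
    return difflib.get_close_matches(spoken, KNOWN_PLACES, n=n, cutoff=cutoff)

# Speech recognition output is often imperfect ("luigis pizza"), so the
# system ranks the closest stored names rather than requiring an exact match.
print(closest_places("luigis pizza"))  # ["Luigi's Pizzeria"]
```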
References
[1] For more on the Drishti tool, see: Helal, A., S.E. Moore, and B. Ramachandran. 2001. “Drishti: An Integrated Navigation System for Visually Impaired and Disabled.” 149–56. IEEE Computer Society. doi:10.1109/ISWC.2001.962119.
[2] For more on the challenges of using natural language techniques with GIS and navigation, see: Zhang, Ling, Yi Long, Chengyang Qian, Leidi Hu, and Guonian Lv. 2007. “The Application of Speech Technology in Mobile GIS.” Edited by Peng Gong and Yongxue Liu, 67542L. doi:10.1117/12.765193.
[3] For an example, see: Chang, Po-Kuang, Jium Ming Lin, and Yuan Lo. 2006. “Integrated Smart Car Navigation and Voice Control System Design.” 760–65. IEEE. doi:10.1109/IECON.2006.347590.
[4] For more on BLI-NAV, see: Sai Santhosh, S., T. Sasiprabha, and R. Jeberson. 2010. “BLI-NAV Embedded Navigation System for Blind People.” 277–82. IEEE. doi:10.1109/RSTSCC.2010.5712859.
[5] For more on translating natural language into a computational search, see: Feng, Jiangfan, and Y. Liu. 2012. “WiFi-Based Indoor Navigation with Mobile GIS and Speech Recognition.” IJCSI International Journal of Computer Science Issues 9 (6, No. 2): 256–63.
[6] For more on voice-controlled GPS devices and their commercial use, see: Dobesova, Zdena. 2012. “Voice Control of Maps.” 460–64. IEEE. doi:10.1109/TSP.2012.6256336.