A challenge for the development community and for researchers working in places with few reliable maps is creating accurate maps that projects can use to derive informative data for policy makers and others. Recent work has attempted to combine artificial intelligence, particularly deep learning methods that learn by passing data through multiple layers of a neural network, with data collected by volunteers or shared by users on popular platforms.
The integration of crowdsourcing and deep learning has the potential to make working in settlement areas easier for researchers and practitioners alike.
Crowdsourcing Geospatial Data Gaps
A recent paper presents a method that combines DeepVGI, a deep learning classification approach, with crowdsourcing of geo-tagged imagery through MapSwipe, a popular map crowdsourcing app. Data from OpenStreetMap and satellite imagery, along with the crowdsourced spatial data, are used to fill key gaps for areas not easily identified on imagery alone.
In this case, DeepVGI classifies spaces as urban or non-urban and assigns them to given categories, while crowdsourcing by community members helps validate the results and identify features that are hard to detect on satellite imagery.[1]
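To make the combination concrete, the sketch below shows one simple way model predictions and volunteer labels could be fused so that crowdsourcing effort is spent only where a classifier is uncertain. This is an illustrative example, not the authors' DeepVGI pipeline; the `Tile` structure, thresholds, and MapSwipe-style vote counts are hypothetical.

```python
# Illustrative sketch: route only low-confidence model predictions to volunteers.
# Not the authors' actual DeepVGI implementation; all values are hypothetical.

from dataclasses import dataclass

@dataclass
class Tile:
    tile_id: str
    cnn_prob_settlement: float   # model's probability that the tile contains buildings
    volunteer_yes: int = 0       # hypothetical MapSwipe-style "yes" taps
    volunteer_no: int = 0        # hypothetical "no" taps

def classify_tile(tile: Tile, low: float = 0.2, high: float = 0.8) -> str:
    """Trust the model when it is confident; otherwise defer to volunteer votes."""
    if tile.cnn_prob_settlement >= high:
        return "settlement"
    if tile.cnn_prob_settlement <= low:
        return "non-settlement"
    # Uncertain prediction: fall back to a majority vote from crowdsourced labels.
    if tile.volunteer_yes + tile.volunteer_no == 0:
        return "needs-volunteer-review"
    return "settlement" if tile.volunteer_yes > tile.volunteer_no else "non-settlement"

tiles = [
    Tile("tile-001", 0.95),
    Tile("tile-002", 0.55, volunteer_yes=4, volunteer_no=1),
    Tile("tile-003", 0.05),
]
for t in tiles:
    print(t.tile_id, classify_tile(t))
```

The design choice here is the one highlighted in the paper's title: by letting the model handle the easy tiles, volunteer effort is concentrated on the ambiguous ones.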

Another project has used geo-tagged field photographs, crowdsourced from popular user sites, together with a convolutional neural network (CNN) deep learning algorithm to improve land cover classification.
The CNN was used to extract features from the photographs, while multinomial logistic regression was applied to classify the identified features into land cover types. Training drew on the large library of geo-referenced field photos now available for Earth observation (see http://eomf.ou.edu/photos/).[2]
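A minimal sketch of this two-stage approach is shown below: a pretrained CNN acts as a fixed feature extractor for geo-tagged photos, and a multinomial logistic regression is fit on those features to predict land cover classes. This is not the authors' code; the photo paths and labels are placeholders, and it assumes torchvision (0.13 or later) and scikit-learn are available.

```python
# Simplified sketch of CNN feature extraction + multinomial logistic regression
# for land cover classification of geo-tagged field photos. Paths and labels
# below are placeholders, not real data.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import LogisticRegression

# Pretrained CNN with the final classification layer removed -> 512-d features.
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(image_paths):
    feats = []
    with torch.no_grad():
        for path in image_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(backbone(img).squeeze(0).numpy())
    return feats

# Hypothetical photo paths and land cover labels.
train_paths = ["photo_001.jpg", "photo_002.jpg", "photo_003.jpg"]
train_labels = ["cropland", "forest", "urban"]

X_train = extract_features(train_paths)
clf = LogisticRegression(max_iter=1000)  # multinomial for more than two classes
clf.fit(X_train, train_labels)

print(clf.predict(extract_features(["photo_new.jpg"])))
```

Keeping the CNN frozen and training only the logistic regression keeps the labeled-data requirement small, which suits crowdsourced photo collections.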
Interestingly, recent research has also suggested that we are at the point where crowdsourced photographs or street imagery could potentially take the place of high-resolution social data gathered using traditional survey methods.
In such cases, deep neural network models can be trained on photographs alone to associate images with different levels of environmental and health inequality, to a high degree of accuracy. Photographs could thereby do some of the work that would normally require more time-consuming data collection through door-to-door visits.
This application could be particularly useful in places where field data are lacking, or where conditions are too dangerous for conventional field surveys.[3]
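The sketch below illustrates the general idea of training an image model to predict a coarse outcome measure from street-level photographs, here framed as predicting income deciles (10 classes) by fine-tuning a pretrained CNN. It is not the method used in [3]; the random tensors standing in for street images, the label scheme, and the hyperparameters are all hypothetical.

```python
# Illustrative sketch: fine-tune a pretrained CNN so street-level images predict
# a coarse outcome measure (income deciles here). Data and settings are placeholders.

import torch
import torch.nn as nn
import torchvision.models as models
from torch.utils.data import DataLoader, TensorDataset

NUM_DECILES = 10

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_DECILES)  # replace the output head

# Stand-in for a real street-image dataset: random tensors shaped like images.
images = torch.randn(32, 3, 224, 224)
deciles = torch.randint(0, NUM_DECILES, (32,))
loader = DataLoader(TensorDataset(images, deciles), batch_size=8, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(2):  # a token number of epochs for illustration
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```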

Machine learning has also been applied to popular social media platforms such as Twitter, treating them as a form of crowdsourcing that provides tagged, spatially referenced data about urban environments and details about parts of cities not easily identified on imagery.
Spatially tagged Twitter data from Los Angeles were mapped and analyzed alongside imagery of the urban environment. The technique used machine learning, including convolutional neural network methods, to determine spatial patterns in urban space, such as how buildings are perceived or used, with building classification derived from social media content. Nevertheless, the effort proved challenging at several levels, including biases in who chooses to use social media.[4]
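The following deliberately simplified sketch conveys the underlying idea of mapping tweet text to a building-use category for the location it was posted from. The paper in [4] works with word embeddings and neural models; this stand-in uses TF-IDF features with logistic regression instead, and the tweets and labels are invented for illustration.

```python
# Simplified stand-in for text-based building-use classification from geo-tagged
# tweets. Uses TF-IDF + logistic regression rather than the embedding-based
# approach in the paper; all example tweets and labels are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "great coffee and wifi, perfect spot to work",
    "checked in for my flight, gate changed again",
    "home at last after a long commute",
    "lecture hall packed for the midterm review",
]
building_use = ["commercial", "transport", "residential", "education"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(tweets, building_use)

print(model.predict(["grabbing lunch before the next seminar"]))
```

In practice such a classifier inherits the biases the paper notes: the training signal reflects who tweets and where, not the full population of a city.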
What recent work has shown is that machine learning techniques are increasingly being applied to crowdsourced data to find urban and land use patterns that were previously difficult or impossible to determine.
Whether it is deriving information about health inequality from photographs alone or inferring building-use categories from Twitter, crowdsourced big data from new social platforms promises to enable many new urban and land use insights that can enrich our spatial understanding in areas such as development and health.
References
[1] For more on integrating crowdsourcing and DeepVGI, see: Herfort, Benjamin, Hao Li, Sascha Fendrich, Sven Lautenbach, and Alexander Zipf. “Mapping Human Settlements with Higher Accuracy and Less Volunteer Efforts by Combining Crowdsourcing and Deep Learning.” Remote Sensing 11, no. 15 (July 31, 2019): 1799. https://doi.org/10.3390/rs11151799.
[2] For more on using geo-tagged crowdsourced data and deep learning CNN algorithms, see: Xu, Guang, Xuan Zhu, Dongjie Fu, Jinwei Dong, and Xiangming Xiao. “Automatic Land Cover Classification of Geo-Tagged Field Photos by Deep Learning.” Environmental Modelling & Software 91 (May 2017): 127–34. https://doi.org/10.1016/j.envsoft.2017.02.004.
[3] For more on using street photographs and measuring inequalities, see: Suel, Esra, John W. Polak, James E. Bennett, and Majid Ezzati. “Measuring Social, Environmental and Health Inequalities Using Deep Learning and Street Imagery.” Scientific Reports 9, no. 1 (December 2019): 6229. https://doi.org/10.1038/s41598-019-42036-w.
[4] For more on integrating social media with spatial urban data, see: Häberle, Matthias, Martin Werner, and Xiao Xiang Zhu. “Geo-Spatial Text-Mining from Twitter – a Feature Space Analysis with a View toward Building Classification in Urban Regions.” European Journal of Remote Sensing 52, no. sup2 (August 9, 2019): 2–11. https://doi.org/10.1080/22797254.2019.1586451.