While most people are now familiar with Google Earth and similar tools that provide a modern view of our planet using satellite imagery, questions such as how a place has changed over the centuries, or what ancient or historical places actually looked like, have been hard to answer without relying on maps that are often abstract or stylized.
Recent work using artificial intelligence not only demonstrates how different regions have changed, but also shows that old maps and images can be converted to resemble our modern satellite-based views, giving stylized depictions a more realistic appearance.
One new tool, called Pix2Pix, can take old maps or even hand-drawn images and apply two competing neural network models, a generator and a discriminator, to convert the sketch or map into a satellite-like image.

Additionally, the discriminator acts as a form of quality check, assessing whether a generated image looks more fake or more realistic in appearance.[1]
The technology is based on conditional adversarial networks, a general-purpose approach to image-to-image translation. The tool can be applied to paintings, drawings, and even sketches. Objects that are drawn, painted, or otherwise depicted are converted to comparable images based on known examples used to train the conversion.
For instance, buildings are identified as buildings and recreated as structures that appear as they would in satellite imagery.[2]
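For readers curious about what a conditional adversarial setup looks like in code, below is a minimal PyTorch sketch of the generator/discriminator pairing that tools like Pix2Pix build on. The layer sizes, names, and random tensors are illustrative placeholders, not the tool's actual implementation (which uses a U-Net generator and a PatchGAN discriminator).

```python
# Minimal sketch of a Pix2Pix-style conditional GAN in PyTorch.
# Layer sizes and names are illustrative; the real Pix2Pix uses a
# U-Net generator and a PatchGAN discriminator (see arXiv:1611.07004).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a stylized map/sketch (3-channel image) to a satellite-like image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores (input map, candidate satellite image) pairs as real or fake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-wise real/fake scores
        )

    def forward(self, map_img, sat_img):
        return self.net(torch.cat([map_img, sat_img], dim=1))

G, D = Generator(), Discriminator()
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

map_img = torch.randn(1, 3, 64, 64)   # stylized/old map (placeholder data)
sat_img = torch.randn(1, 3, 64, 64)   # paired real satellite image (placeholder)

fake_sat = G(map_img)
d_fake = D(map_img, fake_sat)
# The generator tries to fool the discriminator while staying close to the real image.
g_loss = adv_loss(d_fake, torch.ones_like(d_fake)) + 100.0 * l1_loss(fake_sat, sat_img)
print(g_loss.item())
```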

Related to this tool, advances in central biasing normalization techniques help improve the range of images and possible image-to-image conversions that can be used to generate imagery-like maps.[3]
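As a rough illustration of the idea, the sketch below implements a simple conditional normalization layer that instance-normalizes features and then injects a per-channel bias computed from a condition vector, such as a domain label. This is a simplified reading of the central biasing approach; the class name, dimensions, and tanh bounding are assumptions made for illustration rather than the technique as published.

```python
# Hedged sketch of a "central biasing"-style conditional normalization layer:
# instance-normalize the features, then add a per-channel bias computed from
# a condition vector. Details are simplified; names and sizes are illustrative.
import torch
import torch.nn as nn

class ConditionalBiasNorm(nn.Module):
    def __init__(self, num_channels, cond_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.to_bias = nn.Linear(cond_dim, num_channels)

    def forward(self, x, cond):
        # cond: (batch, cond_dim) condition vector, e.g. a one-hot domain label
        bias = torch.tanh(self.to_bias(cond))          # bounded per-channel bias
        return self.norm(x) + bias[:, :, None, None]   # broadcast over H and W

layer = ConditionalBiasNorm(num_channels=64, cond_dim=8)
features = torch.randn(2, 64, 32, 32)
condition = torch.randn(2, 8)
print(layer(features, condition).shape)  # torch.Size([2, 64, 32, 32])
```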

Other tools are now appearing that use similar deep neural network methods to convert images. Cycle-consistent adversarial networks have also been shown to be effective at image-to-image conversion, and the method works well even without paired examples for training.
The tool, called CycleGAN, has even been used to convert old maps and pictures into reality-like imagery for ancient cities such as Babylon and Jerusalem.[4]
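The core of the cycle-consistent idea can be sketched briefly: two generators translate in opposite directions, and an image translated to the other domain and back should match the original. The stand-in generators and random tensors below are placeholders for illustration, not CycleGAN's real architecture or data.

```python
# Sketch of the cycle-consistency loss behind CycleGAN: with two generators
# (map -> imagery and imagery -> map) and no paired examples, translating an
# image to the other domain and back should reproduce the original.
import torch
import torch.nn as nn

G_map2sat = nn.Conv2d(3, 3, 3, padding=1)   # stand-in for the map -> imagery generator
G_sat2map = nn.Conv2d(3, 3, 3, padding=1)   # stand-in for the imagery -> map generator
l1 = nn.L1Loss()

old_map = torch.randn(1, 3, 64, 64)         # unpaired old map (placeholder data)
satellite = torch.randn(1, 3, 64, 64)       # unpaired modern imagery (placeholder data)

# Forward cycle: map -> fake imagery -> reconstructed map
cycle_map = G_sat2map(G_map2sat(old_map))
# Backward cycle: imagery -> fake map -> reconstructed imagery
cycle_sat = G_map2sat(G_sat2map(satellite))

# Cycle-consistency loss, added to the usual adversarial losses during training.
cycle_loss = l1(cycle_map, old_map) + l1(cycle_sat, satellite)
print(cycle_loss.item())
```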
Another similar tool, StarGAN, uses a single generative adversarial network that can translate images across multiple domains, not just mapping, giving it flexibility in classifying and converting images into realistic-looking appearances, whether for maps or general imagery.[5]
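The multi-domain idea can be illustrated with a short sketch: a single generator receives the input image together with a target-domain label broadcast into extra channels, so one network serves several conversions. The tiny generator and the four example domains below are assumptions for illustration, not StarGAN's actual architecture.

```python
# Sketch of StarGAN-style multi-domain conditioning: one generator takes the
# input image plus a target-domain label broadcast to extra channels, so a
# single model can translate between several domains.
import torch
import torch.nn as nn

NUM_DOMAINS = 4  # illustrative: e.g., sketch, stylized map, modern map, imagery

generator = nn.Sequential(
    nn.Conv2d(3 + NUM_DOMAINS, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)

image = torch.randn(1, 3, 64, 64)
target = torch.zeros(1, NUM_DOMAINS)
target[0, 3] = 1.0  # request translation into the "imagery" domain

# Broadcast the one-hot label across the spatial grid and concatenate as channels.
label_map = target[:, :, None, None].expand(-1, -1, 64, 64)
translated = generator(torch.cat([image, label_map], dim=1))
print(translated.shape)  # torch.Size([1, 3, 64, 64])
```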
Overall, these tools show the utility of generative adversarial networks for creating unstylized data from stylized information, including tools that can be trained on raw raster map data that is then converted to vector data in an automated process.
Research has shown that generative adversarial networks have distinct advantages for multi-scale map style transfer, but for some types of mapping such image-to-object conversion remains challenging.[6]
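As a hedged sketch of what the automated raster-to-vector step can look like in practice, the example below polygonizes a placeholder raster mask, such as one a segmentation network might output, into vector features using generic GIS tooling (rasterio and shapely) rather than the cited paper's own workflow.

```python
# Hedged sketch of a raster-to-vector step: once a network has labeled raster
# map pixels (here a fake binary "building" mask), standard GIS tooling can
# polygonize that raster into vector features.
import numpy as np
from rasterio import features
from shapely.geometry import shape

# Placeholder output of a segmentation model: 1 = building pixel, 0 = background.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[10:30, 15:40] = 1

# Convert connected regions of equal value into GeoJSON-like polygons.
polygons = [shape(geom) for geom, value in features.shapes(mask) if value == 1]
print(len(polygons), polygons[0].bounds)
```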
Over the last five years, a variety of image-to-image tools have been created that can take sketches, drawings, and paintings and convert them to photograph-like quality, including satellite-like images. These advances could enable ancient or old maps to take on realistic appearances.
Of course, one problem is that many ancient or old maps are highly stylized or not drawn to scale. Conversions can be challenging depending on how abstract a given image is and whether there are comparable real-life examples of the stylized features.
Nevertheless, these recent tools offer many benefits for those interested in understanding how regions have changed over time and in using old maps and drawings to demonstrate this by comparison with more recent imagery.
References
[1] The tool, Pix2Pix, can be found here: https://ml4a.github.io/guides/Pix2Pix/.
[2] The Pix2Pix tool’s algorithm is discussed in detail here: https://arxiv.org/abs/1611.07004.
[3] For more on the technique for assisting image-to-image conversion, see: https://arxiv.org/abs/1806.10050.
[4] For more on using cycle-consistent adversarial networks, see: Zhu, J.-Y., Park, T., Isola, P., Efros, A.A., 2017. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks, in: 2017 IEEE International Conference on Computer Vision (ICCV). Presented at the 2017 IEEE International Conference on Computer Vision (ICCV), IEEE, Venice, pp. 2242–2251. https://doi.org/10.1109/ICCV.2017.244.
[5] For more on StarGAN, see: Choi, Y., Choi, M., Kim, M., Ha, J.-W., Kim, S., Choo, J., 2018. StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation, in: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Presented at the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Salt Lake City, UT, pp. 8789–8797. https://doi.org/10.1109/CVPR.2018.00916.
[6] For more on generative adversarial networks that can be used to create vector map elements from raster data, see: Kang, Y., Gao, S., Roth, R.E., 2019. Transferring multiscale map styles using generative adversarial networks. International Journal of Cartography 5, 115–141. https://doi.org/10.1080/23729333.2019.1615729.