Deepfake Satellite Imagery

Mark Altaweel

Deepfake imagery is becoming a problem for individuals and organizations that work with satellite imagery. There is growing concern that deep learning techniques will be used to create fake satellite imagery for nefarious purposes.[1]

What is Deepfake Imagery?

Deepfakes are synthetically created media that alter the appearance of an image, often replacing one scene with another. These alterations are generated by machine learning, typically deep learning neural network techniques, that modify an image.

How is Deepfake Satellite Imagery Created?

One common family of methods used to create deepfakes is generative adversarial networks (GANs). Within this family, Cycle-Consistent Adversarial Networks (CycleGAN) is one approach that has been used to create deepfake satellite imagery.

CycleGAN trains image-to-image translation models that map images from one domain to another without paired training examples. It is an unsupervised technique: given two collections of images, it learns to render a scene from one domain in the style of the other, so that imagery of one location can be replaced with imagery of another while the entire scene still looks realistic. Common applications include transforming one animal or person into a similar-looking animal or person (e.g., replacing a horse with a zebra in an image).[2]
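As a rough illustration of the idea (not the authors' implementation), the minimal PyTorch sketch below shows the core of CycleGAN-style training: two generators translate between unpaired image domains (for example, map tiles and satellite tiles), and a cycle-consistency loss forces a translated image to map back to its original. The network sizes, the random input tensors, and the omitted discriminator updates are placeholders.

```python
# Minimal CycleGAN-style sketch (assumes PyTorch); a toy stand-in, not the published model.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True))

class Generator(nn.Module):
    """Toy image-to-image generator (same spatial size in and out)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy PatchGAN-style discriminator producing a grid of real/fake scores."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

G, F = Generator(), Generator()              # G: domain A -> B, F: domain B -> A
D_A, D_B = Discriminator(), Discriminator()
adv_loss, cycle_loss = nn.MSELoss(), nn.L1Loss()
opt_G = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)

# One illustrative generator update on random tensors standing in for unpaired
# map tiles (domain A) and satellite tiles (domain B); discriminator updates omitted.
real_A, real_B = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
opt_G.zero_grad()
fake_B, fake_A = G(real_A), F(real_B)
pred_B, pred_A = D_B(fake_B), D_A(fake_A)
# Adversarial terms: the generators try to make the discriminators score fakes as real (1).
loss_adv = adv_loss(pred_B, torch.ones_like(pred_B)) + adv_loss(pred_A, torch.ones_like(pred_A))
# Cycle consistency: A -> B -> A and B -> A -> B should reconstruct the originals,
# which is what lets training work without paired (map, image) examples.
loss_cyc = cycle_loss(F(fake_B), real_A) + cycle_loss(G(fake_A), real_B)
(loss_adv + 10.0 * loss_cyc).backward()
opt_G.step()
```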

The top two panels show a real map and satellite imagery of a neighborhood in Tacoma, Washington. The bottom two panels are simulated satellite imagery generated using geospatial data of Seattle, Washington (c) and Beijing, China (d). Figure: Zhao et al., 2021, Cartography and Geographic Information Science, via UW News.

Such algorithms are known to be used for deepfakes that can replace varied locations, sometimes in high-resolution satellite imagery. While these algorithms are relatively new, deepfakes of satellite imagery and maps are part of a long-standing trend of manipulating geographic data. In other words, the incentives for creating deepfakes are not new, even if the techniques now used are relatively recent.



Detecting Deepfake Satellite Imagery

Deepfakes are not just used to poke fun at people or organizations; they are also seen as a potential threat to countries and their security. To counter deepfakes, algorithms have been created to detect where images might have been manipulated.

One technique is the common fake feature network (CFFN), often used along with standard convolutional neural networks (CNNs), which uses pairwise learning to detect discriminative features that suggest an image has been changed or altered.[3] Other techniques, such as SSTNet, combine spatial feature extraction with temporal feature extraction to detect changes across imagery.[4]
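To illustrate the pairwise-learning idea behind CFFN-style detectors (this sketch is a generic stand-in, not the published architecture), the example below trains a shared encoder with a contrastive loss so that same-class pairs (real/real or fake/fake) are pulled together in feature space while real/fake pairs are pushed apart; the learned features can then feed a small real-vs-fake classifier.

```python
# Illustrative pairwise (Siamese-style) learning for fake-image detection (assumes PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Shared feature extractor applied to both images of a pair."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64))
    def forward(self, x):
        return F.normalize(self.features(x), dim=1)

def contrastive_loss(z1, z2, same_label, margin=1.0):
    """Pull same-class pairs together; push real/fake pairs at least `margin` apart."""
    d = F.pairwise_distance(z1, z2)
    return torch.mean(same_label * d**2 + (1 - same_label) * torch.clamp(margin - d, min=0)**2)

encoder = Encoder()
img_a, img_b = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)  # placeholder image pairs
same_label = torch.randint(0, 2, (8,)).float()  # 1 = both real or both fake, 0 = mixed pair
loss = contrastive_loss(encoder(img_a), encoder(img_b), same_label)
loss.backward()
# After training, encoder features feed a small classifier that flags suspect imagery.
```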

In general, researchers counter deepfake neural network models with other neural networks that can, at times, reverse engineer them, detect shapes and pixels that change from one frame to another, or simply flag alterations that deviate from what is expected. Some techniques look for changes to noise or other artefacts that are common in genuine imagery but may be missing or altered in deepfakes. While many of these techniques have been applied to image and video content online, researchers see them as useful for satellite imagery as well.[5]
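As one simple, illustrative artifact check along these lines (a generic sketch, not one of the cited detectors), a suspect tile's high-frequency power spectrum can be compared against a reference built from trusted imagery of the same sensor, since GAN-generated images often carry unusual spectral patterns. The threshold and reference here are arbitrary placeholders.

```python
# Spectral artifact screening sketch (assumes NumPy); thresholds are placeholders, not tuned values.
import numpy as np

def radial_power_spectrum(gray_tile: np.ndarray) -> np.ndarray:
    """Radially averaged log power spectrum of a 2-D grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(gray_tile))
    power = np.log1p(np.abs(f) ** 2)
    h, w = gray_tile.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2).astype(int)
    counts = np.maximum(np.bincount(r.ravel()), 1)
    return np.bincount(r.ravel(), power.ravel()) / counts

def looks_suspicious(tile: np.ndarray, reference_spectrum: np.ndarray, tol: float = 2.0) -> bool:
    """Flag a tile whose high-frequency energy deviates strongly from the reference."""
    spec = radial_power_spectrum(tile)
    n = min(len(spec), len(reference_spectrum))
    high = slice(n // 2, n)  # compare only the high-frequency tail
    return bool(np.mean(np.abs(spec[high] - reference_spectrum[high])) > tol)

# Usage: build reference_spectrum from trusted imagery of the same sensor, then screen new tiles.
rng = np.random.default_rng(0)
reference = radial_power_spectrum(rng.random((128, 128)))
print(looks_suspicious(rng.random((128, 128)), reference))
```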

The image of India on the right is an example of a misrepresentation of a satellite image. It was created from the satellite imagery on the left, which shows night lights in India in 2003. NOAA color-coded the image as an RGB composite of nighttime lights change, with red representing lights that became visible in 2003 and green representing lights that became visible in 1998. White areas are lights visible before 1992, and blue represents lights that became visible in 1992. While the original image was not created as a deepfake, it has since periodically made the rounds on social media purporting to show India on Diwali night.

Although techniques are now emerging to counter deepfakes, the co-evolution of deepfake and detection algorithms points to an arms race: ever more sophisticated deepfake techniques will likely be met by ever more sophisticated countermeasures. Neural network models can be made more capable by adding layers that alter or detect given features, which makes them well suited both to creating deepfakes and to counteracting them. As long as the incentives for creating harmful or deceptive imagery remain strong, deepfake satellite imagery will likely continue to be produced for a variety of reasons.

Deepfake images have become almost ubiquitous online, so much so that we often cannot be sure whether what we are looking at is real or computer generated. The same is true of satellite imagery, continuing a long-standing trend of geospatial manipulation. Thankfully, methods are now available to counteract common deepfake neural networks; however, we are likely seeing just the beginning of deepfake algorithms applied to geospatial data. We should not expect the end of deepfake satellite imagery any time soon, even if better countermeasures now exist to detect it.

References

[1]    For a general background on deepfakes, see:  https://www.washington.edu/news/2021/04/21/a-growing-problem-of-deepfake-geography-how-ai-falsifies-satellite-images/.

[2]    For more on how CycleGAN neural network models can be used for deepfakes, see:  Zhao, B., Zhang, S., Xu, C., Sun, Y., & Deng, C. (2021). Deep fake geography? When geospatial data encounter Artificial Intelligence. Cartography and Geographic Information Science, 1–15. https://doi.org/10.1080/15230406.2021.1910075.

[3]    For more on a multi-step deepfake detection technique, see:  Hsu, C.-C., Zhuang, Y.-X., & Lee, C.-Y. (2020). Deep Fake Image Detection Based on Pairwise Learning. Applied Sciences, 10(1), 370. https://doi.org/10.3390/app10010370.

[4]    For more on the SSTNet algorithm, see:  Katarya, R., & Lal, A. (2020). A Study on Combating Emerging Threat of Deepfake Weaponization. In 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC) (pp. 485–490). Presented at the 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Palladam, India: IEEE. https://doi.org/10.1109/I-SMAC49090.2020.9243588.

[5]    For more on deepfake detection using neural network techniques, see:  Deshmukh, A., & Wankhade, S. B. (2021). Deepfake Detection Approaches Using Deep Learning: A Systematic Review. In V. E. Balas, V. B. Semwal, A. Khandare, & M. Patil (Eds.), Intelligent Computing and Networking (Vol. 146, pp. 293–302). Singapore: Springer Singapore. https://doi.org/10.1007/978-981-15-7421-4_27.

About the author
Mark Altaweel
Mark Altaweel is a Reader in Near Eastern Archaeology at the Institute of Archaeology, University College London, having held previous appointments and joint appointments at the University of Chicago, University of Alaska, and Argonne National Laboratory. Mark has an undergraduate degree in Anthropology and Masters and PhD degrees from the University of Chicago’s Department of Near Eastern Languages and Civilizations.