Super-resolution Imaging

Mark Altaweel


  • Super-resolution enhances images using deep learning, a form of artificial intelligence
  • The method can improve lower-quality images, making features easier to identify

Super-resolution is an approach derived from computer vision that aims to improve the quality of an image by applying algorithms that increase image sampling, a process called upsampling. This improves the image both in appearance and in its usefulness for finding desired objects, which has particular utility in remote sensing, where the goal is often to better identify items in a given image.

In this geospatial podcast episode, Markus Müller from Up42[1] discusses how super-resolution works, as well as its benefits and limitations.

Super-resolution works by applying deep learning techniques, typically convolutional neural networks (CNNs), which are trained on sample data and can then apply the learned model to the existing pixels of an image. The algorithm resamples those pixels to improve their quality. This is similar in some ways to pan sharpening, but the main difference is that super-resolution not only improves the overall quality of the data bands; it can also improve on the original resolution.
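To make the upsampling step concrete: many super-resolution CNNs (ESPCN-style architectures, for example) end with a "sub-pixel" or pixel-shuffle layer, where the network predicts r² values per low-resolution pixel and those values are rearranged into an r-times-larger output grid. The sketch below shows only that rearrangement in pure Python, assuming the per-pixel feature values have already been produced by a trained network; it is an illustration of the idea, not a production implementation.

```python
def pixel_shuffle(features, r):
    """Rearrange an H x W grid of r*r predicted values into an (r*H) x (r*W) image.

    features[y][x] is a list of r*r values that a (hypothetical, already-trained)
    network predicted for the low-resolution pixel at (y, x).
    """
    h, w = len(features), len(features[0])
    out = [[0.0] * (w * r) for _ in range(h * r)]
    for y in range(h):
        for x in range(w):
            for dy in range(r):
                for dx in range(r):
                    # Each of the r*r channel values fills one sub-pixel position.
                    out[y * r + dy][x * r + dx] = features[y][x][dy * r + dx]
    return out

# One low-resolution pixel with four predicted values becomes a 2x2 patch.
patch = pixel_shuffle([[[1, 2, 3, 4]]], 2)  # -> [[1, 2], [3, 4]]
```

The learning happens in the convolutional layers that produce those per-pixel values; the shuffle itself is a fixed, loss-free rearrangement, which is why factors such as 4x are achievable simply by predicting 16 values per input pixel.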

The method can be applied to multi-band or single-band images. Unlike pan sharpening, however, it can improve the quality of even the highest-resolution band, with resolution improvements of up to four times.[2]

Infographic from Mapscaping about super-resolution imaging.

We are seeing super-resolution applied particularly to remote sensing data, such as Sentinel-2 imagery and multi-band Landsat imagery. The improvement not only allows objects to be detected at up to four times the original resolution; it also makes objects that were not easily identified detectable and more evident for manual or automated object detection techniques. In effect, it takes sub-pixel details and reconstructs objects at finer scales.

In computer vision, validating a methodology is critical to demonstrating its effectiveness for work such as remote sensing. One can take an image, downsample it to reduce quality, then apply super-resolution and check how accurately the algorithm restores image quality. This has become the standard way to validate the method and compare its output against the empirical data.
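The downsample-then-reconstruct validation loop described above can be sketched in a few lines. The snippet below uses nearest-neighbour upsampling as a stand-in for a trained super-resolution model and scores the reconstruction with PSNR, a common fidelity metric; the function names and the toy image are illustrative assumptions, not part of any particular library.

```python
import math

def downsample(img, factor):
    """Reduce resolution by averaging factor x factor blocks (img is a 2D list)."""
    h, w = len(img) // factor, len(img[0]) // factor
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(w)] for y in range(h)]

def upsample_nearest(img, factor):
    """Naive nearest-neighbour upsampling; a trained SR model would replace this."""
    return [[img[y // factor][x // factor]
             for x in range(len(img[0]) * factor)]
            for y in range(len(img) * factor)]

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized 2D images."""
    flat_ref = [p for row in reference for p in row]
    flat_rec = [p for row in reconstructed for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_rec)) / len(flat_ref)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

# Validation loop: degrade a reference image, reconstruct it, then score it.
reference = [[10, 10, 20, 20],
             [10, 10, 20, 20],
             [30, 30, 40, 40],
             [30, 30, 40, 40]]
low_res = downsample(reference, 2)
restored = upsample_nearest(low_res, 2)
score = psnr(reference, restored)  # higher dB = closer to the reference
```

A real evaluation would swap `upsample_nearest` for the model under test and average PSNR (or SSIM) over many held-out scenes; the key point is that the reference image provides ground truth against which the reconstruction is measured.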

As algorithms have increasingly demonstrated efficacy in super-resolution, a realistic workflow is to take an image, perhaps one already improved through pan sharpening, and then enhance it to a higher resolution.[3]

What is driving all of these developments is computer vision. Rather than the field being driven by remote sensing or even geospatial processes, the wider fields of data science and computer vision are driving methodology today. This is particularly the case with segmentation techniques and methods such as super-resolution deep learning approaches.

The process of using super-resolution in enhancing resolution in imagery.

The Up42 algorithm is a good example of one influenced by computer vision. However, as Müller emphasises, while super-resolution has benefits, it is not always appropriate to improve resolution. For instance, if you are interested in counting objects in a scene and must combine multiple images of the same area, keep in mind that the number of objects present may change between acquisitions (e.g., cars moving through the scene), skewing the samples.

Additionally, it may simply be better to obtain a higher-resolution image of a given scene where possible, as upsampling and super-resolution are not intended to replace higher-resolution imagery but to fill the gap when data are needed and a better resolution is not easily obtainable. Data fusion may be the growing area in Earth observation and computer vision, allowing multiple images of different resolutions to be merged. There is room for further research here, including how super-resolution could be applied to fused imagery.

What we see is that super-resolution has become a way to improve the resolution of imagery in data sets from which we need to extract more information. This is particularly useful in remote sensing, such as for finding objects in lower-resolution images. However, its use case is limited to scenes that are relatively static, and it is not intended to replace high-resolution imagery but simply to enhance what is present in existing data.

References

[1]    For more on Up42, see:  https://up42.com/.

[2]    A good example of super-resolution and its application can be seen here: Gao, L., Hong, D., Yao, J., Zhang, B., Gamba, P., & Chanussot, J. (2021). Spectral Superresolution of Multispectral Imagery With Joint Sparse and Low-Rank Learning. IEEE Transactions on Geoscience and Remote Sensing, 59(3), 2269–2280. https://doi.org/10.1109/TGRS.2020.3000684

[3]    For more on the super-resolution methods discussed by Müller, see: Müller, M. U., Ekhtiari, N., Almeida, R. M., & Rieke, C. (2020). Super-Resolution of Multispectral Satellite Images Using Convolutional Neural Networks. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, V-1-2020, 33–40. https://doi.org/10.5194/isprs-annals-V-1-2020-33-2020

About the author
Mark Altaweel
Mark Altaweel is a Reader in Near Eastern Archaeology at the Institute of Archaeology, University College London, having held previous appointments and joint appointments at the University of Chicago, University of Alaska, and Argonne National Laboratory. Mark has an undergraduate degree in Anthropology and Masters and PhD degrees from the University of Chicago’s Department of Near Eastern Languages and Civilizations.