Opin vísindi

Sentinel-2 Image Fusion Using a Deep Residual Network

Title: Sentinel-2 Image Fusion Using a Deep Residual Network
Authors: Palsson, Frosti   orcid.org/0000-0003-1017-0997
Sveinsson, Jóhannes Rúnar
Ulfarsson, Magnus   orcid.org/0000-0002-0461-040X
Date: 2018-08-15
Language: English
Article number: 1290
University/Institute: Háskóli Íslands
University of Iceland
School: Verkfræði- og náttúruvísindasvið (HÍ)
School of Engineering and Natural Sciences (UI)
Department: Rafmagns- og tölvuverkfræðideild (HÍ)
Faculty of Electrical and Computer Engineering (UI)
Series: Remote Sensing;10(8)
ISSN: 2072-4292
DOI: 10.3390/rs10081290
Subject: Residual neural network; Image fusion; Convolutional neural network; Sentinel-2; Myndgreining (upplýsingatækni) [Image analysis (information technology)]
URI: https://hdl.handle.net/20.500.11815/1290



Palsson F, Sveinsson JR, Ulfarsson MO. Sentinel-2 Image Fusion Using a Deep Residual Network. Remote Sensing. 2018; 10(8):1290. doi:10.3390/rs10081290


Single-sensor fusion is the fusion of two or more spectrally disjoint reflectance bands that have different spatial resolutions and have been acquired by the same sensor. An example is Sentinel-2, a constellation of two satellites that acquire multispectral bands at 10 m, 20 m, and 60 m spatial resolution across the visible, near-infrared (NIR), and shortwave-infrared (SWIR) ranges. In this paper, we present a method that fuses the fine and coarse spatial resolution bands to obtain finer spatial resolution versions of the coarse bands. It is based on a deep convolutional neural network with a residual design that models the fusion problem. The residual architecture helps the network converge faster and allows for deeper networks by relieving the network of having to learn the coarse spatial resolution part of the inputs, letting it focus on constructing the missing fine spatial details. Using several real Sentinel-2 datasets, we study the effect of the most important hyperparameters on the quantitative quality of the fused image, compare the method to several state-of-the-art methods, and demonstrate that it outperforms the comparison methods in our experiments.
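The residual design described in the abstract can be sketched in a few lines: the coarse band is carried through unchanged by a skip connection, and the network only has to predict the missing high-frequency detail that is added on top. The sketch below is illustrative only and assumes hypothetical names (`upsample_nearest`, `residual_fuse`); the paper's actual network learns the residual with convolutional layers rather than receiving it as an input.

```python
import numpy as np

def upsample_nearest(band, factor):
    """Nearest-neighbour upsampling of a coarse band to the fine grid.

    Illustrative stand-in for the interpolation that brings a 20 m or
    60 m band onto the 10 m grid before fusion.
    """
    return np.repeat(np.repeat(band, factor, axis=0), factor, axis=1)

def residual_fuse(coarse_band, fine_detail_residual, factor=2):
    """Residual fusion: skip connection plus predicted detail.

    The coarse spatial content passes through the skip connection
    untouched; the network's job reduces to predicting
    `fine_detail_residual`, the missing fine spatial structure.
    """
    return upsample_nearest(coarse_band, factor) + fine_detail_residual

# With a zero residual, the output is just the upsampled coarse band,
# which is why the residual path makes the network easier to train:
# it starts from a sensible baseline instead of from scratch.
coarse = np.arange(4.0).reshape(2, 2)          # toy 20 m band
fused = residual_fuse(coarse, np.zeros((4, 4)))  # toy 10 m result
```

This mirrors the reasoning in the abstract: because the identity (upsampled) part is handled by the skip connection, depth can be added to the residual branch without the optimization having to re-learn the coarse content.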


Publisher's version (útgefin grein / published article)


This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
