Sentinel-2 Image Fusion Using a Deep Residual Network

dc.contributor: Háskóli Íslands
dc.contributor: University of Iceland
dc.contributor.author: Palsson, Frosti
dc.contributor.author: Sveinsson, Jóhannes Rúnar
dc.contributor.author: Ulfarsson, Magnus
dc.contributor.department: Rafmagns- og tölvuverkfræðideild (HÍ)
dc.contributor.department: Faculty of Electrical and Computer Engineering (UI)
dc.contributor.school: Verkfræði- og náttúruvísindasvið (HÍ)
dc.contributor.school: School of Engineering and Natural Sciences (UI)
dc.date.accessioned: 2019-10-03T11:27:44Z
dc.date.available: 2019-10-03T11:27:44Z
dc.date.issued: 2018-08-15
dc.description: Publisher's version (published article)
dc.description.abstract: Single-sensor fusion is the fusion of two or more spectrally disjoint reflectance bands that have different spatial resolutions and were acquired by the same sensor. An example is Sentinel-2, a constellation of two satellites that acquires multispectral bands at 10 m, 20 m, and 60 m resolution in the visible, near-infrared (NIR), and shortwave-infrared (SWIR) ranges. In this paper, we present a method that fuses the fine and coarse spatial resolution bands to obtain finer spatial resolution versions of the coarse bands. It is based on a deep convolutional neural network with a residual design that models the fusion problem. The residual architecture helps the network converge faster and allows for deeper networks by relieving the network of learning the coarse spatial resolution part of the inputs, letting it focus on constructing the missing fine spatial details. Using several real Sentinel-2 datasets, we study the effects of the most important hyperparameters on the quantitative quality of the fused image, compare the method to several state-of-the-art methods, and show that it outperforms them in our experiments.
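The residual design described in the abstract — upsample the coarse band and let the network predict only the missing fine detail, which a skip connection adds back to the upsampled input — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's network: nearest-neighbor upsampling stands in for the interpolation, and a plain array stands in for the CNN's predicted residual.

```python
import numpy as np

def upsample_nearest(band, factor):
    """Nearest-neighbor upsampling of a coarse band (e.g. 20 m -> 10 m, factor 2)."""
    return np.kron(band, np.ones((factor, factor)))

def residual_fuse(coarse_band, predicted_residual, factor=2):
    """Residual fusion: the skip connection carries the coarse spatial content,
    so the network only has to construct the missing fine spatial detail."""
    return upsample_nearest(coarse_band, factor) + predicted_residual

# Toy 2x2 coarse band fused to 4x4; a zero residual (an untrained "network")
# leaves the upsampled coarse content unchanged.
coarse = np.array([[0.2, 0.4],
                   [0.6, 0.8]])
fused = residual_fuse(coarse, np.zeros((4, 4)))
print(fused.shape)  # (4, 4)
```

Because the skip connection already reproduces the coarse content, the network's output can start near zero and still yield a sensible fused image, which is why the abstract notes faster convergence and support for deeper networks.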
dc.description.sponsorship: This research was funded in part by The Icelandic Research Fund, grant number 174075-05.
dc.description.version: Peer Reviewed
dc.format.extent: 1290
dc.identifier.citation: Palsson F, Sveinsson JR, Ulfarsson MO. Sentinel-2 Image Fusion Using a Deep Residual Network. Remote Sensing. 2018; 10(8):1290. doi:10.3390/rs10081290
dc.identifier.doi: 10.3390/rs10081290
dc.identifier.issn: 2072-4292
dc.identifier.journal: Remote Sensing
dc.identifier.uri: https://hdl.handle.net/20.500.11815/1290
dc.language.iso: en
dc.publisher: MDPI AG
dc.relation.ispartofseries: Remote Sensing; 10(8)
dc.relation.url: http://www.mdpi.com/2072-4292/10/8/1290/pdf
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Residual neural network
dc.subject: Image fusion
dc.subject: Convolutional neural network
dc.subject: Sentinel-2
dc.subject: Myndgreining (upplýsingatækni) [Image analysis (information technology)]
dc.title: Sentinel-2 Image Fusion Using a Deep Residual Network
dc.type: info:eu-repo/semantics/article
dcterms.license: This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Files

Original bundle

Name: remotesensing-10-01290.pdf
Size: 14 MB
Format: Adobe Portable Document Format
Description: Publisher's version (published article)

Collection