Learning-Shared Cross-Modality Representation Using Multispectral-LiDAR and Hyperspectral Data

dc.contributor: Háskóli Íslands (en_US)
dc.contributor: University of Iceland (en_US)
dc.contributor.author: Hong, Danfeng
dc.contributor.author: Chanussot, Jocelyn
dc.contributor.author: Yokoya, Naoto
dc.contributor.author: Kang, Jian
dc.contributor.author: Zhu, Xiao Xiang
dc.contributor.department: Rafmagns- og tölvuverkfræðideild (HÍ) (en_US)
dc.contributor.department: Faculty of Electrical and Computer Engineering (UI) (en_US)
dc.contributor.school: Verkfræði- og náttúruvísindasvið (HÍ) (en_US)
dc.contributor.school: School of Engineering and Natural Sciences (UI) (en_US)
dc.date.accessioned: 2020-12-07T13:58:46Z
dc.date.available: 2020-12-07T13:58:46Z
dc.date.issued: 2020-08
dc.description: Publisher's version (published article) (en_US)
dc.description.abstract: Due to the ever-growing diversity of data sources, multimodality feature learning has attracted more and more attention. However, most of these methods are designed to jointly learn feature representations from modalities that exist in both the training and test sets, and the case where certain modalities are absent in the test phase has been less investigated. To this end, in this letter, we propose to learn a shared feature space across multiple modalities in the training process. In this way, out-of-sample data from any of the modalities can be directly projected onto the learned space for a more effective cross-modality representation. More significantly, the shared space is regarded as a latent subspace in our proposed method, which connects the original multimodal samples with label information to further improve the feature discrimination. Experiments are conducted on the multispectral light detection and ranging (LiDAR) and hyperspectral data set provided by the 2018 IEEE GRSS Data Fusion Contest to demonstrate the effectiveness and superiority of the proposed method in comparison with several popular baselines. (en_US)
dc.description.sponsorship: This work was supported in part by the German Research Foundation (DFG) under Grant ZH 498/7-2, in part by the Helmholtz Association under the framework of the Young Investigators Group SiPEO (VH-NG-1018), and in part by the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Program (Grant Agreement No. ERC-2016-StG-714087, Acronym: So2Sat). The work of Naoto Yokoya was supported by the Japan Society for the Promotion of Science (KAKENHI) under Grant 18K18067. (Corresponding author: Xiao Xiang Zhu.) Danfeng Hong and Xiao Xiang Zhu are with the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), 82234 Wessling, Germany, and also with Signal Processing in Earth Observation (SiPEO), Technical University of Munich (TUM), 80333 Munich, Germany (e-mail: danfeng.hong@dlr.de; xiaoxiang.zhu@dlr.de). (en_US)
dc.description.version: Peer Reviewed (en_US)
dc.format.extent: 1470-1474 (en_US)
dc.identifier.citation: Hong, D., Chanussot, J., Yokoya, N., Kang, J., Zhu, X. X., 2020. Learning-Shared Cross-Modality Representation Using Multispectral-LiDAR and Hyperspectral Data. IEEE Geoscience and Remote Sensing Letters 17(8), 1470-1474. doi:10.1109/LGRS.2019.2944599 (en_US)
dc.identifier.doi: 10.1109/LGRS.2019.2944599
dc.identifier.issn: 1545-598X
dc.identifier.issn: 1558-0571 (eISSN)
dc.identifier.journal: IEEE Geoscience and Remote Sensing Letters (en_US)
dc.identifier.uri: https://hdl.handle.net/20.500.11815/2275
dc.language.iso: en (en_US)
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) (en_US)
dc.relation: info:eu-repo/grantAgreement/EC/H2020/714087 (en_US)
dc.relation.ispartofseries: IEEE Geoscience and Remote Sensing Letters; 17(8)
dc.relation.url: http://xplorestaging.ieee.org/ielx7/8859/9145892/08976086.pdf?arnumber=8976086 (en_US)
dc.rights: info:eu-repo/semantics/openAccess (en_US)
dc.subject: Cross-modality (en_US)
dc.subject: Feature learning (en_US)
dc.subject: Hyperspectral (en_US)
dc.subject: Multimodality (en_US)
dc.subject: Multispectral-Light Detection and Ranging (en_US)
dc.subject: Shared subspace learning (en_US)
dc.subject: Fjarkönnun (Remote sensing) (en_US)
dc.title: Learning-Shared Cross-Modality Representation Using Multispectral-LiDAR and Hyperspectral Data (en_US)
dc.type: info:eu-repo/semantics/article (en_US)
dcterms.license: This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ (en_US)
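
Note: the abstract above describes projecting samples from either modality onto a shared latent subspace that is also coupled to the label information. The following is a minimal illustrative sketch of that general idea (label-coupled shared-subspace learning with per-modality projections), not the authors' published algorithm; the function name, the alternating ridge-regression updates, and all parameters (k, lam, iters) are assumptions made purely for illustration.

```python
# Hedged sketch of shared-subspace learning across two modalities.
# X1 (n x d1) and X2 (n x d2) are co-registered training samples from the
# two modalities, Y (n x c) is a one-hot label matrix. We alternately fit
# per-modality projections P1, P2 into a common latent space Z and a
# classifier W from Z to the labels.
import numpy as np

def learn_shared_space(X1, X2, Y, k=20, lam=1e-2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = X1.shape[0]
    Z = rng.standard_normal((n, k))      # shared latent representation
    for _ in range(iters):
        # Ridge regressions mapping each modality onto the shared space
        P1 = np.linalg.solve(X1.T @ X1 + lam * np.eye(X1.shape[1]), X1.T @ Z)
        P2 = np.linalg.solve(X2.T @ X2 + lam * np.eye(X2.shape[1]), X2.T @ Z)
        # Label predictor defined on the shared space
        W = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ Y)
        # Update Z to agree with both modality projections and the labels:
        # minimize ||X1 P1 - Z||^2 + ||X2 P2 - Z||^2 + ||Z W - Y||^2 + lam||Z||^2
        A = (2.0 + lam) * np.eye(k) + W @ W.T
        B = X1 @ P1 + X2 @ P2 + Y @ W.T
        Z = np.linalg.solve(A, B.T).T     # A is symmetric, so solve directly
    return P1, P2, W
```

At test time, a sample from either modality alone can be projected with its own matrix (z = x1 @ P1 or z = x2 @ P2) and classified in the shared space via np.argmax(z @ W), which mirrors the cross-modality setting sketched in the abstract.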

Files

Original bundle

Name: Hong-2020-Learning-shared-cross-modality-repr.pdf
Size: 3.82 MB
Format: Adobe Portable Document Format
Description: Publisher's version
