Opin vísindi

SegAE: Unsupervised white matter lesion segmentation from brain MRIs using a CNN autoencoder

dc.contributor Háskóli Íslands
dc.contributor University of Iceland
dc.contributor.author Atlason, Hans
dc.contributor.author Love, Askell
dc.contributor.author Sigurdsson, Sigurdur
dc.contributor.author Gudnason, Vilmundur
dc.contributor.author Ellingsen, Lotta María
dc.date.accessioned 2020-06-18T11:30:15Z
dc.date.available 2020-06-18T11:30:15Z
dc.date.issued 2019
dc.identifier.citation Atlason, H. E., et al. (2019). "SegAE: Unsupervised white matter lesion segmentation from brain MRIs using a CNN autoencoder." NeuroImage: Clinical 24: 102085.
dc.identifier.issn 2213-1582
dc.identifier.uri https://hdl.handle.net/20.500.11815/1894
dc.description Publisher's version (útgefin grein)
dc.description.abstract White matter hyperintensities (WMHs) of presumed vascular origin are frequently observed in magnetic resonance images (MRIs) of the elderly. Detection and quantification of WMHs are important to help doctors make diagnoses and evaluate prognosis of their elderly patients, and once quantified, these can act as biomarkers in clinical research studies. Manual delineation of WMHs can be both time-consuming and inconsistent; hence, automatic segmentation methods are often preferred. However, fully automatic methods can be challenging to construct due to the variability in lesion load, placement of lesions, and voxel intensities. Several state-of-the-art lesion segmentation methods based on supervised Convolutional Neural Networks (CNNs) have been proposed. These approaches require manually delineated lesions for training the parameters of the network. Here we present a novel approach for WMH segmentation using a CNN trained in an unsupervised manner, by reconstructing multiple MRI sequences as weighted sums of segmentations of WMHs and tissues present in the images. After training, our method can be used to segment new images that are not part of the training set, providing fast and robust segmentation of WMHs in a matter of seconds per subject. Comparisons with state-of-the-art WMH segmentation methods, evaluated on ground-truth manual labels from two distinct data sets and six different scanners, indicate that the proposed method works well at generating accurate WMH segmentations without the need for manual delineations.
dc.description.sponsorship This work was supported by RANNIS (The Icelandic Centre for Research) through grant 173942-051. We thank Burkni Palsson for a valuable discussion about hyperspectral unmixing using a neural network autoencoder, and Nicholas J. Tustison for valuable insights on N4 bias correction on FLAIR images with WMHs.
dc.format.extent 102085
dc.language.iso en
dc.publisher Elsevier BV
dc.relation.ispartofseries NeuroImage: Clinical;24
dc.rights info:eu-repo/semantics/openAccess
dc.subject Autoencoder
dc.subject Brain
dc.subject CNN
dc.subject Deep learning
dc.subject Segmentation
dc.subject White matter hyperintensity
dc.subject Heilinn
dc.subject Myndgreining (læknisfræði)
dc.title SegAE: Unsupervised white matter lesion segmentation from brain MRIs using a CNN autoencoder
dc.type info:eu-repo/semantics/article
dcterms.license This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
dc.description.version Peer Reviewed
dc.identifier.journal NeuroImage: Clinical
dc.identifier.doi 10.1016/j.nicl.2019.102085
dc.relation.url https://www.sciencedirect.com/science/article/pii/S2213158219304322?via%3Dihub
dc.contributor.department Rafmagns- og tölvuverkfræðideild (HÍ)
dc.contributor.department Faculty of Electrical and Computer Engineering (UI)
dc.contributor.department Læknadeild (HÍ)
dc.contributor.department Faculty of Medicine (UI)
dc.contributor.school Verkfræði- og náttúruvísindasvið (HÍ)
dc.contributor.school School of Engineering and Natural Sciences (UI)
dc.contributor.school Heilbrigðisvísindasvið (HÍ)
dc.contributor.school School of Health Sciences (UI)
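
To illustrate the idea described in the abstract above (a CNN autoencoder trained without manual labels, reconstructing each MRI sequence as a weighted sum of soft segmentation maps), below is a minimal sketch in Python/PyTorch. The network depth, channel counts, number of tissue classes, patch size, and the mean-squared-error loss are illustrative assumptions, not details taken from the published SegAE implementation.

import torch
import torch.nn as nn

class SegAESketch(nn.Module):
    def __init__(self, n_sequences=3, n_classes=4):
        super().__init__()
        # Encoder CNN: stacked MRI sequences in, soft segmentation maps out.
        self.encoder = nn.Sequential(
            nn.Conv3d(n_sequences, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_classes, kernel_size=1),
        )
        # "Decoder": one learned intensity weight per (sequence, class) pair, so
        # each sequence is rebuilt as a weighted sum of the segmentation maps.
        self.weights = nn.Parameter(torch.rand(n_sequences, n_classes))

    def forward(self, x):
        seg = torch.softmax(self.encoder(x), dim=1)           # (B, C, D, H, W)
        recon = torch.einsum('sc,bcdhw->bsdhw', self.weights, seg)
        return recon, seg

# Unsupervised training step: minimise reconstruction error; no manual labels.
model = SegAESketch()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(1, 3, 32, 32, 32)   # stand-in for co-registered MRI patches
recon, seg = model(batch)
loss = nn.functional.mse_loss(recon, batch)
loss.backward()
optimiser.step()

After training on many scans, a WMH mask would be obtained from the segmentation channel that the network assigns to hyperintense lesions, e.g. by thresholding it; the exact post-processing would follow the published method rather than this sketch.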