Research Article: Adult Brain
Open Access

Unsupervised Deep Learning for Stroke Lesion Segmentation on Follow-up CT Based on Generative Adversarial Networks

H. van Voorst, P.R. Konduri, L.M. van Poppel, W. van der Steen, P.M. van der Sluijs, E.M.H. Slot, B.J. Emmer, W.H. van Zwam, Y.B.W.E.M. Roos, C.B.L.M. Majoie, G. Zaharchuk, M.W.A. Caan and H.A. Marquering on behalf of the CONTRAST Consortium Collaborators
American Journal of Neuroradiology August 2022, 43 (8) 1107-1114; DOI: https://doi.org/10.3174/ajnr.A7582
Author affiliations: From the Departments of Radiology and Nuclear Medicine (H.v.V., P.R.K., L.M.v.P., B.J.E., C.B.L.M.M., H.A.M.), Biomedical Engineering and Physics (H.v.V., P.R.K., L.M.v.P., M.W.A.C., H.A.M.), and Neurology (Y.B.W.E.M.R.), Faculty of Medicine, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands; the Departments of Neurology (W.v.d.S., P.M.v.d.S.) and Radiology and Nuclear Medicine (W.v.d.S., P.M.v.d.S.), Erasmus University Medical Center, Rotterdam, the Netherlands; the Department of Neurology and Neurosurgery (E.M.H.S.), University Medical Center Utrecht, Utrecht, the Netherlands; the Department of Radiology and Nuclear Medicine (W.H.v.Z.), Maastricht University Medical Center, Maastricht, the Netherlands; and the Department of Radiology (G.Z.), Stanford University, Stanford, California.

Abstract

BACKGROUND AND PURPOSE: Supervised deep learning is the state-of-the-art method for stroke lesion segmentation on NCCT. Supervised methods require manual lesion annotations for model development, while unsupervised deep learning methods such as generative adversarial networks do not. The aim of this study was to develop and evaluate a generative adversarial network to segment infarct and hemorrhagic stroke lesions on follow-up NCCT scans.

MATERIALS AND METHODS: Training data consisted of 820 patients with baseline and follow-up NCCT from 3 Dutch acute ischemic stroke trials. A generative adversarial network was optimized to transform a follow-up scan with a lesion to a generated baseline scan without a lesion by generating a difference map that was subtracted from the follow-up scan. The generated difference map was used to automatically extract lesion segmentations. Segmentation of primary hemorrhagic lesions, hemorrhagic transformation of ischemic stroke, and 24-hour and 1-week follow-up infarct lesions were evaluated relative to expert annotations with the Dice similarity coefficient, Bland-Altman analysis, and intraclass correlation coefficient.

RESULTS: The median Dice similarity coefficient was 0.31 (interquartile range, 0.08–0.59) and 0.59 (interquartile range, 0.29–0.74) for the 24-hour and 1-week infarct lesions, respectively. A much lower Dice similarity coefficient was measured for hemorrhagic transformation (median, 0.02; interquartile range, 0–0.14) and primary hemorrhage lesions (median, 0.08; interquartile range, 0.01–0.35). Volumetric agreement between predicted and annotated lesion volumes was good for the 24-hour (bias, −3 mL; limits of agreement, −64 to 59 mL; intraclass correlation coefficient, 0.83; 95% CI, 0.78–0.88) and excellent for the 1-week (bias, −4 mL; limits of agreement, −66 to 58 mL; intraclass correlation coefficient, 0.90; 95% CI, 0.83–0.93) follow-up infarct lesions.

CONCLUSIONS: An unsupervised generative adversarial network can be used to obtain automated infarct lesion segmentations with a moderate Dice similarity coefficient and good volumetric correspondence.

ABBREVIATIONS:

AIS = acute ischemic stroke; BL = baseline; DSC = Dice similarity coefficient; FU = follow-up; FU2BL-GAN = follow-up to baseline generative adversarial network; GAN = generative adversarial network; 24H = 24 hours; HT = hemorrhagic transformation; ICC = intraclass correlation coefficient; IQR = interquartile range; L1-loss = voxelwise absolute difference between generated baseline and real baseline NCCTs; L1+adv = L1 and adversarial loss; LoA = limits of agreement; nnUnet = no new Unet; PrH = primary hemorrhagic lesions; 1W = 1 week

Hemorrhagic transformation (HT) and malignant cerebral edema are severe complications after acute ischemic stroke (AIS), which frequently result in functional deterioration and death.1–3 Computer-guided visualization and segmentation of these hemorrhagic and infarct lesions can assist radiologists in detecting small lesions.4,5 Furthermore, lesion volume computed from a segmentation predicts long-term functional outcome3,6 and can be used to guide additional treatment such as decompressive craniectomy.7 Compared with an AIS baseline (BL) NCCT, follow-up (FU) NCCT imaging of a hemorrhagic lesion is characterized by an attenuation increase, while infarct lesions are characterized by an attenuation decrease.8,9 This attenuation change between NCCT scans can be exploited by specific deep learning algorithms to identify tissue changes and, in turn, can be used to obtain lesion segmentations.

Supervised deep learning with convolutional neural networks is the state-of-the-art computer-guided method for volumetric segmentation of hemorrhagic and infarct lesions in NCCT.4,5,8–10 In the context of segmentation, "supervised" refers to the use of voxelwise annotations, usually made by an expert radiologist, that represent the ground truth lesion on NCCT. These annotations are subsequently used to optimize a convolutional neural network for automated segmentation.11 However, acquiring manual annotations is time-consuming and subject to intra- and interrater variability. As a result, it is difficult to create large data sets with comprehensive ground truth annotations. This issue is a challenge for the training of supervised deep learning models and affects their performance and generalizability.

Generative adversarial networks (GANs) are a type of deep learning model that can be used to generate new images or transform existing images.12,13 Because a GAN is optimized without an explicitly defined ground truth, such as manual lesion annotations, it is considered an unsupervised deep learning method. Recently, Baumgartner et al14 introduced the use of a GAN to transform the MR image of a patient with Alzheimer disease symptoms into a scan resembling the state before symptom onset. From this transformation, structural maps were extracted to visually represent pathologic changes relative to a generated BL MR imaging scan without Alzheimer disease.14 Such structural pathology maps could subsequently be used to segment and quantify the pathologic changes.

The aim of this study was to accurately segment stroke lesions on follow-up NCCT scans with a GAN trained in an unsupervised manner. In line with Baumgartner et al,14 we developed a GAN to remove hemorrhagic and ischemic stroke lesions from follow-up NCCT scans by generating difference maps containing the lesion and BL NCCT scans without a lesion.

MATERIALS AND METHODS

GANs for BL NCCT Generation

The GAN structure as adopted in this study consists of 2 competing deep learning models, referred to as generator and discriminator models. The generator model generates artificial images, while the discriminator model tries to distinguish the generated artificial image from the original images.12,13 In this study, the generator receives as input a follow-up NCCT scan with the lesion and generates a difference map. This difference map is subtracted from the input follow-up NCCT scan with the lesion to generate an artificial BL NCCT scan without the lesion. Because an infarct lesion is visually subtle in AIS BL NCCTs acquired in the acute stage (0–6 hours after symptom onset), the transformation from a follow-up scan at 24 hours (24H) or 1 week (1W) with a well-defined lesion to a BL scan entails essentially the removal of the lesion. Subsequently, the discriminator model classifies the presented images as being either an original BL or a generated BL NCCT. This classification is used to provide feedback to the generator model and to optimize the difference map.12,13 The generated difference map is expected to have high positive values at the location of a hemorrhagic lesion and negative values at the location of an infarct lesion on a follow-up NCCT. Similarly, the attenuation change between BL and follow-up NCCT is positive in the case of a HT and negative if edema or brain tissue necrosis occurs in the infarct lesion. Thresholding of the generated difference map values can then be used to obtain a lesion segmentation.
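To make this workflow concrete, the following minimal PyTorch sketch illustrates the generator/discriminator setup, the subtraction of the generated difference map from the follow-up scan, and the thresholding idea. The shallow networks, layer choices, class names, and the ±0.05 thresholds are illustrative assumptions only; the actual FU2BL-GAN architecture and the tuned thresholds are described in the paper and its Online Supplemental Data.

```python
# Minimal sketch of the follow-up-to-baseline idea described above (not the authors' code).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 2-channel FU NCCT slice to a 1-channel difference map in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, fu_slice):
        return self.net(fu_slice)

class Discriminator(nn.Module):
    """Classifies a 1-channel slice as original BL (label 1) or generated BL (label 0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, slice_):
        return self.net(slice_)  # raw logit

generator, discriminator = Generator(), Discriminator()

fu = torch.randn(1, 2, 512, 512)        # follow-up slice, 2 HU-window channels
diff_map = generator(fu)                # predicted difference map
# Assumption: subtraction is applied to the 0-100 HU (brain window) channel,
# which is the channel the real BL target is compared against.
generated_bl = fu[:, :1] - diff_map
bl_logit = discriminator(generated_bl)  # feedback signal used to train the generator

# A lesion segmentation follows from thresholding the difference map:
# positive values suggest hemorrhage, negative values suggest infarct.
# The cutoffs below are placeholders; the study tuned them on validation data.
infarct_mask = (diff_map < -0.05).float()
hemorrhage_mask = (diff_map > 0.05).float()
```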

The proposed GAN method is optimized with 2 types of loss functions: the voxelwise absolute difference between generated BL and real BL NCCTs (L1-loss) and the binary cross-entropy of the discriminator (adversarial-loss) for classifying generated and real BL NCCTs.12,13 Figure 1 presents the GAN model architecture we refer to as the follow-up to BL GAN (FU2BL-GAN).
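The two losses can be combined in a single optimization step, as sketched below. The snippet reuses the toy `generator` and `discriminator` from the previous sketch; the learning rate of 0.00002 matches the training protocol reported later, while the use of Adam and the L1 weighting `lambda_l1` are assumptions for illustration, not values from the study.

```python
# One illustrative training step with L1 + adversarial loss (assumed optimizer settings).
import torch

adv_criterion = torch.nn.BCEWithLogitsLoss()
l1_criterion = torch.nn.L1Loss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-5)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-5)
lambda_l1 = 100.0  # assumed relative weight of L1 vs adversarial loss

def train_step(fu_slice, bl_slice):
    """fu_slice: (B, 2, H, W) follow-up input; bl_slice: (B, 1, H, W) real baseline."""
    # --- Discriminator: push real BL toward 1 and generated BL toward 0 ---
    with torch.no_grad():
        generated_bl = fu_slice[:, :1] - generator(fu_slice)
    d_real = discriminator(bl_slice)
    d_fake = discriminator(generated_bl)
    d_loss = adv_criterion(d_real, torch.ones_like(d_real)) + \
             adv_criterion(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Generator: fool the discriminator while staying close to the real BL (L1) ---
    generated_bl = fu_slice[:, :1] - generator(fu_slice)
    g_adv = adv_criterion(discriminator(generated_bl), torch.ones_like(d_fake))
    g_l1 = l1_criterion(generated_bl, bl_slice)
    g_loss = g_adv + lambda_l1 * g_l1
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```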

FIG 1.

The FU2BL-GAN global architecture (asterisk). The follow-up (FU) NCCT with a lesion is clipped to the Hounsfield unit ranges 0–100 and 100–1000 and normalized to a −1 to 1 range (double asterisks). The original BL NCCT is only clipped between 0 and 100 HU and normalized to a −1 to 1 range. The FU NCCT with a lesion is passed through the generator network to compute a difference map. This difference map is subtracted from the input FU NCCT to construct a generated BL NCCT. The original BL and generated BL are compared on the basis of the absolute voxelwise difference (L1-loss) and the binary cross-entropy loss (adversarial loss) of the discriminator network's classification (original or generated BL).

Patient Populations

In this study, 820 patients were included in the training data set between January 2018 and July 2021 from the MR CLEAN-NO IV (n = 297), MR CLEAN-MED (n = 377), and MR CLEAN-LATE (n = 146) randomized controlled trials if BL and follow-up NCCTs were available. Specific imaging protocols and inclusion and exclusion criteria of each of these randomized controlled trials have been published previously.15–17 Scans of these 820 patients were used to train the FU2BL-GAN. NCCT scans with lesion annotations from previously published studies by Konduri et al18 and Hssayeni et al19 were used to construct 4 randomly split dedicated validation and test sets (depicted in Fig 2): ischemic stroke lesions at 8- to 72-hour (24H infarct; N validation: 46; N test: 141) and 72-hour to 2-week (1W infarct; N validation: 46; N test: 141) follow-up after endovascular treatment or randomization; hemorrhagic transformation lesions after AIS (HT; N validation: 19; N test: 57);20 and primary hemorrhagic lesions (PrH; N validation: 11; N test: 24). The data from Konduri et al18 were originally included in the MR CLEAN trial between December 2010 and March 2014.21 In compliance with the Declaration of Helsinki, informed consent was obtained for the use of data for substudies from patients included in the training data randomized controlled trials and in the validation and test data of ischemic and HT lesions.15–18,20 The PrH data from Hssayeni et al19 were accessed through physionet.org under a "Restricted Health Data License 1.5.0"; the authors stated that collection and sharing of the retrospectively collected, anonymized, and defaced CTs were authorized by the Iraq Ministry of Health Ethics board.

FIG 2.

Patients included in the training, validation, and test sets. The training data consisted of a BL and at least 1 follow-up (FU) NCCT. FU of <8 hours: FU NCCT acquired within 8 hours; FU 24H: FU NCCT acquired 8–72 hours; FU 1W: FU NCCT acquired 72 hours to 2 weeks after endovascular treatment or randomization. Validation and test sets were constructed with data from the studies by Konduri et al18 and Hssayeni et al.19 8H indicates 8 hours.

Training Data and Training Protocol

All NCCT volumes were converted from DICOM to NIfTI format with dcm2niix available in MRIcroGL, Version 1.2.20211006.22 Elastix, Version 5.0.0 (https://elastix.lumc.nl/), was used to coregister the follow-up and BL NCCTs of the training data;23 the scan with the thinnest slices was used as the moving image. Poor coregistration was detected by inspecting the overlay of the 2 images at the 30th, 50th, and 80th percentile sections. Up to 3 follow-up NCCTs were used per patient: if clinical deterioration occurred within 8 hours after endovascular treatment or randomization (<8 hours) and as part of the imaging protocols at 8–72 hours (24H) and 72 hours to 2 weeks (1W) after AIS.15–17 To ensure stable optimization and prevent overfitting, per training iteration, we used 1 follow-up NCCT section and 1 corresponding BL NCCT section (512 × 512). Sections were sampled at random between the 10th and the 95th percentile sections. Furthermore, to emphasize the variation in attenuation between different tissues, the generator model received 2D slices from follow-up NCCTs with 2 channels based on different Hounsfield unit ranges as input: the attenuation was clipped between 0 and 100 HU for brain and infarct differentiation and between 100 and 1000 HU for hemorrhage and skull differentiation. The images were subsequently normalized to a −1 to 1 range. BL NCCTs were only clipped between 0 and 100 HU and normalized to a −1 to 1 range. The discriminator network received 2D slices of either generated BL or real BL NCCT scans. To make the FU2BL-GAN robust to differences in contrast and noise between the BL and follow-up NCCTs, we applied multiple intensity- and noise-altering image augmentations (details available in the Online Supplemental Data). Training used a batch size of 2 and a learning rate of 0.00002 for 500 epochs, after which the learning rate was linearly reduced to zero over the following 500 epochs (Nvidia TITAN V, https://www.nvidia.com/en-us/titan/titan-v/, with 12-GB RAM). The Online Supplemental Data contain a detailed description of the FU2BL-GAN architecture.
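The two-channel HU windowing and normalization described above can be expressed compactly as in the NumPy sketch below. The function names are illustrative assumptions, not taken from the authors' code; only the window boundaries (0–100 HU and 100–1000 HU) and the −1 to 1 scaling come from the text.

```python
# Sketch of the HU windowing/normalization step, assuming volumes already in Hounsfield units.
import numpy as np

def normalize_window(volume_hu, lo, hi):
    """Clip a HU volume to [lo, hi] and linearly rescale it to [-1, 1]."""
    clipped = np.clip(volume_hu, lo, hi)
    return 2.0 * (clipped - lo) / (hi - lo) - 1.0

def preprocess_followup(fu_hu):
    """Return a 2-channel follow-up input: (brain window, hemorrhage/skull window)."""
    return np.stack([normalize_window(fu_hu, 0, 100),
                     normalize_window(fu_hu, 100, 1000)], axis=0)

def preprocess_baseline(bl_hu):
    """The baseline target uses only the 0-100 HU brain window."""
    return normalize_window(bl_hu, 0, 100)[np.newaxis]

# Example on random slices standing in for 512 x 512 NCCT sections:
fu_slice = preprocess_followup(np.random.uniform(-1000, 2000, (512, 512)))
bl_slice = preprocess_baseline(np.random.uniform(-1000, 2000, (512, 512)))
print(fu_slice.shape, bl_slice.shape)  # (2, 512, 512) (1, 512, 512)
```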

Lesion Segmentation

To obtain lesion segmentations, we passed validation and test set NCCTs through the generator model to generate difference maps. Because of computational constraints, the validation set difference maps were computed every 10th training epoch. Subsequently, segmentations were obtained by applying a threshold to the difference maps. The resulting Dice similarity coefficient (DSC) of the segmentations relative to the ground truth was used to determine the optimal difference map threshold in the range of −0.2 to +0.3 with steps of 0.01 (equivalent to 0.5 HU). An automatically computed brain mask based on intensity thresholds and region growing was used to remove false-positive segmentations located outside the skull.9 Validation set results are depicted in the Online Supplemental Data. Finally, the optimal epoch and threshold were used to obtain segmentations for the test sets.
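The threshold selection loop described above can be sketched as follows. The function names, the sign convention (negative thresholds for infarct, positive for hemorrhage), and the use of the mean validation DSC as the selection criterion are assumptions for illustration; only the threshold range and step size come from the text.

```python
# Sketch of validation-based threshold selection for the difference maps.
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * inter / denom

def segment(diff_map, threshold, brain_mask):
    """Assumed convention: negative thresholds target infarct, positive target hemorrhage."""
    lesion = diff_map <= threshold if threshold < 0 else diff_map >= threshold
    return np.logical_and(lesion, brain_mask)  # drop false-positives outside the brain

def select_threshold(diff_maps, ground_truths, brain_masks):
    """Sweep -0.2 to +0.3 in steps of 0.01 and keep the threshold with the best mean DSC."""
    thresholds = np.arange(-0.2, 0.3 + 1e-9, 0.01)
    scores = [np.mean([dice(segment(d, t, m), g)
                       for d, g, m in zip(diff_maps, ground_truths, brain_masks)])
              for t in thresholds]
    return thresholds[int(np.argmax(scores))]
```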

Evaluation and Outcome Metrics

Reported results were based on test set segmentations relative to expert-based ground truth segmentations. The DSC and the Hausdorff distance in millimeters were used to quantify spatial correspondence. Results from the FU2BL-GAN approach trained with L1 and adversarial loss (L1+adv) were compared, using the Wilcoxon rank-sum test, with a simpler approach trained with L1-loss (L1) only. Furthermore, the results of the FU2BL-GAN were compared with two 2D Unets trained on segmentations from the 24H and 1W infarct validation sets using the no new Unet (nnUnet) framework as a conventional supervised learning baseline.24 Volumetric correspondence between the ground truth and predicted segmentations was analyzed with Bland-Altman plots with bias (mean difference between methods) and limits of agreement (LoA, ±1.96 SDs from the bias) and with the intraclass correlation coefficient (ICC) with 95% CIs. The 2-way mixed-effects approach for consistency of a single fixed rater was used to describe differences between the FU2BL-GAN-based segmentations and the expert-based ground truth lesion segmentations. A subgroup analysis was performed for lesions of >10 mL to address the effect of lesion size on our outcome metrics. Results are reported as median with interquartile range (IQR) or mean with 95% CIs.
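For reference, the Bland-Altman statistics used above reduce to the short calculation sketched below, assuming predicted and ground truth lesion volumes in milliliters; the example numbers are hypothetical. The ICC (2-way mixed effects, consistency, single rater) is best computed with a dedicated statistics package and is not reproduced here.

```python
# Sketch of the Bland-Altman bias and limits of agreement for lesion volumes.
import numpy as np

def bland_altman(pred_volumes_ml, gt_volumes_ml):
    """Return (bias, lower LoA, upper LoA) for predicted vs ground truth volumes."""
    diffs = np.asarray(pred_volumes_ml, dtype=float) - np.asarray(gt_volumes_ml, dtype=float)
    bias = diffs.mean()
    spread = 1.96 * diffs.std(ddof=1)  # +/- 1.96 SD around the bias
    return bias, bias - spread, bias + spread

# Hypothetical example with five cases:
bias, loa_low, loa_high = bland_altman([30, 70, 12, 55, 90], [35, 66, 10, 60, 95])
print(f"bias {bias:.1f} mL, LoA {loa_low:.1f} to {loa_high:.1f} mL")
```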

RESULTS

Ischemic and hemorrhagic lesions in our test sets were relatively small; the distribution of volumes was skewed toward smaller lesions. The ground truth lesion volume had a median of 35 mL (IQR, 16–78 mL) in the 24H and 66 mL (IQR, 29–125 mL) in the 1W infarct test sets. For the HT and PrH test sets, the median lesion volume was 6 mL (IQR, 2–12 mL) and 6 mL (IQR, 1–12 mL), respectively. Characteristics of the training data can be found in the Online Supplemental Data. Training characteristics and the optimal difference map thresholds are also available in the Online Supplemental Data.

Quantitative Results

As depicted in Fig 3 and the Online Supplemental Data, DSC and lesion volume were positively related. The median DSC of the FU2BL-GAN was 0.31 (IQR, 0.08–0.59) in the 24H infarct test set, 0.59 (IQR, 0.29–0.74) in the 1W infarct test set, 0.02 (IQR, 0–0.14) in the HT test set, and 0.08 (IQR, 0.01–0.35) in the PrH test set. The FU2BL-GAN (L1+adv) had a significantly higher DSC than the model trained with only L1-loss (L1) for all test sets but a significantly lower DSC compared with the nnUnet approach (Fig 3). The subgroup of lesions of >10 mL (Fig 3B) had a higher DSC than the overall population (Fig 3A), especially for the HT (median, 0.46; IQR, 0.07–0.51) and PrH (median, 0.44; IQR, 0.24–0.55) test sets but also for the follow-up infarct (24H infarct: median, 0.41; IQR, 0.15–0.62; 1W infarct: median, 0.60; IQR, 0.35–0.75) test sets. For all the infarct and hemorrhage test sets, the Hausdorff distances of both the FU2BL-GAN and the L1 approach were poor, with medians varying between 83 and 87 mm (Online Supplemental Data). Bland-Altman plots of the 4 test sets are depicted in Fig 4. Bias and LoA for the FU2BL-GAN in the 24H (bias, −3 mL; LoA, −64 to 59 mL) and 1W (bias, −4 mL; LoA, −66 to 58 mL) test sets were low, representing good correspondence of the segmentations with the ground truth annotations. However, both the HT (bias, 22 mL; LoA, −49 to 92 mL) and PrH (bias, 23 mL; LoA, −10 to 57 mL) segmentations overestimated lesion volume and had several outliers that affected volumetric correspondence. The ICC for the FU2BL-GAN was excellent in the 1W infarct test set (ICC, 0.90; 95% CI, 0.83–0.93), good in the 24H infarct (ICC, 0.83; 95% CI, 0.78–0.88) and PrH (ICC, 0.84; 95% CI, 0.66–0.93) test sets, and poor in the HT (ICC, 0.11; 95% CI, −0.15 to 0.36) test set. However, the nnUnet approach resulted in a lower bias and LoA, a higher ICC, and a lower Hausdorff distance than the FU2BL-GAN for both the 24H (bias, −14 mL; LoA, −57 to 29 mL; ICC, 0.91; 95% CI, 0.88–0.92; Hausdorff distance, 28 mm; IQR, 18–42 mm) and 1W (bias, −6 mL; LoA, −36 to 23 mL; ICC, 0.98; 95% CI, 0.97–0.99; Hausdorff distance, 21 mm; IQR, 14–37 mm) infarct lesion test sets (Online Supplemental Data).

FIG 3.

Dice similarity coefficients of the test sets: 24-hour follow-up after AIS (24H infarct), 1-week follow-up after AIS (1W infarct), HT, and PrH. A, Results for all test set data. B, Results for lesions of >10 mL only. Each shade of color represents the results of the supervised nnUnet approach, the FU2BL-GAN approach trained with L1+adv, and the generator trained with L1-loss only (L1), respectively. Asterisk indicates P < .05; double asterisks, P < .001; triple asterisks, P < 1e-10; NS, nonsignificant difference.

FIG 4.

Bland-Altman plots of predicted lesion size for the FU2BL-GAN. A, 24H infarct follow-up. B, 1W infarct follow-up. C, HT. D, PrH.

FIG 5.

Visual results of the FU2BL-GAN. The first column contains the input NCCT with lesion used as input for the generator model to generate a difference map (column 2). The difference map is subtracted from the input NCCT (column 1) to obtain a generated BL scan (column 3). The negative (blue) and positive (red) values of the difference map correspond to the deviation of the difference map from zero. A higher deviation from zero implies a higher attenuation adjustment of the follow-up NCCT to generate the BL NCCT without a lesion. Column 4 contains the ground truth lesion annotations. Arrows show false-positive hemorrhage (rows 3 and 4), false-negative infarct (row 5, upper arrow), false-positive infarct (row 5, lower arrow), and false-negative hemorrhage segmentation (arrow, row 6).

Qualitative Visual Results

Figure 5 depicts visual examples of each test set in the first 4 rows, while the last 2 rows depict examples with poor segmentation performance. In contrast to the examples shown in the first 3 rows, the input NCCT of the PrH test set is the acute-phase NCCT with the hemorrhagic lesion. For this case, the generated scan can be regarded as a prehemorrhagic stroke NCCT scan. Although lesions visually appeared to be removed accurately, the generator model was not able to completely reconstruct 24H infarcted brain tissue to resemble the BL NCCTs (columns 1 versus 3). False-positive hemorrhage segmentations were present when the input NCCT scan had beam-hardening artifacts in the brain close to the skull or when the overall scan attenuation was higher (arrows in PrH and HT, column 2). False-negative hemorrhage segmentations were present if the hemorrhage was small and the attenuation increase was low (row 6, poor HT). False-positive infarct segmentations occurred close to the ventricles and at other locations where CSF results in a hypoattenuated region (row 5). False-negative infarct segmentation errors mainly occurred in the 24H infarct data set because the infarct lesion was not yet sufficiently hypoattenuated (row 5).

DISCUSSION

Our study shows that, with a GAN deep learning structure, it is possible to obtain follow-up ischemic and large hemorrhagic lesion segmentations without using manually annotated training data. Although the visual quality of generated BL scans was not always optimal, lesion segmentation quality was often not affected. External validation in 4 test sets revealed reasonable segmentation quality in terms of DSC and good-to-excellent volumetric correspondence with the ground truth for follow-up infarct lesions in NCCT at 24H and 1W follow-up after AIS. In terms of DSC and volumetric correspondence, our work performs on a par with previous work on supervised deep learning for follow-up infarct lesion segmentation (median DSC, 0.57 [SD, 0.26]; ICC, 0.88).9 However, the presented unsupervised FU2BL-GAN did not outperform the supervised nnUnet benchmark model with respect to all outcome measures. Kuang et al25 also used a GAN to segment infarct lesions but achieved much higher segmentation quality (mean DSC, 0.70 [SD, 0.12]). However, the approach of Kuang et al required a training set with manual lesion annotations because the adversarial (GAN) loss was used in addition to supervised loss functions. DSC and volumetric correspondence for segmenting the HT and PrH lesions were worse than those of existing supervised methods.4,5,8,10 Poor detection and segmentation of hemorrhagic lesions are likely due to the small lesion size in our test sets and an under-representation of hemorrhages in the training data.

The unsupervised approach to training is a major advantage compared with conventional supervised deep learning methods. With the growing availability of unlabeled and weakly labeled imaging databases, unsupervised GAN-based approaches can be used for automated lesion segmentation without the manual annotation effort. However, the downside of the presented approach is the requirement of paired training images, ie, coregistered scans with and without a lesion from the same patient. Such high-quality registration is often difficult to achieve in medical imaging because most organs, tissues, and body parts deform or move between acquisition moments. Because the brain deforms and moves only slightly between acquisition moments, the use of a GAN-based lesion segmentation method similar to the presented FU2BL-GAN seems promising for other brain pathologies.

One of the main shortcomings of the presented FU2BL-GAN is that it can only be trained on CT slices sampled at random. Because not every section in an NCCT volume of a patient contains an initial AIS lesion and only a minority of the volumes contain a hemorrhagic lesion, the FU2BL-GAN likely experienced an under-representation of brain lesions. This under-representation during training of NCCT slices with a lesion, especially a hemorrhagic lesion, compared with slices without lesions is known to result in poorer segmentation performance; in the technical literature, this is often referred to as the "class imbalance problem."26 In contrast, supervised deep learning methods often use adjusted sampling techniques that require ground truth annotations;8 the majority class (nonlesion tissue) is undersampled relative to the minority class (the lesion) to balance class representation. A valuable improvement of our FU2BL-GAN would be to manually classify slices for lesion presence and volumes for the presence of a hemorrhage. Although these section- or volume-level annotations would take some time to acquire, such sparse annotation methods are still less time-consuming than the manual lesion segmentation required for supervised deep learning. Alternatively, automated NCCT-section classification algorithms for infarct or hemorrhage presence could be used to classify NCCT slices on the basis of lesion presence.27 Subsequently, this information could be used to select training data for further improvement of the FU2BL-GAN.
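One common way to realize the sampling adjustment suggested above, if slice-level lesion labels were available, is a weighted sampler that oversamples lesion-bearing slices. The sketch below uses PyTorch's WeightedRandomSampler; the labels and class weights are hypothetical and are not values from the study.

```python
# Illustrative sketch of lesion-aware slice sampling to mitigate class imbalance.
import torch
from torch.utils.data import WeightedRandomSampler

# 0 = no lesion, 1 = infarct slice, 2 = hemorrhage slice (hypothetical slice-level labels)
slice_labels = torch.tensor([0, 0, 0, 1, 0, 2, 1, 0, 0, 2])
class_weights = torch.tensor([1.0, 3.0, 6.0])   # assumed upweighting of rarer lesion slices
sample_weights = class_weights[slice_labels]    # per-slice sampling weight

sampler = WeightedRandomSampler(sample_weights, num_samples=len(slice_labels),
                                replacement=True)
# Passing this sampler to a DataLoader (batch_size=2, sampler=sampler) would draw
# lesion-bearing slices more often than uniform random sampling does.
```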

Although the test sets used in this study are from multiple centers, it remains largely unclear what scanners, settings, and postprocessing methods were used. Furthermore, Konduri et al18 reported extensive exclusion criteria related to image quality and noise level, excluding 93 of 280 patients in their data set. These factors limit the generalizability of the results of this study, which require additional external validation in subgroups and on other data sets.

CONCLUSIONS

The presented FU2BL-GAN is an unsupervised deep learning approach trained without manual lesion annotations to segment stroke lesions. With the FU2BL-GAN, it is feasible to obtain automated infarct lesion segmentations with moderate DSC and good volumetric correspondence.

Acknowledgments

CONTRAST Clinical Trial Collaborators

Research Leaders

Diederik Dippel (MD, PhD),1 Charles Majoie (MD, PhD)3

Consortium Coordinator

Rick van Nuland, (PhD)24

Imaging Assessment Committee

Charles Majoie (MD, PhD)–Chair,3 Aad van der Lugt (MD, PhD)–Chair,1 Adriaan van Es, (MD, PhD),1,2 Pieter-Jan van Doormaal (MD),1 René van den Berg, (MD, PhD),3 Ludo Beenen (MD),3 Bart Emmer (MD, PhD),3 Stefan Roosendaal (MD, PhD),3 Wim van Zwam (MD, PhD),4 Alida Annechien Postma (MD, PhD),25 Lonneke Yo (MD, PhD),6 Menno Krietemeijer (MD),6 Geert Lycklama (MD, PhD),7 Jasper Martens (MD),8 Sebastiaan Hammer (MD, PhD),10 Anton Meijer (MD, PhD),10 Reinoud Bokkers (MD, PhD),15 Anouk van der Hoorn (MD, PhD),15 Ido van den Wijngaard (MD, PhD),2,7 Albert Yoo (MD, PhD),26 Dick Gerrits (MD)27

Adverse Events Committee

Robert van Oostenbrugge (MD, PhD)–Chair,4 Bart Emmer (MD, PhD),3 Jonathan M. Coutinho (MD, PhD),3 Martine Truijman (MD, PhD),4 Julie Staals (MD, PhD),4 Bart van der Worp (MD, PhD),5 J. Boogaarts (MD, PhD),10 Ben Jansen (MD, PhD),16 Sanne Zinkstok (MD, PhD)28

Outcome Assessment Committee

Yvo Roos (MD, PhD)–Chair,3 Peter Koudstaal (MD, PhD),1 Diederik Dippel (MD, PhD),1 Jonathan M. Coutinho (MD, PhD),3 Koos Keizer (MD, PhD),5 Sanne Manschot (MD, PhD),7 Jelis Boiten (MD, PhD),7 Henk Kerkhoff (MD, PhD),14 Ido van den Wijngaard (MD, PhD)2,7

Data Management Group

Hester Lingsma (PhD),1 Diederik Dippel (MD, PhD),1 Vicky Chalos (MD),1 Olvert Berkhemer (MD, PhD)1,3

Imaging Data Management

Aad van der Lugt (MD, PhD),1 Charles Majoie (MD, PhD),3 Adriaan Versteeg,1 Lennard Wolff (MD),1 Matthijs van der Sluijs (MD),1 Henk van Voorst (MD),3 Manon Tolhuisen (MSc)3

Biomaterials and Translational Group

Hugo ten Cate (MD, PhD),4 Moniek de Maat (PhD),1 Samantha Donse-Donkel (MD),1 Heleen van Beusekom (PhD),1 Aladdin Taha (MD),1 Aarazo Barakzie (MD)1

Local Collaborators

Vicky Chalos (MD, PhD),1 Rob van de Graaf (MD, PhD),1 Wouter van der Steen (MD),1 Aladdin Taha (MD),1 Samantha Donse-Donkel (MD),1 Lennard Wolff (MD),1 Kilian Treurniet (MD),3 Sophie van den Berg (MD),3 Natalie LeCouffe (MD),3 Manon Kappelhof (MD),3 Rik Reinink (MD),3 Manon Tolhuisen (MD),3 Leon Rinkel (MD),3 Josje Brouwer (MD),3 Agnetha Bruggeman (MD),3 Henk van Voorst (MD),3 Robert-Jan Goldhoorn (MD),4 Wouter Hinsenveld (MD),4 Anne Pirson (MD),4 Susan Olthuis (MD),4 Simone Uniken Venema (MD),4 Sjan Teeselink (MD),10 Lotte Sondag (MD),10 Sabine Collette (MD)15

Research Nurses

Martin Sterrenberg,1 Naziha El Ghannouti,1 Laurine van der Steen,3 Sabrina Verheesen,4 Jeannique Vranken,4 Ayla van Ahee,5 Hester Bongenaar,6 Maylee Smallegange,6 Lida Tilet,6 Joke de Meris,7 Michelle Simons,8 Wilma Pellikaan,9 Wilma van Wijngaarden,9 Kitty Blauwendraat,9 Yvonne Drabbe,11 Michelle Sandiman-Lefeber,11 Anke Katthöfer,11 Eva Ponjee,12 Rieke Eilander,12 Anja van Loon,13 Karin Kraus,13 Suze Kooij,14 Annemarie Slotboom,14 Marieke de Jong,15 Friedus van der Minne,15 Esther Santegoets16

Study Monitors

Leontien Heiligers,1 Yvonne Martens,1 Naziha El Ghannouti1

On behalf of the CONTRAST consortium collaborators.

Affiliations

1Erasmus MC University Medical Center, Rotterdam, the Netherlands; 2Leiden University Medical Center, Leiden, the Netherlands; 3Amsterdam University Medical Centers, location AMC, Amsterdam, the Netherlands; 4Cardiovascular Research Institute Maastricht (CARIM), Maastricht University Medical Centre, Maastricht, The Netherlands; 5University Medical Center Utrecht, Brain Center Rudolf Magnus, Utrecht, the Netherlands; 6Catharina Hospital, Eindhoven, the Netherlands; 7Haaglanden Medical Centre, the Hague, the Netherlands; 8Rijnstate Hospital, Arnhem, the Netherlands; 9St. Antonius Hospital, Nieuwegein, the Netherlands; 10Radboud University Medical Center, Nijmegen, the Netherlands; 11HagaZiekenhuis, the Hague, the Netherlands; 12Isala, Zwolle, the Netherlands; 13Amphia Hospital, Breda, the Netherlands; 14Albert Schweitzer Hospital, Dordrecht, the Netherlands; 15University Medical Center Groningen, Groningen, the Netherlands; 16Elisabeth-TweeSteden Hospital, Tilburg, the Netherlands; 17University Hospital of Nancy, Nancy, France; 18Foch Hospital, Suresnes, France; 19Fondation Rothschild Hospital, Paris, France; 20University Hospital of Bordeaux, Bordeaux, France; 21Pitié-Salpêtrière University hospital, Paris, France; 22John Radcliffe Hospital, Oxford, United Kingdom; 23University of Washington, Seattle, Washington, United States; 24Lygature, Utrecht, the Netherlands; 25School for Mental Health and Sciences (Mhens), Maastricht University Medical Center, Maastricht, The Netherlands; 26Texas Stroke Institute, Dallas-Fort Worth, Texas, United States of America; 27Medisch Spectrum Twente, Enschede, The Netherlands; 28TerGooi, Hilversum, The Netherlands.

We also thank Nvidia Corporation, Santa Clara, California, for providing a graphics processing unit.

Footnotes

  • The funding sources were not involved in study design, monitoring, data collection, statistical analyses, interpretation of results, or manuscript writing.

  • This study was funded by the CONTRAST consortium. The CONTRAST consortium acknowledges the support from the Netherlands Cardiovascular Research Initiative, an initiative of the Dutch Heart Foundation (CVON2015-01: CONTRAST) and from the Brain Foundation of the Netherlands (HA2015.01.06). The collaboration project is additionally financed by the Ministry of Economic Affairs by means of the Public-private partnerships Allowance made available by the Top Sector Life Sciences & Health to stimulate public-private partnerships (LSHM17016). This work was funded, in part, through unrestricted funding by Stryker, Medtronic, and Cerenovus.

  • Disclosure forms provided by the authors are available with the full text and PDF of this article at www.ajnr.org.

Indicates open access to non-subscribers at www.ajnr.org

References

1. Katramados AM, Hacein-Bey L, Varelas PN. What to look for on post-stroke neuroimaging. Neuroimaging Clin N Am 2018;28:649–62 doi:10.1016/j.nic.2018.06.007 pmid:30322600
2. Park TH, Lee JK, Park MS, et al. Neurologic deterioration in patients with acute ischemic stroke or transient ischemic attack. Neurology 2020;95:e2178–91 doi:10.1212/WNL.0000000000010603 pmid:32817184
3. Seners P, Turc G, Oppenheim C, et al. Incidence, causes and predictors of neurological deterioration occurring within 24 h following acute ischaemic stroke: a systematic review with pathophysiological implications. J Neurol Neurosurg Psychiatry 2015;86:87–94 doi:10.1136/jnnp-2014-308327 pmid:24970907
4. Karthik R, Menaka R, Johnson A, et al. Neuroimaging and deep learning for brain stroke detection: a review of recent advancements and future prospects. Comput Methods Programs Biomed 2020;197:105728 doi:10.1016/j.cmpb.2020.105728 pmid:32882591
5. Yeo M, Tahayori B, Kok HK, et al. Review of deep learning algorithms for the automatic detection of intracranial hemorrhages on computed tomography head imaging. J Neurointerv Surg 2020;13:369–78 doi:10.1136/neurintsurg-2020-017099 pmid:33479036
6. Boers AM, Jansen IG, Beenen LF, et al. Association of follow-up infarct volume with functional outcome in acute ischemic stroke: a pooled analysis of seven randomized trials. J Neurointerv Surg 2018;10:1137–42 doi:10.1136/neurintsurg-2017-013724 pmid:29627794
7. Hemphill JC, Greenberg SM, Anderson CS, et al; Council on Clinical Cardiology. Guidelines for the Management of Spontaneous Intracerebral Hemorrhage: a Guideline for Healthcare Professionals from the American Heart Association/American Stroke Association. Stroke 2015;46:2032–60 doi:10.1161/STR.0000000000000069 pmid:26022637
8. Barros RS, van der Steen WE, Boers AMM, et al. Automated segmentation of subarachnoid hemorrhages with convolutional neural networks. Informatics Med Unlocked 2020;19:100321 doi:10.1016/j.imu.2020.100321
9. Barros RS, Tolhuisen ML, Boers AM, et al. Automatic segmentation of cerebral infarcts in follow-up computed tomography images with convolutional neural networks. J Neurointerv Surg 2020;12:848–52 doi:10.1136/neurintsurg-2019-015471 pmid:31871069
10. Zhao X, Chen K, Wu G, et al. Deep learning shows good reliability for automatic segmentation and volume measurement of brain hemorrhage, intraventricular extension, and peripheral edema. Eur Radiol 2021;31:5012–20 doi:10.1007/s00330-020-07558-2 pmid:33409788
11. Goodfellow I, Bengio Y, Courville A. Deep learning. MIT Press 2016. deeplearningbook.org. Accessed July 11, 2022
12. Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: a review. Med Image Anal 2019;58:101552 doi:10.1016/j.media.2019.101552 pmid:31521965
13. Goodfellow IJ, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. Adv Neural Inf Process Syst 2014;3:2672–80
14. Baumgartner CF, Koch LM, Tezcan KM, et al. Visual Feature Attribution using Wasserstein GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Utah. June 18–23, 2018:8309–19
15. Treurniet KM, LeCouffe NE, Kappelhof M, et al; MR CLEAN-NO IV Investigators. MR CLEAN-NO IV: intravenous treatment followed by endovascular treatment versus direct endovascular treatment for acute ischemic stroke caused by a proximal intracranial occlusion—study protocol for a randomized clinical trial. Trials 2021;22:1–15 doi:10.1186/s13063-021-05063-5 pmid:33397449
16. Chalos V, van de Graaf RA, Roozenbeek B, et al; MR CLEAN-MED investigators. Multicenter randomized clinical trial of endovascular treatment for acute ischemic stroke: the effect of periprocedural medication—acetylsalicylic acid, unfractionated heparin, both, or neither (MR CLEAN-MED). Rationale and study design. Trials 2020;21:1–17 doi:10.1186/s13063-020-04514-9 pmid:31898511
17. Pirson FA, Hinsenveld WH, Goldhoorn R-JB, et al; MR CLEAN-LATE investigators. MR CLEAN-LATE study protocol. Trials 2021;22:160 doi:10.1186/s13063-021-05092-0 pmid:33627168
18. Konduri P, van Kranendonk K, Boers A, et al; MR CLEAN Trial Investigators (Multicenter Randomized Clinical Trial of Endovascular Treatment for Acute Ischemic Stroke in the Netherlands). The role of edema in subacute lesion progression after treatment of acute ischemic stroke. Front Neurol 2021;12:705221 doi:10.3389/fneur.2021.705221 pmid:34354669
19. Hssayeni MD, Croock MS, Salman AD, et al. Intracranial hemorrhage segmentation using a deep convolutional model. Data 2020;5:14–18 doi:10.3390/data5010014
20. van Kranendonk KR, Treurniet KM, Boers AM, et al; MR CLEAN investigators. Hemorrhagic transformation is associated with poor functional outcome in patients with acute ischemic stroke due to a large vessel occlusion. J Neurointerv Surg 2019;11:464–68 doi:10.1136/neurintsurg-2018-014141 pmid:30297537
21. Berkhemer OA, Fransen PS, Beumer D, et al; MR CLEAN Investigators. A randomized trial of intraarterial treatment for acute ischemic stroke. N Engl J Med 2015;372:11–20 doi:10.1056/NEJMoa1411587 pmid:25517348
22. dcm2niix. 2021. https://github.com/rordenlab/dcm2niix. Accessed November 3, 2021
23. Klein S, Staring M, Murphy K, et al. Elastix: a toolbox for intensity-based medical image registration. IEEE Trans Med Imaging 2010;29:196–205 doi:10.1109/TMI.2009.2035616 pmid:19923044
24. Isensee F, Jaeger PF, Kohl SA, et al. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 2021;18:203–11 doi:10.1038/s41592-020-01008-z pmid:33288961
25. Kuang H, Menon BK, Qiu W. Automated stroke lesion segmentation in non-contrast CT scans using dense multi-path contextual generative adversarial network. Phys Med Biol 2020;65:215013 doi:10.1088/1361-6560/aba166 pmid:32604080
26. Lin TY, Goyal P, Girshick R, et al. Focal loss for dense object detection. IEEE Trans Pattern Anal Mach Intell 2020;42:318–27 doi:10.1109/TPAMI.2018.2858826 pmid:30040631
27. Flanders AE, Prevedello LM, Shih G, et al; RSNA-ASNR 2019 Brain Hemorrhage CT Annotators. Construction of a Machine Learning Dataset through Collaboration: The RSNA 2019 Brain CT Hemorrhage Challenge. Radiol Artif Intell 2020;2:e209002 doi:10.1148/ryai.2020209002 pmid:33939782
  • Received January 27, 2022.
  • Accepted after revision June 2, 2022.
  • © 2022 by American Journal of Neuroradiology