Intelligible Models for HealthCare: Predicting the Probability of 6-Month Unfavorable Outcome in Patients with Ischemic Stroke

Publisher: Springer Journals
Copyright: © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021
ISSN: 1539-2791
eISSN: 1559-0089
DOI: 10.1007/s12021-021-09535-6

Abstract

Early prediction of unfavorable outcome after ischemic stroke is important for clinical management. Machine learning, as a novel computational modeling technique, could help clinicians address this challenge. We aim to investigate the applicability of machine learning models for individualized prediction in patients with ischemic stroke and to demonstrate the utility of various model-agnostic explanation techniques for machine learning predictions. A total of 499 consecutive patients with unfavorable [modified Rankin Scale (mRS) score 3–6, n = 140] or favorable (mRS score 0–2, n = 359) outcome at 6 months after ischemic stroke were enrolled in this study. Four machine learning models, Random Forest (RF), eXtreme Gradient Boosting (XGBoost), Adaptive Boosting (AdaBoost), and Support Vector Machine (SVM), were trained, achieving areas under the curve (AUC) of (90.20 ± 0.22)%, (86.91 ± 1.05)%, (86.49 ± 2.35)%, and (81.89 ± 2.40)%, respectively. Three global interpretability techniques (Feature Importance, which shows the contribution of selected features; Partial Dependence Plot, which visualizes the average effect of a feature on the predicted probability of unfavorable outcome; and Feature Interaction, which detects the change in prediction that occurs when features are varied jointly, after accounting for their individual effects) and one local interpretability technique (Shapley Value, which explains the predicted probability of unfavorable outcome for individual instances) were applied and presented via visualization. The current study is therefore important for a better understanding of intelligible healthcare analytics through explanations of predictions at both the local and global level, and could potentially help reduce the mortality of patients with ischemic stroke by assisting clinicians in the decision-making process.
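
The abstract does not include code; the following is a minimal sketch of the kind of workflow it describes, assuming scikit-learn, xgboost, and shap. The CSV path, the "unfavorable" label column, the "age" feature, and all hyperparameters are hypothetical placeholders, not the authors' actual pipeline, and permutation importance / SHAP are used here as stand-ins for the paper's feature importance and Shapley value analyses.

```python
# Minimal sketch (not the authors' code): fit the four classifier families named
# in the abstract, report cross-validated AUC, then apply global (feature
# importance, partial dependence) and local (Shapley value) interpretability.
import pandas as pd
import shap
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Hypothetical cohort: one row per patient, binary label
# (1 = unfavorable outcome, mRS 3-6; 0 = favorable, mRS 0-2).
df = pd.read_csv("stroke_cohort.csv")  # placeholder path
X, y = df.drop(columns=["unfavorable"]), df["unfavorable"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42
)

models = {
    "RF": RandomForestClassifier(n_estimators=500, random_state=42),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
    "AdaBoost": AdaBoostClassifier(random_state=42),
    "SVM": SVC(probability=True, random_state=42),
}

# Cross-validated AUC (mean +/- std over folds) for each model.
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {scores.mean():.4f} +/- {scores.std():.4f}")

# Interpretability on the best-performing model (RF, per the reported AUCs).
rf = models["RF"].fit(X_train, y_train)
print("Test AUC:", roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1]))

# 1) Global: permutation feature importance on the held-out set.
imp = permutation_importance(
    rf, X_test, y_test, scoring="roc_auc", n_repeats=20, random_state=42
)
print(pd.Series(imp.importances_mean, index=X.columns).sort_values(ascending=False))

# 2) Global: partial dependence of the predicted probability on one feature
#    ("age" is a hypothetical column name).
PartialDependenceDisplay.from_estimator(rf, X_test, features=["age"])

# 3) Local: Shapley values for a single patient (recent shap versions return an
#    Explanation of shape [samples, features, classes]; index class 1 here).
explainer = shap.TreeExplainer(rf)
sv = explainer(X_test.iloc[:1])
shap.plots.waterfall(sv[0, :, 1])
```

A cross-validated AUC loop plus a single held-out test score is only one reasonable evaluation scheme; the paper's reported AUC uncertainties could equally come from repeated splits or bootstrapping.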

Journal: Neuroinformatics (Springer Journals)

Published: Jul 1, 2022

Keywords: Ischemic stroke; Unfavorable outcome; Machine learning; Interpretability; Visualization
