Deep learning-based pancreas volume assessment in individuals with type 1 diabetes

Abstract

Pancreas volume is reduced in individuals with diabetes and in autoantibody-positive individuals at high risk for developing type 1 diabetes (T1D). Studies are underway to assess pancreas volume in large clinical databases and studies, but manual pancreas annotation is time-consuming and subjective, preventing extension to large studies and databases. This study develops deep learning for automated pancreas volume measurement in individuals with diabetes. A convolutional neural network was trained using manual pancreas annotation on 160 abdominal magnetic resonance imaging (MRI) scans from individuals with T1D, controls, or a combination thereof. Models trained using each cohort were then tested on scans of 25 individuals with T1D. Deep learning and manual segmentations of the pancreas displayed high overlap (Dice coefficient = 0.81) and excellent correlation of pancreas volume measurements (R = 0.94). Correlation was highest when training data included individuals both with and without T1D. The pancreas of individuals with T1D can be automatically segmented to measure pancreas volume. This algorithm can be applied to large imaging datasets to quantify the spectrum of human pancreas volume.

Keywords: Automatic segmentation, Auto-segmentation, Semantic, T1D, MRI, Neural network, Machine learning, Artificial intelligence, Size

Introduction

Pancreas volume is reduced in individuals with type 1 and type 2 diabetes [1] and in those at risk for T1D [2, 3]. Furthermore, pancreas volume increases with successful therapy in type 2 diabetes [4], suggesting that measurement of pancreas volume may be useful in monitoring diabetes progression and treatment response. However, calculation of pancreas volume currently requires manual segmentation by a trained reader, which is impractical for large clinical trials or studies utilizing large image repositories.

The development of algorithms to automatically segment organs or lesions from medical images has rapidly advanced due to recent breakthroughs in convolutional neural networks and deep learning models [5–7]. A number of studies have segmented the pancreas from abdominal MRI [8, 9] and computerized tomography (CT) scans [10–13]. However, these studies have not included images from individuals with diabetes, where altered pancreas morphology may affect segmentation accuracy. For instance, the pancreas of individuals with diabetes has more irregular borders than that of individuals without diabetes [4], which may reduce the accuracy of pancreas segmentation approaches trained using only images from non-diabetic individuals. Segmentation of other organs using deep learning, including the brain [14] and inner ear [15], has demonstrated the need to include individuals with pathologies that span the range of anatomical variation in order to improve the generalizability of the segmentation. However, this approach has not been applied to segmentation of the pancreas of individuals with diabetes.

*Correspondence: jack.virostko@austin.utexas.edu. Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, 1701 Trinity St., Stop C0200, Austin, TX 78712, USA. Full list of author information is available at the end of the article.

© The Author(s) 2021. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Roger et al. BMC Medical Imaging (2022) 22:5

In this study, we develop a deep learning-based pancreas segmentation model trained using MRI images from individuals with T1D to enable future studies of pancreas volume in diabetes in large trials and image databases.

Methods

Study population

This is a single-site retrospective study of previously acquired abdominal MRI. Study participants were either newly enrolled or part of a previously reported MRI dataset [3] (clinicaltrials.gov identifier NCT03585153). The cohort of MRI scans used for analysis was composed of 185 scans from individuals with T1D and 185 scans from age-matched controls. These studies were approved by the Vanderbilt University Institutional Review Board and performed in accordance with the guidelines and regulations set forth by the Human Research Protections Program.

Image acquisition and processing

Pancreas MRI was performed on a Philips 3T Achieva scanner (Philips Healthcare, Best, The Netherlands). The image acquisition used for segmentation was a fat-suppressed T2-weighted fast-spin echo sequence with 1.5 × 1.5 × 5.0 mm spatial resolution spanning the pancreas. Each MRI was composed of thirty axial slices with a matrix size of 256 × 256. Imaging was performed in two breath holds with an image acquisition time of 25 s.

A radiologist (M.A.H.), blinded to the diabetes status of each study participant, manually labeled the pancreas on the MRI images to be used as ground truth for segmentation. The network used to automatically segment the pancreas was a 2D U-Net inspired by [10] and [16], where down-convolutions were max pooling layers with size 2 × 2, up-convolutions were transposed convolutions with size 2 × 2 and stride 2, and the final layer was set to one feature channel with a sigmoidal activation function. Each MRI slice was standardized between 0 and 1 to account for a wide range of pixel intensities between MRI scans. The loss function used during network training was the negative of a smoothed Dice coefficient. The network was trained with Adam optimization at a learning rate of 10⁻⁵ for 10 epochs and a batch size of one, in agreement with a previous study of pancreas segmentation [10]. Training was implemented in Keras with a TensorFlow backend. It took less than one hour to train the network with one GeForce GTX 1080 GPU.

Deep learning-based pancreas segmentation was initialized by providing the network with a bounding box encompassing the pancreas, as previously described [10, 13]. We then trained three different models: one using 160 scans of individuals with T1D, one with 160 scans of control individuals, and one with 160 scans equally comprised of scans from controls and patients with T1D (hereafter referred to as the mixed model). The models were trained with four-fold cross-validation, with training on 120 out of 160 scans and validation on the remaining 40 scans. For the mixed model, each subset was composed of 20 individuals with and 20 individuals without T1D. We then tested the three models on unseen data composed of 25 individuals with T1D. Segmentation performance was evaluated using volume measurements and the Dice coefficient, which ranges from 0 for no overlap between manual and deep learning-based segmentation to 1 for perfect alignment.

Statistical analysis

Statistical analysis was performed in GraphPad Prism, version 9.2 (San Diego, CA). Differences between independent groups were assessed using an unpaired t-test. Linear correlation was assessed by the Pearson correlation coefficient, with p values below 0.05 considered significant. Bland-Altman analysis was performed to assess the difference between deep learning-based and manual measurements of pancreas volume versus the mean volume measurement.

Results

Representative manual and deep learning-based segmentations of the pancreas are shown for an individual with T1D (Fig. 1A) and an individual with no pancreas pathology (Fig. 1B). The individual with T1D has a smaller pancreas with a thinner body. Manual pancreas segmentation (left column) displays good agreement with deep learning-based segmentation (middle column) on a representative MRI slice. The three-dimensional pancreas volumes constructed from manual segmentation (red) and deep learning-based segmentation (green) display good agreement (right column). Manual and deep learning-based pancreas segmentations displayed a high degree of overlap, with a mean Dice coefficient of 0.81 ± 0.04 and a minimum Dice coefficient of 0.66.

Fig. 1 Representative manual and deep learning-based pancreas segmentations from an individual (A) with T1D or (B) with no pancreas pathology. The representative individual with T1D was a 13-year-old male with 2-month diabetes duration (Dice coefficient = 0.82), while the representative control individual was a 15-year-old male with no known pancreas pathology (Dice coefficient = 0.84). Three-dimensional overlays of manual (red) and deep learning-based (green) segmentations are shown for both representative individuals, with the pancreas tail oriented to the reader's left for best visualization. Note the smaller and thinner pancreas in the individual with T1D.

We compared performance of our three models (trained using MRIs from controls, individuals with T1D, and a mixed model incorporating both individuals with and without T1D) on an unseen cohort composed of 25 MRIs from individuals with T1D. The mixed model had a higher Dice coefficient (0.792) and agreement with manually measured pancreas volume (R = 0.94) than models trained using scans from only control individuals (Dice coefficient = 0.782, R = 0.91). Manual and deep learning-based pancreas volume measurements derived from the mixed model showed good correlation across a testing cohort of individuals with and without T1D (Fig. 2, R = 0.94) and in subsets of individuals with T1D (R² = 0.91) or controls (R² = 0.93). Deep learning-based pancreas volume measurements in individuals with T1D were significantly lower than in controls (38 ± 12 ml vs. 54 ± 17 ml; p < 0.005).

Fig. 2 Manual and deep learning-based pancreas volume measurements display correlation across a cohort including individuals with and without T1D (R = 0.94) and in subsets of individuals with T1D (red; R² = 0.91) or controls (blue; R² = 0.93).

Fig. 3 Bland-Altman plot of the agreement between deep learning-based and manual pancreas volume measurements. The 95% limits of agreement are displayed with dotted lines. Deep learning-based measurement of pancreas volume tends to underestimate pancreas size compared with manual measurements (bias = 2.7 ml), particularly at larger pancreas sizes.
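The slice standardization, Dice coefficient, and smoothed-Dice loss described in the Methods can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the authors' code; in particular, the smoothing constant `smooth` is an assumed value, since the paper does not report it.

```python
import numpy as np

def standardize(slice_2d):
    """Min-max standardize one MRI slice to [0, 1], as described in Methods."""
    lo, hi = float(slice_2d.min()), float(slice_2d.max())
    return (slice_2d - lo) / (hi - lo + 1e-8)

def dice(pred, truth):
    """Dice coefficient between binary masks: 0 = no overlap, 1 = perfect."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def smoothed_dice_loss(prob, truth, smooth=1.0):
    """Negative smoothed Dice, usable as a loss on sigmoid outputs.
    `smooth` (assumed value) keeps the loss defined on slices with no pancreas."""
    inter = (prob * truth).sum()
    return -(2.0 * inter + smooth) / (prob.sum() + truth.sum() + smooth)
```

Framed this way, maximizing overlap corresponds to minimizing the loss, and the smoothed form avoids division by zero on slices that contain no pancreas voxels.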
Bland-Altman analysis was performed to further characterize the agreement between deep learning-based and manual pancreas volume measurements (Fig. 3). Deep learning-based measurement of pancreas volume tends to underestimate pancreas size compared with manual measurements (bias = 2.7 ml). This underestimation is more pronounced at larger pancreas sizes, as evidenced by a significantly non-zero slope in the Bland-Altman plot (p < 0.001). The 95% limits of agreement between deep learning-based and manual pancreas volume measurements are −8.2 to 13.5 ml.

Discussion

In this study we applied a neural network to measure pancreas volume in individuals with T1D and demonstrated agreement with manual segmentation by an expert reader. The deep learning-based segmentation calculated smaller pancreas volume in individuals with T1D, in agreement with previous studies using manual segmentation [1–3], but absent the subjectivity inherent to manual segmentation. In fact, the agreement between deep learning and manual pancreas segmentation in this study outperformed the agreement between two different readers using images derived from the same study [17].

This study is subject to a number of limitations. Bland-Altman analysis demonstrates that pancreas volume tends to be underestimated by deep learning-based segmentation, particularly for large pancreas sizes. Additionally, images were acquired on a single MRI scanner with standardized image acquisition parameters [20]. Deep learning approaches are known to be hampered by differences in MRI scanners and acquisition parameters [21]. Further work is needed to establish pancreas segmentation pipelines incorporating diabetes pathology across multisite data in order to generalize the tool established in this study. Deep learning algorithms for pancreas segmentation are undergoing rapid development and refinement [9–12]. A systematic investigation of segmentation accuracy using different algorithms applied to a common image dataset is needed to compare the performance of these techniques.
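The bias and 95% limits of agreement reported above follow the standard Bland-Altman formulas (mean paired difference ± 1.96 sample standard deviations), and the correlation values are Pearson coefficients of the paired volumes. A minimal sketch, using made-up volumes rather than study data:

```python
import numpy as np

def bland_altman(manual, deep):
    """Bias (mean of manual minus deep learning) and 95% limits of agreement."""
    diff = np.asarray(manual, float) - np.asarray(deep, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired pancreas volumes in ml (illustration only)
manual = [40.0, 55.0, 62.0, 38.0, 70.0]
deep = [38.5, 52.0, 60.0, 37.0, 65.0]

bias, (lo, hi) = bland_altman(manual, deep)
r_squared = np.corrcoef(manual, deep)[0, 1] ** 2  # squared Pearson correlation
```

A positive bias under this sign convention means the deep learning measurement runs smaller than the manual one, matching the direction of the 2.7 ml bias reported here.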
This finding highlights the subjectivity in manual pancreas segmentations, which are in turn used to train deep learning models. The use of images segmented by a single reader is a potential limitation of the study, as our model does not capture the variance induced by multiple readers. However, for large studies in which use of a single reader is not feasible but consistent pancreas segmentation is desired, deep learning-based measurement of pancreas volume can potentially increase reproducibility compared with the use of multiple readers. For instance, longitudinal monitoring of pancreas size in the same individual, which has proven useful in tracking the natural history of T1D [3] and assessment of therapeutic response [4], would benefit from deep learning across assessments as compared with measurements made by different readers at different time points. Pancreas segmentation was improved by training with images from both individuals with and without T1D, as this diverse training set can putatively capture the range of pancreas volume and morphology present in normal and pathological states, as found in brain segmentation [18].

As a small, flexible abdominal organ with a high degree of variation among individuals in both shape and volume, the pancreas is particularly challenging to segment compared with proximal organs such as the kidney and liver. This challenge was illustrated in previous automatic segmentation of abdominal organs, in which liver, spleen, and kidney segmentation outperformed that of the pancreas [8]. In this study we demonstrate Dice coefficients similar to segmentation performed on a dataset devoid of pancreas pathology [10, 13]. Importantly, when we include both individuals with and without T1D to train the model, we observe improved segmentation accuracy, whereas previous pancreas segmentation studies did not include individuals with diabetes. The altered pancreas morphology found in T1D leads to more variation in image features, potentially complicating deep learning-based segmentation. The altered imaging features found in the pancreas in T1D may classify the pancreas of individuals with diabetes, as has been demonstrated in pancreatic cancer [19]. This may prove useful for identifying individuals at risk for T1D or for predicting therapeutic response.

Conclusions

Deep learning-based segmentation of the pancreas can reduce the time and associated cost needed for analysis of pancreas volume and mitigates inter-reader variability. The pancreas segmentation model developed in this study can be applied to large abdominal imaging sets, such as those being acquired as part of the UK Biobank [22], to determine factors which influence pancreas volume and lead to large interindividual variation in pancreas volume.

Abbreviations
T1D: Type 1 diabetes; MRI: Magnetic resonance imaging; CT: Computed tomography.

Acknowledgements
We thank the study participants and their families for their dedication to diabetes research.

Authors' contributions
RR, RCC, and JV designed the experiments, performed the research, analyzed the data, and wrote the manuscript. JLW, DJM, and ACP performed the research and recruited participants. MAH read and outlined the MRI images. All authors critically revised the article and approved the final version. JV accepts full responsibility for the work and/or the conduct of the study, had access to the data, and controlled the decision to publish. All authors read and approved the final manuscript.

Funding
We gratefully acknowledge research support from the NIDDK (R03DK129979), the Thomas J. Beatson, Jr. Foundation (2021-003), the JDRF (3-SRA-2015-102-M-B and 3-SRA-2019-759-M-B), and the Cain Foundation-Seton-Dell Medical School Endowment for Collaborative Research. This project is funded by grant U24DK097771 via the NIDDK Information Network's (dkNET) New Investigator Pilot Program in Bioinformatics. This work utilized REDCap, which is supported by UL1 TR000445 from NCATS/NIH. This work was supported by the Vanderbilt Diabetes Research & Training Center (DK-020593). The study sponsors were not involved in the design of the study; the collection, analysis, and interpretation of data; or the writing of the report; and did not impose any restrictions regarding publication of the report.

Availability of data and materials
The imaging data used and/or analyzed in this study are available from the corresponding author upon reasonable request.

Declarations

Ethics approval and consent to participate
These studies were approved by the Vanderbilt University Institutional Review Board and performed in accordance with the guidelines and regulations set forth by the Human Research Protections Program.

Consent for publication
Not applicable.

Competing interests
The authors declare that they have no competing interests.

Author details
Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, 1701 Trinity St., Stop C0200, Austin, TX 78712, USA. Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA. Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA. Division of Diabetes, Endocrinology, and Metabolism, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA. Department of Pathology, Immunology, and Microbiology, Vanderbilt University, Nashville, TN, USA. Department of Molecular Physiology and Biophysics, Vanderbilt University, Nashville, TN, USA. VA Tennessee Valley Healthcare System, Nashville, TN, USA. Livestrong Cancer Institutes, University of Texas at Austin, Austin, TX, USA. Department of Oncology, University of Texas at Austin, Austin, TX, USA. Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, Austin, TX, USA.

Received: 1 September 2021. Accepted: 10 December 2021.

References
1. Garcia TS, Rech TH, Leitao CB. Pancreatic size and fat content in diabetes: a systematic review and meta-analysis of imaging studies. PLoS One. 2017;12(7):e0180911.
2. Campbell-Thompson ML, Filipp SL, Grajo JR, Nambam B, Beegle R, Middlebrooks EH, Gurka MJ, Atkinson MA, Schatz DA, Haller MJ. Relative pancreas volume is reduced in first-degree relatives of patients with type 1 diabetes. Diabetes Care. 2019;42(2):281–7.
3. Virostko J, Williams J, Hilmes M, Bowman C, Wright JJ, Du L, Kang H, Russell WE, Powers AC, Moore DJ. Pancreas volume declines during the first year after diagnosis of type 1 diabetes and exhibits altered diffusion at disease onset. Diabetes Care. 2019;42(2):248–57.
4. Al-Mrabeh A, Hollingsworth KG, Shaw JAM, McConnachie A, Sattar N, Lean MEJ, Taylor R. 2-year remission of type 2 diabetes and pancreas morphology: a post-hoc analysis of the DiRECT open-label, cluster-randomised trial. Lancet Diabetes Endocrinol. 2020;8(12):939–48.
5. Anwar SM, Majid M, Qayyum A, Awais M, Alnowami M, Khan MK. Medical image analysis using convolutional neural networks: a review. J Med Syst. 2018;42(11):1–13.
6. Krishnamurthy S, Srinivasan K, Qaisar SM, Vincent PMDR, Chang CY. Evaluating deep neural network architectures with transfer learning for pneumonitis diagnosis. Comput Math Methods Med. 2021:1–12.
7. Cai J, Lu L, Zhang Z, Xing F, Yang L, Yin Q. Pancreas segmentation in MRI using graph-based decision fusion on convolutional neural networks. Med Image Comput Comput Assist Interv. 2016;9901:442–50.
8. Bobo MF, Bao S, Huo Y, Yao Y, Virostko J, Plassard AJ, Lyu I, Assad A, Abramson RG, Hilmes MA, et al. Fully convolutional neural networks improve abdominal organ segmentation. Proc SPIE Int Soc Opt Eng. 2018;10574:105742V.
9. Cai J, Lu L, Zhang Z, Xing F, Yang L, Yin Q. Pancreas segmentation in MRI using graph-based decision fusion on convolutional neural networks. Med Image Comput Comput Assist Interv. 2016;9901:442–50.
10. Liu Y, Liu S. U-net for pancreas segmentation in abdominal CT scans. In: IEEE International Symposium on Biomedical Imaging; 2018.
11. Panda A, Korfiatis P, Suman G, Garg SK, Polley EC, Singh DP, Chari ST, Goenka AH. Two-stage deep learning model for fully automated pancreas segmentation on computed tomography: comparison with intra-reader and inter-reader reliability at full and reduced radiation dose on an external dataset. Med Phys. 2021;48(5):2468–81.
12. Zhang Y, Wu J, Liu Y, Chen Y, Chen W, Wu EX, Li C, Tang X. A deep learning framework for pancreas segmentation with multi-atlas registration and 3D level-set. Med Image Anal. 2021;68:101884.
13. Zhou Y, Xie L, Shen W, Wang Y, Fishman EK, Yuille AL. A fixed-point model for pancreas segmentation in abdominal CT scans. In: Proceedings of MICCAI; 2017.
14. Kumar P, Nagar P, Arora C, Gupta A. U-segnet: fully convolutional neural network based automated brain tissue segmentation tool. IEEE Image Proc. 2018:3503–7.
15. Vaidyanathan A, van der Lubbe MFJA, Leijenaar RTH, van Hoof M, Zerka F, Miraglio B, Primakov S, Postma AA, Bruintjes TD, Bilderbeek MAL, Sebastiaan H, Dammeijer PFM, van Rompaey V, Woodruff HC, Vos W, Walsh S, van de Berg R, Lambin P. Deep learning for the fully automated segmentation of the inner ear on MRI. Sci Rep. 2021;11(1):2885.
16. Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Proceedings of MICCAI; 2015.
17. Williams JM, Hilmes MA, Archer B, Dulaney A, Du L, Kang H, Russell WE, Powers AC, Moore DJ, Virostko J. Repeatability and reproducibility of pancreas volume measurements using MRI. Sci Rep. 2020;10(1):4767.
18. Chupin M, Gerardin E, Cuingnet R, Boutet C, Lemieux L, Lehericy S, Benali H, Garnero L, Colliot O; Alzheimer's Disease Neuroimaging Initiative. Fully automatic hippocampus segmentation and classification in Alzheimer's disease and mild cognitive impairment applied on data from ADNI. Hippocampus. 2009;19(6):579–87.
19. Liu KL, Wu T, Chen PT, Tsai YM, Roth H, Wu MS, Liao WC, Wang W. Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation. Lancet Digit Health. 2020;2(6):e303–13.
20. Virostko J, Craddock RC, Williams JM, Triolo TM, Hilmes MA, Kang H, Du L, Wright JJ, Kinney M, Maki JH, et al. Development of a standardized MRI protocol for pancreas assessment in humans. PLoS One. 2021;16(8):e0256029.
21. Ferrari E, Bosco P, Spera G, Fantacci ME, Retico A. Common pitfalls in machine learning applications to multi-center data: tests on the ABIDE I and ABIDE II collections. In: Joint Annual Meeting ISMRM-ESMRMB; 2018.
22. Liu Y, Basty N, Whitcher B, Bell JD, Sorokin EP, van Bruggen N, Thomas EL, Cule M. Genetic architecture of 11 organ traits derived from abdominal MRI using deep learning. Elife. 2021;10:e65554.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This under - Bland-Altman analysis was performed to further estimation is more pronounced at larger pancreas sizes, characterize the agreement between deep learning- as evidenced by a significantly non-zero slope in the based and manual pancreas volume measurements Bland-Altman plot (p < 0.001). The 95% limits of agree - (Fig.  3). Deep learning measurement of pancreas ment between deep learning-based and manual pan- creas volume measurements are − 8.2 to 13.5 ml. Deep Learning Pancreas Volume [ml] Manual - Deep Learning Volume [ml] Roger et al. BMC Medical Imaging (2022) 22:5 Page 4 of 5 Discussion individuals at risk for T1D or for predicting therapeutic In this study we applied a neural network to measure response. pancreas volume in individuals with T1D and dem- This study is subject to a number of limitations. Bland- onstrated agreement with manual segmentation by an Altman analysis demonstrates that pancreas volume expert reader. The deep learning-based segmentation tends to be underestimated by deep learning-based seg- calculated smaller pancreas volume in individuals with mentation, particularly for large pancreas sizes. Addi- T1D, in agreement with previous studies using manual tionally, images were acquired on a single MRI scanner segmentation [1–3], but absent the subjectivity inherent with standardized image acquisition parameters [20]. to manual segmentation. In fact, the agreement between Deep learning approaches are known to be hampered deep learning and manual pancreas segmentation in this by difference in MRI scanners and acquisition param - study outperformed the agreement between two different eters [21]. Further work is needed to establish pancreas readers performed using images derived from the same segmentation pipelines incorporating diabetes pathology study [17]. 
This finding highlights the subjectivity in across multisite data in order to generalize the tool estab- manual pancreas segmentations which are in turn used to lished in this study. Deep learning algorithms for pan- train deep learning models. The use of images segmented creas segmentation are undergoing rapid development by a single reader is a potential limitation of the study, and refinement [9–12]. A systematic investigation of seg - as our model does not capture the variance induced by mentation accuracy using different algorithms applied to multiple readers. However, for large studies in which use a common image dataset is needed to compare the per- of a single reader is not feasible but consistent pancreas formance of these techniques. segmentation is desired, deep learning-based measure- ment of pancreas volume can potentially increase repro- ducibility compared with the use of multiple readers. For Conclusions instance, longitudinal monitoring of pancreas size in the Deep learning-based segmentation of the pancreas can same individual, which has proven useful in tracking the reduce the time and associated cost needed for analysis natural history of T1D [3] and assessment of therapeu- of pancreas volume and mitigates inter-reader variabil- tic response [4], would benefit from deep learning across ity. The pancreas segmentation model developed in this assessments as compared with measurements made by study can be applied to large abdominal imaging sets, different readers at different time points. Pancreas seg - such as those being acquired as part of the UK Biobank mentation was improved by training with images from [22], to determine factors which influence pancreas vol - both individuals with and without T1D, as this diverse ume and lead to large interindividual variation in pan- training set can putatively capture the range of pancreas creas volume. volume and morphology present in normal and patho- logical states, as found in brain segmentation [18]. 
Abbreviations As a small, flexible abdominal organ with a high degree T1D: Type 1 diabetes; MRI: Magnetic resonance imaging; CT: Compute of variation among individuals in both shape and vol- tomography. ume, the pancreas is particularly challenging to segment Acknowledgements compared with proximal organs such as kidney and liver. We thank the study participants and their families for their dedication to This challenge was illustrated in previous automatic seg - diabetes research. mentation of abdominal organs in which liver, spleen, Authors’ contribution and kidney segmentation outperformed that of the pan- RR, RCC, and JV designed the experiments, performed the research, analyzed creas [8]. In this study we demonstrate Dice coefficients the data, and wrote the manuscript. JLW, DJM, and ACP performed the research and recruited participants. MAH read and outlined the MRI images. similar to segmentation performed on a dataset devoid All authors critically revised the article and approved the final version. JV of pancreas pathology [10, 13]. Importantly, when we accepts full responsibility for the work and/or the conduct of the study, had include both individuals with and without T1D to train access to the data, and controlled the decision to publish. All authors read and approved the final manuscript. the model we observe improved segmentation accuracy, whereas previous pancreas segmentation studies did not Funding include individuals with diabetes. The altered pancreas We gratefully acknowledge research support from the NIDDK (R03DK129979), the Thomas J. Beatson, Jr. Foundation (2021-003), the JDRF (3-SRA-2015- morphology found in T1D leads to more variation in 102-M-B and 3-SRA-2019-759-M-B), and the Cain Foundation-Seton-Dell image features, potentially complicating deep learning- Medical School Endowment for Collaborative Research. This project is funded based segmentation. 
The altered imaging features found by grant U24DK097771 via the NIDDK Information Network’s (dkNET ) New Investigator Pilot Program in Bioinformatics. This work utilized REDCap which in the pancreas in T1D may classify the pancreas of indi- is supported by UL1 TR000445 from NCATS/NIH. This work was supported by viduals with diabetes, as has been demonstrated in pan- the Vanderbilt Diabetes Research & Training Center (DK-020593). The study creatic cancer [19]. This may prove useful for identifying sponsors were not involved in the design of the study, the collection, analysis, R oger et al. BMC Medical Imaging (2022) 22:5 Page 5 of 5 and interpretation of data, writing the report, and did not impose any restric- 9. Cai J, Lu L, Zhang Z, Xing F, Yang L, Yin Q. Pancreas segmentation in MRI tions regarding the publication of the report. using graph-based decision fusion on convolutional neural networks. Med Image Comput Comput Assist Interv. 2016;9901:442–50. Availability of data and materials 10. Liu Y Liu S. U-net for pancreas segmentation in abdominal CT scans. IEEE The imaging data used and/or analyzed in this study are available from the international symposium on biomedical imaging; 2018. corresponding author upon reasonable request. 11. Panda A, Korfiatis P, Suman G, Garg SK, Polley EC, Singh DP, Chari ST, Goenka AH. Two-stage deep learning model for fully automated pancreas segmentation on computed tomography: comparison with Declarations intra-reader and inter-reader reliability at full and reduced radiation dose on an external dataset. Med Phys. 2021;48(5):2468–81. Ethics approval and consent to participate 12. Zhang Y, Wu J, Liu Y, Chen Y, Chen W, Wu EX, Li C, Tang X. A deep learning These studies were approved by the Vanderbilt University Institutional Review framework for pancreas segmentation with multi-atlas registration and Board and performed in accordance with the guidelines and regulations set 3D level-set. Med Image Anal. 2021;68:101884. 
forth by the Human Research Protections Program. 13. Zhou Y XL, Shen W, Wang Y, Fishman EK, Yuille AL. A fixed-point model for pancreas segmentation in abdominal CT sScans. In: Proceedings of Consent for publication MICCAI; 2017. Not applicable. 14. Kumar P, Nagar P, Arora C, Gupta A. U-segnet: fully convolutional neural network based automated brain tissue segmentation tool. IEEE Image Competing interests Proc. 2018;3503–3507. The authors declare that they have no competing interests. 15. Vaidyanathan A, van der Lubbe MFJA, Leijenaar RTH, van Hoof M, Zerka F, Miraglio B, Primakov S, Postma AA, Bruintjes TD, Bilderbeek MAL, Author details Sebastiaan H, Dammeijer PFM, van Rompaey V, Woodruff HC, Vos W, Department of Diagnostic Medicine, Dell Medical School, University of Texas Walsh S, van de Berg R, Lambin P. Deep learning for the fully automated at Austin, 1701 Trinity St., Stop C0200, Austin, TX 78712, USA. Depar tment segmentation of the inner ear on MRI. Sci Rep. 2021;11(1):2885. of Radiology and Radiological Sciences, Vanderbilt University Medical Center, 16. Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for Nashville, TN, USA. Department of Pediatrics, Vanderbilt University Medical biomedical image segmentation. In: Proceedings of MICCAI. 2015. Center, Nashville, TN, USA. Division of Diabetes, Endocrinology, and Metabo- 17. Williams JM, Hilmes MA, Archer B, Dulaney A, Du L, Kang H, Russell WE, lism, Department of Medicine, Vanderbilt University Medical Center, Nashville, Powers AC, Moore DJ, Virostko J. Repeatability and reproducibility of TN, USA. Department of Pathology, Immunology, and Microbiology, pancreas volume measurements using MRI. Sci Rep. 2020;10(1):4767. Vanderbilt University, Nashville, TN, USA. Department of Molecular Physiology 18. Chupin M, Gerardin E, Cuingnet R, Boutet C, Lemieux L, Lehericy S, and Biophysics, Vanderbilt University, Nashville, TN, USA. V A T ennessee Valley Benali H, Garnero L, Colliot O. 
Alzheimer’s disease neuroimaging I: fully Healthcare System, Nashville, TN, USA. Livestrong Cancer Institutes, University automatic hippocampus segmentation and classification in Alzheimer’s of Texas at Austin, Austin, TX, USA. Department of Oncology, University disease and mild cognitive impairment applied on data from ADNI. Hip- of Texas at Austin, Austin, TX, USA. Oden Institute for Computational Engi- pocampus. 2009;19(6):579–87. neering and Sciences, University of Texas at Austin, Austin, TX, USA. 19. Liu KL, Wu T, Chen PT, Tsai YM, Roth H, Wu MS, Liao WC, Wang W. Deep learning to distinguish pancreatic cancer tissue from non-cancerous pan- Received: 1 September 2021 Accepted: 10 December 2021 creatic tissue: a retrospective study with cross-racial external validation. Lancet Digit Health. 2020;2(6):e303–13. 20. Virostko J, Craddock RC, Williams JM, Triolo TM, Hilmes MA, Kang H, Du L, Wright JJ, Kinney M, Maki JH, et al. Development of a standard- ized MRI protocol for pancreas assessment in humans. PLoS One. References 2021;16(8):e0256029. 1. Garcia TS, Rech TH, Leitao CB. Pancreatic size and fat content in diabetes: 21. Ferrari E, Bosco P, Spera G, Fantacci ME, Retico A. Common pitfalls in a systematic review and meta-analysis of imaging studies. PLoS One. machine learning applications to multi-center data: tests on the ABIDE I 2017;12(7):e0180911. and ABIDE II collections. In: Joint annual meeting ISMRM-ESMRMB; 2018. 2. Campbell-Thompson ML, Filipp SL, Grajo JR, Nambam B, Beegle R, 22. Liu Y, Basty N, Whitcher B, Bell JD, Sorokin EP, van Bruggen N, Thomas EL, Middlebrooks EH, Gurka MJ, Atkinson MA, Schatz DA, Haller MJ. Relative Cule M. Genetic architecture of 11 organ traits derived from abdominal pancreas volume is reduced in first-degree relatives of patients with type MRI using deep learning. Elife. 2021;10:e65554. 1 diabetes. Diabetes Care. 2019;42(2):281–7. 3. 
Virostko J, Williams J, Hilmes M, Bowman C, Wright JJ, Du L, Kang H, Rus- Publisher’s Note sell WE, Powers AC, Moore DJ. Pancreas volume declines during the first Springer Nature remains neutral with regard to jurisdictional claims in pub- year after diagnosis of type 1 diabetes and exhibits altered diffusion at lished maps and institutional affiliations. disease onset. Diabetes Care. 2019;42(2):248–57. 4. Al-Mrabeh A, Hollingsworth KG, Shaw JAM, McConnachie A, Sattar N, Lean MEJ, Taylor R. 2-year remission of type 2 diabetes and pancreas morphology: a post-hoc analysis of the DiRECT open-label, cluster- randomised trial. Lancet Diabetes Endocrinol. 2020;8(12):939–48. Re Read ady y to to submit y submit your our re researc search h ? Choose BMC and benefit fr ? Choose BMC and benefit from om: : 5. Anwar SM, Majid M, Qayyum A, Awais M, Alnowami M, Khan MK (2018) Medical image analysis using convolutional neural networks: a review. J fast, convenient online submission Med Syst. 42(11):1–13. 6. Krishnamurthy S, Srinivasan K, Qaisar SM, Vincent PMDR, Chang CY. thorough peer review by experienced researchers in your field Evaluating deep neural network architectures with transfer learning for rapid publication on acceptance pneumonitis diagnosis. Comput Math Methods in Med. 2021;(4):1–12. support for research data, including large and complex data types 7. Cai J, Lu L, Zhang Z, Xing F, Yang L, Yin Q. Pancreas Segmentation in MRI Using Graph-Based Decision Fusion on Convolutional Neural Networks. • gold Open Access which fosters wider collaboration and increased citations Med Image Comput Comput Assist Interv. 2016;9901:442–450. maximum visibility for your research: over 100M website views per year 8. Bobo MF, Bao S, Huo Y, Yao Y, Virostko J, Plassard AJ, Lyu I, Assad A, Abramson RG, Hilmes MA et al. Fully convolutional neural networks At BMC, research is always in progress. improve abdominal organ segmentation. Proc SPIE Int Soc Opt Eng. 2018;10574:105742V. 
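The Methods state that the loss function was the negative of a smoothed Dice coefficient. A minimal pure-Python sketch of that metric follows; the authors implemented theirs in Keras, and the smoothing constant here (1.0) is an assumed value, since the paper does not report it:

```python
def smoothed_dice(pred, target, smooth=1.0):
    """Smoothed Dice coefficient between a flattened predicted probability
    map and a binary ground-truth mask. The smoothing term keeps the ratio
    defined when both masks are empty; its exact value is an assumption."""
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + smooth) / (sum(pred) + sum(target) + smooth)

def dice_loss(pred, target):
    # The network was trained to minimize the negative of this coefficient.
    return -smoothed_dice(pred, target)
```

Identical masks give a coefficient of 1 (loss of -1), while disjoint masks give a value near 0, so minimizing the loss drives the predicted mask toward the manual annotation.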
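The volume measurements reported in the Results follow directly from the acquisition geometry given in the Methods (1.5 × 1.5 mm in-plane resolution, 5.0 mm slices): each segmented voxel contributes 11.25 mm³. A sketch of that conversion, assuming a flattened binary mask:

```python
# Voxel dimensions from the paper's acquisition: 1.5 x 1.5 mm in-plane,
# 5.0 mm slice thickness, i.e. 11.25 mm^3 per voxel.
VOXEL_MM3 = 1.5 * 1.5 * 5.0

def pancreas_volume_ml(mask):
    """Pancreas volume in ml from a flattened binary segmentation mask
    (1000 mm^3 = 1 ml)."""
    return sum(mask) * VOXEL_MM3 / 1000.0
```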
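The Bland-Altman quantities reported in the Results (bias = 2.7 ml, limits of agreement −8.2 to 13.5 ml) are the standard bias ± 1.96 SD of the paired differences; the paper computed them in GraphPad Prism. A sketch of the standard formulas:

```python
def bland_altman(manual_ml, deep_ml):
    """Bias (mean of manual minus deep learning differences) and 95% limits
    of agreement (bias +/- 1.96 * sample SD). Standard Bland-Altman
    formulas, not the authors' Prism output."""
    diffs = [m - d for m, d in zip(manual_ml, deep_ml)]
    n = len(diffs)
    bias = sum(diffs) / n
    var = sum((x - bias) ** 2 for x in diffs) / (n - 1)  # sample variance
    sd = var ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A significantly non-zero slope of the differences against the mean volumes, as the paper reports, indicates that the bias grows with pancreas size.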
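The four-fold cross-validation described in the Methods (120 training / 40 validation scans per fold, with each mixed-model fold balanced at 20 T1D and 20 control scans) can be sketched as below. This is illustrative only; the authors' actual fold assignment is not published, and the ID names and seed are hypothetical:

```python
import random

def mixed_model_folds(t1d_ids, control_ids, k=4, seed=0):
    """Partition 80 T1D and 80 control scan IDs into k validation folds of
    40 scans (20 T1D + 20 control each); for each fold, the remaining 120
    scans form that fold's training set."""
    rng = random.Random(seed)
    t1d, ctrl = list(t1d_ids), list(control_ids)
    rng.shuffle(t1d)
    rng.shuffle(ctrl)
    # Strided slicing yields k disjoint, class-balanced validation folds.
    return [t1d[i::k] + ctrl[i::k] for i in range(k)]
```

Training then iterates over the folds, holding each out in turn for validation while fitting on the other three.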
Publisher: Springer Journals
Copyright: © The Author(s) 2021
eISSN: 1471-2342
DOI: 10.1186/s12880-021-00729-7

forth by the Human Research Protections Program. 13. Zhou Y XL, Shen W, Wang Y, Fishman EK, Yuille AL. A fixed-point model for pancreas segmentation in abdominal CT sScans. In: Proceedings of Consent for publication MICCAI; 2017. Not applicable. 14. Kumar P, Nagar P, Arora C, Gupta A. U-segnet: fully convolutional neural network based automated brain tissue segmentation tool. IEEE Image Competing interests Proc. 2018;3503–3507. The authors declare that they have no competing interests. 15. Vaidyanathan A, van der Lubbe MFJA, Leijenaar RTH, van Hoof M, Zerka F, Miraglio B, Primakov S, Postma AA, Bruintjes TD, Bilderbeek MAL, Author details Sebastiaan H, Dammeijer PFM, van Rompaey V, Woodruff HC, Vos W, Department of Diagnostic Medicine, Dell Medical School, University of Texas Walsh S, van de Berg R, Lambin P. Deep learning for the fully automated at Austin, 1701 Trinity St., Stop C0200, Austin, TX 78712, USA. Depar tment segmentation of the inner ear on MRI. Sci Rep. 2021;11(1):2885. of Radiology and Radiological Sciences, Vanderbilt University Medical Center, 16. Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for Nashville, TN, USA. Department of Pediatrics, Vanderbilt University Medical biomedical image segmentation. In: Proceedings of MICCAI. 2015. Center, Nashville, TN, USA. Division of Diabetes, Endocrinology, and Metabo- 17. Williams JM, Hilmes MA, Archer B, Dulaney A, Du L, Kang H, Russell WE, lism, Department of Medicine, Vanderbilt University Medical Center, Nashville, Powers AC, Moore DJ, Virostko J. Repeatability and reproducibility of TN, USA. Department of Pathology, Immunology, and Microbiology, pancreas volume measurements using MRI. Sci Rep. 2020;10(1):4767. Vanderbilt University, Nashville, TN, USA. Department of Molecular Physiology 18. Chupin M, Gerardin E, Cuingnet R, Boutet C, Lemieux L, Lehericy S, and Biophysics, Vanderbilt University, Nashville, TN, USA. V A T ennessee Valley Benali H, Garnero L, Colliot O. 
Alzheimer’s disease neuroimaging I: fully Healthcare System, Nashville, TN, USA. Livestrong Cancer Institutes, University automatic hippocampus segmentation and classification in Alzheimer’s of Texas at Austin, Austin, TX, USA. Department of Oncology, University disease and mild cognitive impairment applied on data from ADNI. Hip- of Texas at Austin, Austin, TX, USA. Oden Institute for Computational Engi- pocampus. 2009;19(6):579–87. neering and Sciences, University of Texas at Austin, Austin, TX, USA. 19. Liu KL, Wu T, Chen PT, Tsai YM, Roth H, Wu MS, Liao WC, Wang W. Deep learning to distinguish pancreatic cancer tissue from non-cancerous pan- Received: 1 September 2021 Accepted: 10 December 2021 creatic tissue: a retrospective study with cross-racial external validation. Lancet Digit Health. 2020;2(6):e303–13. 20. Virostko J, Craddock RC, Williams JM, Triolo TM, Hilmes MA, Kang H, Du L, Wright JJ, Kinney M, Maki JH, et al. Development of a standard- ized MRI protocol for pancreas assessment in humans. PLoS One. References 2021;16(8):e0256029. 1. Garcia TS, Rech TH, Leitao CB. Pancreatic size and fat content in diabetes: 21. Ferrari E, Bosco P, Spera G, Fantacci ME, Retico A. Common pitfalls in a systematic review and meta-analysis of imaging studies. PLoS One. machine learning applications to multi-center data: tests on the ABIDE I 2017;12(7):e0180911. and ABIDE II collections. In: Joint annual meeting ISMRM-ESMRMB; 2018. 2. Campbell-Thompson ML, Filipp SL, Grajo JR, Nambam B, Beegle R, 22. Liu Y, Basty N, Whitcher B, Bell JD, Sorokin EP, van Bruggen N, Thomas EL, Middlebrooks EH, Gurka MJ, Atkinson MA, Schatz DA, Haller MJ. Relative Cule M. Genetic architecture of 11 organ traits derived from abdominal pancreas volume is reduced in first-degree relatives of patients with type MRI using deep learning. Elife. 2021;10:e65554. 1 diabetes. Diabetes Care. 2019;42(2):281–7. 3. 
Virostko J, Williams J, Hilmes M, Bowman C, Wright JJ, Du L, Kang H, Rus- Publisher’s Note sell WE, Powers AC, Moore DJ. Pancreas volume declines during the first Springer Nature remains neutral with regard to jurisdictional claims in pub- year after diagnosis of type 1 diabetes and exhibits altered diffusion at lished maps and institutional affiliations. disease onset. Diabetes Care. 2019;42(2):248–57. 4. Al-Mrabeh A, Hollingsworth KG, Shaw JAM, McConnachie A, Sattar N, Lean MEJ, Taylor R. 2-year remission of type 2 diabetes and pancreas morphology: a post-hoc analysis of the DiRECT open-label, cluster- randomised trial. Lancet Diabetes Endocrinol. 2020;8(12):939–48. Re Read ady y to to submit y submit your our re researc search h ? Choose BMC and benefit fr ? Choose BMC and benefit from om: : 5. Anwar SM, Majid M, Qayyum A, Awais M, Alnowami M, Khan MK (2018) Medical image analysis using convolutional neural networks: a review. J fast, convenient online submission Med Syst. 42(11):1–13. 6. Krishnamurthy S, Srinivasan K, Qaisar SM, Vincent PMDR, Chang CY. thorough peer review by experienced researchers in your field Evaluating deep neural network architectures with transfer learning for rapid publication on acceptance pneumonitis diagnosis. Comput Math Methods in Med. 2021;(4):1–12. support for research data, including large and complex data types 7. Cai J, Lu L, Zhang Z, Xing F, Yang L, Yin Q. Pancreas Segmentation in MRI Using Graph-Based Decision Fusion on Convolutional Neural Networks. • gold Open Access which fosters wider collaboration and increased citations Med Image Comput Comput Assist Interv. 2016;9901:442–450. maximum visibility for your research: over 100M website views per year 8. Bobo MF, Bao S, Huo Y, Yao Y, Virostko J, Plassard AJ, Lyu I, Assad A, Abramson RG, Hilmes MA et al. Fully convolutional neural networks At BMC, research is always in progress. improve abdominal organ segmentation. Proc SPIE Int Soc Opt Eng. 2018;10574:105742V. 
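The two quantitative measures discussed in this study, segmentation overlap (Dice coefficient) and pancreas volume derived from a labeled image volume, can be sketched as below. This is an illustrative sketch only: the array shapes and the voxel spacing default are assumptions for demonstration, not values from the study's MRI protocol.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * float(intersection) / denom if denom else 1.0

def volume_ml(mask: np.ndarray, voxel_dims_mm=(1.0, 1.0, 2.5)) -> float:
    """Organ volume in ml from a binary mask and voxel spacing in mm.
    The spacing default is a hypothetical example; 1 ml = 1000 mm^3."""
    voxel_mm3 = float(np.prod(voxel_dims_mm))
    return float(mask.astype(bool).sum()) * voxel_mm3 / 1000.0
```

For example, two half-overlapping toy masks give a Dice coefficient of 0.5, and the volume is simply the foreground voxel count scaled by the voxel size.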
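The Bland-Altman agreement analysis reported in the Results (bias with 95% limits of agreement computed as mean difference ± 1.96 SD) can be sketched as follows. The paired volume values in the usage note are made up for illustration and are not study data.

```python
import numpy as np

def bland_altman(manual, automated):
    """Return (bias, lower_loa, upper_loa) for paired measurements.

    Differences are taken as manual - automated; the 95% limits of
    agreement are bias +/- 1.96 * sample SD of the differences.
    """
    diffs = np.asarray(manual, dtype=float) - np.asarray(automated, dtype=float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)  # sample standard deviation of paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

For example, `bland_altman([50, 60, 70], [48, 59, 66])` yields a positive bias, consistent with an automated method that underestimates the manual measurement; a regression of the differences on the means would additionally reveal size-dependent bias such as that reported here.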

Journal: BMC Medical Imaging (Springer Journals)

Published: Jan 5, 2022

