Precise segmentation of the lung parenchyma is essential for effective analysis of the lung. Because of its strong contrast and large area relative to other chest tissues, lung tissue is comparatively easy to segment, yet careful attention to segmentation detail is still required. To improve the quality and speed of lung parenchyma segmentation from computed tomography (CT) and computed tomography angiography (CTA) images, the 4th International Symposium on Image Computing and Digital Medicine (ISICDM 2020) yielded interesting and valuable research ideas and approaches. For the lung parenchyma segmentation task, 9 of the 12 participating teams used the U-Net network or modified forms of it, while other methods used to improve segmentation accuracy included attention mechanisms and multi-scale feature fusion. Among them, U-Net achieved the best results, with a final Dice coefficient of 0.991 for CT segmentation and 0.984 for CTA segmentation. Attention U-Net and the nnU-Net network also performed well. This paper evaluates the methods chosen by the 12 teams from different research groups and analyzes their segmentation results as a study reference for researchers in this field.
Journal of X-Ray Science and Technology – IOS Press
Published: Oct 29, 2021
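The competing methods above are ranked by the Dice coefficient, which measures the overlap between a predicted segmentation mask and the ground-truth mask. As a minimal sketch of how that score is computed, the NumPy snippet below implements Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; the function name, array shapes, and epsilon smoothing term are illustrative assumptions, not code from any of the reviewed submissions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    Dice = 2 * |A intersect B| / (|A| + |B|).
    A score of 1.0 means perfect overlap; 0.0 means none."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping 2D masks (hypothetical data).
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 foreground pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 foreground pixels
print(f"Dice = {dice_coefficient(a, b):.3f}")  # 2*4 / (4 + 6) = 0.800
```

On this scale, the reported CT result of 0.991 means the predicted and reference lung masks agree on over 99% of their combined foreground.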