A blind medical image denoising method with noise generation network



Publisher
IOS Press
Copyright
Copyright © 2022 IOS Press. All rights reserved
ISSN
0895-3996
eISSN
1095-9114
DOI
10.3233/xst-211098

Abstract

BACKGROUND: During medical image acquisition, unknown mixed noise degrades image quality, yet most existing denoising methods assume a known noise distribution.

OBJECTIVE: To remove unknown real noise from low-dose CT (LDCT) images, this study proposes a two-step deep learning framework called the Noisy Generation-Removal Network (NGRNet).

METHODS: First, the outputs of L0 Gradient Minimization are used as labels for a dental CT image dataset, forming pseudo-image pairs with the real dental CT images; these pairs are used to train a noise generation network to estimate the real noise distribution. The estimated real noise is then migrated onto noise-free lung CT images from the LIDC/IDRI database to construct a new, almost-real noisy image dataset. This migration is feasible because dental and lung images are both CT images. A denoising network trained on this dataset removes real noise from dental LDCT images and can be extended to any low-dose CT images.

RESULTS: To demonstrate the effectiveness of NGRNet, experiments were conducted on lung CT images with synthetic noise and dental CT images with real noise. On the synthetic-noise datasets, NGRNet surpasses existing denoising methods in visual quality and exceeds them by more than 0.13 dB in peak signal-to-noise ratio (PSNR). On the real-noise datasets, the proposed method achieves the best visual denoising results.

CONCLUSIONS: The proposed method retains more detail and achieves impressive denoising performance.
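The PSNR gain reported in the results can be computed in the standard way. The sketch below (plain NumPy, assuming intensity images normalized to [0, 1]; the function name and toy images are illustrative, not from the paper) shows the computation:

```python
import numpy as np

def psnr(reference: np.ndarray, denoised: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between a clean reference and a denoised image."""
    mse = np.mean((reference.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a constant image and a copy offset by a uniform error of 0.1.
clean = np.full((64, 64), 0.5)
noisy = clean + 0.1
print(round(psnr(clean, noisy), 2))  # MSE = 0.01, so PSNR = 10*log10(1/0.01) = 20.0
```

A 0.13 dB improvement, as cited above, corresponds to a roughly 3% reduction in mean squared error against the same reference.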

Journal

Journal of X-Ray Science and Technology, IOS Press

Published: Apr 15, 2022
