Computer-Aided Diagnosis of COVID-19 CT Scans Based on Spatiotemporal Information Fusion
Li, Tianyi;Wei, Wei;Cheng, Lidan;Zhao, Shengjie;Xu, Chuanjun;Zhang, Xia;Zeng, Yi;Gu, Jihua
2021-03-05
Hindawi Journal of Healthcare Engineering, Volume 2021, Article ID 6649591, 11 pages. https://doi.org/10.1155/2021/6649591

Research Article

Tianyi Li (1), Wei Wei (1), Lidan Cheng (1), Shengjie Zhao (2), Chuanjun Xu (3), Xia Zhang (4,5), Yi Zeng (4), and Jihua Gu (1)

(1) College of Optoelectronic Science and Engineering, Soochow University, Suzhou, Jiangsu 215006, China
(2) MeBotX Intelligent Technology (Suzhou) Co. Ltd., Suzhou, Jiangsu 215000, China
(3) The Department of Radiology, The Second Hospital of Nanjing, Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing, Jiangsu 210003, China
(4) The Department of Tuberculosis, The Second Hospital of Nanjing, Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing, Jiangsu 210003, China
(5) The Center for Global Health, School of Public Health, Nanjing Medical University, Nanjing, Jiangsu 211166, China

Correspondence should be addressed to Wei Wei (weiwei0728@suda.edu.cn) and Xia Zhang (zhangxia365@sina.com).

Received 19 October 2020; Revised 4 December 2020; Accepted 30 January 2021; Published 5 March 2021

Academic Editor: Daniele Cafolla

Copyright (c) 2021 Tianyi Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. Coronavirus disease (COVID-19) is highly contagious and pathogenic. Currently, the diagnosis of COVID-19 is based on nucleic acid testing, but it has false negatives and hysteresis. The use of lung CT scans can help screen and effectively monitor diagnosed cases. The application of computer-aided diagnosis technology can reduce the burden on doctors, which is conducive to rapid and large-scale diagnostic screening. In this paper, we proposed an automatic detection method for COVID-19 based on spatiotemporal information fusion.
Using the segmentation network in the deep learning method to segment the lung area and the lesion area, the spatiotemporal information features of multiple CT scans are extracted to perform auxiliary diagnosis analysis. The performance of this method was verified on the collected dataset. We achieved the classification of COVID-19 CT scans and non-COVID-19 CT scans and analyzed the development of the patients' condition through the CT scans. The average accuracy rate is 96.7%, sensitivity is 95.2%, and F1 score is 95.9%. Each scan takes about 30 seconds for detection.

1. Introduction

From the end of 2019, coronavirus disease (COVID-19) has disseminated around the world and become a global challenge, leading the World Health Organization to declare the COVID-19 outbreak a pandemic [1-3]. Up to now, no clinically approved therapeutic is available for treatment [4]. Findings showed that the COVID-19 virus spreads from person to person. It is necessary to block the spread of COVID-19 by isolating patients and by tracing and isolating close contacts [5]. Therefore, a timely and effective diagnosis method that can quickly screen as many scans as possible is needed.

At present, the diagnosis of COVID-19 mainly depends on the nucleic acid kit for reverse transcription-polymerase chain reaction (RT-PCR) to determine the presence of viral nucleic acid [6]. For disease diagnosis, especially of infectious diseases, the final diagnosis still needs to rely on the etiology. Although RT-PCR is considered the gold standard for COVID-19 diagnosis, there are still some influencing factors, such as the degree of standardization of sample collection and the time of sample collection [7]. Also, whether RT-PCR can detect COVID-19 depends on the viral load: if the sampling site does not contain viruses or has a low viral load, the nucleic acid test will be prone to false negatives. Since some cases have imaging features while nucleic acid detection has hysteresis, medical imaging methods (such as chest X-ray (CXR) and computer tomography (CT)) can play a significant role in the diagnosis of COVID-19 [8, 9]. Besides, nucleic acid testing can only diagnose whether a patient has COVID-19; it cannot judge the condition, while medical imaging can [10]. For patients with COVID-19, accurate monitoring of disease progression is a vital component of disease management. For suspected cases, such as close contacts of COVID-19 patients whose nucleic acid test is negative, imaging can be used for monitoring [8, 9]. In general, medical imaging methods are effective means to diagnose COVID-19 and monitor disease progression. Real-time analysis of the patient's condition is necessary for doctors to determine effective treatment methods, and accurate, quantitative analysis of the disease can help doctors prescribe the right medicine.

Traditional imaging diagnosis depends on the experience of doctors. COVID-19 is a new type of infectious disease, and current research has summarized the imaging characteristics of this type of disease [9]. Usually, one CT scan contains multiple slices, and it takes 5-15 minutes for doctors to examine one CT scan. Repetitive work causes mental fatigue, so rapid and large-scale detection and screening cannot be performed. Doctors can only use subjective judgments to analyze the development of patients' conditions, which is neither intuitive nor quantitative.

In recent years, deep learning has achieved great success in the area of computer vision, which provides new solutions for the automated processing of medical images [11-15]. Artificial intelligence technologies, especially deep learning tools, can be developed to help radiologists perform data classification, quantification, and trend analysis. If a CT scan shows the possibility of disease, the case can be marked for further examination by a radiologist or clinician for possible treatment or quarantine. A computer-aided diagnosis (CAD) system based on CT scans can help doctors diagnose COVID-19 and better understand disease development. It is worth noting that CAD technology cannot replace doctors or other medical professionals; the final diagnosis must be judged by professionals.

In summary, nucleic acid detection has a certain misdiagnosis rate and hysteresis and requires a certain detection time [10]. CT scans of the lungs can provide rapid auxiliary diagnosis and monitor the condition of the disease, but doctors need to spend a lot of energy interpreting the CT slices, especially in areas with severe epidemics that require large-scale rapid screening. In response to the abovementioned problems of COVID-19 diagnosis and detection, we proposed a method for the assisted diagnosis of COVID-19 based on CT scans. This method is based on the spatiotemporal sequence information of CT scans to realize the detection and analysis of COVID-19 scans. Its contributions are as follows:

(1) Using the fast and effective segmentation network LinkNet, and training a false positive screening network based on the DenseNet structure for removing spurious lesions, to achieve accurate segmentation of the lesion area;

(2) Combining the spatiotemporal characteristics of the CT scans to effectively monitor the disease development, assisting doctors in intuitively understanding the condition and in determining the diagnosis and treatment.

Experimental results show that the auxiliary diagnosis method has good detection and classification effects. It can visually display the disease development and assist doctors in clinical diagnosis and treatment.

1.1. Related Work. The computer-aided diagnosis system uses imaging, medical image processing technology, and other means combined with computer analysis and calculation to assist in diagnosis. Many applications have been proposed in medical imaging, including segmentation and characterization tasks.

Convolutional neural networks (CNNs) have been developed for the detection of breast cancer [11], brain tumor [12], pulmonary nodules [13], intracranial aneurysm [14], and other diseases [15]. Usually, a two-step approach is adopted, first determining the area of interest and then reducing false positives [15].

Chung et al. [16] gave a more detailed description of COVID-19 CT scans. These CT scans show an extent of irregular ground-glass opacities that progress rapidly after COVID-19 symptom onset [16, 17]. In the early stage of the disease, CT images show features of multiple small patches and interstitial changes. Then, multiple ground glass shadows and infiltration shadows of the lungs develop. In severe cases, lung consolidation may occur, and pleural effusions are rare [18].

Figure 1: The lesion area of COVID-19 on the CT images.

Fang et al. [19] compared the sensitivity of chest CT detection with nucleic acid detection by RT-PCR. 51 patients received initial and repeated RT-PCR tests. Their standard is the diagnosis of COVID-19 infection finally confirmed by serial RT-PCR testing. In this patient sample, the detection rate for initial CT (50 of 51 patients (98%); 95% CI: 90%, 100%) was greater than that for first RT-PCR (36 of 51 patients (71%); 95% CI: 56%, 83%). Xie et al. [20] also reported a lack of sensitivity in the initial RT-PCR test.

Bernheim and Huang [21] studied 121 cases of chest CT studies obtained in the early, middle, and late stages of infection at four centers in China. Their studies have shown that the appearance of ground glass opacities in both lungs and in the lung periphery is characteristic of the disease.

Based on these image features, shown in Figure 1, a few studies have already reported deep learning to diagnose COVID-19 pneumonia on chest radiographs or CT.

Kassania et al. [22] compared popular deep learning-based feature extraction frameworks for automatic COVID-19 classification. They tested combinations of different deep learning networks with machine learning methods for classification. Experimental results show that the DenseNet121 feature extractor with the bagging tree classifier achieved the best performance, with 99% classification accuracy.

Fei et al. [23] developed a deep learning- (DL-) based segmentation system with a human-in-the-loop (HITL) strategy to assist radiologists in infection region segmentation. Comparing the automatically divided infection area with the manually divided area, the average similarity coefficient is about 91.6%.

Hemdan et al. [24] developed COVIDX-Net for diagnosing COVID-19 in X-ray images. The authors conducted a comparative study of different deep learning architectures. The dataset includes 50 X-ray images, divided into 25 non-COVID-19 images and 25 COVID-19 images. Experimental results demonstrated that the VGG19 and DenseNet201 models achieved the best performance scores among similar models, with F1 scores of 0.89 and 0.91 for non-COVID-19 and COVID-19, respectively. However, the dataset used in the experiment is small.

Gozes et al. [25] presented a system that utilizes 2D and 3D deep learning models. By modifying and adapting existing AI models (RAD Logics Inc., Boston), this study demonstrated that rapidly developed AI-based image analysis can achieve high accuracy in the detection of coronavirus as well as quantification and tracking of disease burden.

Basu et al. [26] proposed a new concept called domain extension neural network to solve the problem that the available COVID-19 data are rare and not easy to train on. The overall accuracy was 95.3% +/- 0.02.

Maghdid et al. [27] used deep learning methods and transfer learning strategies to diagnose COVID-19 automatically. The structure is a combination of a CNN structure and an improved AlexNet structure. The improved architecture's accuracy reaches 94.10% on the X-ray and CT slice dataset.

Hasan et al. [28] presented a promising technique for predicting COVID-19 patients from CT scans using a CNN. The approach, based on DenseNet, an up-to-date CNN architecture, detects COVID-19. The results exceeded 92% accuracy, with 95% recall.

At present, among the existing studies of lung CT detection, most use a single CT slice, such as [28], and the sequence features of CT scans are not fully utilized. In fact, during the diagnosis process, the doctor will not judge based on a single slice; especially when a slice is in doubt, the slices before and after it will affect the judgment. What's more, in addition to the study of different patients, the analysis of the CT scans of one patient during treatment also plays an important role in helping the doctor judge the development of the disease and the effectiveness of the treatment method.

2. Methods

This study was mainly divided into two parts: a COVID-19 classification and detection experiment based on the sequence features of CT scans, and a COVID-19 volume measurement experiment based on the CT scans obtained during one patient's treatment. By measuring the volume of the lesion and fusing the time information of the CT scans, we can intuitively quantify the development of the disease and analyze the patient's condition. The overall flow chart is shown in Figure 2.

2.1. Preprocessing

2.1.1. Dataset. This experiment collected 445 lung CT scans of COVID-19 and 63 healthy lung CT scans from Nanjing Infectious Diseases Hospital (the Second Hospital of Nanjing). The COVID-19 CT scans were from 142 patients. Each patient took several chest CT scans during their treatment, and the CT slice thickness was 0.625 mm to 1.250 mm. Nanjing Infectious Diseases Hospital is a designated hospital for COVID-19 in Jiangsu Province. The use of the data was approved by the Ethics Society and was only for this experimental study. The patients' information was kept confidential.
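The dataset composition described in this section can be cross-checked with a few lines of Python; the counts below are transcribed from the text (including the 170 LUNA16 scans added as negative samples for the classification experiment):

```python
# Sanity check of the dataset composition reported in Section 2.1.1.
covid_scans = 445             # COVID-19 CT scans from 142 patients
healthy_hospital_scans = 63   # healthy scans from the Second Hospital of Nanjing
luna16_scans = 170            # negative samples drawn from LUNA16

non_covid_scans = healthy_hospital_scans + luna16_scans
total_scans = covid_scans + non_covid_scans

print(non_covid_scans)  # 233 non-COVID-19 scans
print(total_scans)      # 678 scans in total
```

The 233 non-COVID-19 scans quoted later in the text are thus the hospital's healthy scans plus the LUNA16 sample.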
The 445 cases we collected included various stages of disease development, and each scan contains hundreds of slices. Also, we randomly selected 170 lung CT scans from the online public dataset LUNA16 [29] as negative samples for the COVID-19 classification experiment. So, the total dataset contains 445 COVID-19 scans and 233 non-COVID-19 scans. The data from LUNA16 were reprocessed: the HU values were clipped to the range -1200 to 600 (values below -1200 were set to -1200 and values above 600 were set to 600) and then normalized to 0-255. The CT slice size is 512 x 512 pixels.

Figure 2: Overall experiment flow chart. (Each of patient X's CT scans 1, 2, ..., n passes through lesion segmentation, classification, and volume measurement; the results are combined into an analysis of patient X's condition.)

2.1.2. Experiment Condition. The Windows-based computer system used for this work had an Intel(R) Core(TM) i7-8700K 3.7 GHz processor with 16 GB RAM. The training and testing processes of the proposed architecture were implemented in Python using PyTorch as the deep learning framework, running on an NVIDIA GeForce GTX 1080 Ti GPU.

2.1.3. Evaluation Criteria. Taking into account the unevenness of the data, a single verification indicator may not be able to summarize the performance of the algorithm. We utilized a variety of common evaluation metrics: precision (PRE), recall (REC), accuracy (ACC), and F1 score (F1).

Precision: among all the samples judged positive, the proportion that is truly positive.

Recall: among all the positive samples, the proportion judged correctly.

F1 score: a comprehensive performance indicator concerned with both the precision of positive samples and their recall.

With TP (true positive) the number of positive instances predicted correctly, TN (true negative) the number of negative instances predicted correctly, FP (false positive) the number of negative instances incorrectly predicted as positive, and FN (false negative) the number of positive instances incorrectly predicted as negative, all evaluation metrics are calculated as follows:

precision = TP / (TP + FP),  (1)

recall = TP / (TP + FN),  (2)

accuracy = (TP + TN) / (TP + TN + FP + FN),  (3)

F1 score = 2 * (recall * precision) / (recall + precision).  (4)

2.2. COVID-19 Classification. The rapid COVID-19 detection was based on the sequence features of COVID-19 CT scans. The flow chart is shown in Figure 3. There are three steps in the experiment: lung area segmentation, lesion area segmentation, and classification. The lesion area segmentation step includes the false positive screening of the lesion area. The lung areas and lesion areas obtained during the detection process can be reused in the lesion volume measurement experiments.

2.2.1. Lung Segmentation. In the original CT slice, there are other surrounding tissue parts besides the lung area we need. Too much redundant information in the picture will interfere with training and testing. Therefore, we first segmented the lung area.

Previous studies have shown that U-net can be trained end-to-end from very few images and achieve excellent performance [30]. So, the U-net has become the most popular base network widely used in biomedical image segmentation.
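The evaluation metrics of equations (1)-(4) in Section 2.1.3 follow directly from the four confusion counts; a minimal pure-Python sketch (the counts used below are hypothetical, not taken from the experiments):

```python
# Confusion-matrix metrics from equations (1)-(4) of Section 2.1.3.
def metrics(tp, tn, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * recall * precision / (recall + precision)
    return precision, recall, accuracy, f1

# Hypothetical counts for a 100-scan test set.
pre, rec, acc, f1 = metrics(tp=48, tn=45, fp=2, fn=5)
print(round(acc, 2))  # 0.93
```

Because the collected data are imbalanced, no single one of these four numbers summarizes performance on its own, which is why the paper reports all of them.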
To speed up the training and processing of the network, we chose the LinkNet network structure. The LinkNet network is a variant of the U-net [31] and is a typical encoder-decoder structure. The encoder is used for feature extraction and dimension reduction of the input images, while the decoder restores the feature information into an image. The encoder and decoder connection structure is shown in Figure 4. The encoder structure uses residual connections: the feature map after introducing residuals is more sensitive to changes in the output, and the gradients are easier to train. The learning features of encoder block i, from shallow to deep, can be expressed as follows:

EB_i = E_i2(E_i1(e_i) + e_i) + E_i1(e_i) + e_i,  i = 1, 2, 3, 4,  (5)

where E_i1(e_i) is the result after weighted convolution, EB_i is the output of encoder block i (and also the input of encoder block i + 1), and e_i is the input of encoder block i.

Encoder block i and decoder block i are directly connected to improve accuracy and reduce processing time [31]. The decoder block structure shown in Figure 4 can be expressed as equation (6), and the input of decoder block i - 1 as formula (7):

DB_i = D_i(d_i),  (6)

d_{i-1} = DB_i + e_i,  (7)

where D_i(d_i) is the result after weighted convolution, DB_i is the output of decoder block i, and d_i is the input of decoder block i.

Figure 3: Flowchart of the proposed framework for computer-aided COVID-19 diagnosis. (a) Image preprocessing (thresholding and normalization). (b) Training lung segmentation, training lesion segmentation, and training DenseNet for false positive screening. (c) Extracting features and classification with a decision tree (output: COVID-19/normal).

Figure 4: The structure of LinkNet. (Encoder block i: a strided 3 x 3 convolution followed by a 3 x 3 convolution, with a 1 x 1 convolution shortcut added residually, then two further 3 x 3 convolutions; decoder block i: a 1 x 1 convolution, a full convolution (3 x 3, x2 upsampling), and a 1 x 1 convolution.)

The training steps of the lung segmentation model are shown in Figure 5. Using pretrained models for testing, we found that when a CT slice contains ground glass shadows in the lung area, especially in the lung edge area, the model could not segment the lung area accurately. To optimize the network, we randomly took the CT slices from 20 scans of COVID-19 and 10 scans of non-COVID-19 as the input and supplemented and corrected the lung region labels obtained in the test to get their integral lung label images. Then, the 20 scans used in pretraining and the 30 scans used in the test, together with their label images, were used as the input of the segmentation network to improve the robustness and reliability of the model. Finally, we obtained a retrained lung segmentation model, and the lung areas of the other slices were obtained through the model test.

Figure 5: Training process of the lung segmentation model.

In order to verify the effectiveness of the segmentation method used in this article, 10 scans were randomly selected for the lung segmentation test, of which 6 were COVID-19 scans and 4 were non-COVID-19 scans. Among them, the COVID-19 CT scans contain imaging features, and the lesions are distributed on the periphery of both lungs. These 10 scans were only used to test segmentation, and the model was not modified by them, so they continued to be used in the next experiment. The results are shown in Table 1, where M1 is the initial training model and M2 is the model retrained by adding the modified supplementary labels and unprocessed images. The IOU (intersection over union) value is used for evaluation, as shown in equation (8), where Area_mask is the area of the marked target region and Area_test is the area of the tested target region:

IOU = (Area_mask ∩ Area_test) / (Area_mask ∪ Area_test).  (8)

By supplementing the training data, we improved the lung area division and divided the ground glass shadows in the edge area correctly.

Table 1: Test results of lung segmentation model.

Cases   Label   M1-IOU   M2-IOU
Eg1     P       0.967    0.978
Eg2     P       0.934    0.942
Eg3     P       0.906    0.921
Eg4     P       0.973    0.979
Eg5     P       0.926    0.926
Eg6     P       0.924    0.952
Eg7     N       0.978    0.979
Eg8     N       0.983    0.984
Eg9     N       0.825    0.825
Eg10    N       0.882    0.884

2.2.2. Lesion Segmentation. The lung segmentation network training scans were also used as the input of the lesion segmentation network. The lesions of COVID-19 are mainly ground glass shadows. We invited professional doctors from the Second Hospital of Nanjing to mark the lesions. Based on the abovementioned segmentation network test, the lesion segmentation network also applies the LinkNet model. The lung segments were tested using the trained lesion segmentation network, the lesion regions in the rest of the 628 scans were segmented, and then we cropped each lesion area. The test process of lesion segmentation is shown in Figure 6. We found some negative image pieces in the segmented lesion areas, which need to be screened for false positives.
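The IOU criterion of equation (8) compares two binary masks as a ratio of overlap to union. A minimal sketch, representing each mask as a set of pixel coordinates (the toy masks below are illustrative only, not data from the experiment):

```python
# IOU of equation (8) on two binary masks given as sets of (row, col) pixels.
def iou(mask_a, mask_b):
    inter = len(mask_a & mask_b)   # |Area_mask ∩ Area_test|
    union = len(mask_a | mask_b)   # |Area_mask ∪ Area_test|
    return inter / union if union else 1.0

manual = {(0, 0), (0, 1), (1, 0), (1, 1)}      # marked target region
predicted = {(0, 1), (1, 0), (1, 1), (2, 1)}   # tested target region
print(iou(manual, predicted))  # 3 overlapping pixels / 5 in the union = 0.6
```

On full 512 x 512 slices the same ratio is usually computed on boolean arrays, but the set formulation makes the definition explicit.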
Since DenseNet performed excellently in object recognition [32] and has also been proved useful for COVID-19 image classification in previous research [22, 24], we used this network to train the false positive screening. There may be multiple lesion areas detected in one slice, so it was necessary to cut the slice into lesion area blocks according to the mask area and to determine whether each lesion area block was a real lesion. Due to the limited data, we randomly selected 10 positive scans and 11 negative scans for training. Then, we resized the lesion area blocks to 64 x 64. Even among the detections from positive sections, some negative lesions may be included, so we needed to filter them out before training. To improve the generalization of the training model, we used data augmentation on the small samples. Data augmentation is a widely used method for training models to increase training benefits and decrease the effect of network regularization. All the data were augmented by horizontal and vertical flips, width and height shifts, and rotations with angles of 90, 180, and 270 degrees, so that the amount of training data expanded about fivefold.

Figure 6: Test process of lesion segmentation.

2.2.3. Feature Extraction and Classification. The decision tree method in machine learning was used for the final classification. In the previous steps, we obtained the lesion area in each CT slice. But it is not reliable to let one slice represent the entire scan. Therefore, we chose 8 overall features of the CT scan; the features are shown in Table 2. Then, we used the decision tree for training, classification, and testing.

The training and testing datasets have 607 scans, excluding the 50 scans used in lung segmentation and the other 21 scans used in lesion segmentation, and include 415 COVID-19 scans and 192 non-COVID-19 scans. The training set and test set were divided according to a ratio of 6 : 4.

2.2.4. Model Parameters. The parameters used in each model for training are shown in Table 3.

2.3. COVID-19 Volume Measurement. In CT scan diagnosis, doctors can analyze the patient's condition according to the lesion changes. COVID-19 has different imaging manifestations as the disease develops, and the most intuitive manifestation is the change in lesion volume. By analyzing all the CT scans of one patient, we can judge the disease development according to the changes in the lesion volume.

In this COVID-19 volume measurement experiment, according to the CT scans taken during the patient's treatment and based on the time information of each scan, the lesion volume was calculated to assist doctors in quantifying the condition and analyzing its development. In the classification experiment, we had already obtained the lung area and the lesion area of the patient images for this experiment.

As the lung volume changes with breathing, it is impossible to simply obtain an accurate volume of the lung, and the corresponding lesion volume cannot be accurately measured. To simplify the calculation, we do not perform three-dimensional reconstruction of the image sequence; instead, the image sequence is used directly, converting the calculation of the three-dimensional lesion volume into the calculation of two-dimensional lesion areas. We calculate the sums of the pixel areas of the lung regions and of the lesion regions over all slices in each CT scan to obtain the proportion of the lesion volume to the lung volume, as shown in formula (9), where Area_Lesion is the sum of the pixel areas of the lesion regions and Area_Lung is the sum of the pixel areas of the lung regions:

Per = Area_Lesion / Area_Lung.  (9)

By calculating this ratio, we solve the problem that the basic difference in lung volume between different patients makes it impossible to use the same quantitative standard for judgment.

3. Results and Discussion

3.1. Classification Result. We tested on the collected dataset using the abovementioned experimental methods. Due to the data imbalance, we referenced the five-fold cross-validation method and divided the remaining 607 scans into 5 groups; the composition of each group is shown in Table 4, where A_Num is the total number of CT scans, P_Num is the number of COVID-19 CT scans, and N_Num is the number of non-COVID-19 CT scans.

Randomly taking three groups of data for training and the remaining two groups for testing, the accuracy, precision, sensitivity, and F1-score were calculated. A total of 10 datasets were formed for training and testing. The results of the 10 sets are shown in Table 5. Computing 95% CIs (confidence intervals) on the obtained datasets, we get an average accuracy of 94.4% (95% CI: 91.6%-97.2%), precision of 96.7% (95% CI: 94.5%-98.9%), recall of 95.2% (95% CI: 92.5%-97.9%), and F1-score of 96.0%. Partial entries in Table 5 are in the format mu +/- sigma, where mu is the average value and the 95% CI is (mu - sigma) to (mu + sigma).

At the same time, we tested the time used for each part of the algorithm. Due to the different numbers of sequence images, the time used for detection changes accordingly. In the test, when the average number of slices per scan is 103, the average time of the key parts of the algorithm is shown in Table 6.

According to the abovementioned experimental steps, the automatic diagnosis system of COVID-19 based on CT scans was integrated, and we built COVID-19 auxiliary diagnosis software based on C++ and Libtorch. The time for the software to detect one CT scan was calculated: over 60 CT scans with an average of 100 images per scan, the average detection time per scan is 28.78 s.

3.2. Volume Measurement Result. The scans of 142 patients from the hospital were collected in this experiment, and each patient has images from a varying number of detections.
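The scan-level descriptors of Table 2 and the ratio Per of formula (9) are all simple aggregates of the per-slice lesion and lung pixel areas produced by the segmentation step. A minimal sketch with toy per-slice values (all numbers below are illustrative, not measured data):

```python
# Scan-level features (Table 2) and lesion/lung ratio (formula (9))
# computed from per-slice pixel areas. Toy values for a 5-slice scan.
lesion_areas = [0, 120, 300, 250, 0]       # lesion pixel area per slice
lung_areas = [900, 1000, 1100, 1050, 950]  # lung pixel area per slice

with_lesion = [i for i, a in enumerate(lesion_areas) if a > 0]
peak = lesion_areas.index(max(lesion_areas))

features = {
    "Slice_Num": len(with_lesion),
    "Lesion_AreaSum": sum(lesion_areas),
    "Lesion_AreaMax": max(lesion_areas),
    "Lesion_MaxPosition": peak,
    "Slice_NumPercent": len(with_lesion) / len(lesion_areas),
    "Lesion_MaxPercent": max(lesion_areas) / lung_areas[peak],
    "Lesion_SumPercent": sum(lesion_areas[i] for i in with_lesion)
                         / sum(lung_areas[i] for i in with_lesion),
    # Per of formula (9): total lesion area over total lung area.
    "Lesion_AllSumPercent": sum(lesion_areas) / sum(lung_areas),
}
print(features["Slice_Num"], features["Lesion_AreaMax"])  # 3 300
```

The last feature is exactly the quantity plotted per detection date in the volume measurement experiment, which is why the classification and volume experiments can share the same segmentation outputs.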
Table 2: Features of decision tree.

Feature                Definition
Slice_Num              The number of slices with a lesion area
Lesion_AreaSum         The total area of the lesion regions
Lesion_AreaMax         The largest lesion area
Lesion_MaxPosition     The position of the slice with the largest lesion in the CT scan
Slice_NumPercent       The ratio of the number of slices with lesions to the total number of slices
Lesion_MaxPercent      The ratio of the largest lesion area to the lung area of that slice
Lesion_SumPercent      The ratio of the sum of the lesion areas to the sum of the lung areas in the slices with lesions
Lesion_AllSumPercent   The ratio of the total area of the lesion to the total area of the lung

Table 3: Parameters of models.

Model                            Parameter       Value
Segmentation model               Batch size      16
                                 Epoch           100
                                 Loss function   BCEWithLogitsLoss
                                 Optimizer       Adam
                                 Learning rate   10^-3
False positive screening model   Batch size      128
                                 Epoch           200
                                 Learning rate   5 * 10^-3
                                 Loss function   Cross-entropy loss
                                 Optimizer       SGD
Decision tree model              Criterion       "Gini"
                                 Class_weight    "Balanced"
                                 Splitter        "Best"

Table 4: Composition of 5 groups of data.

Set   A_Num   P_Num   N_Num
X1    122     83      39
X2    122     83      39
X3    122     83      39
X4    121     83      38
X5    120     83      37
ALL   607     415     192

Take one patient's scans as an example. As shown in Figure 7, the patient had 12 detections between January and April. The abscissa in Figure 7 shows the detection dates of the CT scans. The ordinate shows the proportion of the lesion area to the lung area in each scan; for the convenience of display, it is plotted as a percentage. Doctors can visually follow the changes in the lesion volume in the line chart shown in Figure 7. The patient went through a period of rapid development from the onset of COVID-19 and hospitalization and gradually improved after treatment. In general, doctors can intuitively judge the disease development and treatment effect based on the measurement and analysis of the patient's CT scans.
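The Avg. row of Table 5 can be cross-checked as the arithmetic mean of the ten per-dataset results; the three metric columns below are transcribed from Table 5:

```python
# Cross-check of the Avg. row in Table 5 (values transcribed from the table).
acc = [93.8, 95.0, 95.5, 93.0, 94.7, 92.6, 95.0, 96.3, 94.7, 95.1]
pre = [95.2, 98.1, 96.4, 96.9, 95.8, 96.8, 96.4, 97.6, 97.5, 96.4]
rec = [95.9, 94.6, 97.0, 92.8, 96.4, 92.2, 96.4, 97.0, 94.6, 96.4]

def mean(xs):
    return round(sum(xs) / len(xs), 1)

print(mean(acc), mean(pre), mean(rec))  # 94.6 96.7 95.3, matching the Avg. row
```

This also makes explicit that the Avg. row averages the ten train/test splits, each built from three training groups and two test groups of Table 4.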
Table 5: The test results on the 10 datasets.
Data        Test    AUC (95% CI)   Pre (95% CI)   Rec (95% CI)   F1
Dataset1    X4X5    93.8 ± 3.0     95.2 ± 2.7     95.9 ± 2.5     0.955
Dataset2    X3X5    95.0 ± 2.7     98.1 ± 1.7     94.6 ± 2.8     0.963
Dataset3    X3X4    95.5 ± 2.6     96.4 ± 2.3     97.0 ± 2.1     0.967
Dataset4    X2X5    93.0 ± 3.2     96.9 ± 2.2     92.8 ± 3.3     0.948
Dataset5    X2X4    94.7 ± 2.8     95.8 ± 2.5     96.4 ± 2.3     0.961
Dataset6    X2X3    92.6 ± 3.3     96.8 ± 5.5     92.2 ± 3.3     0.944
Dataset7    X1X5    95.0 ± 2.7     96.4 ± 2.3     96.4 ± 2.3     0.964
Dataset8    X1X3    96.3 ± 2.7     97.6 ± 2.3     97.0 ± 2.3     0.973
Dataset9    X1X2    94.7 ± 2.4     97.5 ± 1.9     94.6 ± 2.1     0.960
Dataset10   X1X4    95.1 ± 2.8     96.4 ± 2.0     96.4 ± 2.8     0.964
Avg.                94.6 ± 2.8     96.7 ± 2.2     95.3 ± 2.7     0.960

Table 6: Algorithm time for each part.
Algorithm part                 Time (s)
Lung segmentation              5.259
Lesion segmentation            4.727
Remove false positives         4.189
Feature extraction             6.961
Decision tree classification   0.082
Total                          21.218

Figure 7: The trend of the lesion volume over one patient's multiple examinations (ordinate: proportion of lesion volume in the lung, %; abscissa: detection date, 2020.01.28 to 2020.04.22).

4. Conclusions

In conclusion, rapid and effective diagnosis and disease development analysis of COVID-19 are important in the current situation where COVID-19 is still spreading. Nucleic acid detection suffers from false negatives and hysteresis, and it cannot judge the severity of the condition. Lung CT scans can provide auxiliary diagnosis and monitor disease progression. To assist doctors in rapid diagnosis and rapid interpretation of lung CT scans, this paper proposed an automatic COVID-19 detection method based on spatiotemporal information fusion. It analyzes the spatial characteristics of CT scans to assist doctors in COVID-19 diagnosis and fuses the time information of the scans to assist doctors in quantifying the patient's condition. We achieved the classification of COVID-19 and non-COVID-19 on the collected datasets. We used the LinkNet network to train the lung and lesion segmentation networks and the DenseNet network to train the false positive screening network. Considering that the relationships of the features between CT slices affect the classification, we extracted the sequence features of the CT scans instead of the features of a single slice. The decision tree method is used for classification, and by quantifying the lesion volume of the CT scan and fusing time information, we realized the computer-aided diagnosis of COVID-19.

Experimental results show the following:

(1) The result on the obtained dataset achieves an average accuracy of 94.4%, precision of 96.7%, recall of 95.2%, and F1 score of 96.0%.

(2) Analysis of the CT scans from a patient during treatment can intuitively quantify the disease development and analyze the disease development trend.

(3) The lung segmentation and lesion segmentation training methods in this study could be used for segmentation recognition of other diseases (such as tumors). The lung segmentation network could also be used for preliminary data processing in the diagnosis of other lung diseases. The method could also be extended to other kinds of medical images.
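The decision-tree stage described above, with the hyperparameters reported in Table 3, can be sketched with scikit-learn. The feature matrix here is synthetic stand-in data (only the criterion, class weighting, splitter, and the group sizes of Table 4 come from the paper):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the eight per-scan sequence features:
# COVID-positive scans tend to have larger lesion-related values.
# Group sizes follow group X1 of Table 4 (83 positive, 39 negative).
X_pos = rng.normal(loc=1.0, scale=0.5, size=(83, 8))
X_neg = rng.normal(loc=0.0, scale=0.5, size=(39, 8))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 83 + [0] * 39)

# Hyperparameters from Table 3: Gini impurity, balanced class
# weights (the classes are imbalanced), and the "best" splitter.
clf = DecisionTreeClassifier(criterion="gini",
                             class_weight="balanced",
                             splitter="best",
                             random_state=0)
clf.fit(X, y)

# A fully grown tree fits its training set exactly; the paper
# instead evaluates on held-out group pairs (Table 5).
print("training accuracy:", clf.score(X, y))
```

Balanced class weights reweight the Gini impurity by inverse class frequency, which matters here because positive scans outnumber negatives roughly two to one.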
However, it has to be acknowledged that our classifier may not be capable of distinguishing non-COVID interstitial pneumonia from COVID interstitial pneumonia, whose CT lesion phenotypes are similar. How to distinguish between COVID-19 and other pneumonia is the direction of our follow-up research.

Data Availability

The COVID-19 CT image data used to support the findings of this study have not been made available because they involve patient privacy.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors acknowledge the efforts devoted by the radiologists of the Second Hospital of Nanjing to collect, label, and share the COVID-19 CT database. This work was supported by the Science and Technology Plan Project of Nanjing (ZX20200008) and the 2020 College Student Innovation and Entrepreneurship Training Program Project (202010285151E).