Hybrid Sharpening Transformation Approach for Multifocus Image Fusion Using Medical and Nonmedical Images

Hindawi Journal of Healthcare Engineering, Volume 2021, Article ID 7000991, 17 pages. https://doi.org/10.1155/2021/7000991

Research Article

Sarwar Shah Khan (1,2), Muzammil Khan (2), Yasser Alharbi (3), Usman Haider (4), Kifayat Ullah (2), and Shahab Haider (5)

1 Department of Software Engineering, University of Sialkot, Sialkot 51310, Pakistan
2 Department of Computer & Software Technology, University of Swat, Swat 19130, Pakistan
3 College of Computer Science & Engineering, University of Hail, Ha'il, Saudi Arabia
4 Ghulam Ishaq Khan Institute of Engineering Science and Technology, Topi, Swabi, Pakistan
5 Department of Computer Science, City University of Science and IT, Peshawar, Pakistan

Correspondence should be addressed to Muzammil Khan; muzammilkhan86@gmail.com

Received 2 May 2021; Accepted 18 October 2021; Published 11 December 2021

Academic Editor: Mian Muhammad Sadiq Fareed

Copyright © 2021 Sarwar Shah Khan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. In this study, we introduce a novel preprocessing transformation approach for multifocus image fusion. Multifocus image fusion generates a highly informative image by merging two source images with different areas or objects in focus. Here, preprocessing means sharpening applied to the source images before the fusion techniques. Along with this novel concept, a new sharpening technique, Laplacian filter + discrete Fourier transform (LF + DFT), is also proposed. The LF is used to recognize the meaningful discontinuities in an image, while the DFT captures rapid changes in the image as sudden changes in frequency, from low frequency to high frequency. The aim of image sharpening is to highlight the key features, identify the minor details, and sharpen the edges, at which previous methods are not very effective. To validate the effectiveness of the proposed method, fusion is performed with two established techniques, stationary wavelet transform (SWT) and discrete wavelet transform (DWT), on both grayscale and color images. The experiments are performed on nonmedical and medical (breast CT and MRI) datasets. The experimental results demonstrate that the proposed method outperforms the alternatives on all evaluated qualitative and quantitative metrics. Quantitative assessment is performed with eight well-known metrics, each describing its own property, from which the superiority of the proposed method is readily seen. On the clock dataset, the proposed SWT (LF + DFT) technique yields RMSE (5.6761), PFE (3.4278), MAE (0.4010), entropy (9.0112), SNR (26.8609), PSNR (40.1349), CC (0.9978), and ERGAS (2.2589).

1. Introduction

In the field of image fusion, the subfield of multifocus image fusion is one of the most significant and valuable approaches to handle the problem of defocusing, where some parts of the image are out of focus and blurred due to the limited depth of focus of the optical lens in traditional cameras or in large-aperture and microscope cameras. In multifocus image fusion, various images of the same scene but with different focus settings are merged into a single image with more information, in which all parts of the image are entirely in focus.
A practical multifocus image fusion technique should meet the requirement that all the information in the focused regions of the source images is preserved in the resultant image [1]; as a result, the fused image is well informative and complete. Multifocus image fusion is applicable in a wide range of areas such as environmental monitoring, image analysis [2], military technology, medical imaging [3], remote sensing, hyperspectral image analysis [4], computer vision, object recognition [5], and image deblurring [6].

For multifocus image fusion, a large number of techniques have been introduced over the past couple of decades; some of them are very popular and achieve high accuracy, such as the stationary wavelet transform (SWT) [7], discrete wavelet transform (DWT), dual-tree complex wavelet transform (DT-CWT), and discrete cosine transform (DCT) [2]. Most multifocus image fusion techniques fall into four major classes [1, 8]. The first category comprises multiscale decomposition or frequency-domain techniques, such as wavelet transformation [8, 9], complex wavelet transformation [1, 10], the nonsubsampled contourlet transform [11], DWT [2], and SWT [12]. The second category comprises sparse representation techniques, such as the adaptive SR model proposed in [13] for simultaneous image fusion and denoising and the multitask sparse representation technique of [14]. The third category is based on computational photography, such as light-field rendering [15]; this kind of technique models the physical formation of multifocus images and reconstructs the all-in-focus image. The last category operates in the spatial domain, which can make full use of the spatial context and provide spatial consistency; spatial-domain techniques include averaging [2, 16], minimum [2, 17], intensity hue saturation (IHS) [2, 18], principal component analysis (PCA) [2, 19], and Gram–Schmidt [20] methods.
In this paper, a new concept is proposed in the image fusion environment for multifocus image fusion. The key contributions of this work are summarized as follows:

(i) The new concept is that image enhancement or image sharpening techniques are used before image fusion; in other words, a preprocessing step is performed before applying the image fusion techniques.

(ii) The preprocessing step is beneficial before image fusion because the sharpening methods help recognize the meaningful discontinuities in an image, i.e., edge information or edge detection.

(iii) Standard image fusion techniques fuse the images directly and generate the resultant image. In this work, the source images were first enhanced, using the proposed hybrid enhancement method LF + DFT (Laplacian filter + discrete Fourier transform) as well as other popular enhancement methods (Laplacian filter (LF) and unsharp masking (UM)).

(iv) Second, the enhanced images were fused by popular fusion methods such as DWT and SWT, generating more informative and meaningful resultant images, as demonstrated in Figure 1. The proposed method outperforms the state-of-the-art methods.

The rest of the paper is organized as follows. Section 2 briefly describes the related work on multifocus image fusion. Section 3 describes the proposed methodology, i.e., the Laplacian filter + discrete Fourier transform with DWT and SWT. Section 4 briefly describes the performance measures. Section 5 gives the experimental results and discussion, and the paper is concluded in Section 6.

2. Literature Study

Multifocus image fusion is one of the most significant areas of image processing, and many advanced techniques have been proposed over the past couple of decades. Several works have been carried out in the spatial domain. Principal component analysis (PCA) is the most frequently used method and is specially designed to generate visible results with sharp edges and highly preserved spatial characteristics [21]. The intensity hue saturation (IHS) technique effectively transforms the image from the red, green, and blue (RGB) domain into spatial (I) and spectral (H, S) information [22]. PCA and IHS have one significant advantage: both can use an arbitrary number of channels [23]. The Brovey technique is based on the mathematical formulas of the Brovey transform (BT), introduced by the American scientist Bob Brovey; BT is a simple technique for merging information from different sources. Brovey is also called the color normalization transform (CNT) because it involves a red, green, blue (RGB) color transform approach [24]. Average and maximum/minimum selection are also spatial-domain methods [25]. Many spatial-domain methods are complicated and time-consuming, and these techniques produce poor results because they usually introduce spectral distortions into the fused images; the produced image has low contrast and contains comparatively less information.

Image fusion is also based on frequency-domain techniques such as the discrete cosine transform (DCT). The frequency information (pixels) is very effective for obtaining the details and outlines of an image, and the DCT is a suitable mechanism for working with frequencies. It provides a fast and noncomplex solution because it uses only cosine components for the transformation. The IDCT reconstructs the original pixel values from the frequencies acquired by the DCT [26]. The discrete cosine harmonic wavelet transform (DC-HWT) is an advanced version of the DCT. In DC-HWT, the signal is decomposed by grouping the DCT coefficients similarly to DFT coefficients, except for the conjugate operations in laying the coefficients symmetrically (exact for the DCT); further, symmetric placement is also not significant due to the definition of the DCT [27]. The inverse DCT (IDCT) of these groups results in discrete cosine harmonic wavelet coefficients (DC-HWCs). The DCT of these processed sub-bands (DC-HWCs) results in sub-band DCT coefficients, which are repositioned in their corresponding positions to retrieve the overall DCT spectrum at the original sampling rate. Details of the DC-HWT are provided in reference [28].

The dual-tree complex wavelet transform (DT-CWT) is based on a pair of parallel trees: the first represents the odd samples, and the second represents the actual samples generated at the first level. The parallel trees provide the signal delays necessary for each level and therefore eliminate aliasing effects and attain shift invariance [29].
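To make the DCT/IDCT round trip concrete, here is a minimal Python sketch added for this discussion (the paper's own experiments are in MATLAB); `scipy.fft.dctn` and `idctn` compute the 2-D type-II DCT and its inverse:

```python
import numpy as np
from scipy.fft import dctn, idctn

# A small random "image"; the DCT coefficients carry its frequency content.
img = np.random.default_rng(0).integers(0, 256, size=(8, 8)).astype(float)

coeffs = dctn(img, norm="ortho")     # forward 2-D DCT (cosine components only)
recon = idctn(coeffs, norm="ortho")  # the IDCT reconstructs the original pixels

assert np.allclose(img, recon)       # lossless up to floating-point error
```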
Figure 1: Systematic approach for multifocus image fusion.

The discrete wavelet transform (DWT) is a mathematical tool introduced in the 1980s and is an instrumental technique for image fusion in the wavelet transformation process [1], but it has the following drawbacks: it retains only the vertical and horizontal features, it lacks shift invariance, it suffers from ringing artifacts that reduce the quality of the resultant fused image, it lacks shift dimensionality, and it is not suitable at edge locations because edges are missed during the process. The DWT is not a time-invariant transformation technique, which means that "with periodic signal extension, the DWT of a translated version of a signal X is not, in general, the translated version of the DWT of X."

The stationary wavelet transform (SWT) is a wavelet transform developed to overcome the lack of translation invariance of the DWT. The SWT is a fully shift-invariant transform, which up-samples the filters by inserting zeros between the filter coefficients to avoid the down-sampling step of the decimated approach [2]. It provides improved time-frequency localization, and its design is simple. Appropriate high-pass and low-pass filters are applied to the data at each level, producing two sequences at the next level; in the decimated approach, the filters are applied first to the rows and then to the columns [7, 30]. The SWT filter bank structure is given in Figure 2.

Figure 2: SWT filter bank structure.

The images are decomposed into horizontal and vertical approximations by employing column-wise and row-wise low-pass and high-pass filters [31]. The same filtration decomposes the elements row-wise and column-wise to acquire the vertical, horizontal, and diagonal approximations. The low-pass and high-pass filters preserve the low and high frequencies, respectively, and provide detailed information at the corresponding frequencies.

3. Proposed Methodology

In this article, a novel idea is proposed that is applied for the first time in multifocus image fusion to increase the accuracy (the visibility of objects). The novel concept is a preprocessing of the images before fusion. The fusion is performed by two standard methods, DWT and SWT, to validate the proposed techniques. The complete process is demonstrated in Figure 3, and the proposed techniques are elaborated as follows.

Figure 3: The abstract flow chart of the proposed scheme.

3.1. Laplacian Filter (LF). The Laplacian filter of an image highlights areas of rapid intensity change; hence, the LF is used for edge sharpening [27, 30, 32]. This operator is exceptionally good at identifying the critical information in an image: any feature with a sharp discontinuity will be sharpened by an LF. The Laplacian operator is also known as a derivative operator, used to identify an image's key features. The critical difference between the Laplacian filter and other filters such as Prewitt, Roberts, Kirsch, Robinson, and Sobel [27, 33] is that those filters use first-order derivative masks, whereas the LF is a second-order derivative mask. LF sharpening of the "Knee MRI medical image" demonstrates the difference between the source and LF-sharpened images. The Laplacian equation is as follows:

$\Delta^2 I = \left(\frac{\partial^2 G}{\partial x^2} + \frac{\partial^2 G}{\partial y^2}\right) \otimes I(x, y).$  (1)
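As an illustration of the second-derivative mask in equation (1), here is a minimal Python sketch of Laplacian sharpening (not the authors' MATLAB code; the 4-neighbour kernel and the strength parameter `alpha` are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import convolve

def laplacian_sharpen(image: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Sharpen a grayscale image with a second-order (Laplacian) derivative mask."""
    # One common discrete Laplacian kernel; several variants exist.
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=float)
    img = image.astype(float)
    lap = convolve(img, kernel, mode="reflect")  # response peaks at discontinuities
    # For this kernel's sign convention, subtracting the response boosts edges.
    return np.clip(img - alpha * lap, 0, 255).astype(np.uint8)
```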
3.2. Unsharp Mask (UM). An "unsharp mask" is a simple sharpening operator, contrary to what its name might lead you to believe. The name derives from the fact that it sharpens edges through a process that subtracts an unsharp (smoothed) version of a picture from the reference picture, detecting the presence of edges and forming the unsharp mask (effectively a high-pass filter) [19]. Sharpening can bring out the texture and detail of an image; this is probably the most common type of sharpening and can be applied to nearly any image. The unsharp mask cannot add artifacts or additional detail to the image, but it can greatly enhance the appearance by increasing small-scale acutance [33, 34], making important details easier to identify. The unsharp mask method is widely used in photographic and printing applications for crispening edges. Sharpening does not change the image size, which remains the same; an unsharp mask improves the sharpness of an image by increasing the acutance only. In the unsharp masking technique, the sharper image a(x, y) is produced from the input image b(x, y) as

$a(x, y) = b(x, y) + \lambda\, c(x, y),$  (2)

where c(x, y) is the correction signal, computed as the output of a high-pass filter, and λ is a positive scaling factor that controls the level of contrast enhancement achieved at the output [32, 35]. Unsharp masking of the "Knee MRI medical image" demonstrates the difference between the source, LF-sharpened, and unsharp-masked images.
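Equation (2) translates directly into code. In this hedged sketch, the correction signal c(x, y) is built by subtracting a Gaussian-blurred copy of the input (one standard high-pass construction); `sigma` and `lam` are illustrative values, not settings taken from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(b: np.ndarray, lam: float = 1.5, sigma: float = 2.0) -> np.ndarray:
    """a(x, y) = b(x, y) + lambda * c(x, y), with c the high-pass correction signal."""
    b = b.astype(float)
    c = b - gaussian_filter(b, sigma=sigma)  # unsharp (blurred) copy subtracted out
    a = b + lam * c                          # positive lambda controls the boost
    return np.clip(a, 0, 255).astype(np.uint8)
```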
3.3. LF + DFT Method. The hybrid sharpening technique (LF + DFT) is proposed in this study for multifocus image fusion. The hybrid approach merges the advantages of the LF and DFT methods. The LF is used to recognize the meaningful discontinuities in an image, i.e., edge information or edge detection; in other words, the LF is a derivative operator used to find the regions of rapid change in the picture. A rapid change in the image corresponds to sudden changes in frequency, from low frequency to high frequency [36]. The DFT is a common approach used to compute the frequency information in discrete form, and frequency information is considered an important tool in picture enhancement [33, 37]. Therefore, to obtain a beneficial way of sharpening, the frequency information of the Fourier transform is combined with the second-derivative masking of the Laplacian filter in the novel technique. The method involves a conversion from the spatial domain to the frequency domain and back (see equations (3) and (4)), which is the reason for calling it a cross-domain method. The framework of the proposed approach is shown in Figure 4.

Figure 4: Framework of the proposed approach.

For a two-dimensional image of size M × N, the DFT equation is given as follows:

$F(x, y) = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f(m, n)\, e^{-j 2\pi (x m / M + y n / N)},$  (3)

where f(m, n) is the spatial-domain image and the exponential term is the basis function corresponding to each point F(x, y) in Fourier space. The formulation can be construed as follows: the value of each point F(x, y) is acquired by multiplying the spatial image with the corresponding basis function and summing the result. The basis functions are sine and cosine waves of increasing frequency; F(0, 0) represents the DC component of the image, which corresponds to the average brightness, and F(M − 1, N − 1) represents the highest frequency.

Similarly, the frequency-domain image can be transformed back (inverse transform) to the spatial domain. The inverse transform is as follows:

$f(a, b) = \frac{1}{MN} \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} F(k, l)\, e^{\,j 2\pi (k a / M + l b / N)}.$  (4)

In the proposed technique, the Laplacian of equation (1) is combined with the Fourier transform of equation (3):

$L(\Delta) = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \left(\Delta^2 I\right) e^{-j 2\pi (x m / M + y n / N)}.$  (5)

The apparent sharpness of an image is increased through the combination of two factors, resolution and acutance. Resolution is straightforward and not subjective: it is the size of the image file in terms of the number of pixels. With all other factors remaining equal, the higher the resolution of the image (the more pixels it has), the sharper it can be. Acutance, a measure of the contrast at an edge, is subjective and somewhat more complicated; there is no unit for acutance, and an edge either appears to have contrast or it does not. Edges with more contrast appear better defined to the human visual system. LF + DFT sharpening of the "Knee MRI medical image" demonstrates the difference between the source, LF, unsharp masking, and LF + DFT sharpened images in Figure 5.

Figure 5: The sharpened results of the "Knee MRI medical image": (a) source image, (b) image sharpened by Laplacian filter, (c) image sharpened by unsharp masking, and (d) LF + DFT sharpened image.
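The paper does not spell out exactly how equations (1) and (3)–(5) are composed, so the following Python sketch is only one plausible reading of the cross-domain flow: form the Laplacian response, move both the image and the response into the frequency domain, combine them there, and invert back to the spatial domain. The combination weight `alpha` is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve

def lf_dft_sharpen(image: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """A hedged sketch of hybrid LF + DFT sharpening (eqs. (1), (3)-(5))."""
    img = image.astype(float)
    kernel = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    lap = convolve(img, kernel, mode="reflect")  # eq. (1): second-derivative mask
    F_img = np.fft.fft2(img)                     # eq. (3): DFT of the image
    F_lap = np.fft.fft2(lap)                     # eq. (5): DFT of the Laplacian response
    F_sharp = F_img - alpha * F_lap              # emphasize the high-frequency content
    sharp = np.real(np.fft.ifft2(F_sharp))       # eq. (4): inverse DFT back to space
    return np.clip(sharp, 0, 255).astype(np.uint8)
```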
4. Performance Metrics

The quantitative evaluation aims to assess the performance of the proposed methods and the existing methods on various measures, and every measure has its own properties. Table 1 briefly describes the well-known statistical metrics; $I_z$ and $I_x$ denote the reference (source) images and $I_f$ or $I_p$ the fused result, all of size M × N.

Table 1: Measurements to evaluate the experimental results.

RMSE. Generally used to calculate the difference between the true image and the resultant image by directly computing the variations in pixel values; it strongly indicates the spectral quality of the resultant image. Formula: $\mathrm{RMSE} = \sqrt{\frac{1}{MN} \sum_{a=1}^{M} \sum_{b=1}^{N} \left(I_z(a,b) - I_f(a,b)\right)^2}$. Best fusion: lower value (close to zero) [38].

PFE. Calculated as the norm of the difference between the corresponding pixels of the true and resultant images relative to the norms of the true and resultant images. Formula: $\mathrm{PFE} = \left[\frac{\|I_z - I_f\|}{\|I_z\|} + \frac{\|I_z - I_f\|}{\|I_f\|}\right] \times 100$. Best fusion: lower value (equal to zero) [2].

MAE. Gives the mean absolute error of the corresponding pixels in the true images and the resultant image. Formula: $\mathrm{MAE} = \frac{1}{MN} \sum_{a=1}^{M} \sum_{b=1}^{N} \left|I_z(a,b) - I_p(a,b)\right| + \frac{1}{MN} \sum_{a=1}^{M} \sum_{b=1}^{N} \left|I_x(a,b) - I_p(a,b)\right|$. Best fusion: lower value (equal to zero) [2].

Entropy. Entropy (E) is a significant quantitative metric, which can be used to distinguish the texture, appearance, or information content of the image. Formula: $E = -\sum_{k=0}^{G-1} S_k \log S_k$. Best fusion: higher value [18].

SNR. The performance measure used to find the ratio between information and noise of the resultant image. Formula: $\mathrm{SNR} = 10 \log_{10} \left( \frac{\sum_{a=1}^{M} \sum_{b=1}^{N} I_z(a,b)^2}{\sum_{a=1}^{M} \sum_{b=1}^{N} \left(I_z(a,b) - I_p(a,b)\right)^2} \right)$. Best fusion: higher value [39].

PSNR. One of the most significant metrics and the most commonly used in fusion; it specifically measures the spatial quality of the image. The computation uses the number of grey levels G relative to the mean squared difference between identical pixels in the true and resultant images. Formula: $\mathrm{PSNR} = 20 \log_{10} \left( \frac{G^2}{\frac{1}{MN} \sum_{a=1}^{M} \sum_{b=1}^{N} \left(I_z(a,b) - I_p(a,b)\right)^2} \right)$. Best fusion: higher value [40].

CC. The CORR is a quantitative metric that demonstrates the correlation between the true image and the resultant image; when the two images look the same, the value is near one, and when they are dissimilar, the value is near zero. Formula: $\mathrm{Corr} = \frac{2 C_{zp}}{C_z + C_p}$, where $C_{zp} = \sum_{a=1}^{M} \sum_{b=1}^{N} I_z(a,b)\, I_p(a,b)$, $C_z = \sum_{a=1}^{M} \sum_{b=1}^{N} I_z(a,b)^2$, and $C_p = \sum_{a=1}^{M} \sum_{b=1}^{N} I_p(a,b)^2$. Best fusion: higher value (close to +1) [30, 41].

ERGAS. Used to calculate the quality of the resultant image in terms of the normalized average error of each channel (band) of the processed image. Formula: $\mathrm{ERGAS} = 100 \frac{d_a}{d_b} \left[ \frac{1}{n} \sum_{i=1}^{n} \frac{\mathrm{RMSE}_i^2}{\mathrm{mean}_i^2} \right]^{1/2}$. Best fusion: lower value (equal to zero) [42].
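Several of the metrics in Table 1 reduce to a few NumPy lines. The sketch below assumes 8-bit images (peak value 255) and uses the conventional 10·log10 form of PSNR rather than the 20·log form printed in Table 1; `iz` is the reference image and `ip` the fused result:

```python
import numpy as np

def rmse(iz: np.ndarray, ip: np.ndarray) -> float:
    d = iz.astype(float) - ip.astype(float)
    return float(np.sqrt(np.mean(d ** 2)))

def psnr(iz: np.ndarray, ip: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((iz.astype(float) - ip.astype(float)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

def entropy(img: np.ndarray, levels: int = 256) -> float:
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking the log
    return float(-np.sum(p * np.log2(p)))

def corr(iz: np.ndarray, ip: np.ndarray) -> float:
    iz, ip = iz.astype(float), ip.astype(float)
    # Corr = 2*C_zp / (C_z + C_p), as defined in Table 1.
    return float(2 * np.sum(iz * ip) / (np.sum(iz ** 2) + np.sum(ip ** 2)))
```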
5. Experimentation

5.1. Datasets. The experiments are performed on four nonmedical image sets: two grayscale image sets, "Clocks" and "Books," and two color image sets, "Toys" and "Building and card." The grayscale image sets are provided by the authors, and the color image sets are acquired from the "Lytro multifocus datasets" [43]. These image sets are used as test multifocus images for the experimental evaluation of the novel techniques. The size of the grayscale test images is 512 × 512 pixels, and the size of the color images is 520 × 520 pixels.

5.2. Experimental Results and Discussion. In this section, experiments are conducted on the different multifocus image sets for the proposed hybrid methods. The proposed hybrid methods, DWT + LF, DWT + unsharp masking, DWT + (LF + DFT), SWT + LF, SWT + unsharp masking, and SWT + (LF + DFT), are compared with the traditional methods: the average method (a spatial-domain method), the minimum method, DWT (a frequency-domain method), and SWT. The algorithms are implemented, and the simulations performed, in MATLAB 2016b. The resultant images are evaluated in two ways, quantitatively and qualitatively. For quantitative evaluation, eight well-known performance metrics, i.e., percentage fit error (PFE), entropy (E), correlation coefficient (CORR), peak signal to noise ratio (PSNR), relative dimensionless global error (ERGAS), mean absolute error (MAE), signal to noise ratio (SNR), and root mean square error (RMSE), are used to measure the performance of the resultant images of the old and new methods. The quantitative results of the new approaches are improved for the "Clocks," "Books," "Toys," "Building and card," and "Breast Medical (CT and MRI)" image sets, as shown in Tables 2–6. All the performance metrics show better results for the proposed approaches on all image sets, which demonstrates the capability of the new approaches in the fusion environment.

RMSE indicates the difference between the true image and the resultant image; the smallest values show excellent results. PFE computes the norm of the difference between the corresponding pixels of the true and resultant images relative to the norm of the true image; low values indicate superior results. MAE is the absolute error used to calculate and validate the difference between the resultant and reference images; here, the MAE values are small for the proposed methods on both image sets, a promising result. A large entropy value indicates good results; for the "Books" image set, the DWT technique has a large value, while on the "Clocks" image set, the proposed methods demonstrate impressive results. CORR quantifies the correlation between the true image and the resultant image; when the two images are similar, the value is near one. PSNR specifically measures the spatial quality of the image. SNR is used to find the ratio between information and noise in the resultant image. ERGAS measures the quality of the resultant image in terms of the normalized average error of each channel of the processed image. The quantitative results of the proposed methods are better than those of the traditional methods. According to the results shown in Figures 6–10, the SWT + (LF + DFT) method is superior among all proposed methods.

Qualitative analysis is a significant evaluation step in multifocus image fusion. Previously, fusion was performed on the plain multifocus images: all the fusion methods were applied directly to the multifocus images to improve the results. In this article, however, a new concept is introduced as a preprocessing step before fusion; this concept is proposed for the first time in a fusion environment. The preprocessing step sharpens the images.
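To show how the pieces fit together, here is a hedged sketch of the SWT-based half of the pipeline using PyWavelets. The fusion rule shown (average the approximation coefficients, keep the larger-magnitude detail coefficients) is a common wavelet-fusion choice assumed here, since the paper does not state its rule explicitly; the wavelet `db2` is likewise an assumption.

```python
import numpy as np
import pywt  # PyWavelets

def swt_fuse(img1: np.ndarray, img2: np.ndarray, wavelet: str = "db2", level: int = 1):
    """Fuse two same-sized grayscale images (dimensions divisible by 2**level)."""
    c1 = pywt.swt2(img1.astype(float), wavelet, level=level)
    c2 = pywt.swt2(img2.astype(float), wavelet, level=level)
    fused = []
    for (a1, (h1, v1, d1)), (a2, (h2, v2, d2)) in zip(c1, c2):
        approx = (a1 + a2) / 2.0                                    # average approximations
        pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)  # max-abs rule for details
        fused.append((approx, (pick(h1, h2), pick(v1, v2), pick(d1, d2))))
    out = pywt.iswt2(fused, wavelet)
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage: sharpen both source images first, then fuse, e.g.
# fused = swt_fuse(lf_dft_sharpen(left), lf_dft_sharpen(right))
```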
Table 2: Statistical comparisons of multifocus image fusion on the "Clocks" image set.

Methods                     RMSE     PFE      MAE     Entropy  SNR      PSNR     CC      ERGAS
Average method              28.4166  23.8202  7.8278  1.9823   14.5830  35.5127  0.9144  5.7748
Minimum method              11.5217  10.5229  4.4813  4.8810   18.6569  37.5496  0.9942  4.3994
DWT                         7.7077   7.0396   0.4880  7.8322   22.1487  39.2955  0.9976  2.9858
SWT                         7.5158   6.8643   0.4835  8.3824   22.3677  39.4050  0.9975  2.9862
DWT (Laplacian) proposed    6.9276   3.8344   0.4174  8.6432   24.5875  39.5099  0.9979  2.9839
DWT (unsharp) proposed      7.5207   4.4390   0.4166  8.6343   24.6678  39.5500  0.9976  2.9624
DWT (LF + DFT) proposed     6.1766   3.5184   0.4107  9.0001   26.7923  40.1006  0.9980  2.2845
SWT (Laplacian) proposed    6.9978   3.9638   0.4110  8.8432   25.0712  39.7517  0.9978  2.8676
SWT (unsharp) proposed      6.9049   3.9811   0.4101  8.7321   25.1449  39.7886  0.9975  2.8731
SWT (LF + DFT) proposed     5.6761   3.4278   0.4010  9.0112   26.8609  40.1349  0.9978  2.2589

Table 3: Statistical comparisons of multifocus image fusion on the "Books" image set.

Methods                     RMSE     PFE      MAE      Entropy  SNR      PSNR     CC      ERGAS
Average method              26.2368  25.2586  10.6240  7.9872   11.2489  33.9757  0.9024  10.0925
Minimum method              14.4007  13.8638  4.7984   12.3321  16.4595  36.5810  0.9900  6.7961
DWT                         10.9863  10.5767  0.1636   17.2384  18.8102  37.7563  0.9944  3.2366
SWT                         10.9503  10.5421  0.1635   18.0932  18.8386  37.7705  0.9945  3.2408
DWT (Laplacian) proposed    10.0025  8.9378   0.1703   18.7548  18.8151  37.8540  0.9921  2.8606
DWT (unsharp) proposed      10.7051  8.6659   0.1707   18.7384  18.7955  37.8190  0.9926  2.7746
DWT (LF + DFT) proposed     9.1990   8.7186   0.1636   21.3843  18.8614  39.5801  0.9964  2.4083
SWT (Laplacian) proposed    10.4319  8.3976   0.1604   18.6342  18.8665  37.8797  0.9933  2.7049
SWT (unsharp) proposed      10.4895  8.1775   0.1708   18.7832  18.8558  37.7342  0.9936  2.6349
SWT (LF + DFT) proposed     9.0836   8.2106   0.1633   22.3221  18.9047  39.5318  0.9968  2.0744

Table 4: Statistical comparisons of multifocus image fusion on the "Toys" image set.

Methods                     RMSE     PFE      MAE      Entropy  SNR      PSNR     CC      ERGAS
Average method              34.2848  25.2737  19.4244  1.3784   10.7959  28.8138  0.9141  8.0940
Minimum method              19.0227  14.0230  8.5127   4.37283  15.3948  35.3721  0.9867  5.8235
DWT                         12.7463  10.2732  2.0392   6.3726   20.8732  37.1203  0.9962  2.6384
SWT                         12.6532  9.5487   1.1458   6.2843   20.4538  37.0410  0.9959  2.7072
DWT (Laplacian) proposed    12.6489  9.7323   1.1092   6.3743   21.8972  38.2932  0.9963  2.4832
DWT (unsharp) proposed      12.0283  9.9378   1.2872   6.4732   21.2342  38.0023  0.9962  2.6323
DWT (LF + DFT) proposed     12.0213  9.6384   0.9372   6.9983   23.2112  38.9923  0.9969  2.1234
SWT (Laplacian) proposed    12.4213  9.2197   0.9203   6.3283   21.8222  38.2166  0.9953  2.3003
SWT (unsharp) proposed      11.9650  9.8131   0.9288   6.3263   20.8144  37.3150  0.9959  2.9886
SWT (LF + DFT) proposed     11.5382  9.2123   0.8812   7.5932   23.3721  39.3872  0.9964  2.0232

Table 5: Statistical comparisons of multifocus image fusion on the "Building and card" image set.

Methods                     RMSE     PFE      MAE      Entropy  SNR      PSNR     CC      ERGAS
Average method              31.3352  26.0768  18.9107  2.3554   6.0334   26.8074  0.9071  8.6514
Minimum method              17.6777  10.3879  5.4055   5.6654   14.3411  34.8047  0.9907  4.7054
DWT                         12.3245  8.6483   0.0563   8.6445   18.4388  37.4885  0.9932  2.0012
SWT                         11.3361  8.6096   0.0506   8.5664   18.8272  37.6201  0.9950  2.7695
DWT (Laplacian) proposed    10.9912  7.9874   0.0534   9.4743   20.1888  38.3732  0.9961  2.1021
DWT (unsharp) proposed      11.2323  7.9884   0.0532   9.8773   20.2981  38.1128  0.9961  2.1021
DWT (LF + DFT) proposed     9.8712   7.6653   0.0571   9.9933   21.2321  38.9901  0.9964  1.9221
SWT (Laplacian) proposed    10.4224  7.7726   0.0550   9.5543   9.5543   20.9489  0.9959  2.1117
SWT (unsharp) proposed      10.6771  8.3854   0.0524   9.5883   20.2085  38.2419  0.9967  2.2497
SWT (LF + DFT) proposed     8.7712   7.3623   0.0520   10.9877  19.0022  39.2872  0.9972  2.1023
Table 6: Statistical comparisons of multifocus image fusion on the "Medical images" set.

Methods                     RMSE     PFE      MAE      Entropy  SNR      PSNR     CC      ERGAS
Average method              33.0091  29.2135  14.6507  1.4345   8.6566   19.9864  0.9071  11.9876
Minimum method              19.4783  9.9898   6.4475   6.5432   14.3451  31.9047  0.9801  6.4365
DWT                         11.4902  9.4325   2.4554   11.2144  21.4338  32.0985  0.9833  3.0766
SWT                         10.9934  9.3212   1.3554   14.5434  22.8272  38.6287  0.9951  3.7695
DWT (Laplacian) proposed    10.0120  8.0546   1.4584   13.4532  25.1645  37.3632  0.9955  3.0021
DWT (unsharp) proposed      10.2221  7.8760   1.0543   12.2233  24.2531  36.1668  0.9943  2.1981
DWT (LF + DFT) proposed     8.1100   6.5432   1.0098   14.5435  28.4334  39.9881  0.9984  2.0001
SWT (Laplacian) proposed    10.4973  6.4924   1.0730   15.7644  28.5546  35.9549  0.9981  2.5414
SWT (unsharp) proposed      9.1203   7.3432   1.0845   15.5087  27.5432  44.4419  0.9967  2.3297
SWT (LF + DFT) proposed     7.1123   5.3332   1.0080   16.9438  33.4322  43.2542  0.9982  2.1221

Figure 6: The fusion results of the "Clocks" image set: (a) average fused, (b) minimum fused, (c) DWT fused, (d) SWT fused, (e) DWT + LF fused, (f) DWT + UM, (g) DWT + (LF + DFT), (h) SWT + LF, (i) SWT + UM, and (j) SWT + (LF + DFT).

Figure 7: The fusion results of the "Books" image set: (a) average fused, (b) minimum fused, (c) DWT fused, (d) SWT fused, (e) DWT + LF fused, (f) DWT + UM, (g) DWT + (LF + DFT), (h) SWT + LF, (i) SWT + UM, and (j) SWT + (LF + DFT).

Figure 8: The fusion results of the "Toys" image set: (a) average fused, (b) minimum fused, (c) DWT fused, (d) SWT fused, (e) DWT + LF fused, (f) DWT + UM, (g) DWT + (LF + DFT), (h) SWT + LF, (i) SWT + UM, and (j) SWT + (LF + DFT).

Figure 9: The fusion results of the "Building and card" image set: (a) average fused, (b) minimum fused, (c) DWT fused, (d) SWT fused, (e) DWT + LF fused, (f) DWT + UM, (g) DWT + (LF + DFT), (h) SWT + LF, (i) SWT + UM, and (j) SWT + (LF + DFT).

Figure 10: The fusion results of the "Medical images": (a) average fused, (b) minimum fused, (c) DWT fused, (d) SWT fused, (e) DWT + LF fused, (f) DWT + UM, (g) DWT + (LF + DFT), (h) SWT + LF, (i) SWT + UM, and (j) SWT + (LF + DFT).

Figure 11: The sharpened results of the "Clocks" image set: (a, b) two source images, (c, d) images sharpened by the Laplacian filter, (e, f) images sharpened by unsharp masking, and (g, h) LF + DFT sharpened images.

Figure 12: The sharpened results of the "Books" image set: (a, b) two source images, (c, d) images sharpened by the Laplacian filter, (e, f) images sharpened by unsharp masking, and (g, h) LF + DFT sharpened images.

Figure 13: The sharpened results of the "Toys" image set: (a, b) two source images, (c, d) images sharpened by the Laplacian filter, (e, f) images sharpened by unsharp masking, and (g, h) LF + DFT sharpened images.
Figure 14: The sharpened results of the "Building and card" image set: (a, b) two source images, (c, d) images sharpened by the Laplacian filter, (e, f) images sharpened by unsharp masking, and (g, h) LF + DFT sharpened images.

Figure 15: The sharpened results of the "Medical images": (a, b) CT and MRI medical images, (c, d) images sharpened by the Laplacian filter, (e, f) images sharpened by unsharp masking, and (g, h) LF + DFT sharpened images.

Three image sharpening techniques are used in the preprocessing step: the Laplacian filter, unsharp masking, and LF + DFT. In Figures 11–15, (a) and (b) are the source images, (c) and (d) are the images sharpened by the Laplacian filter, (e) and (f) are the images sharpened by unsharp masking, and (g) and (h) are the images sharpened by LF + DFT, for the "Clocks," "Books," "Toys," "Building and card," and medical image sets, respectively.

6. Conclusions

In this paper, we mainly try to solve the problem of the out-of-focus, blurred part of an image. To achieve this goal, we introduced a new concept of sharpening the edges, i.e., enhancing the images, before fusing the multifocus source images. The preprocessing step is done by the Laplacian filter (to sharpen the edges), by unsharp masking, and by the newly proposed Laplacian filter + discrete Fourier transform (LF + DFT) sharpening method. The sharpening concept is proposed for the first time in a fusion environment, and the experimental results demonstrate the superiority of the new concept. After sharpening the images, fusion is performed by the stationary wavelet transform (SWT) and discrete wavelet transform (DWT) techniques. The experiments are conducted on color and grayscale datasets to validate the effectiveness of the proposed technique. Five datasets, "Clock," "Book," "Toy," "Building and Card," and "Breast Medical CT and MRI images," are used for experimentation. The proposed technique is evaluated visually and statistically; for statistical assessment, we used eight well-known metrics, namely percentage fit error, entropy, correlation coefficient, peak signal to noise ratio, relative dimensionless global error, mean absolute error, signal to noise ratio, and root mean square error, which indicate that the new method outperforms all state-of-the-art methods. One major future challenge is that the proposed scheme is not time efficient compared with simple fusion methods, because of the preprocessing step performed before image fusion.

Data Availability

The datasets used in this research are taken from the UCI ML Learning Repository available at https://archive.ics.uci.edu/.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

References

[1] Y. Chen, J. Guan, and W. K. Cham, "Robust multi-focus image fusion using edge model and multi-matting," IEEE Transactions on Image Processing, vol. 27, no. 3, pp. 1526–1541, 2017.
[2] S. Shah Khan, M. Khan, and Y. Alharbi, "Multi focus image fusion using image enhancement techniques with wavelet transformation," International Journal of Advanced Computer Science and Applications, vol. 11, no. 5, 2020.
[3] X. Li, F. Zhou, and J. Li, "Multi-focus image fusion based on the filtering techniques and block consistency verification," in Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), pp. 453–457, IEEE, Chongqing, China, June 2018.
[4] S. S. Khan, Q. Ran, M. Khan, and M. Zhang, "Hyperspectral image classification using nearest regularized subspace with Manhattan distance," Journal of Applied Remote Sensing, vol. 14, no. 3, Article ID 032604, 2019.
[5] G. Kaur and P. Kaur, "Survey on multifocus image fusion techniques," in Proceedings of the 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), March 2016.
[6] R. Nandhini Abirami, P. M. Durai Raj Vincent, K. Srinivasan, U. Tariq, and C.-Y. Chang, "Deep CNN and deep GAN in computational visual perception-driven image analysis," Complexity, vol. 2021, Article ID 5541134, 30 pages, 2021.
[7] Y. Xu, S. E. Smith, S. Grunwald, A. Abd-Elrahman, and S. P. Wani, "Effects of image pansharpening on soil total nitrogen prediction models in South India," Geoderma, vol. 320, pp. 52–66, 2018.
[8] S. Li, X. Kang, L. Fang, J. Hu, and H. Yin, "Pixel-level image fusion: a survey of the state of the art," Information Fusion, vol. 33, pp. 100–112, 2017.
[9] G. Pajares and J. Manuel de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.
[10] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and N. Canagarajah, "Pixel- and region-based image fusion with complex wavelets," Information Fusion, vol. 8, no. 2, pp. 119–130, 2007.
[11] L. Tang, F. Zhao, and Z.-G. Zhao, "The nonsubsampled contourlet transform for image fusion," in Proceedings of the 2007 International Conference on Wavelet Analysis and Pattern Recognition, vol. 1, November 2007.
[12] S. S. Khan, M. Khan, and Q. Ran, "Multi-focus color image fusion using laplacian filter and discrete fourier transformation with qualitative error image metrics," in Proceedings of the 2nd International Conference on Control and Computer Vision, Jeju Island, South Korea, June 2019.
[13] Y. Liu and Z. Wang, "Simultaneous image fusion and denoising with adaptive sparse representation," IET Image Processing, vol. 9, no. 5, pp. 347–357, 2015.
[14] Q. Zhang and M. D. Levine, "Robust multi-focus image fusion using multi-task sparse representation and spatial context," IEEE Transactions on Image Processing, vol. 25, no. 5, pp. 2045–2058, 2016.
[15] K. Kodama and A. Kubota, "Efficient reconstruction of all-in-focus images through shifted pinholes from multi-focus images for dense light field synthesis and rendering," IEEE Transactions on Image Processing, vol. 22, no. 11, pp. 4407–4421, 2013.
[16] K. Liang, L. Zhang, K. Zhang, J. Sun, Q. Han, and Z. Jin, "A multi-focus image fusion method via region mosaicking on laplacian pyramids," PLoS One, vol. 13, no. 5, 2018.
[17] I. Sri Wahyuni, Multi-focus image fusion using local variability, PhD thesis, University of Burgundy, Dijon, France.
[18] U. Javed, M. M. Riaz, A. Ghafoor, S. S. Ali, and T. A. Cheema, "MRI and PET image fusion using fuzzy logic and image local features," The Scientific World Journal, vol. 2014, Article ID 708075, 8 pages, 2014.
[19] S. S. Khan, Q. Ran, M. Khan, and Z. Ji, "Pan-sharpening framework based on laplacian sharpening with Brovey," in Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), December 2019.
[20] V. Yilmaz, C. Serifoglu Yilmaz, O. Gungör, and J. Shan, "A genetic algorithm solution to the gram-schmidt image fusion," International Journal of Remote Sensing, vol. 41, no. 4, pp. 1458–1485, 2020.
[21] B. Aiazzi, S. Baronti, and M. Selva, "Improving component substitution pansharpening through multivariate regression of MS + Pan data," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3230–3239, 2007.
[22] V. Vijayaraj, "A quantitative analysis of pansharpened images," Thesis, Mississippi State University, Starkville, MS, USA, 2004.
[23] J. Zhou, D. L. Civco, and J. A. Silander, "A wavelet transform method to merge Landsat TM and SPOT panchromatic data," International Journal of Remote Sensing, vol. 19, no. 4, pp. 743–757, 1998.
[24] A. Siddique, B. Xiao, W. Li, Q. Nawaz, and I. Hamid, "Multi-focus image fusion using block-wise color-principal component analysis," in Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), IEEE, Chongqing, China, June 2018.
[25] S. S. Khan, "Research on image classification and fusion based on machine learning techniques," Master thesis, Beijing University of Chemical Technology, Beijing, China, 2020.
[26] Y. Yang, W. Wan, S. Huang, P. Lin, and Y. Que, "A novel pan-sharpening framework based on matting model and multiscale transform," Remote Sensing, vol. 9, no. 4, p. 391, 2017.
[27] W. S. Mokrzycki and M. A. Samko, "Gradient based method of color edges finding," in Image Processing & Communications Challenges, pp. 429–438, Exit, New Jersey, NJ, USA, 2009.
[28] B. K. Shreyamsha Kumar, "Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform," Signal, Image and Video Processing, vol. 7, no. 6, pp. 1125–1143, 2013.
[29] Y. Boykov and V. Kolmogorov, "An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 9, pp. 1124–1137, 2004.
[30] P. Singh, M. Diwakar, X. Cheng, and A. Shankar, "A new wavelet-based multi-focus image fusion technique using method noise and anisotropic diffusion for real-time surveillance application," Journal of Real-Time Image Processing, vol. 18, no. 4, pp. 1051–1068, 2021.
[31] H.-M. Chen, S. Lee, R. M. Rao, M.-A. Slamani, and P. K. Varshney, "Imaging for concealed weapon detection: a tutorial overview of development in imaging sensors and processing," IEEE Signal Processing Magazine, vol. 22, no. 2, pp. 52–61, 2005.
[32] S. S. Khan, Q. Ran, and M. Khan, "Image pan-sharpening using enhancement based approaches in remote sensing," Multimedia Tools and Applications, vol. 79, no. 43, pp. 32791–32805, 2020.
[33] H. T. Mustafa, J. Yang, and M. Zareapoor, "Multi-scale convolutional neural network for multi-focus image fusion," Image and Vision Computing, vol. 85, pp. 26–35, 2019.
[34] M. Trentacoste, R. Mantiuk, W. Heidrich, and F. Dufrot, "Unsharp masking, countershading and halos: enhancements or artifacts?" Computer Graphics Forum, vol. 31, no. 2, 2012.
[35] A. Polesel, G. Ramponi, and V. J. Mathews, "Image enhancement via adaptive unsharp masking," IEEE Transactions on Image Processing, vol. 9, no. 3, pp. 505–510, 2000.
[36] L. Li, H. Ma, Z. Jia, and Y. Si, "A novel multiscale transform decomposition based multi-focus image fusion framework," Multimedia Tools and Applications, vol. 80, no. 8, pp. 12389–12409, 2021.
[37] N. Beaudoin and S. S. Beauchemin, "An accurate discrete Fourier transform for image processing," in Object Recognition Supported by User Interaction for Service Robots, vol. 3, IEEE, 2002.
[38] L. F. Zoran, "Quality evaluation of multiresolution remote sensing image fusion," U.P.B. Scientific Bulletin Series C, vol. 71, pp. 38–52, 2009.
[39] Yuhendra, I. Alimuddin, J. T. S. Sumantyo, and H. Kuze, "Assessment of pan-sharpening methods applied to image fusion of remotely sensed multi-band data," International Journal of Applied Earth Observation and Geoinformation, vol. 18, pp. 165–175, 2012.
[40] R. Gharbia, A. E. Hassanien, A. H. El-Baz, M. Elhoseny, and M. Gunasekaran, "Multi-spectral and panchromatic image fusion approach using stationary wavelet transform and swarm flower pollination optimization for remote sensing applications," Future Generation Computer Systems, vol. 88, pp. 501–511, 2018.
[41] X. X. Zhu and R. Bamler, "A sparse image fusion algorithm with application to pan-sharpening," IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 5, pp. 2827–2836, 2013.
[42] Q. Du, N. H. Younan, R. King, and V. P. Shah, "On the performance evaluation of pan-sharpening techniques," IEEE Geoscience and Remote Sensing Letters, vol. 4, no. 4, pp. 518–522, 2007.
[43] M. Nejati, S. Samavi, and S. Shirani, "Multi-focus image fusion using dictionary-based sparse representation," Information Fusion, vol. 25, pp. 72–84, 2015.

Hybrid Sharpening Transformation Approach for Multifocus Image Fusion Using Medical and Nonmedical Images

Loading next page...
 
/lp/hindawi-publishing-corporation/hybrid-sharpening-transformation-approach-for-multifocus-image-fusion-HWCdCBfgUp

References (48)

Publisher
Hindawi Publishing Corporation
Copyright
Copyright © 2021 Sarwar Shah Khan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ISSN
2040-2295
eISSN
2040-2309
DOI
10.1155/2021/7000991
Publisher site
See Article on Publisher Site

Abstract

Hindawi Journal of Healthcare Engineering Volume 2021, Article ID 7000991, 17 pages https://doi.org/10.1155/2021/7000991 Research Article Hybrid Sharpening Transformation Approach for Multifocus Image Fusion Using Medical and Nonmedical Images 1,2 2 3 4 Sarwar Shah Khan , Muzammil Khan , Yasser Alharbi , Usman Haider, 2 5 Kifayat Ullah, and Shahab Haider Department of Software Engineering, University of Sialkot, Sialkot 51310, Pakistan Department of Computer & Software Technology, University of Swat, Swat 19130, Pakistan College of Computer Science & Engineering, University of Hail, Ha’il, Saudi Arabia Ghulam Ishaq Khan Institute of Engineering Science and Technology, Topi Swabi, Pakistan Department of Computer Science, City University of Science and IT, Peshawar, Pakistan Correspondence should be addressed to Muzammil Khan; muzammilkhan86@gmail.com Received 2 May 2021; Accepted 18 October 2021; Published 11 December 2021 Academic Editor: Mian Muhammand Sadiq Fareed Copyright © 2021 Sarwar Shah Khan et al. +is is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In this study, we introduced a preprocessing novel transformation approach for multifocus image fusion. In the multifocus image, fusion has generated a high informative image by merging two source images with different areas or objects in focus. Acutely the preprocessing means sharpening performed on the images before applying fusion techniques. In this paper, along with the novel concept, a new sharpening technique, Laplacian filter + discrete Fourier transform (LF + DFT), is also proposed. +e LF is used to recognize the meaningful discontinuities in an image. DFTrecognizes that the rapid change in the image is like sudden changes in the frequencies, low-frequency to high-frequency in the images. +e aim of image sharpening is to highlight the key features, identifying the minor details, and sharpen the edges while the previous methods are not so effective. To validate the effectiveness the proposed method, the fusion is performed by a couple of advanced techniques such as stationary wavelet transform (SWT) and discrete wavelet transform (DWT) with both types of images like grayscale and color image. +e experiments are performed on nonmedical and medical (breast medical CTand MRI images) datasets. +e experimental results demonstrate that the proposed method outperforms all evaluated qualitative and quantitative metrics. Quantitative assessment is performed by eight well-known metrics, and every metric described its own feature by which it is easily assumed that the proposed method is superior. +e experimental results of the proposed technique SWT (LF + DFT) are summarized for evaluation matrices such as RMSE (5.6761), PFE (3.4378), MAE (0.4010), entropy (9.0121), SNR (26.8609), PSNR (40.1349), CC (0.9978), and ERGAS (2.2589) using clock dataset. image (one image) with more information, where all the 1. Introduction parts of the image are entirely focused. 
+e practical In the field of image fusion, the subfield multifocus image technique of multifocus image fusion should need to ac- fusion is one of the most significant and valuable ap- complish the requirements that all the information of the proaches to handle the problem of defocusing that some focused regions in the source images is preserved in the parts of the image are not in focus and blurred due to the resultant image [1]. Due to this, the resulting image is well- limited depth of focus in the optical lens of traditional informative and complete. Multifocus image fusion is cameras or large aperture and microscopes cameras. In applicable in a wide range of applications such as envi- multifocus image fusion, various images of a similar scene ronmental monitoring, image analysis [2], military tech- but with different focus settings can be merged into a signal nology, medical imaging [3], remote sensing, hyperspectral 2 Journal of Healthcare Engineering Section 5 gives the experimental results and discussion, and image analysis [4], computer vision, object recognition [5], and image deblurring [6]. the paper is concluded in Section 6. In the multifocus image, fusion has been introduced as a large number of techniques over the past couple of decades; 2. Literature Study some of them are very popular methods and achieve high accuracy, such as stationary wavelet transform (SWT) [7], Multifocus image fusion is one of the most significant areas discrete wavelet transform (DWT), dual-tree complex of image processing, and a lot of advanced techniques have wavelet transform (DT-CWT), and discrete cosine trans- been proposed in a couple of decades. Several works have form (DCT) [2]. Most multifocus image fusion techniques been carried out in the spatial domain. Principal com- are divided into four major classes [1, 8]. +e first category is ponent analysis (PCA) is the most frequently used method multiscale decomposition or frequency domain techniques and is specially designed to generate visible results such as wavelet transformation [8, 9], complex wavelet regarded as sharp edges and highly preserved spatial transformation [1, 10], nonsubsampled contourlet trans- characteristics [21]. +e intensity hue saturation (IHS) form [11], DWT [2], and SWT [12]. +e second category is technique effectively transforms the image from red, green, sparse representation techniques like an adaptive SR model and blue (RGB) domain into spatial (I) and spectral (H, S) proposed in [13] for simultaneous image fusion and information [22]. +e PCA and IHS have one significant denoising and multitask sparse representation technique advantage: both can use an arbitrary number of channels [14]. +e third category of techniques is based on compu- [23]. Brovey technique is mathematical formulas of the tational photography, such as light-field rendering [15]. +is Brovey transform (BT), introduced by American scientist kind of technique finds more of the physical formation of Bob Brovey. BT is different sources that capture a simple multifocus images and reconstructs the all-in-focus images. technique for merging the information. Brovey is also +e last category of techniques performed in the spatial called the color normalization transform (CNT) because it domain, which can make full use of the spatial context and involves a red, green, blue (RGB) color transform ap- provide spatial consistency or spatial domain, includes av- proach [24]. 
Average and maximum/minimum selection is eraging [2, 16], minimum [2,17], intensity hue saturation also spatial-domain method [25]. Many spatial-domain (IHS) [2,18], principal component analysis (PCA) [2,19], methods are complicated and time-consuming, and these and Gram–Schmidt [20] techniques. techniques produce poor results because they usually In this paper, a new concept has been proposed in the produce spectral distortions in the fused images, and the image fusion environment for multifocus image fusion. +e produced image is of low contrast, which contains less key contributions of this work are summarized as follows: information comparatively. Image fusion is also based on frequency domain tech- (i) +e new concept is that image enhancement or niques such as discrete cosine transform (DCT), the fre- image sharpening techniques are used before image quency information (pixels) is very effective in obtaining the fusion; in other words, the preprocessed step is details and outlines of an image, and DCT is the proper performed before applying image fusion techniques. working mechanism with frequencies. It provides a fast and (ii) +e preprocessed step is beneficial before the image noncomplex solution because it uses only cosine compo- fusion because the sharpening methods are helpful nents for the transformation. +e IDCT reconstructs the for recognizing the meaningful discontinuities in an original pixel values from the frequencies acquired from image, i.e., edges information or edges detection. DCT [26]. +e discrete cosine harmonic wavelet transform (DC-HWT) is the advanced version of DCT. In DC-HWT, (iii) All the standard techniques of image fusion have the signal is decomposed by grouping the DCT coefficients directly fused the images and generated the resul- similarly to DFT coefficients except for the conjugate op- tant image. In this work, first, the source images erations in laying the coefficients symmetrical (accurate as were enhanced, using the proposed hybrid en- DCT). hancement method such as LF + DFT (Laplacian Further, symmetric placement is also not significant filter + discrete Fourier transform) and other pop- due to the definition of DCT [27]. +ese groups’ inverse ular enhancement methods (Laplacian filter (LF) DCT (IDCT) results in discrete cosine harmonic wavelet and unsharp masking (UM)). coefficients (DC-HWCs). +e DCTof these processed sub- (iv) Second, the enhanced images were fused by popular bands (DC-HWCs) results in sub-band DCT coefficients, fusion methods such as DWT, SWT, and generated which are repositioned in their corresponding positions to more informative and meaningful resultant images retrieve the overall DCT spectrum at the original sampling as demonstrated in Figure 1. +e performance of the rate. Details of DC-HWT are provided in reference [28]. novel proposed method is outperformed as com- +e dual-tree complex wavelet transform (DT-CWT) is pared with the state-of-art methods. based on a couple of parallel trees, the first one represents +e rest of the paper is organized as follows. Section 2 the odd samples, and the second one represents the actual briefly describes the related work of multifocus image fusion. samples generated at the first level. +e parallel trees Section 3 describes the proposed methodology, such as render the signal delays necessary for each level and, Laplacian filter + discrete Fourier transform with DWT and therefore, eradicate aliasing effects and attain shift-in- SWT. Section 4 shortly describes the performance measures. 
The discrete wavelet transform (DWT) is a mathematical tool introduced in the 1980s, and it is an instrumental technique for image fusion in the wavelet transformation process [1], but it has the following drawbacks: it retains only vertical and horizontal features, it lacks shift invariance, it suffers from ringing artifacts that reduce the quality of the resultant fused image, it lacks shift dimensionality, and it is not suitable at edge locations because edges are missed during the process. The DWT is not a time-invariant transformation, which means that "with periodic signal extension, the DWT of a translated version of a signal X is not, in general, the translated version of the DWT of X." A small numerical demonstration of this point, and of the SWT's remedy, follows below.

The stationary wavelet transform (SWT) is a wavelet transform developed to overcome the lack of translation invariance of the DWT. The SWT is an entirely shift-invariant transform, which up-samples the filters by inserting zeros among the filter coefficients to avoid the down-sampling step of the decimated approach [2]. It provides improved time-frequency localization, and the design is simple. Appropriate high-pass and low-pass filters are applied to the data at each level, producing two sequences at the next level. In the decimated approach, the filters are applied first to the rows and then to the columns [7, 30]. The SWT filter bank structure is given in Figure 2. The images are broken down into horizontal and vertical approximations by employing column-wise and row-wise low-pass and high-pass filters [31]. The same filtration decomposes elements row-wise and column-wise to acquire the vertical, horizontal, and diagonal approximations. The low-pass and high-pass filters preserve the low and high frequencies, respectively, and provide detailed information at the respective frequencies.

Figure 1: Systematic approach for multifocus image fusion.

Figure 2: SWT filter bank structure.
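The shift-variance property quoted above, and the SWT's remedy for it, can be checked numerically. The following sketch is illustrative only (not the paper's code) and assumes the Wavelet Toolbox; it compares both transforms on a signal and a translated copy of it:

```matlab
% DWT is shift-variant; SWT coefficients simply translate with the signal.
x  = sin(linspace(0, 8*pi, 64));          % test signal, length 64 = 2^6
xs = circshift(x, 1, 2);                  % the same signal shifted by one sample
[cA,  ~] = dwt(x,  'db2');                % decimated (classical) DWT
[cAs, ~] = dwt(xs, 'db2');
w  = swt(x,  1, 'db2');                   % undecimated SWT, one level
ws = swt(xs, 1, 'db2');
disp(norm(cAs - circshift(cA, 1, 2)));    % generally nonzero: shift-variant
disp(norm(ws  - circshift(w,  1, 2)));    % ~0: coefficients shift with the signal
```

The decimation step is what breaks the translation property; because the SWT never down-samples, a circular shift of the input produces the same circular shift of its coefficients.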
3. Proposed Methodology

In this article, a novel idea is proposed that is involved for the first time in multifocus image fusion to increase the accuracy (the visibility of objects). The novel concept is a preprocessing of the images before fusion. The fusion is performed by two standard methods, DWT and SWT, to validate the proposed techniques. The complete process is demonstrated in Figure 3, and the proposed techniques are elaborated as follows.

Figure 3: The abstract flow-chart of the proposed scheme.

3.1. Laplacian Filter (LF). The Laplacian filter of an image highlights areas of rapid intensity change; hence, the LF is used for edge-sharpening [27, 30, 32]. This operator is exceptionally good at identifying the critical information in an image: any feature with a sharp discontinuity will be sharpened by the LF. The Laplacian operator is also known as a derivative operator, used to identify an image's key features. The critical difference between the Laplacian filter and filters such as Prewitt, Roberts, Kirsch, Robinson, and Sobel [27, 33] is that all those filters use first-order derivative masks, whereas the LF is a second-order derivative mask. Sharpening the "Knee MRI medical image" with the LF demonstrates the difference between the source and the LF-sharpened image (Figure 5). The Laplacian equation is as follows:

\nabla^2 I = \left( \frac{\partial^2 G}{\partial x^2} + \frac{\partial^2 G}{\partial y^2} \right) \otimes I(x, y).  (1)

3.2. Unsharp Mask (UM). An "unsharp mask" is a simple image-sharpening operator, contrary to what its name might lead you to believe. The name is derived from the fact that it sharpens edges through a process that subtracts an unsharp (blurred) version of a picture from the reference picture and detects the presence of edges, making the unsharp mask effectively a high-pass filter [19]. Sharpening can bring out the texture and detail of the image; this is probably the most common type of sharpening and can be applied to nearly any image. The unsharp mask cannot add artifacts or additional detail to the image, but it can greatly enhance the appearance by increasing small-scale acutance [33, 34], making important details easier to identify. The unsharp mask method is commonly used in photographic and printing applications for crispening edges. In sharpening, the image size does not change; an unsharp mask improves the sharpness of an image by increasing the acutance only. In the unsharp masking technique, the sharper image a(x, y) is produced from the input image b(x, y) as

a(x, y) = b(x, y) + \lambda c(x, y),  (2)

where c(x, y) is the correction signal computed as the output of a high-pass filter and \lambda is a positive scaling factor that controls the level of contrast sweetening achieved at the output [32, 35]. Unsharp masking the "Knee MRI medical image" demonstrates the difference between the source, LF, and unsharp-masking sharpened images (Figure 5).
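A compact MATLAB sketch of the two classical sharpeners in equations (1) and (2) follows. It is illustrative only: the kernel parameter, the value of λ, and the Gaussian blur radius are assumptions, and the bundled test file stands in for any grayscale image:

```matlab
% Laplacian sharpening (eq. (1)) and unsharp masking (eq. (2)).
I   = im2double(imread('pout.tif'));      % any grayscale source image
h   = fspecial('laplacian', 0.2);         % 3x3 second-order derivative mask
lap = imfilter(I, h, 'replicate');        % Laplacian response (edge detail)
I_lf = I - lap;                           % subtract: this kernel's center is negative
lambda = 0.8;                             % assumed positive scaling factor
c    = I - imgaussfilt(I, 2);             % high-pass correction signal c(x, y)
I_um = I + lambda * c;                    % a = b + lambda * c, as in eq. (2)
```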
3.3. LF + DFT Method. The hybrid sharpening technique (LF + DFT) is proposed in this study for multifocus image fusion. The hybrid approach merges the advantages of the LF and DFT methods. The LF is used to recognize the meaningful discontinuities in an image, i.e., edge information or edge detection; in other words, the LF is a derivative operator used to find the regions of rapid change in the picture. A rapid change in the image corresponds to sudden changes in frequency, from low frequency to high frequency [36]. The DFT is a common approach used to compute the frequency information in discrete form, and frequency information is considered an important ingredient in picture enhancement [33, 37]. Therefore, to obtain a beneficial way of sharpening, the frequency information of the Fourier transform is combined with the second-order derivative masking of the Laplacian filter in the novel technique. The method involves a conversion from the spatial domain to the frequency domain and back (see equations (4) and (5)), which is why it is called a cross-domain method.

For a two-dimensional square image of size N \times N, the DFT equation is given as follows:

F(x, y) = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f(m, n) e^{-j 2\pi (xm/N + yn/N)},  (3)

where f(m, n) is the spatial-domain image and the exponential term is the basis function corresponding to each point F(x, y) in the Fourier space. The formulation can be construed as follows: the value of every point F(x, y) is acquired by multiplying the spatial image with the corresponding basis function and summing the results. The basis functions are cosine and sine waves of increasing frequency; i.e., F(0, 0) represents the DC component of the image, which corresponds to the average brightness, and F(N - 1, N - 1) represents the highest frequency.

Similarly, the frequency-domain image can be retranslated (inverse transformed) to the spatial domain, as shown in Figure 4. The inverse frequency transform is as follows:

f(a, b) = \frac{1}{MN} \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} F(k, l) e^{j 2\pi (ka/N + lb/N)}.  (4)

Figure 4: Framework of the proposed approach.

In the proposed technique, for a two-dimensional square image with N \times N resolution, the Laplacian equation (1) and the Fourier equation (3) are combined:

L(\nabla^2) = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} (\nabla^2 I) e^{-j 2\pi (km/N + ln/N)}.  (5)

The apparent sharpness of an image is thereby increased; apparent sharpness is the combination of two factors, resolution and acutance. Resolution is straightforward and not subjective: it is the size of the image in terms of the number of pixels. With all other factors equal, the higher the resolution of the image (the more pixels it has), the sharper it can be. Acutance, a measure of the contrast at an edge, is subjective and comparatively complicated; there is no unit for acutance, and you either judge that an edge has contrast or that it does not. Edges with more contrast appear more defined to the human visual system. LF + DFT sharpening of the "Knee MRI medical image" demonstrates the difference between the source, LF, unsharp-masking, and LF + DFT sharpened images in Figure 5.
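The paper does not spell out the exact operation applied between the forward transform of equation (5) and the inverse of equation (4), so the sketch below is a hedged reading of the hybrid method: take the Laplacian response through a DFT/inverse-DFT roundtrip (where any spectral weighting would sit) and add the recovered detail back to the source:

```matlab
% Hybrid LF + DFT sharpening, following equations (1) and (3)-(5).
I   = im2double(imread('pout.tif'));      % placeholder source image
lap = imfilter(I, fspecial('laplacian', 0.2), 'replicate');   % eq. (1)
F   = fft2(lap);                          % eq. (5): DFT of the Laplacian response
% ... any frequency-domain emphasis of the high frequencies would go here ...
lapb    = real(ifft2(F));                 % eq. (4): back to the spatial domain
I_sharp = I - lapb;                       % reinsert the edge detail
```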
4. Performance Metrics

The quantitative evaluation aims to assess the performance of the proposed and existing methods on various measures, and every measure has its own properties. Table 1 briefly describes the well-known statistical metrics.

5. Experimentation

5.1. Datasets. In this paper, the experiments are performed on four image sets: two grayscale image sets, "Clocks" and "Books," and two color image sets, "Toys" and "Building and card." The grayscale image sets are provided by the authors, and the color image sets are acquired from the "Lytro multifocus datasets" [43]. These image sets are used as multifocus test images for the experimental evaluation of the novel techniques. The size of the grayscale image sets (test images) is 512 × 512 pixels, and the size of the color image sets is 520 × 520 pixels.

5.2. Experimental Results and Discussion. In this section, the experimentation is conducted on different multifocus image sets for the proposed hybrid methods. The proposed hybrid methods, DWT + LF, DWT + unsharp masking, DWT + (LF + DFT), SWT + LF, SWT + unsharp masking, and SWT + (LF + DFT), are compared with the traditional methods, i.e., the average method (spatial domain), the minimum method, DWT (frequency domain), and SWT. The algorithms are implemented, and the simulations are performed, using MATLAB 2016b. The resultant images are evaluated in two ways, quantitatively and qualitatively. For quantitative evaluation, eight well-known performance metrics, i.e., percentage fit error (PFE), entropy (E), correlation coefficient (CORR), peak signal to noise ratio (PSNR), relative dimensionless global error (ERGAS), mean absolute error (MAE), signal to noise ratio (SNR), and root mean square error (RMSE), are used to measure the performance of the resultant images of the old and new methods. The quantitative results of the new approaches are improved for the "Clocks," "Books," "Toys," "Building and card," and "Breast Medical (CT and MRI images)" image sets, as shown in Tables 2–6. All the performance metrics show better results for the proposed approaches on all image sets, which shows the capability of the new approaches in a fusion environment.

RMSE indicates the difference between the true image and the resultant image; the smallest values show excellent results. PFE computes the norm of the difference between the corresponding pixels of the true and resultant images relative to the norm of the true image; low values indicate superior results. MAE is the absolute error that validates the difference between the resultant and reference images; here, the MAE values are small for the proposed methods on both image sets, a promising result. A large entropy value expresses good results; hence, for the "Books" image set the DWT technique has a large value, while on the "Clock" image set the proposed methods demonstrate impressive results. CORR is a quantitative measure of the correlation between the true image and the resultant image; when the two images are similar, the value is near one. PSNR is specifically used to measure the spatial quality of the image. SNR is the performance measure used to find the ratio between the information and the noise of the resultant image. ERGAS calculates the quality of the resultant image in terms of the normalized average error of each channel of the processed image. The quantitative results of the proposed methods perform well compared with the traditional methods. According to the results shown in Figures 6–10, the SWT + (LF + DFT) method is superior among all proposed methods.

Qualitative analysis is a significant evaluation approach in multifocus image fusion. Previously, researchers performed fusion on simple multifocus images: all the fusion methods were employed directly on the multifocus images to improve the results. However, in this article, a new concept is introduced, a preprocessing step before fusion, proposed for the first time in a fusion environment. The preprocessing step sharpens the images.
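For the fusion stage itself, a minimal SWT-based sketch is shown below. The max-absolute-detail / averaged-approximation rule and the file names are assumptions (the paper does not state its coefficient fusion rule), the images must share the same size, and the dimensions must be divisible by 2 for a one-level swt2:

```matlab
% Sharpen-then-fuse pipeline: one-level SWT fusion of two sharpened sources.
A = im2double(imread('left_focus.png'));  % hypothetical pre-sharpened image 1
B = im2double(imread('right_focus.png')); % hypothetical pre-sharpened image 2
[a1, h1, v1, d1] = swt2(A, 1, 'db2');     % undecimated decomposition of A
[a2, h2, v2, d2] = swt2(B, 1, 'db2');     % undecimated decomposition of B
pick = @(x, y) x .* (abs(x) >= abs(y)) + y .* (abs(x) < abs(y));
F = iswt2((a1 + a2) / 2, ...              % average the approximation band
          pick(h1, h2), pick(v1, v2), pick(d1, d2), 'db2');  % keep strong details
```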
Figure 5: The sharpened results of the "Knee MRI medical image": (a) source image, (b) image sharpened by the Laplacian filter, (c) image sharpened by unsharp masking, and (d) LF + DFT sharpened image.

Table 1: Measurements to evaluate the experimental results.

RMSE. Generally used to calculate the difference between the true image and the resultant image by directly comparing pixel values; it is a strong indicator of the spectral quality of the resultant image. Formula: RMSE = \sqrt{ \frac{1}{MN} \sum_{a=1}^{M} \sum_{b=1}^{N} (I_z(a, b) - I_f(a, b))^2 }. Best fusion: lower value (close to zero) [38].

PFE. Calculated as the norm of the difference between the corresponding pixels of the true and resultant images, relative to the norm of the true image. Formula: PFE = ( \| I_z - I_f \| / \| I_z \| ) \times 100. Best fusion: lower value (equal to zero) [2].

MAE. The mean absolute error of the corresponding pixels in the true and resultant images. Formula: MAE = \frac{1}{MN} \sum_{a=1}^{M} \sum_{b=1}^{N} | I_z(a, b) - I_p(a, b) |. Best fusion: lower value (equal to zero) [2].

Entropy. A significant quantitative metric that can distinguish the texture, appearance, or information content of the image. Formula: E = -\sum_{k=0}^{G-1} S_k \log_2 S_k. Best fusion: higher value [18].

SNR. The ratio between the information and the noise of the resultant image. Formula: SNR = 10 \log_{10} \left( \sum_{a=1}^{M} \sum_{b=1}^{N} I_z(a, b)^2 / \sum_{a=1}^{M} \sum_{b=1}^{N} (I_z(a, b) - I_p(a, b))^2 \right). Best fusion: higher value [39].

PSNR. One of the most commonly used metrics in fusion, measuring the spatial quality of the image; it is computed from the number of grey levels G divided by the mean squared difference between the true and resultant images. Formula: PSNR = 20 \log_{10} \left[ G^2 / \frac{1}{MN} \sum_{a=1}^{M} \sum_{b=1}^{N} (I_z(a, b) - I_p(a, b))^2 \right]. Best fusion: higher value [40].

CC. Demonstrates the correlation between the true image and the resultant image; the value is near one when the two images look the same and near zero when they are dissimilar. Formula: Corr = 2 C_{zp} / (C_z + C_p), where C_{zp} = \sum_{a=1}^{M} \sum_{b=1}^{N} I_z(a, b) I_p(a, b), C_z = \sum_{a=1}^{M} \sum_{b=1}^{N} I_z(a, b)^2, and C_p = \sum_{a=1}^{M} \sum_{b=1}^{N} I_p(a, b)^2. Best fusion: higher value (close to +1) [30, 41].

ERGAS. Calculates the quality of the resultant image in terms of the normalized average error of each channel (band) of the processed image. Formula: ERGAS = 100 (d_a / d_b) \left[ \frac{1}{n} \sum_{i=1}^{n} ( RMSE_i^2 / mean_i^2 ) \right]^{1/2}. Best fusion: lower value (equal to zero) [42].
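The Table 1 metrics translate directly into a few lines of MATLAB. The sketch assumes a ground-truth image Iz and a fused result Ip, both converted to double values in [0, 1] (so the grey-level bound G is 1), with placeholder file names; note that it uses the standard 10 log10 form of PSNR:

```matlab
% Quantitative metrics from Table 1 for a reference Iz and a fused image Ip.
Iz = im2double(imread('reference.png'));  % hypothetical true image
Ip = im2double(imread('fused.png'));      % hypothetical fused image
d    = Iz - Ip;
RMSE = sqrt(mean(d(:).^2));
PFE  = 100 * norm(d(:)) / norm(Iz(:));
MAE  = mean(abs(d(:)));
SNR  = 10 * log10(sum(Iz(:).^2) / sum(d(:).^2));
PSNR = 10 * log10(1 / mean(d(:).^2));     % standard form, with G = 1
CC   = 2 * sum(Iz(:) .* Ip(:)) / (sum(Iz(:).^2) + sum(Ip(:).^2));
p    = imhist(Ip) / numel(Ip);            % grey-level distribution of the result
E    = -sum(p(p > 0) .* log2(p(p > 0)));  % entropy of the fused image
```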
Table 2: Statistical comparisons of multifocus image fusion on the "clocks image set."

Methods RMSE PFE MAE Entropy SNR PSNR CC ERGAS
Average method 28.4166 23.8202 7.8278 1.9823 14.5830 35.5127 0.9144 5.7748
Minimum method 11.5217 10.5229 4.4813 4.8810 18.6569 37.5496 0.9942 4.3994
DWT 7.7077 7.0396 0.4880 7.8322 22.1487 39.2955 0.9976 2.9858
SWT 7.5158 6.8643 0.4835 8.3824 22.3677 39.4050 0.9975 2.9862
DWT (Laplacian) proposed 6.9276 3.8344 0.4174 8.6432 24.5875 39.5099 0.9979 2.9839
DWT (unsharp) proposed 7.5207 4.4390 0.4166 8.6343 24.6678 39.5500 0.9976 2.9624
DWT (LF + DFT) proposed 6.1766 3.5184 0.4107 9.0001 26.7923 40.1006 0.9980 2.2845
SWT (Laplacian) proposed 6.9978 3.9638 0.4110 8.8432 25.0712 39.7517 0.9978 2.8676
SWT (unsharp) proposed 6.9049 3.9811 0.4101 8.7321 25.1449 39.7886 0.9975 2.8731
SWT (LF + DFT) proposed 5.6761 3.4278 0.4010 9.0112 26.8609 40.1349 0.9978 2.2589

Table 3: Statistical comparisons of multifocus image fusion on the "books image set."

Methods RMSE PFE MAE Entropy SNR PSNR CC ERGAS
Average method 26.2368 25.2586 10.6240 7.9872 11.2489 33.9757 0.9024 10.0925
Minimum method 14.4007 13.8638 4.7984 12.3321 16.4595 36.5810 0.9900 6.7961
DWT 10.9863 10.5767 0.1636 17.2384 18.8102 37.7563 0.9944 3.2366
SWT 10.9503 10.5421 0.1635 18.0932 18.8386 37.7705 0.9945 3.2408
DWT (Laplacian) proposed 10.0025 8.9378 0.1703 18.7548 18.8151 37.8540 0.9921 2.8606
DWT (unsharp) proposed 10.7051 8.6659 0.1707 18.7384 18.7955 37.8190 0.9926 2.7746
DWT (LF + DFT) proposed 9.1990 8.7186 0.1636 21.3843 18.8614 39.5801 0.9964 2.4083
SWT (Laplacian) proposed 10.4319 8.3976 0.1604 18.6342 18.8665 37.8797 0.9933 2.7049
SWT (unsharp) proposed 10.4895 8.1775 0.1708 18.7832 18.8558 37.7342 0.9936 2.6349
SWT (LF + DFT) proposed 9.0836 8.2106 0.1633 22.3221 18.9047 39.5318 0.9968 2.0744

Table 4: Statistical comparisons of multifocus image fusion on the "toys image set."

Methods RMSE PFE MAE Entropy SNR PSNR CC ERGAS
Average method 34.2848 25.2737 19.4244 1.3784 10.7959 28.8138 0.9141 8.0940
Minimum method 19.0227 14.0230 8.5127 4.37283 15.3948 35.3721 0.9867 5.8235
DWT 12.7463 10.2732 2.0392 6.3726 20.8732 37.1203 0.9962 2.6384
SWT 12.6532 9.5487 1.1458 6.2843 20.4538 37.0410 0.9959 2.7072
DWT (Laplacian) proposed 12.6489 9.7323 1.1092 6.3743 21.8972 38.2932 0.9963 2.4832
DWT (unsharp) proposed 12.0283 9.9378 1.2872 6.4732 21.2342 38.0023 0.9962 2.6323
DWT (LF + DFT) proposed 12.0213 9.6384 0.9372 6.9983 23.2112 38.9923 0.9969 2.1234
SWT (Laplacian) proposed 12.4213 9.2197 0.9203 6.3283 21.8222 38.2166 0.9953 2.3003
SWT (unsharp) proposed 11.9650 9.8131 0.9288 6.3263 20.8144 37.3150 0.9959 2.9886
SWT (LF + DFT) proposed 11.5382 9.2123 0.8812 7.5932 23.3721 39.3872 0.9964 2.0232

Table 5: Statistical comparisons of multifocus image fusion on the "building and card image set."

Methods RMSE PFE MAE Entropy SNR PSNR CC ERGAS
Average method 31.3352 26.0768 18.9107 2.3554 6.0334 26.8074 0.9071 8.6514
Minimum method 17.6777 10.3879 5.4055 5.6654 14.3411 34.8047 0.9907 4.7054
DWT 12.3245 8.6483 0.0563 8.6445 18.4388 37.4885 0.9932 2.0012
SWT 11.3361 8.6096 0.0506 8.5664 18.8272 37.6201 0.9950 2.7695
DWT (Laplacian) proposed 10.9912 7.9874 0.0534 9.4743 20.1888 38.3732 0.9961 2.1021
DWT (unsharp) proposed 11.2323 7.9884 0.0532 9.8773 20.2981 38.1128 0.9961 2.1021
DWT (LF + DFT) proposed 9.8712 7.6653 0.0571 9.9933 21.2321 38.9901 0.9964 1.9221
SWT (Laplacian) proposed 10.4224 7.7726 0.0550 9.5543 9.5543 20.9489 0.9959 2.1117
SWT (unsharp) proposed 10.6771 8.3854 0.0524 9.5883 20.2085 38.2419 0.9967 2.2497
SWT (LF + DFT) proposed 8.7712 7.3623 0.0520 10.9877 19.0022 39.2872 0.9972 2.1023
Table 6: Statistical comparisons of multifocus image fusion on the "medical images set."

Methods RMSE PFE MAE Entropy SNR PSNR CC ERGAS
Average method 33.0091 29.2135 14.6507 1.4345 8.6566 19.9864 0.9071 11.9876
Minimum method 19.4783 9.9898 6.4475 6.5432 14.3451 31.9047 0.9801 6.4365
DWT 11.4902 9.4325 2.4554 11.2144 21.4338 32.0985 0.9833 3.0766
SWT 10.9934 9.3212 1.3554 14.5434 22.8272 38.6287 0.9951 3.7695
DWT (Laplacian) proposed 10.0120 8.0546 1.4584 13.4532 25.1645 37.3632 0.9955 3.0021
DWT (unsharp) proposed 10.2221 7.8760 1.0543 12.2233 24.2531 36.1668 0.9943 2.1981
DWT (LF + DFT) proposed 8.1100 6.5432 1.0098 14.5435 28.4334 39.9881 0.9984 2.0001
SWT (Laplacian) proposed 10.4973 6.4924 1.0730 15.7644 28.5546 35.9549 0.9981 2.5414
SWT (unsharp) proposed 9.1203 7.3432 1.0845 15.5087 27.5432 44.4419 0.9967 2.3297
SWT (LF + DFT) proposed 7.1123 5.3332 1.0080 16.9438 33.4322 43.2542 0.9982 2.1221

Figure 6: The fusion results of the "clocks image set": (a) average fused, (b) minimum fused, (c) DWT fused, (d) SWT fused, (e) DWT + LF fused, (f) DWT + UM, (g) DWT + (LF + DFT), (h) SWT + LF, (i) SWT + UM, and (j) SWT + (LF + DFT).

Figure 7: The fusion results of the "books image set": (a) average fused, (b) minimum fused, (c) DWT fused, (d) SWT fused, (e) DWT + LF fused, (f) DWT + UM, (g) DWT + (LF + DFT), (h) SWT + LF, (i) SWT + UM, and (j) SWT + (LF + DFT).

Figure 8: The fusion results of the "toys image set": (a) average fused, (b) minimum fused, (c) DWT fused, (d) SWT fused, (e) DWT + LF fused, (f) DWT + UM, (g) DWT + (LF + DFT), (h) SWT + LF, (i) SWT + UM, and (j) SWT + (LF + DFT).

Figure 9: The fusion results of the "building and card image set": (a) average fused, (b) minimum fused, (c) DWT fused, (d) SWT fused, (e) DWT + LF fused, (f) DWT + UM, (g) DWT + (LF + DFT), (h) SWT + LF, (i) SWT + UM, and (j) SWT + (LF + DFT).

Figure 10: The fusion results of the "medical images": (a) average fused, (b) minimum fused, (c) DWT fused, (d) SWT fused, (e) DWT + LF fused, (f) DWT + UM, (g) DWT + (LF + DFT), (h) SWT + LF, (i) SWT + UM, and (j) SWT + (LF + DFT).

Figure 11: The sharpened results of the "clocks image set": (a, b) two source images, (c, d) images sharpened by the Laplacian filter, (e, f) images sharpened by unsharp masking, and (g, h) LF + DFT sharpened images.

Figure 12: The sharpened results of the "books image set": (a, b) two source images, (c, d) images sharpened by the Laplacian filter, (e, f) images sharpened by unsharp masking, and (g, h) LF + DFT sharpened images.

Figure 13: The sharpened results of the "toys image set": (a, b) two source images, (c, d) images sharpened by the Laplacian filter, (e, f) images sharpened by unsharp masking, and (g, h) LF + DFT sharpened images.
Figure 14: The sharpened results of the "building and card image set": (a, b) two source images, (c, d) images sharpened by the Laplacian filter, (e, f) images sharpened by unsharp masking, and (g, h) LF + DFT sharpened images.

Figure 15: The sharpened results of the "medical images": (a, b) the two CT and MRI medical images, (c, d) images sharpened by the Laplacian filter, (e, f) images sharpened by unsharp masking, and (g, h) LF + DFT sharpened images.

Three image sharpening techniques are used in the preprocessing step: the Laplacian filter, unsharp masking, and LF + DFT. In Figures 11–15, (a) and (b) are the source images, (c) and (d) are the images sharpened by the Laplacian filter, (e) and (f) are the images sharpened by unsharp masking, and (g) and (h) are the images sharpened by LF + DFT, for the "Clocks," "Books," "Toys," "Building and Cards," and medical image sets, respectively.

6. Conclusions

In this paper, we mainly address the problem of out-of-focus blur in parts of an image. To achieve this goal, we introduced a new concept: sharpening the edges, or enhancing the image, before fusing the multifocus source images. The preprocessing step is done by the Laplacian filter, by unsharp masking, and by the newly proposed Laplacian filter + discrete Fourier transform (LF + DFT) sharpening method. The sharpening concept is proposed for the first time in a fusion environment, and the experimental results demonstrate the superiority of the new concept. After sharpening the images, fusion is performed by the stationary wavelet transform (SWT) and discrete wavelet transform (DWT) techniques. The experiments are conducted on color and grayscale datasets to validate the effectiveness of the proposed technique: the "Clock," "Book," "Toy," "Building and Card," and "Breast Medical CT and MRI images" sets are used for experimentation. The proposed technique is evaluated visually and statistically; for the statistical assessment, we used eight well-known metrics, percentage fit error, entropy, correlation coefficient, peak signal to noise ratio, relative dimensionless global error, mean absolute error, signal to noise ratio, and root mean square error, which indicate that the new method outperforms all state-of-the-art methods. One major future challenge is that the proposed scheme is not time efficient, because of the preprocessing step performed before image fusion, compared with simple fusion methods.
2, images” are used for experimentation +e proposed tech- pp. 119–130, 2007. nique is evaluated visually and statistically, and for statistical [11] L. Tang, F. Zhao, and Z.-G. Zhao, “+e non subsampled assessment, we used eight well-known metrics such as contourlet transform for image fusion,” in Proceedings of the percentage fit error, entropy, correlation coefficient, peak 2007 International Conference on Wavelet Analysis and signal to noise ratio, relative dimensionless global error, Pattern Recognition, vol. 1, November 2007. mean absolute error, signal to noise ratio, and root mean [12] S. S. Khan, M. khan, and Q. Ran, “Multi-focus color image square error which indicates that the new method out- fusion using laplacian filter and discrete fourier transfor- performed among all state-of-the-art methods. In this work, mation with qualitative error image metrics,” in Proceedings of one major future challenge is that the proposed scheme is the 2nd International Conference on Control and Computer not time efficient because of the preprocessed step before Vision, Jeju Island, South Korea , June 2019. image fusion compared with simple fusion methods. [13] Y. Liu and Z. Wang, “Simultaneous image fusion and denoising with adaptive sparse representation,” IET Image Processing, vol. 9, no. 5, pp. 347–357, 2015. Data Availability [14] Q. Zhang and M. D. Levine, “Robust multi-focus image fusion using multi-task sparse representation and spatial context,” +e datasets used in this research are taken from UCI ML IEEE Transactions on Image Processing, vol. 25, no. 5, Learning Repository available at https://archive.ics.uci.edu/. pp. 2045–2058, 2016. [15] K. Kodama and A. Kubota, “Efficient reconstruction of all-in- Conflicts of Interest focus images through shifted pinholes from multi-focus images for dense light field synthesis and rendering,” IEEE +e authors declare that there are no conflicts of interest Transactions on Image Processing, vol. 22, no. 11, pp. 4407– regarding the publication of this article. 4421, 2013. [16] K. Liang, L. Zhang, K. Zhang, and J. Sun, “Qilong Han, and Zilong Jin. A multi-focus image fusion method via region References mosaicking on laplacian pyramids,” PLoS One, vol. 13, no. 5, [1] Y. Chen, J. Guan, and W. K. Cham, “Robust multi-focus [17] I. Sri Wahyuni, Multi-focus image fusion using local vari- image fusion using edge model and multi-matting,” IEEE ability, PhD thesis, University of Burgundy, Dijon, France, Transactions on Image Processing, vol. 27, no. 3, pp. 1526– 1541, 2017. [18] U. Javed, M. M. Riaz, A Ghafoor, S. Sohaib Ali, S. S. Ali, and [2] S. Shah Khan, M. Khan, and Y. Alharbi, “Multi focus image fusion using image enhancement techniques with wavelet T. A Cheema, “Mri and pet image fusion using fuzzy logic and image local features,” De Scientific World Journal, vol. 2014, transformation,” International Journal of Advanced Computer Science and Applications, vol. 11, no. 5, 2020. Article ID 708075, 8 pages, 2014. Journal of Healthcare Engineering 17 [19] S. S. Khan, Q. Ran, M. Khan, and Z. Ji, “Pan-sharpening [35] A. Polesel, G. Ramponi, and V. J. Mathews, “Image en- framework based on laplacian sharpening with Brovey,” in hancement via adaptive unsharp masking,” IEEE Transactions on Image Processing, vol. 9, no. 3, pp. 505–510, 2000. Proceedings of the 2019 IEEE International Conference on [36] L. Li, H. Ma, Z. Jia, and Y. 
[19] S. S. Khan, Q. Ran, M. Khan, and Z. Ji, "Pan-sharpening framework based on Laplacian sharpening with Brovey," in Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), December 2019.
[20] V. Yilmaz, C. Serifoglu Yilmaz, O. Güngör, and J. Shan, "A genetic algorithm solution to the Gram-Schmidt image fusion," International Journal of Remote Sensing, vol. 41, no. 4, pp. 1458–1485, 2020.
[21] B. Aiazzi, S. Baronti, and M. Selva, "Improving component substitution pansharpening through multivariate regression of MS + Pan data," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3230–3239, 2007.
[22] V. Vijayaraj, "A quantitative analysis of pansharpened images," Thesis, Mississippi State University, Starkville, MS, USA, 2004.
[23] J. Zhou, D. L. Civco, and J. A. Silander, "A wavelet transform method to merge Landsat TM and SPOT panchromatic data," International Journal of Remote Sensing, vol. 19, no. 4, pp. 743–757, 1998.
[24] A. Siddique, B. Xiao, W. Li, Q. Nawaz, and I. Hamid, "Multi-focus image fusion using block-wise color-principal component analysis," in Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), IEEE, Chongqing, China, June 2018.
[25] S. S. Khan, "Research on image classification and fusion based on machine learning techniques," Master thesis, Beijing University of Chemical Technology, Beijing, China, 2020.
[26] Y. Yang, W. Wan, S. Huang, P. Lin, and Y. Que, "A novel pan-sharpening framework based on matting model and multiscale transform," Remote Sensing, vol. 9, no. 4, p. 391, 2017.
[27] W. S. Mokrzycki and M. A. Samko, "Gradient based method of color edges finding," in Image Processing & Communications Challenges, pp. 429–438, Exit, New Jersey, NJ, USA, 2009.
[28] B. K. Shreyamsha Kumar, "Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform," Signal, Image and Video Processing, vol. 7, no. 6, pp. 1125–1143, 2013.
[29] Y. Boykov and V. Kolmogorov, "An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 9, pp. 1124–1137, 2004.
[30] P. Singh, M. Diwakar, X. Cheng, and A. Shankar, "A new wavelet-based multi-focus image fusion technique using method noise and anisotropic diffusion for real-time surveillance application," Journal of Real-Time Image Processing, vol. 18, no. 4, pp. 1051–1068, 2021.
[31] H.-M. Chen, S. Lee, R. M. Rao, M.-A. Slamani, and P. K. Varshney, "Imaging for concealed weapon detection: a tutorial overview of development in imaging sensors and processing," IEEE Signal Processing Magazine, vol. 22, no. 2, pp. 52–61, 2005.
[32] S. S. Khan, Q. Ran, and M. Khan, "Image pan-sharpening using enhancement based approaches in remote sensing," Multimedia Tools and Applications, vol. 79, no. 43, pp. 32791–32805, 2020.
[33] H. T. Mustafa, J. Yang, and M. Zareapoor, "Multi-scale convolutional neural network for multi-focus image fusion," Image and Vision Computing, vol. 85, pp. 26–35, 2019.
[34] M. Trentacoste, R. Mantiuk, W. Heidrich, and F. Dufrot, "Unsharp masking, countershading and halos: enhancements or artifacts?" Computer Graphics Forum, vol. 31, no. 2, 2012.
[35] A. Polesel, G. Ramponi, and V. J. Mathews, "Image enhancement via adaptive unsharp masking," IEEE Transactions on Image Processing, vol. 9, no. 3, pp. 505–510, 2000.
[36] L. Li, H. Ma, Z. Jia, and Y. Si, "A novel multiscale transform decomposition based multi-focus image fusion framework," Multimedia Tools and Applications, vol. 80, no. 8, pp. 12389–12409, 2021.
[37] N. Beaudoin and S. S. Beauchemin, "An accurate discrete Fourier transform for image processing," in Object Recognition Supported by User Interaction for Service Robots, vol. 3, IEEE, 2002.
[38] L. F. Zoran, "Quality evaluation of multiresolution remote sensing image fusion," U.P.B. Scientific Bulletin Series C, vol. 71, pp. 38–52, 2009.
[39] I. Yuhendra, I. Alimuddin, J. T. S. Sumantyo, and H. Kuze, "Assessment of pan-sharpening methods applied to image fusion of remotely sensed multi-band data," International Journal of Applied Earth Observation and Geoinformation, vol. 18, pp. 165–175, 2012.
[40] R. Gharbia, A. E. Hassanien, A. H. El-Baz, M. Elhoseny, and M. Gunasekaran, "Multi-spectral and panchromatic image fusion approach using stationary wavelet transform and swarm flower pollination optimization for remote sensing applications," Future Generation Computer Systems, vol. 88, pp. 501–511, 2018.
[41] X. X. Zhu and R. Bamler, "A sparse image fusion algorithm with application to pan-sharpening," IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 5, pp. 2827–2836, 2013.
[42] Q. Du, N. H. Younan, R. King, and V. P. Shah, "On the performance evaluation of pan-sharpening techniques," IEEE Geoscience and Remote Sensing Letters, vol. 4, no. 4, pp. 518–522, 2007.
[43] M. Nejati, S. Samavi, and S. Shirani, "Multi-focus image fusion using dictionary-based sparse representation," Information Fusion, vol. 25, pp. 72–84, 2015.
