The Use of the Combination of Texture, Color and Intensity Transformation Features for Segmentation in the Outdoors with Emphasis on Video Processing

Article

Sajad Sabzi 1, Yousef Abbaspour-Gilandeh 1,*, Jose Luis Hernandez-Hernandez 2, Farzad Azadshahraki 3 and Rouhollah Karimzadeh 4

1 Department of Biosystems Engineering, College of Agriculture and Natural Resources, University of Mohaghegh Ardabili, Ardabil 56199-11367, Iran; sajadsabzi2@gmail.com
2 Division of Research and Graduate Studies, Technological Institute of Chilpancingo, TecNM, Chilpancingo Guerrero 39070, Mexico; joseluis.hernandez@itchilpancingo.edu.mx
3 Agricultural Engineering Research Institute, Agricultural Research, Education and Extension Organization (AREEO), Karaj 31585-845, Iran; farzad_shahrekian@yahoo.com
4 Department of Physics, Shahid Beheshti University, G.C., Tehran 19839, Iran; r_karimzadeh@sbu.ac.ir
* Correspondence: abbaspour@uma.ac.ir; Tel.: +98-914-451-6255

Received: 3 April 2019; Accepted: 6 May 2019; Published: 9 May 2019

Abstract: Segmentation is the first and most important part in the development of any machine vision system with specific goals. Segmentation is especially important when the machine vision system works under outdoor conditions, that is, under natural light with natural backgrounds. In this case, segmentation faces many challenges, including the presence of various natural and artificial objects in the background and the lack of uniformity of light intensity in different parts of the camera's field of view. Nevertheless, machine vision systems must increasingly be used outdoors. For this reason, in this study, a segmentation algorithm was proposed for use under outdoor conditions, without the need for light control or an artificial background, using video processing, with emphasis on the recognition of apple fruits on trees.
Therefore, a video of more than 12 minutes, containing more than 22,000 frames, was studied under natural light and background conditions. Generally, the proposed segmentation algorithm uses five steps: 1. Using a suitable color model; 2. Using the appropriate texture feature; 3. Using the intensity transformation method; 4. Using morphological operators; and 5. Using different color thresholds. The results showed that the segmentation algorithm had a total correct detection percentage of 99.013%. The highest sensitivity and specificity of the segmentation algorithm were 99.224% and 99.458%, respectively. Finally, the results showed that the processor needed about 0.825 seconds to segment one frame.

Keywords: texture features; color features; precision farming; machine vision; segmentation; video processing

Agriculture 2019, 9, 104; doi:10.3390/agriculture9050104

1. Introduction

Performing segmentation operations in accordance with the desired purpose involves different complexities. In principle, segmentation operations in agriculture and horticulture are more complex than in other sectors. This complexity is caused by crowded backgrounds containing various objects. In applications such as site-specific spraying and weed control, segmentation is the first step in the design of machine vision systems [1–4]. Segmentation involves various steps depending on the complexity of the background of the image and may combine several methods; therefore, the programmer's skill is very important in this field. Generally, the conventional methods of segmentation are color index-based segmentation, threshold-based segmentation and learning-based segmentation [5]; in this research, threshold-based segmentation was used as one method and combined with other methods. Bai et al.
[6] believed that segmentation of vegetation cover from field images is a necessary task. For this reason, they proposed a new segmentation method based on particle swarm optimization clustering and morphological modeling in the L*a*b* color space. They captured images at 10, 12 and 14 o'clock. The proposed method has two stages: offline learning and online segmentation. In the offline learning process, the number of optimized clusters was determined based on the training sample set. In the second step, each pixel was classified as vegetation or non-vegetation. In the last step, 200 images were used to test the proposed system. The results showed that the average segmentation quality of the images was 88.1% to 91.7%. Because of successive droughts in the world and the consequent reduction in groundwater levels, as well as the increasing population, agricultural water management is strongly needed. One way to deal with water shortage is to use water only in areas where a crop has been cultivated, because the water will then be consumed only by the crop and waste will be minimized. In large-scale fields, achieving this without the use of new technologies is almost impossible; instead, places with crops can be detected using machine vision systems. In this regard, Hernandez et al. [7] believed that to achieve precision agriculture goals, not only new technologies but also software development is necessary. Therefore, they provided a new machine vision application in the form of an automated system for segmenting plants from the background to monitor cabbage during growth, providing the crop information needed to estimate the amount of water required.  Their proposed system consisted of three main steps: 1) imaging and cutting, 2) analyzing the image, and 3) recording information.
For imaging, wooden frames were installed inside the farm, and then the images were taken. To train the proposed system, 1106 cabbage samples were used. The results showed that the proposed system had a 20% error in counting the cabbages. In another study, Tang et al. [8] provided a multi-inference-tree segmentation method to better manage farms. The algorithm works based on image features and user requirements. In fact, the algorithm learns the related rules according to these two and then applies the color space, the transformation to a gray image, the de-noising method, the local segmentation method, and the morphology processing method after applying each rule. To train the proposed algorithm, 2082 images with a resolution of 1932 × 2576 were captured. A manual method was also used to evaluate the intelligent results. The results showed that the intelligent processing assessment rate for more than 80 points was above 83%, with an average of 75%. The processing of each image needs about 23 seconds. Liu et al. [9] developed a machine vision system for segmentation of apple fruit based on color and position information. This system performs segmentation under artificial light with low brightness. The proposed method had two main steps: the first step included training an artificial neural network using RGB and HSI color space components and finally providing a model for apple fruit segmentation. Due to the presence of shadows on some parts of the apple (caused by the non-uniformity of light), segmentation is not performed properly in this step; therefore, to complete the segmentation, a second stage is added that also considers the color and position of the pixels surrounding the segmented area. In their study, 20 apple fruits were used. The results showed that the proposed system gives acceptable results in the segmentation of these apples.
As observed, previous research has focused on segmentation with simple backgrounds, such as separating the plant from the soil, or on segmentation against an artificial background. In addition, all of this research focused on analyzing still images. To formulate an appropriate segmentation algorithm for apple fruits under natural light conditions and a completely natural background, in the presence of various objects such as tree leaves, thin branches, thick branches, tree trunks, blue sky, cloudy sky, green plants, yellow plants, harvested fruits, and baskets, the results of previous work cannot be used, for two reasons: 1. The backgrounds are very complex and contain different objects with different colors. 2. Camera movement in orchards is needed for different operations, such as site-specific spraying, so the frames do not have good quality. Therefore, the purpose of this study is to develop a segmentation algorithm for working in a completely natural environment, both in terms of light and of background, using video processing, with emphasis on the segmentation of apples on trees. In recent years, horticulture has been one of the most important research subjects in many universities around the world. We have found that the main works are directed at the recognition of fruits, counting, detection of plants, monitoring of irrigation, etc.

2. Materials and Methods

Each machine or computer vision system needs different development stages, such as the filming stage, the analysis stage, and so on. In this study, as in other machine vision systems, steps were designed to train the system. These steps include filming, examining different color models, extracting various texture features, employing different morphological operators and using the intensity transformation method.
2.1. Data Collection

In this study, a digital camera (DFK 23GM021, CMOS, 120 f/s, Imaging Source GmbH, Bremen, Germany) was used to film apple orchards in Kermanshah province, Iran. Table 1 shows the details of one of the videos from these orchards. As observed, the video is more than 12 minutes long and contains more than 22,000 frames recorded on different days, at different times of day, and under different weather conditions. Since the ability of the segmentation system to perform under different light intensities is an essential principle, the video was recorded under fully natural light conditions throughout the day with a completely natural background. Some of the light intensities were 398, 1096, 692, 1591, 1923, 894, 2010, 918, 798, 493 and 579 lux.

Table 1. Characteristics of the video studied.

Number | Parameter | Time/Number
1 | Filming time | More than 12 minutes
2 | Extracted frames | 22,001
3 | Training frames | 15,401 (70% of all frames)
4 | Testing frames | 6,600 (30% of all frames)
5 | Background objects in test mode | 60,125
6 | Number of apples in test mode | 42,750

We collected several films of orchards, but only 12 min of them (22,001 frames) were used for training the algorithm. Filming was done at four stages of ripening, namely unripe (20 days before maturity), half-ripe (10 days before maturity), ripe, and overripe (10 days after maturity), and these were combined for training the algorithm. The distance from the trees was between 0.5 and 2 m, the speed was around 1 m/s, and the viewing angle was nearly parallel to the ground. The camera was held manually, simulating a low-to-medium-height flight of a drone. With the system described, which has a horizontal viewing angle of around 80°, an apple of about 7 cm would be observed with a size of 20 pixels at a distance of about 3 m from the trees. The apple variety was Malus domestica var. Red Delicious.
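The pixel-size figure above can be sanity-checked with a little geometry. The sketch below is an assumption-laden check, not part of the paper's pipeline: it assumes a horizontal frame width of 1280 pixels (typical for cameras of this class, but not stated in the text) and a small-angle pinhole model.

```python
import math

def apple_size_px(fov_deg=80.0, frame_width_px=1280,
                  apple_diam_m=0.07, distance_m=3.0):
    # Angle subtended by the apple, expressed as a fraction of the
    # horizontal field of view, times the frame width in pixels.
    apple_angle_deg = math.degrees(2 * math.atan(apple_diam_m / (2 * distance_m)))
    return frame_width_px * apple_angle_deg / fov_deg

size = apple_size_px()  # roughly 21 px, consistent with the ~20 px reported
```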
2.2. Various Color Models

An image has different colors in different color models. In fact, the different objects in one image will have different colors in each color model. This feature can be used to distinguish between the different background objects and the apples. For this investigation, 17 color spaces were examined [10,11], as shown in Table 2.

Table 2. Various color models examined.

Number | Color Model | Number | Color Model | Number | Color Model
1 | RGB | 7 | HSI | 13 | YPbPr
2 | HSV | 8 | Improved YCbCr | 14 | YUV
3 | YIQ | 9 | L*a*b* | 15 | HSL
4 | YCbCr | 10 | JPEG-YCbCr | 16 | XYZ
5 | CMY | 11 | YDbDr | 17 | Luv
6 | LCH | 12 | CAT02 LMS | |

2.3. Extraction of Texture Features

Intuitively, the texture of a region can be described by its roughness and softness. In fact, different regions in one image can range from very rough to very soft. Mathematically, there are several methods for describing texture. One of them is texture features based on the gray-level co-occurrence matrix (GLCM), extracted from the positions of pixels with the same values. This method presents an average over the entire area in which the texture is examined; it is therefore not applicable in this study, because here it is necessary to examine the texture of all pixels individually. Another method is to measure the spectral range of the texture based on the Fourier spectrum. This spectrum describes periodic, or nearly periodic, two-dimensional patterns in an image. The Fourier spectrum performs the spectral measurement in a polar coordinate system (i.e., based on radius and angle), since spectral properties are interpreted by describing the spectrum in polar coordinates as a simple function S(r, θ). In this function, S is the spectral function and r and θ are the variables of the polar system. Therefore, the function S(r, θ) can be considered as two one-dimensional functions, S_θ(r) and S_r(θ), for each direction θ and each frequency r.
S_θ(r), for a constant value of θ, shows the behavior of the spectrum along the radius, while S_r(θ), for a constant value of r, shows the behavior of the spectrum along a circle centered on the origin [10]. This method, like the previous one, provides a mean value for the entire area. In the third method, textural descriptors are applied to all the image pixels, and the results can be observed directly. Therefore, in this study, the texture features of local entropy, local standard deviation and local range were investigated.

2.4. Application of Morphological Operators

Outdoor operations under natural light with complex backgrounds are particularly sensitive, as unpredictable noise and effects can make it difficult to achieve the desired goal. One of the most important methods for removing this noise and these unpredicted factors is the use of morphological operators. These include a wide range of operators, such as opening, closing, filling holes, deleting border pixels, removing objects with fewer pixels than a threshold value, thinning, thickening, and others. In the proposed segmentation algorithm, opening, closing, filling holes and deleting objects with fewer than 100 pixels were used at different stages. This threshold value was selected by trial and error, taking care not to remove apple pixels. In computational terms, mathematical morphology consists of moving over all the pixels of the image from left to right and from top to bottom in order to find isolated pixels, which are considered noise [12]. This noise is eliminated by applying erosion (⊖) and dilation (⊕) with the following equations:

Open = (B ⊖ E) ⊕ E (1)

Close = (B ⊕ E) ⊖ E (2)

The opening operation removes fine points or fine structures, and the closing operation fills black holes up to a certain size.
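Equations (1) and (2) can be sketched directly with array operations. The snippet below is a minimal NumPy illustration of binary opening and closing with a 3 × 3 square structuring element; it is a sketch of the standard definitions, not the paper's implementation.

```python
import numpy as np

def erode(img, size=3):
    # Binary erosion: a pixel survives only if its whole size x size
    # neighbourhood (zero-padded at the border) is foreground.
    pad = size // 2
    p = np.pad(img, pad, constant_values=False)
    out = np.ones_like(img)
    for di in range(size):
        for dj in range(size):
            out &= p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def dilate(img, size=3):
    # Binary dilation: a pixel turns on if ANY neighbour is foreground.
    pad = size // 2
    p = np.pad(img, pad, constant_values=False)
    out = np.zeros_like(img)
    for di in range(size):
        for dj in range(size):
            out |= p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def opening(img, size=3):
    # Open = (B erode E) dilate E: removes specks smaller than E.
    return dilate(erode(img, size), size)

def closing(img, size=3):
    # Close = (B dilate E) erode E: fills small holes.
    return erode(dilate(img, size), size)

noisy = np.zeros((9, 9), dtype=bool)
noisy[1, 1] = True          # isolated speck (noise)
noisy[4:8, 4:8] = True      # a solid 4 x 4 object
opened = opening(noisy)     # speck removed, object preserved

holed = np.zeros((9, 9), dtype=bool)
holed[4:8, 4:8] = True
holed[5, 5] = False         # one-pixel hole
closed = closing(holed)     # hole filled
```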
2.5. The Importance of Using Intensity Transformation

In segmentation, we look for methods that eliminate background objects while preventing the removal of target-object pixels. The intensity transformation method, by limiting the pixel intensity variation to a desired range, creates more difference between the different objects. Therefore, in this study, part of the segmentation operation was performed by changing the intensity range from [0, 1] to [0, 0.6] and applying a threshold of 95. Since the images were in the uint8 data class, the pixel values were multiplied by 225.
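One plausible reading of this step can be sketched as follows. This is a hedged interpretation, not the paper's code: it assumes the image is first normalized to [0, 1], compressed to [0, 0.6], rescaled by the factor of 225 mentioned in the text, and then thresholded at 95.

```python
import numpy as np

def intensity_transform(gray, out_max=0.6, threshold=95, scale=225):
    # Normalise a uint8 image to [0, 1], compress the output range to
    # [0, out_max], rescale back to a uint8-like range, then keep only
    # pixels whose rescaled value exceeds `threshold`.
    norm = gray.astype(np.float64) / 255.0
    rescaled = norm * out_max * scale
    mask = rescaled > threshold          # candidate foreground pixels
    return rescaled.astype(np.uint8), mask

gray = np.array([[0, 128, 255]], dtype=np.uint8)
rescaled, mask = intensity_transform(gray)   # rescaled -> [[0, 67, 135]]
```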
2.6. Different Stages in the Elaboration of the Segmentation Algorithm

Figure 1 shows the main steps in creating the segmentation algorithm. As observed, there are 11 main stages in this algorithm.

Figure 1. Different stages in the development of the segmentation algorithm.

3. Results and Discussion

3.1. The Most Suitable Color Model for the First Stage of Segmentation

Figure 2 shows a sample image in six different color models. As observed, objects have different colors in different color models. The most suitable color space for segmentation is the one that displays all the objects in the image with the minimum number of colors, because it then becomes possible to apply a threshold, or several thresholds, with very high accuracy. These images show that the worst color model is LCH, because it shows almost all the objects in the image in white. The other color models, except Luv, show the different objects with a large number of colors, which makes applying a threshold difficult. The Luv color model was able to represent the various objects in the image with almost three colors. In fact, in this image the leaves appear purple, which allowed part of the segmentation to be performed based on this feature. Finally, using trial and error, it was determined that if all the components of a pixel in the Luv color model image are greater than 115, that pixel belongs to the background and should be deleted.
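The Luv rule above is easy to express pixel-wise. The following sketch assumes the Luv components have already been scaled to the 0–255 range (the paper does not state the scaling explicitly):

```python
import numpy as np

def luv_background_mask(luv, cutoff=115):
    # luv: H x W x 3 array of L, u, v components (assumed scaled to 0-255).
    # A pixel is background when ALL three components exceed the cutoff.
    return np.all(luv > cutoff, axis=2)

def remove_background(rgb, luv, cutoff=115):
    # Zero out the pixels flagged as background in the matching RGB frame.
    out = rgb.copy()
    out[luv_background_mask(luv, cutoff)] = 0
    return out

luv = np.array([[[120, 130, 140],      # all components > 115 -> background
                 [120, 80, 140]]])     # one component <= 115 -> kept
rgb = np.array([[[200, 40, 40], [200, 40, 40]]], dtype=np.uint8)
cleaned = remove_background(rgb, luv)
```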
Figure 2. Sample image in six different color models. (a): RGB color model, (b): Improved YCbCr color model, (c): LCH color model, (d): HSL color model, (e): HSI color model, (f): Luv color model.

3.2. Texture Feature with High Performance in the Second Segmentation Process

Figure 3 illustrates the results of applying the three texture features of local range, local entropy and local standard deviation. As observed, the images extracted by the local range and local standard deviation methods are very similar, except that the edges of the objects in the local range image are darker. The images from these two methods represent more objects compared with the local entropy method. Therefore, the image resulting from applying the local range texture feature was finally chosen as the target image for the next step of the segmentation. In fact, this image was converted into a binary image, and segmentation was then performed by applying a threshold of 1: if an image pixel has a value equal to 1, that pixel belongs to the background and should be deleted.
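As an illustration of these "local" texture features, here is a small NumPy sketch of the local range (maximum minus minimum over a sliding neighbourhood); local standard deviation and local entropy follow the same sliding-window pattern. The 3 × 3 window size is an assumption, not stated in the paper.

```python
import numpy as np

def local_range(gray, size=3):
    # Local range texture: max minus min over each size x size
    # neighbourhood, with replicated (edge) padding at the borders.
    pad = size // 2
    p = np.pad(gray.astype(np.int16), pad, mode='edge')
    h, w = gray.shape
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(size) for j in range(size)])
    return (windows.max(axis=0) - windows.min(axis=0)).astype(np.uint8)

g = np.zeros((5, 5), dtype=np.uint8)
g[:, :2] = 100              # a vertical edge between two flat regions
lr = local_range(g)         # high response on the edge, zero elsewhere
```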
Figure 3. Texture feature with high performance in the second segmentation process. (a): The original image, (b): The image obtained by applying the local range feature, (c): The image obtained by applying the local entropy feature, (d): The image obtained by applying the local standard deviation feature.

3.3. Intensity Transformation Performance in the Third Step of Segmentation

Figure 4 shows the performance of the intensity transformation. Figure 4a shows the original studied image. As observed, this image contains various objects such as green leaves in the shade, green leaves in the sun, soil, green plants in the shade, green plants in the sun, tiny branches, thick branches, tree trunks, and others.
Figure 4b shows the image segmented in the two previous steps by the color and texture methods. As observed, most of the branches and trunks of the trees remained unchanged. Figure 4c shows the image after the intensity transformation. Finally, by applying the threshold of 95 to the image in Figure 4c, the image shown in Figure 4d was obtained. Comparing this image with Figure 4b, it is clear that many parts of the trunks and branches have been deleted.

3.4. The Performance of the Segmentation Algorithm under Different Orderings of the Color, Texture and Intensity Transformation Methods

One of the innovations of this research is the arrangement of the sequence of the different segmentation methods. Figure 5 shows three different sequences of the texture, color and intensity transformation methods. Figure 5a shows the original image. Figure 5b shows the image segmented, before applying the color thresholds, with the sequence texture method, color method, intensity transformation method. As observed, the segmentation accuracy is very low: many of the relevant apple segments have been deleted, while many background pixels remain.
Figure 5c shows the segmented image of Figure 5a with the sequence intensity transformation method, texture method, color method. This sequence performs better than the previous one but, in general, still has low accuracy. Figure 5d shows the segmented image produced by the algorithm with the sequence color method, texture method, intensity transformation method. As observed, with this sequence the algorithm performs very well: a large part of the background was removed and the pixels of the apples were not deleted.
Figure 4. Intensity transformation performance in the third step of segmentation. (a): Original image, (b): Image segmented before this step, (c): Image corresponding to the intensity transformation, (d): Image segmented after applying the threshold on the intensity transformation image.

Figure 5. The performance of the segmentation algorithm under different orderings of the color, texture and intensity transformation methods. (a) Original image. (b) The image after applying the sequence texture method, color method, intensity transformation method. (c) The image after applying the sequence intensity transformation method, texture method, color method. (d) The image after applying the sequence color method, texture method, intensity transformation method.
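The ordering experiment can be pictured as composing three mask-producing stages, where each stage only sees the pixels the previous stages kept. The sketch below is purely schematic: the three stage functions are toy stand-ins, not the paper's actual routines.

```python
import numpy as np

def run_pipeline(frame, stages):
    # Apply mask-producing stages in order; each stage sees the frame with
    # the pixels already removed by earlier stages, so order matters.
    mask = np.ones(frame.shape[:2], dtype=bool)
    for stage in stages:
        mask &= stage(np.where(mask[..., None], frame, 0))
    return mask

# Toy stand-in stages, each keeping pixels by a different criterion.
color_stage = lambda f: f[..., 0] > 100          # reddish pixels
texture_stage = lambda f: f[..., 1] < 200        # low-green pixels
intensity_stage = lambda f: f.mean(axis=2) > 50  # bright-enough pixels

frame = np.full((2, 2, 3), 120, dtype=np.uint8)
best_order = [color_stage, texture_stage, intensity_stage]
mask = run_pipeline(frame, best_order)
```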
3.5. Applying the Thresholding Function to Complete the Segmentation Process

After completing the first part of the segmentation, which involves applying thresholds using the different methods and the method sequence of the segmentation algorithm, the second part must be implemented to complete the segmentation process, because small objects remain in the background. Due to the sensitivity of this work, a thresholding function on the RGB color space channels was used for the final segmentation in the comprehensive segmentation algorithm, derived from an exact study of the frames and chosen so as not to remove apple pixels. After surveying different images under different light conditions, such as shadow and sunny modes, as well as various objects on the trees, 10 color thresholds were selected for training the function. This function works pixel by pixel: each pixel is examined individually, and the values of its RGB color space components are compared with the 10 color thresholds. The function has two outputs, 0 and 1: when the output is 0, the pixel belongs to the background, and when the output is 1, the pixel belongs to an apple. These thresholds are shown in Table 3. Figure 6 shows two sample images displaying the performance of a number of thresholds. The target objects are shown with bold blue lines.
Other objects that are present in the left-hand images but absent from the right-hand images were removed by other thresholds.

Table 3. Different thresholds to remove background pixels remaining from previous steps.

1: FR(i,j)>90 & FR(i,j)<=110 & FG(i,j)>55 & FG(i,j)<80 & FB(i,j)>40 & FB(i,j)<62 & abs(FG(i,j)-FB(i,j))<20
2: FR(i,j)>92 & FR(i,j)<=102 & FG(i,j)>82 & FG(i,j)<94 & FB(i,j)>20 & FB(i,j)<35 & abs(FR(i,j)-FG(i,j))<15
3: FR(i,j)>115 & FR(i,j)<=130 & FG(i,j)>100 & FG(i,j)<115 & FB(i,j)>49 & FB(i,j)<55 & abs(FR(i,j)-FG(i,j))<20
4: FR(i,j)>102 & FR(i,j)<=125 & FG(i,j)>85 & FG(i,j)<105 & FB(i,j)>35 & FB(i,j)<60 & abs(FR(i,j)-FG(i,j))<25
5: FR(i,j)>98 & FR(i,j)<=108 & FG(i,j)>82 & FG(i,j)<90 & FB(i,j)>22 & FB(i,j)<38 & abs(FR(i,j)-FG(i,j))<25
6: FR(i,j)>120 & FR(i,j)<=128 & FG(i,j)>110 & FG(i,j)<118 & FB(i,j)>40 & FB(i,j)<55 & abs(FR(i,j)-FG(i,j))<15
7: FR(i,j)>190 & FR(i,j)<=202 & FG(i,j)>179 & FG(i,j)<190 & FB(i,j)>48 & FB(i,j)<53 & abs(FR(i,j)-FG(i,j))<20
8: FR(i,j)>100 & FR(i,j)<=110 & FG(i,j)>75 & FG(i,j)<90 & FB(i,j)>68 & FB(i,j)<82 & abs(FG(i,j)-FB(i,j))<15
9: FR(i,j)>=142 & FR(i,j)<=167 & FG(i,j)>=120 & FG(i,j)<139 & FB(i,j)>=67 & FB(i,j)<97 & abs(FR(i,j)-FG(i,j))<30
10: FR(i,j)>=95 & FR(i,j)<=115 & FG(i,j)>=49 & FG(i,j)<70 & FB(i,j)>=25 & FB(i,j)<50 & abs(FG(i,j)-FB(i,j))<25

Figure 6. Applying different color thresholds to complete the segmentation process. (a): The image before applying a threshold, (b): The image after applying threshold 6 in Table 3, (c): The image before applying a threshold, (d): The image after applying threshold 5 in Table 3.
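The pixel-wise thresholding function can be sketched directly from Table 3. The example below implements only rules 5 and 6 for brevity (the full function applies all ten); a pixel matching any rule is treated as background remaining from the previous steps.

```python
def is_apple_pixel(r, g, b):
    # Sketch of the Section 3.5 thresholding function using rules 5 and 6
    # of Table 3 only. A pixel matching a rule is background (output 0);
    # otherwise it is kept as an apple pixel (output 1).
    rules = [
        98 < r <= 108 and 82 < g < 90 and 22 < b < 38
        and abs(r - g) < 25,                              # rule 5
        120 < r <= 128 and 110 < g < 118 and 40 < b < 55
        and abs(r - g) < 15,                              # rule 6
    ]
    return 0 if any(rules) else 1

bark = is_apple_pixel(100, 85, 30)   # matches rule 5 -> background (0)
red = is_apple_pixel(200, 40, 40)    # matches no rule -> apple (1)
```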
Accuracy of the Comprehensive Segmentation Algorithm

Table 4 shows the average percentage of background pixels removed by each segmentation method in the comprehensive segmentation algorithm. The highest percentage of background pixels removed, 36%, belongs to the thresholding method in the Luv color space. As the table shows, no single method can perform the segmentation operation alone; therefore, a combination of different methods is needed for high-accuracy segmentation. The combination of different segmentation techniques and their arrangement can be considered an innovation. Table 5 shows the confusion matrix of the thresholding function; the class-by-class error of this segmentation method is about 0.8%.

Table 4. The average percentage of background pixels removed by each segmentation method.

Main Segmentation Methods                  Background Pixels Removed (%)
The use of threshold in Luv color space    36
The use of texture feature                 26
The use of morphological operators         23
The use of thresholding function           15

Table 5. Confusion matrix of the thresholding function. Classes correspond to: 1: Apple object pixels, 2: Background object pixels.

Predicted/Real    1       2       All Data    Class-by-Class Error (%)    Classification Accuracy (%)
1                 52,139  429     52,568      0.816                       99.20
2                 389     49,123  49,512      0.785

Table 6 shows the confusion matrix and the total detection percentage of the proposed segmentation algorithm. As observed, objects in the images are divided into two classes: apples and background objects. The table shows that 324 out of 42,750 apple samples were mistakenly placed in the background-objects class by the segmentation algorithm, giving a 0.758% error for this class. The algorithm also mistakenly classified 691 of the 60,125 background-object samples into the apple class, giving a 1.15% error for this class.
Finally, the total detection percentage of the segmentation algorithm is 99.013%. This accuracy is very good for this number of samples, which shows that the algorithm was configured properly.

Table 6. Confusion matrix and total detection percentage of the proposed segmentation algorithm.

Class               Apples  Background Objects  All Data  Wrong Diagnosis (%)  Total Correct Diagnosis (%)
Apples              42,426  324                 42,750    0.758                99.013
Background objects  691     59,434              60,125    1.15

3.7. Performance of Segmentation Algorithm

To evaluate the performance of the segmentation algorithm, three criteria were used: sensitivity, specificity and accuracy. Sensitivity expresses the proportion of samples of the studied class that are correctly assigned to it, while specificity expresses the proportion of samples of the other class that are correctly kept out of the studied class. Finally, accuracy is the percentage of all samples placed in their correct classes. These three criteria are expressed using Equations (3) to (5):

Sensitivity = TP / (TP + FN)    (3)
Specificity = TN / (FP + TN)    (4)
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (5)

TP is the number of samples of the studied class that are correctly classified; TN is the sum of the samples on the main diagonal of the confusion matrix minus the correctly classified samples of the studied class. FN is the sum of the row of the studied class minus its correctly classified samples. Finally, FP is the sum of the column of the studied class minus its correctly classified samples [13]. Table 7 shows the results of the performance criteria of the segmentation algorithm. Based on this table, the highest sensitivity, 99.242%, belongs to the apple class, and the highest specificity, 99.458%, belongs to the background-objects class. Figure 7 shows the pseudo code of the final segmentation algorithm, which is explained in 13 stages.
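A small, dependency-free sketch of Equations (3) to (5) applied to the Table 6 confusion matrix is shown below. Note that specificity computed directly from Equation (4) may differ slightly from the value printed in Table 7, which appears to have been derived somewhat differently.

```python
# Confusion matrix from Table 6: rows are true classes, columns are
# predictions; class 0 = apples, class 1 = background objects.
CONFUSION = [[42426, 324],
             [691, 59434]]

def class_metrics(cm, k):
    """Sensitivity, specificity and accuracy (%) for class k of a 2x2
    confusion matrix, following Equations (3)-(5)."""
    tp = cm[k][k]
    fn = sum(cm[k]) - tp                   # rest of the row of class k
    fp = sum(row[k] for row in cm) - tp    # rest of the column of class k
    tn = sum(map(sum, cm)) - tp - fn - fp
    sensitivity = 100.0 * tp / (tp + fn)                # Eq. (3)
    specificity = 100.0 * tn / (fp + tn)                # Eq. (4)
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)  # Eq. (5)
    return sensitivity, specificity, accuracy

sens, spec, acc = class_metrics(CONFUSION, 0)
# sens is about 99.242 and acc about 99.013, matching the values
# reported for the apple class in the text.
```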
Table 7. Results of performance criteria of the segmentation algorithm.

Class               Sensitivity (%)  Accuracy (%)  Specificity (%)
Apples              99.242           99.013        98.397
Background objects  98.851           99.013        99.458

Figure 7. Pseudo code of the final segmentation algorithm.
3.8. The Speed of the Segmentation Algorithm

The system used to run the segmentation algorithm and detect background objects and apples was a laptop with an Intel Core i3 330M processor at 2.13 GHz, 4 GB of RAM and Windows 10. The results showed that the processing speed was about 0.825 seconds for the segmentation of one frame. This speed is very good for this research, because the background of the frames was very complex and full of different objects. After presenting the segmentation algorithm and reviewing its performance, it is necessary to compare the results with those of other researchers. Because of the novelty of the proposed method, as well as the different filming conditions, a direct comparison of the results is not possible. However, two studies, by Zhao et al. [14] and Aquino et al. [15], were used for comparison. Zhao et al. [14] provided a method for detecting immature green citrus in citrus orchards. Aquino et al. [15] proposed a segmentation-based method for counting the number of grape berries in a cluster in color images under controlled light conditions. The results are shown in Table 8. As Table 8 shows, the proposed method, with a higher number of samples than the other two studies, has a higher detection rate.

Table 8. Comparison of the results obtained in this study with two other studies.

Method           Number of Samples     Correct Detection Rate (%)
Proposed method  102,875 (test data)   99.013
[14]             68                    83
[15]             152                   95.72

After comparing with other research, we mention the advantages of the proposed method: 1. High processing speed; 2. High accuracy; 3. Usability in natural orchard conditions; 4. Usability in different orchards; 5. Usability in the segmentation of different fruits on trees in the orchard.
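The per-frame timing reported above can be measured with a simple harness such as the following. The `segment_frame` stand-in is hypothetical; in practice it would be replaced by the five-step pipeline described in this paper.

```python
import time

def segment_frame(frame):
    # Hypothetical stand-in for the five-step segmentation pipeline
    # (color model, texture, intensity transformation, morphology,
    # color thresholds); here it only simulates work.
    time.sleep(0.01)
    return frame

def mean_frame_time(frames, segment):
    """Average wall-clock seconds per frame, the measure behind the
    reported 0.825 s/frame figure."""
    start = time.perf_counter()
    for frame in frames:
        segment(frame)
    return (time.perf_counter() - start) / len(frames)

avg = mean_frame_time(range(5), segment_frame)
```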
This algorithm can be used for different purposes, such as: 1. Fruit-picking robots, with emphasis on apple fruit; 2. Automatic systems for estimating fruit yields, with emphasis on apple fruit; 3. Automatic systems for surveying fruits at different growth stages, with emphasis on apple fruit.

4. Conclusions

In this study, a new method was developed for the segmentation of apple fruits on trees under natural light conditions, without using any artificial background, with emphasis on video processing. The most important results are:

1. The most important challenge in developing the segmentation algorithm was the presence of many objects with different colors in the background. Examples of these objects include tree trunks in the shade, tree trunks in the sun, thin branches in the shade, thin branches in the sun, thin branches connected to trunks, green leaves in the sun, green leaves in the shade, pestle leaves, green plants, yellow plants, cloudy sky, sunny sky, and artificial objects such as nylon, baskets, harvested apples, flakes and so on.
2. The most suitable color model among the 17 color models examined was Luv. In fact, this model eliminates many leaves in the first stage.
3. The best feature for performing the second stage of segmentation, among the three texture features of local range, local entropy and local standard deviation, was the local range.
4. The use of the intensity transformation method eliminated a large part of the pixels related to the trunk and tree branches.
5. The use of morphological operators in different stages of segmentation is necessary.
6. The use of color thresholds in the final stage of segmentation eliminates objects that remained from the previous stages.
7. Results showed that the total detection percentage of the segmentation algorithm was 99.013%.
8.
The highest sensitivity, 99.242%, was related to the apple class, and the highest specificity, 99.458%, to the background-objects class.
9. The results showed that the processing speed was about 0.825 seconds for the segmentation of one frame.

For future work, a fruit and vegetable recognition system should be implemented to improve recognition functionality and flexibility for wider use. The process should be improved by extending its functions to process and recognize a wider variety of fruit images. Besides that, a texture-based analysis technique could be combined with the existing three-feature analysis technique in order to gain better discrimination of different fruit images.

Author Contributions: Conceptualization, S.S. and Y.A.-G.; methodology, S.S., Y.A.-G., F.A., R.K. and J.L.H.-H.; software, S.S.; validation, S.S., Y.A.-G., F.A., R.K. and J.L.H.-H.; formal analysis, S.S. and J.L.H.-H.; investigation, S.S., Y.A.-G., F.A., R.K. and J.L.H.-H.; resources, S.S.; data curation, S.S.; writing—original draft preparation, S.S.; writing—review and editing, J.L.H.-H.; visualization, S.S.; supervision, Y.A.-G.; project administration, Y.A.-G.; funding acquisition, Y.A.-G.

Funding: This study was financially supported by the Iran National Science Foundation (INSF) through research project 96007466.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References
1. Guerrero, J.M.; Pajares, G.; Montalvo, M.; Romeo, J.; Guijarro, M. Support Vector Machines for crop/weeds identification in maize fields. Expert Syst. Appl. 2012, 39, 11149–11155. [CrossRef]
2. Montalvo, M.; Guerrero, J.M.; Romeo, J.; Emmi, L.; Guijarro, M.; Pajares, G. Automatic expert system for weeds/crops identification in images from maize fields.
Expert Syst. Appl. 2013, 40, 75–82. [CrossRef]
3. Romeo, J.; Guerrero, J.M.; Montalvo, M.; Emmi, L.; Guijarro, M.; Gonzalez-De-Santos, P.; Pajares, G. Camera Sensor Arrangement for Crop/Weed Detection Accuracy in Agronomic Images. Sensors 2013, 13, 4348–4366. [CrossRef] [PubMed]
4. Arroyo, J.; Guijarro, M.; Pajares, G. An instance-based learning approach for thresholding in crop images under different outdoor conditions. Comput. Electron. Agric. 2016, 127, 669–679. [CrossRef]
5. Hamuda, E.; Glavin, M.; Jones, E. A survey of image processing techniques for plant extraction and segmentation in the field. Comput. Electron. Agric. 2016, 125, 184–199. [CrossRef]
6. Bai, X.; Cao, Z.; Wang, Y.; Yu, Z.; Hu, Z.; Zhang, X.; Li, C. Vegetation segmentation robust to illumination variations based on clustering and morphology modelling. Biosyst. Eng. 2014, 125, 80–97. [CrossRef]
7. Hernández-Hernández, J.; Ruiz-Hernández, J.; García-Mateos, G.; González-Esquiva, J.; Ruiz-Canales, A.; Molina-Martínez, J. A new portable application for automatic segmentation of plants in agriculture. Agric. Water Manag. 2017, 183, 146–157. [CrossRef]
8. Tang, J.; Miao, R.; Zhang, Z.; He, D.; Liu, L. Decision support of farmland intelligent image processing based on multi-inference trees. Comput. Electron. Agric. 2015, 117, 49–56. [CrossRef]
9. Liu, X.; Zhao, D.; Jia, W.; Ruan, C.; Tang, S.; Shen, T. A method of segmenting apples at night based on color and position information. Comput. Electron. Agric. 2016, 122, 118–123. [CrossRef]
10. Gonzalez, R.C.; Woods, R.E.; Eddins, S.L. Digital Image Processing Using MATLAB; Prentice Hall: Upper Saddle River, NJ, USA, 2004.
11. Hernández-Hernández, J.; García-Mateos, G.; González-Esquiva, J.; Escarabajal-Henarejos, D.; Ruiz-Canales, A.; Molina-Martínez, J. Optimal color space selection method for plant/soil segmentation in agriculture. Comput. Electron. Agric. 2016, 122, 124–132. [CrossRef]
12. Li, Y.; Zuo, M.J.; Lin, J.; Liu, J.
Fault detection method for railway wheel flat using an adaptive multiscale morphological filter. Mech. Syst. Signal Process. 2017, 84, 642–658. [CrossRef]
13. Wisaeng, K. A comparison of decision tree algorithms for UCI repository classification. Int. J. Eng. Trends Technol. 2013, 4, 3397–3401.
14. Zhao, C.; Lee, W.S.; He, D. Immature green citrus detection based on colour feature and sum of absolute transformed difference (SATD) using colour images in the citrus grove. Comput. Electron. Agric. 2016, 124, 243–253. [CrossRef]
15. Aquino, A.; Diago, M.P.; Millán, B.; Tardáguila, J. A new methodology for estimating the grapevine-berry number per cluster using image analysis. Biosyst. Eng. 2017, 156, 80–95. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

The Use of the Combination of Texture, Color and Intensity Transformation Features for Segmentation in the Outdoors with Emphasis on Video Processing



Publisher
Multidisciplinary Digital Publishing Institute
Copyright
© 1996-2019 MDPI (Basel, Switzerland) unless otherwise stated
ISSN
2077-0472
DOI
10.3390/agriculture9050104


agriculture
Article

The Use of the Combination of Texture, Color and Intensity Transformation Features for Segmentation in the Outdoors with Emphasis on Video Processing

Sajad Sabzi 1, Yousef Abbaspour-Gilandeh 1,*, Jose Luis Hernandez-Hernandez 2, Farzad Azadshahraki 3 and Rouhollah Karimzadeh 4

1 Department of Biosystems Engineering, College of Agriculture and Natural Resources, University of Mohaghegh Ardabili, Ardabil 56199-11367, Iran; sajadsabzi2@gmail.com
2 Division of Research and Graduate Studies, Technological Institute of Chilpancingo, TecNM, Chilpancingo Guerrero 39070, Mexico; joseluis.hernandez@itchilpancingo.edu.mx
3 Agricultural Engineering Research Institute, Agricultural Research, Education and Extension Organization (AREEO), Karaj 31585-845, Iran; farzad_shahrekian@yahoo.com
4 Department of Physics, Shahid Beheshti University, G.C., Tehran 19839, Iran; r_karimzadeh@sbu.ac.ir
* Correspondence: abbaspour@uma.ac.ir; Tel.: +98-914-451-6255

Received: 3 April 2019; Accepted: 6 May 2019; Published: 9 May 2019

Abstract: Segmentation is the first and most important part of the development of any machine vision system with a specific goal. Segmentation is especially important when the machine vision system works under outdoor conditions, that is, under natural light with natural backgrounds. In this case, segmentation faces many challenges, including the presence of various natural and artificial objects in the background and the lack of uniformity of light intensity in different parts of the camera's field of view. Nevertheless, today, machine vision systems must often be used outdoors. For this reason, in this study, a segmentation algorithm was proposed for use in outdoor conditions, without the need for light control or the creation of an artificial background, using video processing, with emphasis on the recognition of apple fruits on trees.
Therefore, a video of more than 12 minutes, containing more than 22,000 frames, was studied under natural light and background conditions. In general, five segmentation steps were used in the proposed algorithm: 1. Using a suitable color model; 2. Using the appropriate texture feature; 3. Using the intensity transformation method; 4. Using morphological operators; and 5. Using different color thresholds. The results showed that the segmentation algorithm had a total correct detection percentage of 99.013%. The highest sensitivity and specificity of the segmentation algorithm were 99.242% and 99.458%, respectively. Finally, the results showed that the processing speed was about 0.825 seconds for the segmentation of one frame.

Keywords: texture features; color features; precision farming; machine vision; segmentation; video processing

1. Introduction

Performing segmentation operations in accordance with the desired purpose has different complexities. In principle, segmentation operations in agriculture and horticulture are more complex than in other sectors. This complexity is caused by crowded backgrounds with various objects. In applications such as site-specific spraying and weed control, segmentation is the first step in the design of machine vision systems [1–4]. Segmentation involves various steps, depending on the complexity of the image background, and may combine several methods; therefore, the programmer's skill is very important in this field. Generally, the conventional segmentation methods are color index-based segmentation, threshold-based segmentation and learning-based segmentation [5]; in this research, threshold-based segmentation was used as one method, combined with others. Bai et al.
[6] believed that the segmentation of vegetation cover from field images is a necessary issue. For this reason, they proposed a new segmentation method based on particle swarm optimization clustering and morphological modeling in the L*a*b* color space. They captured images at 10, 12 and 14 o'clock. The proposed method has two stages: offline learning and online segmentation. In the offline learning stage, the number of optimized clusters was determined based on the training sample set. In the second stage, each pixel was classified as vegetation or non-vegetation. In the last step, 200 images were used to test the proposed system. The results showed that the average segmentation quality of the images was 88.1% to 91.7%. Because of successive droughts in the world and the consequent reduction in groundwater levels, as well as the increasing population, the need for agricultural water management is strongly felt. One way to deal with water shortage is to apply water only in areas where the crop has been cultivated, because the water will then be consumed only by the crop, and the amount of wasted water will be minimized. On large-scale lands, achieving this without new technologies is almost impossible; instead, places with crops can be detected using machine vision systems. In this regard, Hernandez et al. [7] believed that in order to achieve precision agriculture goals, not only new technologies but also software development is necessary. Therefore, they presented a new machine vision application in the form of an automated system for segmenting plants from the background, to monitor cabbage during growth and provide the product information needed to estimate the amount of water required by the crop. Their proposed system consisted of three main steps: 1) imaging and cutting, 2) analyzing the image, and 3) recording information.
For imaging, wooden frames were installed inside the farm, and then the images were taken. To train the proposed system, 1106 cabbage samples were used. The results showed that the proposed system had a 20% error in counting the cabbages. In another study, Tang et al. [8] provided a multi-inference-tree segmentation method for better farm management. The algorithm works based on image features and user requirements: it learns the related rules according to these two, and then applies the color space, the transformation-to-gray-image method, the de-noising method, the local segmentation method, and the morphology processing method after applying each rule. To train the proposed algorithm, 2082 images with a resolution of 1932 × 2576 were captured. A manual method was used to evaluate the intelligent results. The results showed that the intelligent processing assessment rate for more than 80 points was above 83%, with an average of 75%. The processing of each image takes about 23 seconds. Liu et al. [9] developed a machine vision system for the segmentation of apple fruit based on color and position information. This system performs segmentation under artificial light with low brightness. The proposed method has two main steps: the first includes training an artificial neural network using RGB and HSI color space components and providing a model for apple fruit segmentation. Due to the presence of shadows in some parts of the apple (caused by the non-uniformity of light), the segmentation is not performed properly in this step; therefore, to complete the segmentation, a second stage is added, which also considers the color and position of the pixels surrounding the segmented area. In their study, 20 apple fruits were used. The results showed that the proposed system gives acceptable results in the segmentation of these apples.
As observed, previous research focused on segmentation with a simple background, such as separating the plant from the soil, or on segmentation against an artificial background; moreover, all of this research focused on analyzing still images. To formulate an appropriate segmentation algorithm for apple fruits under natural light conditions and a completely natural background, in the presence of various objects such as tree leaves, thin branches, thick branches, tree trunks, blue sky, cloudy sky, green plants, yellow plants, harvested fruits, and baskets, the results of previous research cannot be used, for two reasons: 1. The backgrounds are very complex and contain different objects with different colors. 2. Camera movement in orchards is needed for different operations, such as site-specific spraying, and for this reason the frames do not have good quality. Therefore, the purpose of this study is to develop a segmentation algorithm for working in a completely natural environment, both in terms of light and in terms of background, using video processing, with emphasis on the segmentation of apples on trees. In recent years, horticulture has been one of the most important research subjects at many universities in the world; the main works are directed at the recognition of fruits, counting, detection of plants, monitoring of irrigation, etc.

2. Materials and Methods

Each machine or computer vision system needs different development stages, such as the filming stage, the analysis stage, and so on. In this study, as in other machine vision systems, steps were designed to train the system, including filming, examining different color models, extracting various texture features, employing different morphological operators and using the intensity transformation method.

2.1.
Data Collection

In this study, a digital camera (DFK 23GM021, CMOS, 120 f/s, Imaging Source GmbH, Bremen, Germany) was used for filming in apple orchards in Kermanshah province, Iran. Table 1 shows the details of one of the videos from these orchards. As observed, the video is more than 12 minutes long and contains more than 22,000 frames, recorded on different days, at different times of the day, and in different weather conditions. Since the capability of performing segmentation at different light intensities is an essential principle, the video was recorded in fully natural light conditions throughout the day, with a completely natural background. Some of the light intensities were 398, 1096, 692, 1591, 1923, 894, 2010, 918, 798, 493 and 579 lux.

Table 1. Characteristics of the video studied.

Number  Parameter                          Time/Number
1       Filming time                       More than 12 minutes
2       Extracted frames                   22,001
3       Training frames                    15,401 (70% of all frames)
4       Testing frames                     6,600 (30% of all frames)
5       Background objects in test mode    60,125
6       Number of apples in test mode      42,750

We collected several films of orchards, but only 12 min (22,001 frames) of them were used for training the algorithm. Filming was done at four stages of ripening, including unripe (20 days before maturity), half-ripe (10 days before maturity), ripe, and overripe (10 days after maturity), which were combined for training the algorithm. The distance from the trees was between 0.5 and 2 m, the speed was around 1 m/s, and the viewing angle was nearly parallel to the ground. The camera was held manually, simulating a low-medium height flight of a drone. With the system described, which has a horizontal viewing angle of around 80°, an apple of about 7 cm would be observed with a size of 20 pixels at a distance of about 3 m over the trees. The apple variety was Malus domestica L., var. Red Delicious.

2.2.
Various Color Models

An image has different colors in different color models; in fact, different objects in one image will have different colors in each color model. This feature can be used to distinguish between different background objects and apples. For this investigation, 17 color spaces were examined [10,11], as shown in Table 2.

Table 2. Various color models examined.

Number  Color Model    Number  Color Model     Number  Color Model
1       RGB            7       HSI             13      YPbPr
2       HSV            8       Improved YCbCr  14      YUV
3       YIQ            9       L*a*b*          15      HSL
4       YCbCr          10      JPEG-YCbCr      16      XYZ
5       CMY            11      YDbDr           17      Luv
6       LCH            12      CAT02 LMS

2.3. Extraction of Texture Features

Intuitively, the texture of a region can be described by its roughness and softness; different regions in one image can range from very rough to very soft. Mathematically, there are several methods for describing texture. One of these is based on the gray level co-occurrence matrix (GLCM), extracted from the positions of pixels with the same values. This method gives an average over the entire area in which the texture is examined; it is therefore not applicable here, because the texture of every pixel must be examined. Another method is to measure the spectral range of the texture based on the Fourier spectrum. This spectrum describes periodic or nearly periodic two-dimensional patterns in an image. The measurement is performed in a polar coordinate system (i.e., based on radius and angle), since the spectral properties are interpreted by describing the spectrum in polar coordinates as a function S(r, θ), where S is the spectral function and r and θ are the variables of the polar system. The function S(r, θ) can therefore be considered as two one-dimensional functions, Sθ(r) and Sr(θ), for each direction θ and each frequency r.
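In contrast to these spectrum-based averages, the per-pixel approach adopted in this study (described next) computes a local texture descriptor at every pixel. A SciPy-based sketch of two of these descriptors, local range and local standard deviation, is shown below; the 3×3 window size is an assumption, and the study itself appears to use MATLAB rangefilt/stdfilt/entropyfilt-style filters.

```python
import numpy as np
from scipy import ndimage

def local_range(gray, size=3):
    """Local range: difference between the maximum and minimum value in
    a size x size neighborhood of each pixel (rangefilt-style)."""
    return (ndimage.maximum_filter(gray, size=size).astype(np.int16)
            - ndimage.minimum_filter(gray, size=size).astype(np.int16))

def local_std(gray, size=3):
    """Local standard deviation via E[x^2] - E[x]^2 per neighborhood
    (stdfilt-style); local entropy can be built analogously from a
    neighborhood histogram."""
    g = gray.astype(np.float64)
    mean = ndimage.uniform_filter(g, size=size)
    mean_sq = ndimage.uniform_filter(g * g, size=size)
    return np.sqrt(np.clip(mean_sq - mean * mean, 0.0, None))

gray = np.array([[10, 10, 10],
                 [10, 50, 10],
                 [10, 10, 10]], dtype=np.uint8)
rng = local_range(gray)  # the center pixel has range 50 - 10 = 40
```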
Sθ(r), for constant values of θ, shows the behavior of the spectrum along the radius, while Sr(θ), for constant values of r, shows the behavior of the spectrum along a circle centered at the origin [10]. This method, like the previous one, provides a mean value for the entire area. In the third method, textural descriptors are applied to all image pixels, and the results can be inspected visually. Therefore, in this study, the texture features of local entropy, local standard deviation and local range were investigated.

2.4. Application of Morphological Operators

Outdoor operations under natural light with complex backgrounds are particularly sensitive, as unpredictable noise and effects can make it difficult to achieve the desired goal. One of the most important ways of removing this noise and these unpredicted factors is the use of morphological operators. These include a wide range of operators, such as opening, closing, filling holes, deleting border pixels, removing objects with fewer pixels than a threshold value, thinning, thickening, and others. In the proposed segmentation algorithm, opening, closing, filling holes and deleting objects with fewer than 100 pixels were used at different stages. This threshold value was selected by trial and error, taking care not to remove apple pixels. Computationally, the process of mathematical morphology consists of moving over all the pixels of the image from left to right and from top to bottom in order to find isolated pixels, which are considered noise [12]. This noise is eliminated by applying erosion and dilation with the following equations:

Open = (B ⊖ E) ⊕ E    (1)
Close = (B ⊕ E) ⊖ E    (2)

The opening operation removes fine points or fine structures, and the closing operation fills black holes up to a certain size.
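The morphological cleanup described above can be sketched with SciPy as follows; the default 3×3 structuring element is an assumption, while the 100-pixel threshold comes from the text.

```python
import numpy as np
from scipy import ndimage

def clean_mask(mask, min_pixels=100):
    """Morphological cleanup as used at several stages of the algorithm:
    opening (Eq. (1)), closing (Eq. (2)), hole filling, and removal of
    connected components smaller than min_pixels (the trial-and-error
    threshold of 100 pixels mentioned above)."""
    out = ndimage.binary_opening(mask)   # erosion followed by dilation
    out = ndimage.binary_closing(out)    # dilation followed by erosion
    out = ndimage.binary_fill_holes(out)
    labels, n = ndimage.label(out)
    if n == 0:
        return out
    sizes = np.asarray(ndimage.sum(out, labels, range(1, n + 1)))
    big = 1 + np.flatnonzero(sizes >= min_pixels)  # labels of large objects
    return np.isin(labels, big)
```

For example, applied to a mask containing a 20×20 object and a 3×3 speck, the function keeps only the large object.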
The Importance of Using Intensity Transformation

In segmentation, we look for methods that eliminate background objects while preventing the removal of target-object pixels. The intensity transformation method, by limiting the pixel intensity variation to a desired range, increases the differences between the different objects. Therefore, in this study, part of the segmentation was performed by changing the intensity range from [0, 1] to [0, 0.6] and applying a threshold of 95; since the images were in the uint8 data class, the pixel values were multiplied by 255.

2.6. Different Stages in the Elaboration of the Segmentation Algorithm

Figure 1 shows the main steps in creating the segmentation algorithm. As observed, there are 11 main stages in this algorithm.

Figure 1. Different stages in the development of the segmentation algorithm.

3. Results and Discussion

3.1. The Most Suitable Color Model for the First Stage of Segmentation

Figure 2 shows a sample image in six different color models.
As observed, objects in different color models have different colors. The most suitable color model for segmentation is the one that displays all the objects in the image with a minimum number of colors, because this makes it possible to apply a threshold, or several thresholds, with very high accuracy. These images show that the worst color model is LCH, because it shows almost all the objects of the image in white. The other color models, except Luv, display the different objects with a large number of colors, which makes applying a threshold difficult. The Luv color model was able to represent the various objects in the image with almost three colors; in fact, in this image the leaves are shown in purple, which allowed part of the segmentation to be performed based on this feature. Finally, using trial and error, it was determined that if all the pixel components in the image of the Luv color model are greater than 115, then those pixels belong to the background and should be deleted. 3.2.
Texture Feature with High Performance in the Second Segmentation Process

Figure 2. Sample image in six different color models. (a): RGB color model, (b): Improved YCbCr color model, (c): LCH color model, (d): HSL color model, (e): HSI color model, (f): Luv color model.

Figure 3 illustrates the results of applying the three texture features of local range, local entropy and local standard deviation. As observed, the images extracted by the local range and local standard deviation methods are very similar, except that the edges of the objects in the image obtained from the local range are darker.
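For illustration, the three local features compared here can be computed over a 3×3 neighborhood with plain NumPy. This is only a sketch; in practice library routines (for example MATLAB's rangefilt, entropyfilt and stdfilt, or their equivalents) would normally be used, and the 3×3 window size is an assumption, not a value stated in the paper.

```python
# Local range, local standard deviation and local entropy over a 3x3
# neighborhood (edge-padded). Illustrative sketch, not the paper's code.
import numpy as np

def _windows(img):
    """Stack the nine 3x3-shifted copies of an edge-padded image."""
    p = np.pad(img, 1, mode="edge")
    return np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)])

def local_range(img):
    w = _windows(img)
    return w.max(axis=0) - w.min(axis=0)

def local_std(img):
    return _windows(img).std(axis=0)

def local_entropy(img, levels=256):
    """Shannon entropy of the 9 neighborhood values (integer images)."""
    w = _windows(img).astype(int)
    h, wd = img.shape
    ent = np.zeros((h, wd))
    for i in range(h):
        for j in range(wd):
            counts = np.bincount(w[:, i, j], minlength=levels)
            p = counts[counts > 0] / 9.0
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent
```

On a perfectly uniform region all three features are zero, while object edges produce high local range and local standard deviation, which is the behavior exploited in this section.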
The images from these two methods represent more objects than the local entropy method. Therefore, the image resulting from the local range texture feature was finally chosen as the target image for the next step of the segmentation. This image was converted into a binary image, and segmentation was then performed by applying a threshold of 1: if a pixel has a value equal to 1, it belongs to the background and should be deleted.

Figure 3. Texture feature with high performance in the second segmentation process. (a): The original image, (b): The image obtained by applying the local range feature, (c): The image obtained by applying the local entropy feature, (d): The image obtained by applying the local standard deviation feature.

3.3. Intensity Transformation Performance in the Third Step of Segmentation

Figure 4 shows the performance of the intensity transformation. Figure 4a shows the main image studied. As observed, this image contains various objects, such as green leaves in the shade, green leaves in the sun, soil, green plants in the shade, green plants in the sun, tiny branches, thick branches, tree trunks, and others.
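The intensity transformation of Section 2.5, which compresses the output range from [0, 1] to [0, 0.6], rescales back to uint8 and then applies the threshold of 95, might be sketched as follows. The exact mapping used by the authors is not specified, so this simple linear version is an assumption.

```python
# Illustrative intensity transformation: [0,1] -> [0,0.6], rescale to
# uint8, threshold at 95. The linear mapping is an assumption.
import numpy as np

def intensity_transform(gray_uint8, out_high=0.6, threshold=95):
    g = gray_uint8.astype(np.float64) / 255.0        # to [0, 1]
    g = g * out_high                                 # compress to [0, 0.6]
    g8 = np.clip(np.round(g * 255.0), 0, 255).astype(np.uint8)
    mask = g8 > threshold                            # True = kept pixels
    return g8, mask
```

With these parameters the brightest possible pixel (255) maps to 153 and survives the threshold, while a mid-gray pixel (100) maps to 60 and is removed, which is how bright trunk and branch pixels can be separated as in Figure 4.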
Figure 4b shows the image segmented in the two previous steps by the color and texture methods. As observed, most of the branches and trunks of the trees remained unchanged. Figure 4c shows the image of the intensity transformation. Eventually, by applying the threshold of 95 to the image of Figure 4c, the image shown in Figure 4d was obtained. Comparing this image with the image of Figure 4b, it is clear that many parts of the trunks and branches have been deleted.

3.4. The Performance of the Segmentation Algorithm in Different Modes of Ordering the Color, Texture and Intensity Transformation Methods

One of the innovations of this research is the arrangement of the sequence of the different segmentation methods.
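The ordering experiment of this section can be thought of as composing the same mask-refining stages in different orders. In the sketch below the stage bodies are simplified placeholders (loosely inspired by the thresholds of Sections 3.1 to 3.3), not the paper's actual operations; only the composition mechanism is the point.

```python
# Schematic pipeline: each stage refines a boolean foreground mask, and
# the stages can be composed in any order. Stage internals are placeholders.
import numpy as np

def color_stage(img, mask):
    # e.g. drop pixels whose components all exceed 115 (cf. Section 3.1)
    return mask & ~(img > 115).all(axis=2)

def texture_stage(img, mask):
    # e.g. drop texture-flat pixels (cf. the binarized local range, Section 3.2)
    flat = (img.max(axis=2) - img.min(axis=2)) < 2
    return mask & ~flat

def intensity_stage(img, mask):
    # e.g. keep pixels above an intensity-transform threshold (cf. Section 3.3)
    return mask & (img.mean(axis=2) * 0.6 > 95)

def segment(img, stages):
    mask = np.ones(img.shape[:2], dtype=bool)
    for stage in stages:
        mask = stage(img, mask)
    return mask

# color -> texture -> intensity: the best-performing order (Figure 5d)
best_order = [color_stage, texture_stage, intensity_stage]
```

Because real stages are tuned on the output of the stages before them, changing the order changes the result, which is what Figure 5 demonstrates.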
Figure 5 shows three different sequences of the texture, color and intensity transformation methods. Figure 5a shows the original image. Figure 5b shows the image segmented, before applying the color thresholds, with the sequence texture method, color method, intensity transformation method. As observed, the segmentation accuracy is very low: many of the relevant apple segments have been deleted, while many background pixels remain. Figure 5c shows the segmented image of Figure 5a with the sequence intensity transformation method, texture method, color method. This sequence performs better than the previous one, but in general it still has low accuracy. Figure 5d shows the result of the segmentation algorithm with the sequence color method, texture method, intensity transformation method. As observed, the algorithm performs very well: with this sequence, a large part of the background was removed while the apple pixels were not deleted.

Figure 4.
Intensity transformation performance in the third step of segmentation. (a): Original image, (b): Image segmented before this step, (c): Image corresponding to the intensity transformation, (d): Image segmented after applying the threshold to the image of the intensity transformation.

Figure 5. The performance of the segmentation algorithm in different modes of ordering the color, texture and intensity transformation methods. (a) Original image. (b) The resulting image after applying the sequence of the texture method, the color method and the intensity transformation method. (c) The resulting image after applying the sequence of the intensity transformation method, the texture method and the color method.
(d) The image obtained after applying the sequence of the color method, the texture method and the intensity transformation method.

3.5. Applying a Thresholding Function to Complete the Segmentation Process

After completing the first part of the segmentation, which involves applying thresholds using the different methods in the chosen sequence, the second part of the segmentation must be implemented to complete the process, owing to the presence of small objects remaining in the background. Because of the sensitivity of the task, a thresholding function based on the RGB color space channels was used for the final segmentation, built by carefully studying the frames so as not to remove apple pixels. After surveying different images under different light conditions, such as shaded and sunny scenes, as well as various objects on the trees, 10 color thresholds were selected for the function. The function operates pixel by pixel: each pixel is examined individually, and the values of its RGB components are compared with the 10 color thresholds. The function has two outputs, 0 and 1: an output of 0 means the pixel belongs to the background, and an output of 1 means the pixel belongs to an apple. These thresholds are shown in Table 3. Figure 6 shows two sample images displaying the performance of a number of the thresholds. The target objects are marked with bold blue lines.
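The per-pixel thresholding function can be sketched as follows, using only two of the ten rules of Table 3 for brevity. Each rule marks a pixel as background (output 0) when its R, G and B values fall inside the rule's bands and the stated channel difference is small; pixels matched by no rule are labeled as apple (output 1).

```python
# Sketch of the pixel-wise RGB thresholding function. Only rules 5 and 6
# of Table 3 are included here; the full function uses all ten.
RULES = [
    # (R_low, R_high, G_low, G_high, B_low, B_high, channel pair, max diff)
    (98, 108, 82, 90, 22, 38, ("R", "G"), 25),     # rule 5
    (120, 128, 110, 118, 40, 55, ("R", "G"), 15),  # rule 6
]

def classify_pixel(r, g, b):
    """Return 0 for background, 1 for apple, per the rules above."""
    ch = {"R": r, "G": g, "B": b}
    for rl, rh, gl, gh, bl, bh, (c1, c2), d in RULES:
        if rl < r <= rh and gl < g < gh and bl < b < bh and abs(ch[c1] - ch[c2]) < d:
            return 0
    return 1
```

For example, a pixel (100, 85, 30) satisfies rule 5 and is labeled background, while a strongly red pixel such as (200, 50, 40) matches no rule and is kept as apple.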
Other objects left in these images, which are absent in the right-hand images, were removed by other thresholds.

Table 3. Different thresholds to remove background pixels remaining from the previous steps.

1 FR(i,j)>90 & FR(i,j)<=110 & FG(i,j)>55 & FG(i,j)<80 & FB(i,j)>40 & FB(i,j)<62 & abs(FG(i,j)-FB(i,j))<20
2 FR(i,j)>92 & FR(i,j)<=102 & FG(i,j)>82 & FG(i,j)<94 & FB(i,j)>20 & FB(i,j)<35 & abs(FR(i,j)-FG(i,j))<15
3 FR(i,j)>115 & FR(i,j)<=130 & FG(i,j)>100 & FG(i,j)<115 & FB(i,j)>49 & FB(i,j)<55 & abs(FR(i,j)-FG(i,j))<20
4 FR(i,j)>102 & FR(i,j)<=125 & FG(i,j)>85 & FG(i,j)<105 & FB(i,j)>35 & FB(i,j)<60 & abs(FR(i,j)-FG(i,j))<25
5 FR(i,j)>98 & FR(i,j)<=108 & FG(i,j)>82 & FG(i,j)<90 & FB(i,j)>22 & FB(i,j)<38 & abs(FR(i,j)-FG(i,j))<25
6 FR(i,j)>120 & FR(i,j)<=128 & FG(i,j)>110 & FG(i,j)<118 & FB(i,j)>40 & FB(i,j)<55 & abs(FR(i,j)-FG(i,j))<15
7 FR(i,j)>190 & FR(i,j)<=202 & FG(i,j)>179 & FG(i,j)<190 & FB(i,j)>48 & FB(i,j)<53 & abs(FR(i,j)-FG(i,j))<20
8 FR(i,j)>100 & FR(i,j)<=110 & FG(i,j)>75 & FG(i,j)<90 & FB(i,j)>68 & FB(i,j)<82 & abs(FG(i,j)-FB(i,j))<15
9 FR(i,j)>=142 & FR(i,j)<=167 & FG(i,j)>=120 & FG(i,j)<139 & FB(i,j)>=67 & FB(i,j)<97 & abs(FR(i,j)-FG(i,j))<30
10 FR(i,j)>=95 & FR(i,j)<=115 & FG(i,j)>=49 & FG(i,j)<70 & FB(i,j)>=25 & FB(i,j)<50 & abs(FG(i,j)-FB(i,j))<25

Figure 6. Applying different color thresholds to complete the segmentation process. (a): The image before applying a threshold, (b): The image after applying threshold 6 of Table 3, (c): The image before applying a threshold, (d): The image after applying threshold 5 of Table 3.

3.6.
Accuracy of the Comprehensive Segmentation Algorithm

Table 4 shows the average percentage of background pixels removed by each segmentation method in the comprehensive segmentation algorithm. The highest percentage of background pixels removed, 36%, belongs to the threshold in the Luv color space. As this table shows, no single method can perform the segmentation alone; therefore, a combination of different methods is needed for high-accuracy segmentation. The combination of the different segmentation techniques and their arrangement can be considered an innovation. Table 5 shows the confusion matrix of the thresholding function; the class-by-class error of this segmentation method is about 0.8%.

Table 4. The average percentage of background pixels removed by each segmentation method.

Main Segmentation Method | Average Percentage of Background Pixels Removed
The use of the threshold in the Luv color space | 36
The use of the texture feature | 26
The use of morphological operators | 23
The use of the thresholding function | 15

Table 5. Confusion matrix of the thresholding function. Classes correspond to: 1: Apple object pixels, 2: Background object pixels.

Predicted/Real | 1 | 2 | All Data | Class-by-Class Error (%)
1 | 52,139 | 429 | 52,568 | 0.816
2 | 389 | 49,123 | 49,512 | 0.785
Overall classification accuracy: 99.20%.

Table 6 shows the confusion matrix and the total detection percentage of the proposed segmentation algorithm. As observed, the objects in the images are divided into two classes: apples and background objects. The table shows that 324 of the 42,750 apple samples were mistakenly placed in the background-objects class by the segmentation algorithm, giving an error of 0.758% for this class. The algorithm also mistakenly classified 691 of the 60,125 background-object samples into the apple class, giving an error of 1.15% for this class.
Finally, the total detection percentage of the segmentation algorithm is 99.013%. This accuracy is very good for this number of samples, which shows that the algorithm was configured properly.

Table 6. Confusion matrix and total detection percentage of the proposed segmentation algorithm.

Class | Apples | Background Objects | All Data | Wrong Diagnosis (%) | Total Correct Diagnosis (%)
Apples | 42,426 | 324 | 42,750 | 0.758 | 99.013
Background objects | 691 | 59,434 | 60,125 | 1.15 | 99.013

3.7. Performance of the Segmentation Algorithm

To evaluate the performance of the segmentation algorithm, the three criteria of sensitivity, specificity and accuracy were used. By definition, sensitivity reflects the wrong placement of the samples of the studied class, and specificity reflects the wrong placement of the samples of the other class into the studied class. Finally, accuracy is the overall percentage of samples placed correctly in their classes. These three criteria are expressed by Equations (3) to (5):

Sensitivity = TP / (TP + FN) (3)

Specificity = TN / (FP + TN) (4)

Accuracy = (TP + TN) / (TP + TN + FP + FN) (5)

TP is the number of samples of each class that are correctly classified; TN is the sum of the samples on the main diagonal of the confusion matrix minus the number of correctly classified samples of the studied class; FN is the sum of the samples in the row of the studied class minus the number of correctly classified samples of that class; and FP is the sum of the samples in the column of the studied class minus the number of correctly classified samples of that class [13]. Table 7 shows the results of the performance criteria of the segmentation algorithm. The highest sensitivity, 99.242%, belongs to the apple class, and the highest specificity, 99.458%, belongs to the background-objects class. Figure 7 shows pseudocode describing the final segmentation algorithm in 13 stages.
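Equations (3) to (5) can be applied directly to the counts in Table 6. For the apple class, TP = 42,426, FN = 324, FP = 691 and TN = 59,434, which gives a sensitivity of about 99.24% and an overall accuracy of about 99.01%, in line with Tables 6 and 7. A small sketch:

```python
# Sensitivity, specificity and accuracy as defined in Equations (3)-(5).
def metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (fp + tn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Counts for the apple class, read from the Table 6 confusion matrix.
sens, spec, acc = metrics(tp=42426, fn=324, fp=691, tn=59434)
```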
Table 7. Results of the performance criteria of the segmentation algorithm.

Class | Sensitivity (%) | Accuracy (%) | Specificity (%)
Apples | 99.242 | 99.013 | 98.397
Background objects | 98.851 | 99.013 | 99.458

Figure 7. Pseudocode of the final segmentation algorithm.
3.8. The Speed of the Segmentation Algorithm

The system used to run the segmentation algorithm and detect the background objects and apples was a laptop with an Intel Core i3-330M processor at 2.13 GHz, 4 GB of RAM and Windows 10. The results showed a processing speed of about 0.825 seconds for the segmentation of one frame. This speed is very good for this research, because the background of the frames was very complex and full of different objects. After presenting the segmentation algorithm and reviewing its performance, it is necessary to compare the results with those of other researchers. Owing to the novelty of the proposed method as well as the different filming conditions, a direct comparison of the results is not possible. However, two studies, by Zhao et al. [14] and Aquino et al. [15], were used for comparison. Zhao et al. [14] provided a method for detecting immature green citrus in citrus orchards. Aquino et al. [15] proposed a segmentation-based method for counting the number of grape berries in a cluster in color images under controlled light conditions. The results are shown in Table 8. As Table 8 shows, the method proposed in this study achieves a higher detection rate with a larger number of samples than the other two studies.

Table 8. Comparison of the results obtained in this study with two other studies.

Method | Number of Samples | Correct Detection Rate (%)
Proposed method | 102,875 (test data) | 99.013
[14] | 68 | 83
[15] | 152 | 95.72

After this comparison with other research, the advantages of the proposed method can be summarized as: 1. High processing speed; 2. High accuracy; 3. Usability under the natural conditions of the orchard; 4. Usability in different orchards; 5. Usability for the segmentation of different fruits on the trees of an orchard.
This algorithm can be used for different purposes, such as: 1. Use in fruit-picking robots, with emphasis on apples; 2. Use in automatic systems for estimating fruit yields, with emphasis on apples; 3. Use in automatic systems for monitoring fruits during the growth stages, with emphasis on apples.

4. Conclusions

In this study, a new method was developed for the segmentation of apple fruits on trees under natural light conditions, without using any artificial background, with emphasis on video processing. The most important results are:

1. The most important challenge in developing the segmentation algorithm was the presence of different objects with different colors in the background. Examples of these objects include tree trunks in the shade, tree trunks in the sun, tiny branches in the shade, tiny branches in the sun, tiny branches connected to trunks, green leaves in the sun, green leaves in the shade, pestle leaves, green plants, yellow plants, cloudy sky, sunny sky, and artificial objects such as nylon, baskets, harvested apples, flakes and so on.

2. The most appropriate color model among the 17 examined was Luv; this model eliminates many leaves in the first stage.

3. The best feature for the second stage of segmentation among the three texture features of local range, local entropy and local standard deviation was the local range.

4. The use of the intensity transformation method eliminated a large part of the pixels belonging to the trunks and tree branches.

5. The use of morphological operators is necessary at different stages of the segmentation.

6. The use of color thresholds in the final stage of segmentation eliminates objects remaining from the previous stages.

7. The results showed that the total detection percentage of the segmentation algorithm was 99.013%.

8.
The highest sensitivity, 99.242%, was related to the apple class, and the highest specificity, 99.458%, to the background-objects class.

9. The results showed that the processing speed was about 0.825 seconds for the segmentation of one frame.

For future work, a recognition system for fruits and vegetables should be implemented to improve recognition functionality and flexibility for wider use. The process should be improved by extending its functions to process and recognize a wider variety of fruit images. In addition, a texture-based analysis technique could be combined with the existing three-feature analysis technique in order to better discern different fruit images.

Author Contributions: Conceptualization, S.S. and Y.A.-G.; methodology, S.S., Y.A.-G., F.A., R.K. and J.L.H.-H.; software, S.S.; validation, S.S., Y.A.-G., F.A., R.K. and J.L.H.-H.; formal analysis, S.S. and J.L.H.-H.; investigation, S.S., Y.A.-G., F.A., R.K. and J.L.H.-H.; resources, S.S.; data curation, S.S.; writing—original draft preparation, S.S.; writing—review and editing, J.L.H.-H.; visualization, S.S.; supervision, Y.A.-G.; project administration, Y.A.-G.; funding acquisition, Y.A.-G.

Funding: This study was financially supported by the Iran National Science Foundation (INSF) through research project 96007466.

Conflicts of Interest: The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Guerrero, J.M.; Pajares, G.; Montalvo, M.; Romeo, J.; Guijarro, M. Support Vector Machines for crop/weeds identification in maize fields. Expert Syst. Appl. 2012, 39, 11149–11155. [CrossRef]

2. Montalvo, M.; Guerrero, J.M.; Romeo, J.; Emmi, L.; Guijarro, M.; Pajares, G. Automatic expert system for weeds/crops identification in images from maize fields.
Expert Syst. Appl. 2013, 40, 75–82. [CrossRef]

3. Romeo, J.; Guerrero, J.M.; Montalvo, M.; Emmi, L.; Guijarro, M.; Gonzalez-De-Santos, P.; Pajares, G. Camera Sensor Arrangement for Crop/Weed Detection Accuracy in Agronomic Images. Sensors 2013, 13, 4348–4366. [CrossRef] [PubMed]

4. Arroyo, J.; Guijarro, M.; Pajares, G. An instance-based learning approach for thresholding in crop images under different outdoor conditions. Comput. Electron. Agric. 2016, 127, 669–679. [CrossRef]

5. Hamuda, E.; Glavin, M.; Jones, E. A survey of image processing techniques for plant extraction and segmentation in the field. Comput. Electron. Agric. 2016, 125, 184–199. [CrossRef]

6. Bai, X.; Cao, Z.; Wang, Y.; Yu, Z.; Hu, Z.; Zhang, X.; Li, C. Vegetation segmentation robust to illumination variations based on clustering and morphology modelling. Biosyst. Eng. 2014, 125, 80–97. [CrossRef]

7. Hernández-Hernández, J.; Ruiz-Hernández, J.; García-Mateos, G.; González-Esquiva, J.; Ruiz-Canales, A.; Molina-Martínez, J. A new portable application for automatic segmentation of plants in agriculture. Agric. Water Manag. 2017, 183, 146–157. [CrossRef]

8. Tang, J.; Miao, R.; Zhang, Z.; He, D.; Liu, L. Decision support of farmland intelligent image processing based on multi-inference trees. Comput. Electron. Agric. 2015, 117, 49–56. [CrossRef]

9. Liu, X.; Zhao, D.; Jia, W.; Ruan, C.; Tang, S.; Shen, T. A method of segmenting apples at night based on color and position information. Comput. Electron. Agric. 2016, 122, 118–123. [CrossRef]

10. Gonzalez, R.C.; Woods, R.E.; Eddins, S.L. Digital Image Processing Using MATLAB; Prentice Hall: Upper Saddle River, NJ, USA, 2004.

11. Hernández-Hernández, J.; García-Mateos, G.; González-Esquiva, J.; Escarabajal-Henarejos, D.; Ruiz-Canales, A.; Molina-Martínez, J. Optimal color space selection method for plant/soil segmentation in agriculture. Comput. Electron. Agric. 2016, 122, 124–132. [CrossRef]

12. Li, Y.; Zuo, M.J.; Lin, J.; Liu, J.
Fault detection method for railway wheel flat using an adaptive multiscale morphological filter. Mech. Syst. Signal Process. 2017, 84, 642–658. [CrossRef]

13. Wisaeng, K. A comparison of decision tree algorithms for UCI repository classification. Int. J. Eng. Trends Technol. 2013, 4, 3397–3401.

14. Zhao, C.; Lee, W.S.; He, D. Immature green citrus detection based on colour feature and sum of absolute transformed difference (SATD) using colour images in the citrus grove. Comput. Electron. Agric. 2016, 124, 243–253. [CrossRef]

15. Aquino, A.; Diago, M.P.; Millán, B.; Tardáguila, J. A new methodology for estimating the grapevine-berry number per cluster using image analysis. Biosyst. Eng. 2017, 156, 80–95. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
