Get 20M+ Full-Text Papers For Less Than $1.50/day. Start a 14-Day Trial for You or Your Team.

Learn More →

Colored 3D Path Extraction Based on Depth-RGB Sensor for Welding Robot Trajectory Generation

Colored 3D Path Extraction Based on Depth-RGB Sensor for Welding Robot Trajectory Generation Article Colored 3D Path Extraction Based on Depth-RGB Sensor for Welding Robot Trajectory Generation Alfonso Gómez-Espinosa * , Jesús B. Rodríguez-Suárez , Enrique Cuan-Urquizo , Jesús Arturo Escobedo Cabello and Rick L. Swenson Tecnologico de Monterrey, Escuela de Ingenieria y Ciencias, Querétaro 76130, Mexico; jbraian.rguez@gmail.com (J.B.R.-S.); ecuanurqui@tec.mx (E.C.-U.); arturo.escobedo@tec.mx (J.A.E.C.); rswenson@tec.mx (R.L.S.) * Correspondence: agomeze@tec.mx; Tel.: +52-(442)-238-3302 Abstract: The necessity for intelligent welding robots that meet the demand in real industrial production, according to the objectives of Industry 4.0, has been supported owing to the rapid development of computer vision and the use of new technologies. To improve the efficiency in weld location for industrial robots, this work focuses on trajectory extraction based on color features identification on three-dimensional surfaces acquired with a depth-RGB sensor. The system is planned to be used with a low-cost Intel RealSense D435 sensor for the reconstruction of 3D models based on stereo vision and the built-in color sensor to quickly identify the objective trajectory, since the parts to be welded are previously marked with different colors, indicating the locations of the welding trajectories to be followed. This work focuses on 3D color segmentation with which the Citation: Gómez-Espinosa, A.; points of the target trajectory are segmented by color thresholds in HSV color space and a spline Rodríguez-Suárez, J.B.; Cuan-Urquizo, E.; Cabello, J.A.E.; cubic interpolation algorithm is implemented to obtain a smooth trajectory. Experimental results Swenson, R.L. Colored 3D Path have shown that the RMSE error for V-type butt joint path extraction was under 1.1 mm and below Extraction Based on Depth-RGB 0.6 mm for a straight butt joint; in addition, the system seems to be suitable for welding beads of Sensor for Welding Robot Trajectory various shapes. Generation. Automation 2021, 2, 252–265. https://doi.org/10.3390/ Keywords: path model; 3D reconstruction; seam extraction; RGB-D; color segmentation; stereo automation2040016 structured light Communicated bies: Raffaele Carli, Graziana Cavone, Domenico Bianchi and Nicola Epicoco 1. Introduction In the era of globalization, manufacturing industries deal with competitive and un- Received: 13 September 2021 certain markets, where the dynamics of innovations and shortened life cycles products Accepted: 4 November 2021 Published: 5 November 2021 create a problem for the industry to become more productive and flexible; for instance, welding processes are one of the most common tasks in manufacturing industries, and Publisher’s Note: MDPI stays neutral robots equipped with intelligent programming tools represent the best alternatives to with regard to jurisdictional claims in achieve these goals [1]. published maps and institutional affil- Nowadays, there are two main categories of robotic programming methods in indus- iations. trial applications, as well as online and offline programming [2]; however, the time spent programming a new path for a job in high-volume manufacturing industries becomes the main challenge of using welding robots, especially when changes and uncertainties to the geometric shape in products occur, which is why robotic systems based on intelligence and robotic perception are one of the four pillars of research and implementation according to Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. ”Industry 4.0” objectives [3]. This article is an open access article A computer vision system is required to capture the surfaces or features and help distributed under the terms and achieve fast offline programming [2]. However, the obstacle toward achieving an intelligent conditions of the Creative Commons welding robot is solving the problem of trajectory planning, seam tracking, and the control Attribution (CC BY) license (https:// of welding systems against errors caused by light and environment disturbances to which creativecommons.org/licenses/by/ each vision system is exposed [4]. 4.0/). Automation 2021, 2, 252–265. https://doi.org/10.3390/automation2040016 https://www.mdpi.com/journal/automation Automation 2021, 2 253 For example, as regards simple systems using only a single camera as a sensor, Kiddee et al. [5] develop a technique to find a T-welding seem based on image processing to smooth the image and extract the edges of the image by using a canny algorithm, to find the initial and endpoints. In the same way, Ye et al. [6] acquire the edges of a series of images to determine the location of the weld seam using a series of known characteristics. Yang et al. [7] present a welding detection system based on 3D reconstruction technology for the arc welding robot. The shape from the shading SFS algorithm is used to reconstruct the 3D shapes of the welding seam. Laser vision systems are among the most widely used sensors in welding robotics due to the precision and fast data processing that these devices provide. In particular, laser sensors are mostly applied in weld tracking research, where they have been developed since simple systems such as the one by Fernandez et al. [8] that implements a low-cost laser vision system based on a webcam on the robot arm oriented toward the laser stripe projected at a 45 angle, up to systems already proved in the industrial context, for example, the study by Liu et al. [9], in which an autonomous method is proposed to find the initial weld position for a fillet weld seam formed by two steel plates. This method employs an automatic dynamic programming-based laser light inflection point extraction algorithm. The algorithm for this method can support factors induced by natural light that may be present during the processing of laser vision images. Disturbances in laser systems on metallic surfaces are a common problem in weld bead localization. Li et al. [10] suggest reducing the influence of noise on the extraction of the centerline, through the double-threshold recursive least square method. Later, an automatic welding seam recognition and tracking method by utilizing structured light vision to search through a Kalman filter the profile of the welding seam in a small area, aiming to avoid some disturbances [10]. Another approach in structured light systems incorporated an optical filter and LED lighting developed to reduce the effect of noise produced by the arc torch. Where a fuzzy-PID controller can be used to obtain the weld seam in horizontal and vertical directions simultaneously [11]. Recent systems tend to be more robust or complex in terms of the number of tools involved in obtaining images, filtering data. For example, Zeng et al. [12] propose a weld position recognition method based on directional light and structured light information fusion during multi-layer/multi-pass welding. On other hand, Guo et al. [13] present a multifunctional monocular visual sensor based on combined laser structured lights, which has the functions such as the detection of the welding groove cross-sectional parameters, application for the joint tracking, the detection of the welding torch height, the measuring of the weld bead appearance, and the monitoring of welding process in real time. Other approaches for real-time processing are described by Kos et al. [14] to compute the position of the laser beam and the seam in 3D during welding with a camera and illumination laser in order to equalize the brightness of the keyhole and the surrounding area. Zhang et al. [15] acquire the 3D information by multiple segment laser scanning. The weld features are extracted by cubic smoothing spline, to detect the characteristic parameters of weld lap joint with a deviation lower than 0.4 mm. Another research topic in robotic vision is systems that acquire images from two optical devices. In this sense, Chen et al. [16] propose a Canny detector, where the two parallel edges captured in a butt v-joint are used to fitting the value of the start welding position. In a similar way, Dinham et al. [17] use a Hough transform to detect the outside boundary of the weldment so that the background can be removed. In weld tracking systems, Ma et al. [18] use two normal charge-coupled device cameras to capture clear images from two directions—one is used to measure the root gap, and the other is used to measure the geometric parameters of the weld pool. Nowadays, owing to the precision of sensors and to have a complete understanding of the environment, 3D reconstruction techniques have been explored. In reconstruction with laser systems, 3D point cloud data are used to reconstruct welding seam, through the point cloud and guided by a neural network proposed by Xiao et al. [19], which can obtain Automation 2021, 2 254 the equations and initial points of the weld seam. The test results of the guidance prove that the extraction error is less than 0.6 mm, meeting actual production demand. In stereo vision, Yang et al. [20] propose a 3D path teaching method to improve the efficiency of teaching playback based on a stereo-structured light vision system using a seam extraction algorithm, which could achieve fast and accurate seam extraction to modify the model of the weld seam. Their system could well realize a fast and accurate 3D path teaching of a welding robot. Experiment results show a measurement resolution less than 0.7 mm and are suitable for V-type butt joint before welding [21]. In point clouds acquired with RGB-D sensors, Maiolino et al. [22] use an ASUS Xtion sensor to register and integrate the point cloud with the CAD model to perform an offline programming system for sealant dispensing robot. On the other hand, Zhou et al. [23] use an Intel family camera to detect and generate the trajectory with an algorithm based on the gradient of the edge intensity in the point cloud. However, the main limitation of the proposals found in the literature is that they seek to find a solution to a particular type of weld seam. Global path extraction systems are in the process of development; therefore, we find that the integration of color and segmentation of these data have not been the subject of research in welding robotics as a global acquisition system. In this work, a color point cloud segmentation method was implemented to extract 3D paths for robot trajectory generation. The developed system consists of a RealSense D435 sensor, a low-cost device that incorporates technologies such as stereo vision and RGB sensor, with which the 3D reconstruction of a point cloud that incorporates the color of the work object, with this color information a series of filters are applied in the HSV color space to segment the region of interest where the weld bead is expected to be applied. Once captured the zone, a spline cubic interpolation is executed to calculate the path that smoothest the trajectory of the welding points that would require a robotic manipulator. The rest of this paper is organized as follows: Section 2 describes the theory related to vision systems and algorithms to perform our 3D reconstruction processing seam extraction. Section 3 introduces the configuration of our experiment platform and vision sensor, and the results are presented in Section 4. Finally, in Section 5, concluding remarks are provided. 2. Materials and Methods 2.1. Stereo Vision Arrangements that consist of two image sensors (cameras) separated by a known distance are known as stereo systems. The principle of stereoscopy is based on the ability of the human brain to estimate the depth of objects present in the images captured by eyes [24]. In the stereoscopic configuration, two cameras are placed close to each other with parallel optical axes. Both cameras, with centers C and C separated by a distance L R B, called the baseline, have the same focal length f so that the left and right images are in parallel planes. A point in the three-dimensional space P will be projected in different positions, p and p with coordinates (x , y ) and (x , y ), respectively, of the planes of L R L L R R the images because it is seen from slightly different angles. This difference in position is known as disparity and is mathematically described as disparity = x x and is used to L R calculate the z distance in (1) through the geometric relationship [25], as shown in Figure 1. B  f z = (1) disparity Automation 2021, 2, FOR PEER REVIEW 4 Automation 2021, 2 255 Figure 1. Geometric relationship of a stereo camera configuration. The 3D image of target scene at Figure 1. Geometric relationship of a stereo camera configuration. The 3D image of target scene at point P. point P. 2.2. Structured Light 2.2. Structured Light Structured light is an active method to improve depth acquisition by using an external Structured light is an active method to improve depth acquisition by using an light source that provides additional information to the system. Structured light is based external light source that provides additional information to the system. Structured light on the use of active illumination of the scene with a specially designed 2D spatially varying is based on the use of active illumination of the scene with a specially designed 2D intensity pattern, where the camera sensor searches for artificially projected features that spatially varying intensity pattern, where the camera sensor searches for artificially serve as additional information for triangulation [26]. In the present work proposal, the projected features that serve as additional information for triangulation [26]. In the RealSense sensor has an optical projector that uses a pseudo-random binary array to present work proposal, the RealSense sensor has an optical projector that uses a pseudo- produce a grid-indexing strategy of dots. The array is defined by an n  n array encoded 1 2 random binary array to produce a grid-indexing strategy of dots. The array is defined by using a pseudo-random sequence, such that every k by k sub-window over the entire 1 2 array an n1 × is unique n2 array enc [27].oded using a pseudo-random sequence, such that every k1 by k2 sub- window over the entire array is unique [27]. 2.3. Point Cloud 2.3. Poi Depth nt Ccameras loud deliver depth images, in other words, images whose intensity values represent the depth of the point (x, y) in the scene. A point cloud is a data structure used Depth cameras deliver depth images, in other words, images whose intensity values to represent points with three dimensions (X, Y, Z), where the depth is represented by the represent the depth of the point (x, y) in the scene. A point cloud is a data structure used Z coordinate [28]. Once the depth images are available, it is possible to obtain the point to represent points with three dimensions (X, Y, Z), where the depth is represented by the cloud using the intrinsic values of the camera with which the information was acquired. Z coordinate [28]. Once the depth images are available, it is possible to obtain the point This process is known as deprojection; a point P with coordinates (X, Y, Z) can be obtained cloud using the intrinsic values of the camera with which the information was acquired. according to (2, 3, 4) from the depth information D being (x, y) the rectified position of x,y This process is known as deprojection; a point P with coordinates (X, Y, Z) can be obtained the pixel in the sensor, where the variables c , c , f , and f are the intrinsic values of the x y x y according to (2, 3, 4) from the depth information Dx,y being (x, y) the rectified position of camera used to acquire the information, with (f , f ) as the components of the focal length x y the pixel in the sensor, where the variables cx, cy, fx, and fy are the intrinsic values of the and (c , c ) the image projection center [29]. x y camera used to acquire the information, with (fx, fy) as the components of the focal length and (cx, cy) the image projection center [29]. D (C x) x,y x X = (2) D C x , f X= (2) D C y x,y y Y = (3) D y C y (3) Y= Z = D (4) x,y Z=D 2.4. Colored Point Cloud (4) Some 3D sensors are often coupled to an RGB camera, with which research on color depth registration is being carried out. Registering two cameras means knowing the 2.4. Colored Point Cloud relative position and orientation of one scene with respect to another [30]. In principle, Some 3D sensors are often coupled to an RGB camera, with which research on color color integration consists of reprojecting each 3D point onto the RGB image to adopt its depth registration is being carried out. Registering two cameras means knowing the relative color. When reprojected in 3D, the generated point cloud contains six information fields— position and orientation of one scene with respect to another [30]. In principle, color three for spatial coordinates and three with color values. However, due to occlusion, not all integration consists of reprojecting each 3D point onto the RGB image to adopt its color. When reconstructed 3D points in the scene are visible from the RGB camera, so some points may reprojected in 3D, the generated point cloud contains six information fields—three for spatial lack color information [31]. Figure 2 shows the result of the colorization of the point cloud. Automation 2021, 2, FOR PEER REVIEW 5 coordinates and three with color values. However, due to occlusion, not all reconstructed 3D Automation 2021, 2, FOR PEER REVIEW 5 points in the scene are visible from the RGB camera, so some points may lack color information [31]. Figure 2 shows the result of the colorization of the point cloud. Automation 2021, 2 coordinates and three with color values. However, due to occlusion, not all reconstructed 3 256 D points in the scene are visible from the RGB camera, so some points may lack color information [31]. Figure 2 shows the result of the colorization of the point cloud. (a) (b) (a) (b) (c) (c) Figure 2. Point cloud color map registration: (a) depth information; (b) color information; (c) colored point cloud through RGB registration. Figure 2. Point cloud color map registration: (a) depth information; (b) color information; (c) colored Figure 2. Point cloud color map registration: (a) depth information; (b) color information; (c) colored point cloud through RGB registration. point cloud through RGB registration. 3. Experimental Setup 33. . Ex Experimental perimental Se Setup tup The integrated vision system incorporates an RGB-D camera (Intel RealSense D435), The The iintegrated ntegrated vi vision sion sy system stem iincorporates ncorporates aan n R RGB-D GB-D ccamera amera ((Intel Intel R RealSense ealSense D43 D435), 5), an active stereo depth camera to compute the stereo depth data for real time. It also has an active stereo depth camera to compute the stereo depth data for real time. It also has an active stereo depth camera to compute the stereo depth data for real time. It also has an optional infrared (IR) projector that assists in improving depth accuracy. The sensor is an optional infrared (IR) projector that assists in improving depth accuracy. The sensor is an optional infrared (IR) projector that assists in improving depth accuracy. The sensor is physically supported on a test arm that will allow an image acquisition from a top view physically supported on a test arm that will allow an image acquisition from a top view physically supported on a test arm that will allow an image acquisition from a top view of the work object at a distance ranging from 30 to 70 cm over the welding work zone, as of the work object at a distance ranging from 30 to 70 cm over the welding work zone, as of the work object at a distance ranging from 30 to 70 cm over the welding work zone, as shown in Figure 3. shown in Figure 3. shown in Figure 3. Figure 3. Experimental setup: camera mounte d on a pedestal with top view of the working object. Figure 3. Experimental setup: camera mounted on a pedestal with top view of the working object. Figure 3. Experimental setup: camera mounted on a pedestal with top view of the working object. Automation 2021, 2 257 Automation 2021, 2, FOR PEER REVIEW 6 The proposed robotic system consists of an RGB-D camera that captures the surface The proposed robotic system consists of an RGB-D camera that captures the surface point cloud of the workpiece, the welding seam detection algorithm that locates the color point cloud of the workpiece, the welding seam detection algorithm that locates the color seam region in the input point cloud, and the trajectory generation method that processes seam region in the input point cloud, and the trajectory generation method that processes the point set and outputs a 3D welding trajectory. the point set and outputs a 3D welding trajectory. The image acquisition and trajectory planning algorithms implementation were car- The image acquisition and trajectory planning algorithms implementation were ried out on a personal computer with the Windows 10 operating system and operating carried out on a personal computer with the Windows 10 operating system and operating with an Intel i7 CPU @ 2.40 GHz, with the necessary USB 3.0 ports required for the commu- with an Intel i7 CPU @ 2.40 GHz, with the necessary USB 3.0 ports required for the nication with the RealSense D435 camera. communication with the RealSense D435 camera. 3.1. Test Sample 3.1. Test Sample A test object was designed so that the geometric characteristics of the part could be A test object was designed so that the geometric characteristics of the part could be mathematically parametrized. It consists of two parts designed both as a semi-complex mathematically parametrized. It consists of two parts designed both as a semi-complex surface surface with curvature as well as to simulate a V-type welded joint, one of the most with curvature as well as to simulate a V-type welded joint, one of the most investigated in investigated in the literature, with a depth of 5 mm and an angular opening of 90 . The the literature, with a depth of 5 mm and an angular opening of 90°. The assembly of these two assembly of these two pieces results in a test piece of 20  10 cm having 4.8 cm at its highest pieces results in a test piece of 20 × 10 cm having 4.8 cm at its highest part. part. The CAD mo The CAD models dels oof f FFigur iguree 4 show 4 show the d the design esign of the t of the test est piece th piece that at was wasfabr fabricated icated in in al aluminum uminum 6 6061 061 T T6,6,conside consider ring ing t thathaluminum at aluminum i is a highly s a hmoldable ighly moldab and rle eflective and refl material, ective which could serve as a parameter to measuring light disturbances in the vision system. It material, which could serve as a parameter to measuring light disturbances in the vision is important to note that the sample part was machined with tungsten carbide milling tools system. It is important to note that the sample part was machined with tungsten carbide whose toolpaths were programmed in WorkNC CAM software; the machining parameters milling tools whose toolpaths were programmed in WorkNC CAM software; the are listed in Table 1. The machining was performed on a HAAS VF3 CNC machine, to machining parameters are listed in Table 1. The machining was performed on a HAAS match the part to the CAD model, because machines such as these report positioning errors VF3 CNC machine, to match the part to the CAD model, because machines such as these below 0.05 mm. report positioning errors below 0.05 mm. (a) (b) (c) (d) Figure 4. The model of the curved V-type butt joint with a red marker: ( a) front view; ( b) right Figure 4. The model of the curved V-type butt joint with a red marker: (a) front view; (b) right view; view; ( c) top view; (d) isometric view. (c) top view; (d) isometric view. Automation 2021, 2 258 Automation 2021, 2, FOR PEER REVIEW 7 Table 1. Machining parameters for the workpiece manufacture. Milling parameters: Vc = cutting Table 1. Machining parameters for the workpiece manufacture. Milling parameters: Vc = cutting speed, RPM = spindle revolution per minute; F = feed rate. speed, RPM = spindle revolution per minute; F = feed rate. Tool Path Tool- Vc (m/min) RPM F (mm) Tool Path Tool- Vc (m/min) RPM F (mm) Facing Facer 2.5” 650 3500 300 Facing Facer 2.5” 650 3500 300 Pocketing Flat 0.25” 120 6000 7 Pocketing Flat 0.25” 120 6000 7 Drilling Drill 0.203” 50 3048 6 Drilling Drill 0.203” 50 3048 6 Tangent to curve Flat 1.0” 350 4500 40 Tangent to curve Flat 1.0” 350 4500 40 Wall machining Flat 0.5” 200 5000 30 Wall machining Flat 0.5” 200 5000 30 Z level Flat 0.437” 250 5500 47 Z level Flat 0.437” 250 5500 47 Z finishing Z finishing Ball 0.25” Ball 0.25” 100 100 6000 6000 32 32 3.2. Trajectory Extraction Based on Stereo Vision System Embedding Color Data 3.2. Trajectory Extraction Based on Stereo Vision System Embedding Color Data Figure 5 shows the steps necessary for the definition of parameters and processing of Figure 5 shows the steps necessary for the definition of parameters and processing of the images that will carry out the extraction of the points corresponding to the weld bead. the images that will carry out the extraction of the points corresponding to the weld bead. Next, the objective of each block was defined. Next, the objective of each block was defined. Figure 5. The flowchart of path extraction. Figure 5. The flowchart of path extraction. Set up the data acquisition parameters: Image acquisition and processing was Set up the data acquisition parameters: Image acquisition and processing was per- performed by Intel SDK [32], which, as an open source software program, has support for formed by Intel SDK [32], which, as an open source software program, has support for dif different pro ferent programming gramming language languages, s, suc such h as python, t as python, thr hrough the p ough the pyr yrealsen ealsense2 se2 library, the library, the of official python wrapper ficial python wrapper.. Since Since the imp the implemented lemented visio vision n system system has has dif different ferent sensors, sensors, both both color and depth sensors were set to a resolution of 640  480 pixels and a frame rate of color and depth sensors were set to a resolution of 640 × 480 pixels and a frame rate of 30 30 fps, with a depth accuracy between 0.1 to 1 mm. fps, with a depth accuracy between 0.1 to 1 mm. Acquire and align depth and color frame information: It is necessary to align the Acquire and align depth and color frame information: It is necessary to align the depth and color frames to make a 3D reconstruction faithful to the captured scene. This depth and color frames to make a 3D reconstruction faithful to the captured scene. This was achieved through the pyrealsense2 library [32], which has an algorithm that aligns the was achieved through the pyrealsense2 library [32], which has an algorithm that aligns depth image with another image, in this case, the color image. the depth image with another image, in this case, the color image. Segment and remove the background data: Sometimes, we seek to process a region Segment and remove the background data: Sometimes, we seek to process a region of interest (ROI); in this case, the ROI is defined by the distance at which the test object of interest (ROI); in this case, the ROI is defined by the distance at which the test object is is located relative to the camera. Therefore, we first planned a filter using one of the located relative to the camera. Therefore, we first planned a filter using one of the device’s device’s own tools to acquire the images [32], where a depth clipping distance in which own tools to acquire the images [32], where a depth clipping distance in which all all information beyond our ROI was segmented and removed instead of using all the information beyond our ROI was segmented and removed instead of using all the information in the scene. information in the scene. Point-cloud calculation from depth and color-aligned frames: The pyrealsense2 li- Point-cloud calculation from depth and color-aligned frames: The pyrealsense2 brary [32] was used to calculate the point cloud since it has the intrinsic values of the stereo library [32] was used to calculate the point cloud since it has the intrinsic values of the vision system and can perform the calculations for the point cloud acquisition, in addition stereo vision system and can perform the calculations for the point cloud acquisition, in to registering the color of the aligned frame. addition to registering the color of the aligned frame. Color segmentation: This block represents the core of the proposed methodology that Color segmentation: This block represents the core of the proposed methodology that segment the welding area from the rest of the surface, in which the image was preprocessed segment the welding area from the rest of the surface, in which the image was considering the brightness of the scene to binarize the color image and look for the threshold preprocessed considering the brightness of the scene to binarize the color image and look at which a single frame of the point-cloud was vectorized to an XYZRGB format to using for the threshold at which a single frame of the point-cloud was vectorized to an XYZRGB the Numpy and OpenCV library tools. However, to improve the selection of the points of format to using the Numpy and OpenCV library tools. However, to improve the selection interest, a change in the color space to hue saturation value (HSV) was used. The threshold of the points of interest, a change in the color space to hue saturation value (HSV) was was applied to the hue channel to find the color region, as well as to the saturation channel used. The threshold was applied to the hue channel to find the color region, as well as to as a parameter for brightness. the saturation channel as a parameter for brightness. Trajectory planning: In order to calculate the trajectory from the color market seg- Trajectory planning: In order to calculate the trajectory from the color market segmented mented data in the previous module, following the methodology of Zhang et al. [15], a data in the previous module, following the methodology of Zhang et al. [15], a cubic B-spline cubic B-spline interpolation algorithm was implemented to approximate the nonlinear interpolation algorithm was implemented to approximate the nonlinear dataset, the function dataset, the function was divided by knot points, and between the knots, the subset of data was divided by knot points, and between the knots, the subset of data points a 5th order points a 5th order polynomial curve was applied to satisfies a smoothness requirement to polynomial curve was applied to satisfies a smoothness requirement to the target weld seam Automation 2021, 2 259 Automation 2021, 2, FOR PEER REVIEW 8 the target weld seam points. It was planned that the trajectory would be smooth enough to be applied directly to the robot through a transformation matrix referenced to the welding points. It was planned that the trajectory would be smooth enough to be applied directly to direction. the robot through a transformation matrix referenced to the welding direction. 3.3. 3D Reconstruction with RealSense D435 Sensor 3.3. 3D Reconstruction with RealSense D435 Sensor Before an in-depth analysis of the results of the trajectory extraction by the proposed Before an in-depth analysis of the results of the trajectory extraction by the proposed algorithm, a study of the proposed vision system is necessary to evaluate the performance algorithm, a study of the proposed vision system is necessary to evaluate the performance of the RealSense camera. We proceeded to execute the methodology described by Carfagni of the RealSense camera. We proceeded to execute the methodology described by Carfagni et al. [33], which evaluates the reconstruction capability of D415 and SR300 sensors, seeking et al. [33], which evaluates the reconstruction capability of D415 and SR300 sensors, to measure the error with which the sensor can reconstruct a surface. To this end, the seeking to measure the error with which the sensor can reconstruct a surface. To this end, RealSense D435 camera was located 30 cm away at the top of a flat surface where the test the RealSense D435 camera was located 30 cm away at the top of a flat surface where the piece was placed. With this configuration, the 3D reconstruction of the surface was carried test piece was placed. With this configuration, the 3D reconstruction of the surface was out through the first three blocks of the algorithm presented in the previous section to carried out through the first three blocks of the algorithm presented in the previous finally obtain the point cloud of the test piece. section to finally obtain the point cloud of the test piece. The real point cloud of the test piece was generated by the CAD model exporting The real point cloud of the test piece was generated by the CAD model exporting the the pieces to a Polygon File Format (.ply), as shown in Figure 6. Once we had the target pieces to a Polygon File Format (.ply), as shown in Figure 6. Once we had the target surface surface and the one calculated by the camera, we proceeded to run an ICP color registration and the one calculated by the camera, we proceeded to run an ICP color registration algorithm [30] with which we could estimate the Euclidean point distance between the algorithm [30] with which we could estimate the Euclidean point distance between the target and the 3D reconstruction surface. target and the 3D reconstruction surface. (a) (b) Figure 6. 3D Reconstruction evaluation: (a) target point cloud; (b) the result of ICP color registration Figure 6. 3D Reconstruction evaluation: (a) target point cloud; (b) the result of ICP color registration between target and 3D. between target and 3D. 4. Results 4. Results 4.4.1. 1. RRealSense ealSense D4 D435 35 3D3D ReReconstruction construction PerPerformance formance The RealSense D435 camera was evaluated following the methodology described in The RealSense D435 camera was evaluated following the methodology described in Section 3.3. Figure 6 shows the result of the registration between both point clouds that Section 3.3. Figure 6 shows the result of the registration between both point clouds that provide the distance between the points of the 3D reconstruction to the closest point on provide the distance between the points of the 3D reconstruction to the closest point on the the target surface. Three tests were carried out, and the results are shown in Table 2, in target surface. Three tests were carried out, and the results are shown in Table 2, in which which the computed average distance and standard deviation are listed. the computed average distance and standard deviation are listed. Table 2. RealSense D435 evaluation to perform a 3D reconstruction. Table 2. RealSense D435 evaluation to perform a 3D reconstruction. Average Standard Deviation Average Standard Deviation Test 1 0.704 mm 0.378 mm Test 1 0.704 mm 0.378 mm Test 2 1.053 mm 0.623 mm Test 2 1.053 mm 0.623 mm Test 3 1.284 mm 0.738 mm Test 3 1.284 mm 0.738 mm 4.2. Trajectory Extraction of the Weld Bead by Colorimetry Point Cloud Segmentation 4.2. Trajectory Extraction of the Weld Bead by Colorimetry Point Cloud Segmentation The RealSense sensor, by default, provides the color information of the scene in an The RealSense sensor, by default, provides the color information of the scene in an RGB RGB color space (RED, GREEN, BLUE), in the range of 0 to 255. To carry out this color space (RED, GREEN, BLUE), in the range of 0 to 255. To carry out this experiment, experiment, the color markers used in this color segmentation study were made in these the color markers used in this color segmentation study were made in these primary colors. primary colors. However, as mentioned before, the segmentation was performed by However, as mentioned before, the segmentation was performed by applying a threshold Automation 2021, 2, FOR PEER REVIEW 9 Automation 2021, 2 260 applying a threshold in the HSV color space channels. Table 3 shows the thresholds in the HSV color space channels. Table 3 shows the thresholds applied to achieve the applied to achieve the segmentation of each color marker. segmentation of each color marker. Table 3. Table 3.Color Color threshold for threshold for point cloud point cloud segmentation segmentation by colorimetry. by colorimetry. Hue Hue SaturationSaturation Red 160–180 100–255 Red 160–180 100–255 Green 30–50 100–255 Green 30–50 100–255 Blue 110–120 50–255 Blue 110–120 50–255 Figure 7 shows the result of generating the point cloud of the test piece to which a Figure 7 shows the result of generating the point cloud of the test piece to which a red color marker was applied in the weld zone—on the left is the target point cloud with red color marker was applied in the weld zone—on the left is the target point cloud with color information in HSV color space, while the image on the right shows the result of the color information in HSV color space, while the image on the right shows the result of the segmentation of the weld bead by applying the color filter to the point cloud. segmentation of the weld bead by applying the color filter to the point cloud. (a) (b) (c) (d) Figure 7. Color Segmentation: (a) RGB image; (b) image with HSV transformation; (c) point cloud Figure 7. Color Segmentation: (a) RGB image; (b) image with HSV transformation; (c) point cloud with HSV data; (d) points of seam filter by color segmentation. with HSV data; (d) points of seam filter by color segmentation. 4.3. Testing Trajectory Extraction of a V-Type Butt Joint 4.3. Testing Trajectory Extraction of a V-Type Butt Joint In this stage, the experimentation of the algorithm was implemented in its totality, as In this stage, the experimentation of the algorithm was implemented in its totality, as presented in Section 3.2, in which once the intended zone for applying the weld bead was presented in Section 3.2, in which once the intended zone for applying the weld bead was captured, the algorithm implements a spline cubic interpolation, which calculates the path captured, the algorithm implements a spline cubic interpolation, which calculates the path that smoothest the planning of the welding points that would require a robotic manipulator. that smoothest the planning of the welding points that would require a robotic manipulator. Figure 8 shows the smooth computed trajectory over the reconstructed surface. Figure 8 shows the smooth computed trajectory over the reconstructed surface. To evaluate the calculated trajectory, the points of the target path were obtained from the test piece designed in SolidWorks software and then compared with the trajectory calculated, using the ICP algorithm and filtering the target points that have the smallest Euclidean distance between the two trajectories. Finally, the RMSE error of each of the points between the target trajectory and the computed trajectory were calculated to verify the fitting results. Both trajectories are shown in Figure 9. Automation 2021, 2, FOR PEER REVIEW 10 Automation 2021, 2 261 Automation 2021, 2, FOR PEER REVIEW 10 (a) (b) Figure 8. V-type butt joint trajectory extraction: (a) point cloud with HSV data; (b) surface with the computed path. To evaluate the calculated trajectory, the points of the target path were obtained from the test piece designed in SolidWorks software and then compared with the trajectory calculated, using the ICP algorithm and filtering the target points that have the smallest (a) (b) Euclidean distance between the two trajectories. Finally, the RMSE error of each of the Figure 8. V-type butt joint trajectory extraction: (a) point cloud with HSV data; (b) surface with the Figure 8. V-type butt joint trajectory extraction: (a) point cloud with HSV data; (b) surface with the points between the target trajectory and the computed trajectory were calculated to verify computed path. computed path. the fitting results. Both trajectories are shown in Figure 9. To evaluate the calculated trajectory, the points of the target path were obtained from the test piece designed in SolidWorks software and then compared with the trajectory calculated, using the ICP algorithm and filtering the target points that have the smallest Euclidean distance between the two trajectories. Finally, the RMSE error of each of the points between the target trajectory and the computed trajectory were calculated to verify the fitting results. Both trajectories are shown in Figure 9. Figure 9. Workpiece target path vs. computed trajectory. Figure 9. Workpiece target path vs. computed trajectory. Table 4 shows the RMSE values in three tests that were conducted, where an offset Table 4 shows the RMSE values in three tests that were conducted, where an offset with the surface Z axis can be observed. Comparing these results with the work of Yang with the surface Z axis can be observed. Comparing these results with the work of Yang et al. [21], we can infer that in some tests, we have comparable results in the Z error; et al. [21], we can infer that in some tests, we have comparable results in the Z error; Figure 9. however Workpiece t , the errorarange rget path vs. computed is higher, oscillating trajectory. between 1.15 and 0.75 mm. however, the error range is higher, oscillating between 1.15 and 0.75 mm. Table 4 shows the RMSE values in three tests that were conducted, where an offset Table 4. Trajectory RMSE error for V-type butt joint. with the surfa Table 4. Trajectory RMSE err ce Z axis can be observed. Comp or for V-type butt joint. aring these results with the work of Yang X Y Z et al. [21], we can infer that in some tests, we have comparable results in the Z error; X Y Z Test 1 0.063 mm 0. 184 mm 0.952 mm however, the error range is higher, oscillating between 1.15 and 0.75 mm. Test 2 0.046 mm 0.195 mm 1.059 mm Test 1 0.063 mm 0. 184 mm 0.952 mm Test 3 0.010 mm 0.145 mm 0.739 mm Table 4. Trajectory RMSE error for V-type butt joint. Test 2 0.046 mm 0.195 mm 1.059 mm Test 3 0.010 mm 0.145 mm 0.739 mm X Y Z To have another control parameter in the results, we proceeded to calculate the Test 1 0.063 mm 0. 184 mm 0.952 mm Euclidean distance between the calculated trajectory and the desired CAD model trajectory. To have another control parameter in the results, we proceeded to calculate the Test 2 0.046 mm 0.195 mm 1.059 mm Table 5 shows a dispersion of the points in the trajectory with a standard deviation less Euclidean distance between the calculated trajectory and the desired CAD model Test 3 0.010 mm 0.145 mm 0.739 mm than 0.5 mm. To have another control parameter in the results, we proceeded to calculate the Euclidean distance between the calculated trajectory and the desired CAD model Automation 2021, 2, FOR PEER REVIEW 11 Automation 2021, 2 262 trajectory. Table 5 shows a dispersion of the points in the trajectory with a standard deviation less than 0.5 mm. Table 5. Average and standard deviation between CAD and computed trajectory for V-type butt joint. Table 5. Average and standard deviation between CAD and computed trajectory for V-type butt joint. Average Standard Deviation Average Standard Deviation Test 1 0.70 mm 0.30 mm Test 1 0.70 mm 0.30 mm Test 2 0.80 mm 0.30 mm Test 2 0.80 mm 0.30 mm Test 3 0.80 mm 0.30 mm Test 3 0.80 mm 0.30 mm 4.4. Testing Trajectory Extraction of a Straight Butt Joint 4.4. Testing Trajectory Extraction of a Straight Butt Joint Straight shape is a basic welding joint type commonly used in the industry, so a Straight shape is a basic welding joint type commonly used in the industry, so a straight butt joint was constructed with a length of 20 cm and an inclination of 3° above straight butt joint was constructed with a length of 20 cm and an inclination of 3 above the the surface to demonstrate the flexibility of the system. Applying the previous algorithms, surface to demonstrate the flexibility of the system. Applying the previous algorithms, it was also possible to extract this trajectory. Figure 10 shows the tested surface reconstruction, it was also possible to extract this trajectory. Figure 10 shows the tested surface to which a straight blue line was applied, and the trajectory calculated over the point cloud reconstruction, to which a straight blue line was applied, and the trajectory calculated surface. over the point cloud surface. (a) (b) (c) Figure 10. Straight butt joint trajectory extraction: (a) workpiece; (b) point cloud with HSV data; (c) surface with the Figure 10. Straight butt joint trajectory extraction: (a) workpiece; (b) point cloud with HSV data; (c) surface with the computed path. computed path. Table 6 shows the RMSE values, the average, and standard deviation between the Table 6 shows the RMSE values, the average, and standard deviation between the calculated trajectory and the desired line model trajectory shown in Figure 11, within three calculated trajectory and the desired line model trajectory shown in Figure 11, within three tests that were conducted. Similar findings to previous results in RMSE and standard tests that were conducted. Similar findings to previous results in RMSE and standard deviation show the flexibility of the system as a global acquisition system regardless of the deviation show the flexibility of the system as a global acquisition system regardless of workpiece. the workpiece. Table 6. Trajectory RMSE error for V-type butt joint. Table 6. Trajectory RMSE error for V-type butt joint. X Y Z Average Standard Deviation X Y Z Average Standard Deviation Test 1 0.142 mm 0.075 mm 0.683 mm 0.60 mm 0.20 mm Test 2 0.124 mm 0.072 mm 0.530 mm 0.50 mm 0.20 mm Test 1 0.142 mm 0.075 mm 0.683 mm 0.60 mm 0.20 mm Test 3 0.180 mm 0.069 mm 0.494 mm 0.50 mm 0.20 mm Test 2 0.124 mm 0.072 mm 0.530 mm 0.50 mm 0.20 mm Test 3 0.180 mm 0.069 mm 0.494 mm 0.50 mm 0.20 mm Automation 2021, 2 263 Automation 2021, 2, FOR PEER REVIEW 12 Figure 11. Target straight butt joint trajectory vs. computed trajectory. Figure 11. Target straight butt joint trajectory vs. computed trajectory. 5. Conclusions 5. Conclusions To improve the efficiency of programming welding robots, this study proposed a To improve the efficiency of programming welding robots, this study proposed a color point cloud segmentation system to extract 3D paths. The major conclusions are color point cloud segmentation system to extract 3D paths. The major conclusions are generalized as follows: generalized as follows: (1) A welding robot sensor based on stereo vision and RGB sensor was implemented in (1) A welding robot sensor based on stereo vision and RGB sensor was implemented in this paper that could finish the 3D color reconstruction task of welding workpiece, this paper that could finish the 3D color reconstruction task of welding workpiece, with a reconstruction standard deviation less than 1 mm, which is a parameter with a reconstruction standard deviation less than 1 mm, which is a parameter comparable to that shown by Carfagni [33] for similar devices. comparable to that shown by Carfagni [33] for similar devices. (2) In order to achieve quick and robust weld 3D path extraction, a color segmentation (2) In order to achieve quick and robust weld 3D path extraction, a color segmentation based on color point cloud reconstruction was performed, with thresholds in HSV based on color point cloud reconstruction was performed, with thresholds in HSV color space and an interpolation of the segmented points. The trajectory extraction color space and an interpolation of the segmented points. The trajectory extraction results show errors close to or below 1.1 mm for V-type butt joint and below 0.6 mm results show errors close to or below 1.1 mm for V-type butt joint and below 0.6 mm for a straight butt joint, comparable with other stereo vision studies; for example, for a straight butt joint, comparable with other stereo vision studies; for example, Yang et al. [20] show that the measurement resolution is less than 0.7 mm for V-type Yang et al. [20] show that the measurement resolution is less than 0.7 mm for V-type butt joint, and in contrast, Zhou et al. [23] show a pose accuracy RMSE of 0.8 mm for butt joint, and in contrast, Zhou et al. [23] show a pose accuracy RMSE of 0.8 mm for a cylinder butt joint using a RealSense D415 sensor. a cylinder butt joint using a RealSense D415 sensor. (3) In addition to the above, the adaptability of the proposed trajectory extraction system, (3) In addition to the above, the adaptability of the proposed trajectory extraction system, due to being a global capture system, shows results that encourage experimentation due to being a global capture system, shows results that encourage experimentation in in V-type welding as one of the more studied in the literature, but also in other types V-type welding as one of the more studied in the literature, but also in other types of of welding that would give a differential over most of the proposals found in the welding that would give a differential over most of the proposals found in the literature. literature. In the future, we aim to improve and complete our work. Firstly, we plan to conduct In the future, we aim to improve and complete our work. Firstly, we plan to conduct experiments experiments on on diffe different rent t test est piece pieces s and and dem demonstrate onstrate that that the proposed meth the proposed method od is is also also suitable suitable for for d difi fffer erent ent weld weld b beads. eadsIn . In addition, addition, we we seek seek to to analyze analyze and and extract extract the the t trajectory rajectory without without applying applyinga a co color lor marker marker ,, looking looking fo for r th the e sh shadows adows or or sh shines ines that that ar ar e ege generated nerated in inthe weld the welding ing region. region.Fin Finally ally, the , the me measur asurement precision ement precision need needss to to be be impr improve oveddwith with a a quality quality test of the proposed test of the proposed method method against a against a laser laser sensor. sensor. Author Contributions: Conceptualization, J.B.R.-S., E.C.-U. and A.G.-E.; methodology, J.B.R.-S., Author Contributions: Conceptualization, J.B.R.-S., E.C.-U., and A.G.-E.; methodology, J.B.R.-S., E.C.-U., J.A.E.C., R.L.S. and A.G.-E.; software, J.B.R.-S., J.A.E.C. and R.L.S.; validation J.B.R.-S.; formal E.C.-U., J.A.E.C., R.L.S., and A.G.-E.; software, J.B.R.-S., J.A.E.C., and R.L.S.; validation J.B.R.-S.; analysis, formal analysis, J J.B.R.-S., E.C.-U., .B.R.-S., J.A.E.C., E.C.-U., R.L.S. J.A.E. and C., A.G.-E.; R.L.S., and investigation, A.G.-E.; in J.B.R.-S.; vestigati ro esour n, J.B. ces, R.-S. J.B.R.-S. ; resourc and es, J.A.E.C.; data curation, J.B.R.-S.; writing—original draft preparation, J.B.R.-S.; writing—review and J.B.R.-S. and J.A.E.C.; data curation, J.B.R.-S.; writing—original draft preparation, J.B.R.-S.; editing, writing—review and e J.B.R.-S., E.C.-U., diting, J.A.E.C., J.B.R R.L.S. .-S., E.C and .-U A.G.-E.; ., J.A.E.C. visualization, , R.L.S., and A.G J.B.R.-S.; .-E.; vsupervision, isualization, J. E.C.-U. B.R.-S.; and supervision, A.G.-E.; pr Eoject .C.-U. and administration, A.G.-E.; project a J.B.R.-S. dministra and A.G.-E.; tion, J funding .B.R.-S. and acquisition, A.G.-E.;A.G.-E. funding acqu All authors isition, have A.Gr .-E ead . All and au agr thors have read eed to the published and agreed to version th of e publishe the manuscript. d version of the manuscript. Funding: Funding: This This r esear research rece ch received ived no no external external funding. funding. Automation 2021, 2 264 Data Availability Statement: The study did not report any data. Acknowledgments: Authors would like to acknowledge the support of Tecnologico de Monterrey and the financial support from CONACyT for the MSc studies of one of the authors (J.B.R.-S.). Conflicts of Interest: The authors declare no conflict of interest. References 1. Ogbemhe, J.; Mpofu, K. Towards achieving a fully intelligent robotic arc welding: A review. Ind. Robot Int. J. 2015, 42, 475–484. [CrossRef] 2. Pan, Z.; Polden, J.; Larkin, N.; Van Duin, S.; Norrish, J. Recent progress on programming methods for Industrial Robots. Robot. Comput. Integr. Manuf. 2012, 28, 87–94. [CrossRef] 3. Pérez, L.; Rodríguez, Í.; Rodríguez, N.; Usamentiaga, R.; García, D.F. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review. Sensors 2016, 16, 335. [CrossRef] 4. Lei, T.; Rong, Y.; Wang, H.; Huang, Y.; Li, M. A review of vision-aided robotic welding. Comput. Ind. 2020, 123, 103326. [CrossRef] 5. Kiddee, P.; Fang, Z.; Tan, M. Visual recognition of the initial and end points of lap joint for welding robots. In 2014 IEEE International Conference on Information and Automation (ICIA); IEEE: Piscataway, NJ, USA, 2014. [CrossRef] 6. Ye, Z.; Fang, G.; Chen, S.; Dinham, M. A robust algorithm for weld seam extraction based on prior knowledge of weld seam. Sens. Rev. 2013, 33, 125–133. [CrossRef] 7. Yang, L.; Li, E.; Long, T.; Fan, J.; Mao, Y.; Fang, Z.; Liang, Z. A welding quality detection method for arc welding robot based on 3D reconstruction with SFS algorithm. Int. J. Adv. Manuf. Technol. 2017, 94, 1209–1220. [CrossRef] 8. Villan, A.F.; Acevedo, R.G.; Alvarez, E.A.; Campos-Lopez, A.M.; Garcia-Martinez, D.F.; Fernandez, R.U.; Meana, M.J.; Sanchez, J.M.G. Low-cost system for weld tracking based on artificial vision. IEEE Trans. Ind. Appl. 2011, 47, 1159–1167. [CrossRef] 9. Liu, F.Q.; Wang, Z.Y.; Ji, Y. Precise initial weld position identification of a fillet weld seam using laser vision technology. Int. J. Adv. Manuf. Technol. 2018, 99, 2059–2068. [CrossRef] 10. Li, X.; Li, X.; Ge, S.S.; Khyam, M.O.; Luo, C. Automatic welding Seam tracking and identification. IEEE Trans. Ind. Electron. 2017, 64, 7261–7271. [CrossRef] 11. Fan, J.; Jing, F.; Yang, L.; Teng, L.; Tan, M. A precise initial weld point guiding method of micro-gap weld based on structured light vision sensor. IEEE Sens. J. 2019, 19, 322–331. [CrossRef] 12. Zeng, J.; Chang, B.; Du, D.; Wang, L.; Chang, S.; Peng, G.; Wang, W. A Weld Position Recognition Method Based on Directional and Structured Light Information Fusion in Multi-Layer/Multi-Pass Welding. Sensors 2018, 18, 129. [CrossRef] [PubMed] 13. Guo, J.; Zhu, Z.; Sun, B.; Yu, Y. A novel multifunctional visual sensor based on combined laser structured lights and its anti-jamming detection algorithms. Weld. World 2018, 63, 313–322. [CrossRef] 14. Kos, M.; Arko, E.; Kosler, H.; Jezeršek, M. Remote laser welding with in-line adaptive 3D seam tracking. Int. J. Adv. Manuf. Technol. 2019, 103, 4577–4586. [CrossRef] 15. Zhang, K.; Yan, M.; Huang, T.; Zheng, J.; Li, Z. 3D reconstruction of complex spatial weld seam for autonomous welding by laser structured light scanning. J. Manuf. Process. 2019, 39, 200–207. [CrossRef] 16. Chen, X.Z.; Chen, S.B. The autonomous detection and guiding of start welding position for arc welding robot. Ind. Robot Int. J. 2010, 37, 70–78. [CrossRef] 17. Dinham, M.; Fang, G. Autonomous weld seam identification and localisation using eye-in-hand stereo vision for robotic arc welding. Robot. Comput. Integr. Manuf. 2013, 29, 288–301. [CrossRef] 18. Ma, H.; Wei, S.; Lin, T.; Chen, S.; Li, L. Binocular vision system for both weld pool and root gap in robot welding process. Sens. Rev. 2010, 30, 116–123. [CrossRef] 19. Xiao, R.; Xu, Y.; Hou, Z.; Chen, C.; Chen, S. An adaptive feature extraction algorithm for multiple typical seam tracking based on vision sensor in robotic arc welding. Sens. Actuators A Phys. 2019, 297, 111533. [CrossRef] 20. Yang, L.; Li, E.; Long, T.; Fan, J.; Liang, Z. A novel 3-d path extraction method for arc welding robot based on stereo structured light sensor. IEEE Sens. J. 2019, 19, 763–773. [CrossRef] 21. Yang, L.; Liu, Y.; Peng, J.; Liang, Z. A novel system for off-line 3D seam extraction and path planning based on point cloud segmentation for arc welding robot. Robot. Comput. Integr. Manuf. 2020, 64, 101929. [CrossRef] 22. Maiolino, P.; Woolley, R.; Branson, D.; Benardos, P.; Popov, A.; Ratchev, S. Flexible robot sealant dispensing cell using RGB-D sensor and off-line programming. Robot. Comput. Integr. Manuf. 2017, 48, 188–195. [CrossRef] 23. Zhou, P.; Peng, R.; Xu, M.; Wu, V.; Navarro-Alarcon, D. Path planning with automatic seam extraction over point cloud models for robotic arc welding. IEEE Robot. Autom. Lett. 2021, 6, 5002–5009. [CrossRef] 24. Tippetts, B.; Lee, D.J.; Lillywhite, K.; Archibald, J. Review of stereo vision algorithms and their suitability for resource-limited systems. J. Real-Time Image Process. 2013, 11, 5–25. [CrossRef] 25. Ke, F.; Liu, H.; Zhao, D.; Sun, G.; Xu, W.; Feng, W. A high precision image registration method for measurement based on the stereo camera system. Optik 2020, 204, 164186. [CrossRef] 26. Zhang, S. High-speed 3D shape measurement with structured light methods: A review. Opt. Lasers Eng. 2018, 106, 119–131. [CrossRef] 27. Geng, J. Structured-light 3D surface imaging: A tutorial. Adv. Opt. Photonics 2011, 3, 128. [CrossRef] Automation 2021, 2 265 28. Bi, Z.M.; Wang, L. Advances in 3D data acquisition and processing for industrial applications. Robot. Comput. Integr. Manuf. 2010, 26, 403–413. [CrossRef] 29. Laganiere, R.; Gilbert, S.; Roth, G. Robust object pose estimation from feature-based stereo. IEEE Trans. Instrum. Meas. 2006, 55, 1270–1280. [CrossRef] 30. Park, J.; Zhou, Q.-Y.; Koltun, V. Colored point cloud registration revisited. In 2017 IEEE International Conference on Computer Vision (ICCV); IEEE: Piscataway, NJ, USA, 2017. [CrossRef] 31. Huang, X.; Zhang, J.; Wu, Q.; Fan, L.; Yuan, C. A coarse-to-fine algorithm for matching and registration in 3d CROSS-SOURCE point clouds. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 2965–2977. [CrossRef] 32. Grunnet-Jepsen, A.; Tong, D. Depth Post-Processing FOR Intel®REALSENSE™ Depth Camera D400 Series. Available online: https://dev.intelrealsense.com/docs/depth-post-processing (accessed on 12 September 2021). 33. Carfagni, M.; Furferi, R.; Governi, L.; Santarelli, C.; Servi, M.; Uccheddu, F.; Volpe, Y. Metrological and Critical Characterization of the Intel D415 Stereo Depth Camera. Sensors 2019, 19, 489. [CrossRef] http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Automation Multidisciplinary Digital Publishing Institute

Colored 3D Path Extraction Based on Depth-RGB Sensor for Welding Robot Trajectory Generation

Loading next page...
 
/lp/multidisciplinary-digital-publishing-institute/colored-3d-path-extraction-based-on-depth-rgb-sensor-for-welding-robot-5e24HX3okL
Publisher
Multidisciplinary Digital Publishing Institute
Copyright
© 1996-2021 MDPI (Basel, Switzerland) unless otherwise stated Disclaimer The statements, opinions and data contained in the journals are solely those of the individual authors and contributors and not of the publisher and the editor(s). MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. Terms and Conditions Privacy Policy
ISSN
2673-4052
DOI
10.3390/automation2040016
Publisher site
See Article on Publisher Site

Abstract

Article Colored 3D Path Extraction Based on Depth-RGB Sensor for Welding Robot Trajectory Generation Alfonso Gómez-Espinosa * , Jesús B. Rodríguez-Suárez , Enrique Cuan-Urquizo , Jesús Arturo Escobedo Cabello and Rick L. Swenson Tecnologico de Monterrey, Escuela de Ingenieria y Ciencias, Querétaro 76130, Mexico; jbraian.rguez@gmail.com (J.B.R.-S.); ecuanurqui@tec.mx (E.C.-U.); arturo.escobedo@tec.mx (J.A.E.C.); rswenson@tec.mx (R.L.S.) * Correspondence: agomeze@tec.mx; Tel.: +52-(442)-238-3302 Abstract: The necessity for intelligent welding robots that meet the demand in real industrial production, according to the objectives of Industry 4.0, has been supported owing to the rapid development of computer vision and the use of new technologies. To improve the efficiency in weld location for industrial robots, this work focuses on trajectory extraction based on color features identification on three-dimensional surfaces acquired with a depth-RGB sensor. The system is planned to be used with a low-cost Intel RealSense D435 sensor for the reconstruction of 3D models based on stereo vision and the built-in color sensor to quickly identify the objective trajectory, since the parts to be welded are previously marked with different colors, indicating the locations of the welding trajectories to be followed. This work focuses on 3D color segmentation with which the Citation: Gómez-Espinosa, A.; points of the target trajectory are segmented by color thresholds in HSV color space and a spline Rodríguez-Suárez, J.B.; Cuan-Urquizo, E.; Cabello, J.A.E.; cubic interpolation algorithm is implemented to obtain a smooth trajectory. Experimental results Swenson, R.L. Colored 3D Path have shown that the RMSE error for V-type butt joint path extraction was under 1.1 mm and below Extraction Based on Depth-RGB 0.6 mm for a straight butt joint; in addition, the system seems to be suitable for welding beads of Sensor for Welding Robot Trajectory various shapes. Generation. Automation 2021, 2, 252–265. https://doi.org/10.3390/ Keywords: path model; 3D reconstruction; seam extraction; RGB-D; color segmentation; stereo automation2040016 structured light Communicated bies: Raffaele Carli, Graziana Cavone, Domenico Bianchi and Nicola Epicoco 1. Introduction In the era of globalization, manufacturing industries deal with competitive and un- Received: 13 September 2021 certain markets, where the dynamics of innovations and shortened life cycles products Accepted: 4 November 2021 Published: 5 November 2021 create a problem for the industry to become more productive and flexible; for instance, welding processes are one of the most common tasks in manufacturing industries, and Publisher’s Note: MDPI stays neutral robots equipped with intelligent programming tools represent the best alternatives to with regard to jurisdictional claims in achieve these goals [1]. published maps and institutional affil- Nowadays, there are two main categories of robotic programming methods in indus- iations. trial applications, as well as online and offline programming [2]; however, the time spent programming a new path for a job in high-volume manufacturing industries becomes the main challenge of using welding robots, especially when changes and uncertainties to the geometric shape in products occur, which is why robotic systems based on intelligence and robotic perception are one of the four pillars of research and implementation according to Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. ”Industry 4.0” objectives [3]. This article is an open access article A computer vision system is required to capture the surfaces or features and help distributed under the terms and achieve fast offline programming [2]. However, the obstacle toward achieving an intelligent conditions of the Creative Commons welding robot is solving the problem of trajectory planning, seam tracking, and the control Attribution (CC BY) license (https:// of welding systems against errors caused by light and environment disturbances to which creativecommons.org/licenses/by/ each vision system is exposed [4]. 4.0/). Automation 2021, 2, 252–265. https://doi.org/10.3390/automation2040016 https://www.mdpi.com/journal/automation Automation 2021, 2 253 For example, as regards simple systems using only a single camera as a sensor, Kiddee et al. [5] develop a technique to find a T-welding seem based on image processing to smooth the image and extract the edges of the image by using a canny algorithm, to find the initial and endpoints. In the same way, Ye et al. [6] acquire the edges of a series of images to determine the location of the weld seam using a series of known characteristics. Yang et al. [7] present a welding detection system based on 3D reconstruction technology for the arc welding robot. The shape from the shading SFS algorithm is used to reconstruct the 3D shapes of the welding seam. Laser vision systems are among the most widely used sensors in welding robotics due to the precision and fast data processing that these devices provide. In particular, laser sensors are mostly applied in weld tracking research, where they have been developed since simple systems such as the one by Fernandez et al. [8] that implements a low-cost laser vision system based on a webcam on the robot arm oriented toward the laser stripe projected at a 45 angle, up to systems already proved in the industrial context, for example, the study by Liu et al. [9], in which an autonomous method is proposed to find the initial weld position for a fillet weld seam formed by two steel plates. This method employs an automatic dynamic programming-based laser light inflection point extraction algorithm. The algorithm for this method can support factors induced by natural light that may be present during the processing of laser vision images. Disturbances in laser systems on metallic surfaces are a common problem in weld bead localization. Li et al. [10] suggest reducing the influence of noise on the extraction of the centerline, through the double-threshold recursive least square method. Later, an automatic welding seam recognition and tracking method by utilizing structured light vision to search through a Kalman filter the profile of the welding seam in a small area, aiming to avoid some disturbances [10]. Another approach in structured light systems incorporated an optical filter and LED lighting developed to reduce the effect of noise produced by the arc torch. Where a fuzzy-PID controller can be used to obtain the weld seam in horizontal and vertical directions simultaneously [11]. Recent systems tend to be more robust or complex in terms of the number of tools involved in obtaining images, filtering data. For example, Zeng et al. [12] propose a weld position recognition method based on directional light and structured light information fusion during multi-layer/multi-pass welding. On other hand, Guo et al. [13] present a multifunctional monocular visual sensor based on combined laser structured lights, which has the functions such as the detection of the welding groove cross-sectional parameters, application for the joint tracking, the detection of the welding torch height, the measuring of the weld bead appearance, and the monitoring of welding process in real time. Other approaches for real-time processing are described by Kos et al. [14] to compute the position of the laser beam and the seam in 3D during welding with a camera and illumination laser in order to equalize the brightness of the keyhole and the surrounding area. Zhang et al. [15] acquire the 3D information by multiple segment laser scanning. The weld features are extracted by cubic smoothing spline, to detect the characteristic parameters of weld lap joint with a deviation lower than 0.4 mm. Another research topic in robotic vision is systems that acquire images from two optical devices. In this sense, Chen et al. [16] propose a Canny detector, where the two parallel edges captured in a butt v-joint are used to fitting the value of the start welding position. In a similar way, Dinham et al. [17] use a Hough transform to detect the outside boundary of the weldment so that the background can be removed. In weld tracking systems, Ma et al. [18] use two normal charge-coupled device cameras to capture clear images from two directions—one is used to measure the root gap, and the other is used to measure the geometric parameters of the weld pool. Nowadays, owing to the precision of sensors and to have a complete understanding of the environment, 3D reconstruction techniques have been explored. In reconstruction with laser systems, 3D point cloud data are used to reconstruct welding seam, through the point cloud and guided by a neural network proposed by Xiao et al. [19], which can obtain Automation 2021, 2 254 the equations and initial points of the weld seam. The test results of the guidance prove that the extraction error is less than 0.6 mm, meeting actual production demand. In stereo vision, Yang et al. [20] propose a 3D path teaching method to improve the efficiency of teaching playback based on a stereo-structured light vision system using a seam extraction algorithm, which could achieve fast and accurate seam extraction to modify the model of the weld seam. Their system could well realize a fast and accurate 3D path teaching of a welding robot. Experiment results show a measurement resolution less than 0.7 mm and are suitable for V-type butt joint before welding [21]. In point clouds acquired with RGB-D sensors, Maiolino et al. [22] use an ASUS Xtion sensor to register and integrate the point cloud with the CAD model to perform an offline programming system for sealant dispensing robot. On the other hand, Zhou et al. [23] use an Intel family camera to detect and generate the trajectory with an algorithm based on the gradient of the edge intensity in the point cloud. However, the main limitation of the proposals found in the literature is that they seek to find a solution to a particular type of weld seam. Global path extraction systems are in the process of development; therefore, we find that the integration of color and segmentation of these data have not been the subject of research in welding robotics as a global acquisition system. In this work, a color point cloud segmentation method was implemented to extract 3D paths for robot trajectory generation. The developed system consists of a RealSense D435 sensor, a low-cost device that incorporates technologies such as stereo vision and RGB sensor, with which the 3D reconstruction of a point cloud that incorporates the color of the work object, with this color information a series of filters are applied in the HSV color space to segment the region of interest where the weld bead is expected to be applied. Once captured the zone, a spline cubic interpolation is executed to calculate the path that smoothest the trajectory of the welding points that would require a robotic manipulator. The rest of this paper is organized as follows: Section 2 describes the theory related to vision systems and algorithms to perform our 3D reconstruction processing seam extraction. Section 3 introduces the configuration of our experiment platform and vision sensor, and the results are presented in Section 4. Finally, in Section 5, concluding remarks are provided. 2. Materials and Methods 2.1. Stereo Vision Arrangements that consist of two image sensors (cameras) separated by a known distance are known as stereo systems. The principle of stereoscopy is based on the ability of the human brain to estimate the depth of objects present in the images captured by eyes [24]. In the stereoscopic configuration, two cameras are placed close to each other with parallel optical axes. Both cameras, with centers C and C separated by a distance L R B, called the baseline, have the same focal length f so that the left and right images are in parallel planes. A point in the three-dimensional space P will be projected in different positions, p and p with coordinates (x , y ) and (x , y ), respectively, of the planes of L R L L R R the images because it is seen from slightly different angles. This difference in position is known as disparity and is mathematically described as disparity = x x and is used to L R calculate the z distance in (1) through the geometric relationship [25], as shown in Figure 1. B  f z = (1) disparity Automation 2021, 2, FOR PEER REVIEW 4 Automation 2021, 2 255 Figure 1. Geometric relationship of a stereo camera configuration. The 3D image of target scene at Figure 1. Geometric relationship of a stereo camera configuration. The 3D image of target scene at point P. point P. 2.2. Structured Light 2.2. Structured Light Structured light is an active method to improve depth acquisition by using an external Structured light is an active method to improve depth acquisition by using an light source that provides additional information to the system. Structured light is based external light source that provides additional information to the system. Structured light on the use of active illumination of the scene with a specially designed 2D spatially varying is based on the use of active illumination of the scene with a specially designed 2D intensity pattern, where the camera sensor searches for artificially projected features that spatially varying intensity pattern, where the camera sensor searches for artificially serve as additional information for triangulation [26]. In the present work proposal, the projected features that serve as additional information for triangulation [26]. In the RealSense sensor has an optical projector that uses a pseudo-random binary array to present work proposal, the RealSense sensor has an optical projector that uses a pseudo- produce a grid-indexing strategy of dots. The array is defined by an n  n array encoded 1 2 random binary array to produce a grid-indexing strategy of dots. The array is defined by using a pseudo-random sequence, such that every k by k sub-window over the entire 1 2 array an n1 × is unique n2 array enc [27].oded using a pseudo-random sequence, such that every k1 by k2 sub- window over the entire array is unique [27]. 2.3. Point Cloud 2.3. Poi Depth nt Ccameras loud deliver depth images, in other words, images whose intensity values represent the depth of the point (x, y) in the scene. A point cloud is a data structure used Depth cameras deliver depth images, in other words, images whose intensity values to represent points with three dimensions (X, Y, Z), where the depth is represented by the represent the depth of the point (x, y) in the scene. A point cloud is a data structure used Z coordinate [28]. Once the depth images are available, it is possible to obtain the point to represent points with three dimensions (X, Y, Z), where the depth is represented by the cloud using the intrinsic values of the camera with which the information was acquired. Z coordinate [28]. Once the depth images are available, it is possible to obtain the point This process is known as deprojection; a point P with coordinates (X, Y, Z) can be obtained cloud using the intrinsic values of the camera with which the information was acquired. according to (2, 3, 4) from the depth information D being (x, y) the rectified position of x,y This process is known as deprojection; a point P with coordinates (X, Y, Z) can be obtained the pixel in the sensor, where the variables c , c , f , and f are the intrinsic values of the x y x y according to (2, 3, 4) from the depth information Dx,y being (x, y) the rectified position of camera used to acquire the information, with (f , f ) as the components of the focal length x y the pixel in the sensor, where the variables cx, cy, fx, and fy are the intrinsic values of the and (c , c ) the image projection center [29]. x y camera used to acquire the information, with (fx, fy) as the components of the focal length and (cx, cy) the image projection center [29]. D (C x) x,y x X = (2) D C x , f X= (2) D C y x,y y Y = (3) D y C y (3) Y= Z = D (4) x,y Z=D 2.4. Colored Point Cloud (4) Some 3D sensors are often coupled to an RGB camera, with which research on color depth registration is being carried out. Registering two cameras means knowing the 2.4. Colored Point Cloud relative position and orientation of one scene with respect to another [30]. In principle, Some 3D sensors are often coupled to an RGB camera, with which research on color color integration consists of reprojecting each 3D point onto the RGB image to adopt its depth registration is being carried out. Registering two cameras means knowing the relative color. When reprojected in 3D, the generated point cloud contains six information fields— position and orientation of one scene with respect to another [30]. In principle, color three for spatial coordinates and three with color values. However, due to occlusion, not all integration consists of reprojecting each 3D point onto the RGB image to adopt its color. When reconstructed 3D points in the scene are visible from the RGB camera, so some points may reprojected in 3D, the generated point cloud contains six information fields—three for spatial lack color information [31]. Figure 2 shows the result of the colorization of the point cloud. Automation 2021, 2, FOR PEER REVIEW 5 coordinates and three with color values. However, due to occlusion, not all reconstructed 3D Automation 2021, 2, FOR PEER REVIEW 5 points in the scene are visible from the RGB camera, so some points may lack color information [31]. Figure 2 shows the result of the colorization of the point cloud. Automation 2021, 2 coordinates and three with color values. However, due to occlusion, not all reconstructed 3 256 D points in the scene are visible from the RGB camera, so some points may lack color information [31]. Figure 2 shows the result of the colorization of the point cloud. (a) (b) (a) (b) (c) (c) Figure 2. Point cloud color map registration: (a) depth information; (b) color information; (c) colored point cloud through RGB registration. Figure 2. Point cloud color map registration: (a) depth information; (b) color information; (c) colored Figure 2. Point cloud color map registration: (a) depth information; (b) color information; (c) colored point cloud through RGB registration. point cloud through RGB registration. 3. Experimental Setup 33. . Ex Experimental perimental Se Setup tup The integrated vision system incorporates an RGB-D camera (Intel RealSense D435), The The iintegrated ntegrated vi vision sion sy system stem iincorporates ncorporates aan n R RGB-D GB-D ccamera amera ((Intel Intel R RealSense ealSense D43 D435), 5), an active stereo depth camera to compute the stereo depth data for real time. It also has an active stereo depth camera to compute the stereo depth data for real time. It also has an active stereo depth camera to compute the stereo depth data for real time. It also has an optional infrared (IR) projector that assists in improving depth accuracy. The sensor is an optional infrared (IR) projector that assists in improving depth accuracy. The sensor is an optional infrared (IR) projector that assists in improving depth accuracy. The sensor is physically supported on a test arm that will allow an image acquisition from a top view physically supported on a test arm that will allow an image acquisition from a top view physically supported on a test arm that will allow an image acquisition from a top view of the work object at a distance ranging from 30 to 70 cm over the welding work zone, as of the work object at a distance ranging from 30 to 70 cm over the welding work zone, as of the work object at a distance ranging from 30 to 70 cm over the welding work zone, as shown in Figure 3. shown in Figure 3. shown in Figure 3. Figure 3. Experimental setup: camera mounte d on a pedestal with top view of the working object. Figure 3. Experimental setup: camera mounted on a pedestal with top view of the working object. Figure 3. Experimental setup: camera mounted on a pedestal with top view of the working object. Automation 2021, 2 257 Automation 2021, 2, FOR PEER REVIEW 6 The proposed robotic system consists of an RGB-D camera that captures the surface The proposed robotic system consists of an RGB-D camera that captures the surface point cloud of the workpiece, the welding seam detection algorithm that locates the color point cloud of the workpiece, the welding seam detection algorithm that locates the color seam region in the input point cloud, and the trajectory generation method that processes seam region in the input point cloud, and the trajectory generation method that processes the point set and outputs a 3D welding trajectory. the point set and outputs a 3D welding trajectory. The image acquisition and trajectory planning algorithms implementation were car- The image acquisition and trajectory planning algorithms implementation were ried out on a personal computer with the Windows 10 operating system and operating carried out on a personal computer with the Windows 10 operating system and operating with an Intel i7 CPU @ 2.40 GHz, with the necessary USB 3.0 ports required for the commu- with an Intel i7 CPU @ 2.40 GHz, with the necessary USB 3.0 ports required for the nication with the RealSense D435 camera. communication with the RealSense D435 camera. 3.1. Test Sample 3.1. Test Sample A test object was designed so that the geometric characteristics of the part could be A test object was designed so that the geometric characteristics of the part could be mathematically parametrized. It consists of two parts designed both as a semi-complex mathematically parametrized. It consists of two parts designed both as a semi-complex surface surface with curvature as well as to simulate a V-type welded joint, one of the most with curvature as well as to simulate a V-type welded joint, one of the most investigated in investigated in the literature, with a depth of 5 mm and an angular opening of 90 . The the literature, with a depth of 5 mm and an angular opening of 90°. The assembly of these two assembly of these two pieces results in a test piece of 20  10 cm having 4.8 cm at its highest pieces results in a test piece of 20 × 10 cm having 4.8 cm at its highest part. part. The CAD mo The CAD models dels oof f FFigur iguree 4 show 4 show the d the design esign of the t of the test est piece th piece that at was wasfabr fabricated icated in in al aluminum uminum 6 6061 061 T T6,6,conside consider ring ing t thathaluminum at aluminum i is a highly s a hmoldable ighly moldab and rle eflective and refl material, ective which could serve as a parameter to measuring light disturbances in the vision system. It material, which could serve as a parameter to measuring light disturbances in the vision is important to note that the sample part was machined with tungsten carbide milling tools system. It is important to note that the sample part was machined with tungsten carbide whose toolpaths were programmed in WorkNC CAM software; the machining parameters milling tools whose toolpaths were programmed in WorkNC CAM software; the are listed in Table 1. The machining was performed on a HAAS VF3 CNC machine, to machining parameters are listed in Table 1. The machining was performed on a HAAS match the part to the CAD model, because machines such as these report positioning errors VF3 CNC machine, to match the part to the CAD model, because machines such as these below 0.05 mm. report positioning errors below 0.05 mm. (a) (b) (c) (d) Figure 4. The model of the curved V-type butt joint with a red marker: ( a) front view; ( b) right Figure 4. The model of the curved V-type butt joint with a red marker: (a) front view; (b) right view; view; ( c) top view; (d) isometric view. (c) top view; (d) isometric view. Automation 2021, 2 258 Automation 2021, 2, FOR PEER REVIEW 7 Table 1. Machining parameters for the workpiece manufacture. Milling parameters: Vc = cutting Table 1. Machining parameters for the workpiece manufacture. Milling parameters: Vc = cutting speed, RPM = spindle revolution per minute; F = feed rate. speed, RPM = spindle revolution per minute; F = feed rate. Tool Path Tool- Vc (m/min) RPM F (mm) Tool Path Tool- Vc (m/min) RPM F (mm) Facing Facer 2.5” 650 3500 300 Facing Facer 2.5” 650 3500 300 Pocketing Flat 0.25” 120 6000 7 Pocketing Flat 0.25” 120 6000 7 Drilling Drill 0.203” 50 3048 6 Drilling Drill 0.203” 50 3048 6 Tangent to curve Flat 1.0” 350 4500 40 Tangent to curve Flat 1.0” 350 4500 40 Wall machining Flat 0.5” 200 5000 30 Wall machining Flat 0.5” 200 5000 30 Z level Flat 0.437” 250 5500 47 Z level Flat 0.437” 250 5500 47 Z finishing Z finishing Ball 0.25” Ball 0.25” 100 100 6000 6000 32 32 3.2. Trajectory Extraction Based on Stereo Vision System Embedding Color Data 3.2. Trajectory Extraction Based on Stereo Vision System Embedding Color Data Figure 5 shows the steps necessary for the definition of parameters and processing of Figure 5 shows the steps necessary for the definition of parameters and processing of the images that will carry out the extraction of the points corresponding to the weld bead. the images that will carry out the extraction of the points corresponding to the weld bead. Next, the objective of each block was defined. Next, the objective of each block was defined. Figure 5. The flowchart of path extraction. Figure 5. The flowchart of path extraction. Set up the data acquisition parameters: Image acquisition and processing was Set up the data acquisition parameters: Image acquisition and processing was per- performed by Intel SDK [32], which, as an open source software program, has support for formed by Intel SDK [32], which, as an open source software program, has support for dif different pro ferent programming gramming language languages, s, suc such h as python, t as python, thr hrough the p ough the pyr yrealsen ealsense2 se2 library, the library, the of official python wrapper ficial python wrapper.. Since Since the imp the implemented lemented visio vision n system system has has dif different ferent sensors, sensors, both both color and depth sensors were set to a resolution of 640  480 pixels and a frame rate of color and depth sensors were set to a resolution of 640 × 480 pixels and a frame rate of 30 30 fps, with a depth accuracy between 0.1 to 1 mm. fps, with a depth accuracy between 0.1 to 1 mm. Acquire and align depth and color frame information: It is necessary to align the Acquire and align depth and color frame information: It is necessary to align the depth and color frames to make a 3D reconstruction faithful to the captured scene. This depth and color frames to make a 3D reconstruction faithful to the captured scene. This was achieved through the pyrealsense2 library [32], which has an algorithm that aligns the was achieved through the pyrealsense2 library [32], which has an algorithm that aligns depth image with another image, in this case, the color image. the depth image with another image, in this case, the color image. Segment and remove the background data: Sometimes, we seek to process a region Segment and remove the background data: Sometimes, we seek to process a region of interest (ROI); in this case, the ROI is defined by the distance at which the test object of interest (ROI); in this case, the ROI is defined by the distance at which the test object is is located relative to the camera. Therefore, we first planned a filter using one of the located relative to the camera. Therefore, we first planned a filter using one of the device’s device’s own tools to acquire the images [32], where a depth clipping distance in which own tools to acquire the images [32], where a depth clipping distance in which all all information beyond our ROI was segmented and removed instead of using all the information beyond our ROI was segmented and removed instead of using all the information in the scene. information in the scene. Point-cloud calculation from depth and color-aligned frames: The pyrealsense2 li- Point-cloud calculation from depth and color-aligned frames: The pyrealsense2 brary [32] was used to calculate the point cloud since it has the intrinsic values of the stereo library [32] was used to calculate the point cloud since it has the intrinsic values of the vision system and can perform the calculations for the point cloud acquisition, in addition stereo vision system and can perform the calculations for the point cloud acquisition, in to registering the color of the aligned frame. addition to registering the color of the aligned frame. Color segmentation: This block represents the core of the proposed methodology that Color segmentation: This block represents the core of the proposed methodology that segment the welding area from the rest of the surface, in which the image was preprocessed segment the welding area from the rest of the surface, in which the image was considering the brightness of the scene to binarize the color image and look for the threshold preprocessed considering the brightness of the scene to binarize the color image and look at which a single frame of the point-cloud was vectorized to an XYZRGB format to using for the threshold at which a single frame of the point-cloud was vectorized to an XYZRGB the Numpy and OpenCV library tools. However, to improve the selection of the points of format to using the Numpy and OpenCV library tools. However, to improve the selection interest, a change in the color space to hue saturation value (HSV) was used. The threshold of the points of interest, a change in the color space to hue saturation value (HSV) was was applied to the hue channel to find the color region, as well as to the saturation channel used. The threshold was applied to the hue channel to find the color region, as well as to as a parameter for brightness. the saturation channel as a parameter for brightness. Trajectory planning: In order to calculate the trajectory from the color market seg- Trajectory planning: In order to calculate the trajectory from the color market segmented mented data in the previous module, following the methodology of Zhang et al. [15], a data in the previous module, following the methodology of Zhang et al. [15], a cubic B-spline cubic B-spline interpolation algorithm was implemented to approximate the nonlinear interpolation algorithm was implemented to approximate the nonlinear dataset, the function dataset, the function was divided by knot points, and between the knots, the subset of data was divided by knot points, and between the knots, the subset of data points a 5th order points a 5th order polynomial curve was applied to satisfies a smoothness requirement to polynomial curve was applied to satisfies a smoothness requirement to the target weld seam Automation 2021, 2 259 Automation 2021, 2, FOR PEER REVIEW 8 the target weld seam points. It was planned that the trajectory would be smooth enough to be applied directly to the robot through a transformation matrix referenced to the welding points. It was planned that the trajectory would be smooth enough to be applied directly to direction. the robot through a transformation matrix referenced to the welding direction. 3.3. 3D Reconstruction with RealSense D435 Sensor 3.3. 3D Reconstruction with RealSense D435 Sensor Before an in-depth analysis of the results of the trajectory extraction by the proposed Before an in-depth analysis of the results of the trajectory extraction by the proposed algorithm, a study of the proposed vision system is necessary to evaluate the performance algorithm, a study of the proposed vision system is necessary to evaluate the performance of the RealSense camera. We proceeded to execute the methodology described by Carfagni of the RealSense camera. We proceeded to execute the methodology described by Carfagni et al. [33], which evaluates the reconstruction capability of D415 and SR300 sensors, seeking et al. [33], which evaluates the reconstruction capability of D415 and SR300 sensors, to measure the error with which the sensor can reconstruct a surface. To this end, the seeking to measure the error with which the sensor can reconstruct a surface. To this end, RealSense D435 camera was located 30 cm away at the top of a flat surface where the test the RealSense D435 camera was located 30 cm away at the top of a flat surface where the piece was placed. With this configuration, the 3D reconstruction of the surface was carried test piece was placed. With this configuration, the 3D reconstruction of the surface was out through the first three blocks of the algorithm presented in the previous section to carried out through the first three blocks of the algorithm presented in the previous finally obtain the point cloud of the test piece. section to finally obtain the point cloud of the test piece. The real point cloud of the test piece was generated by the CAD model exporting The real point cloud of the test piece was generated by the CAD model exporting the the pieces to a Polygon File Format (.ply), as shown in Figure 6. Once we had the target pieces to a Polygon File Format (.ply), as shown in Figure 6. Once we had the target surface surface and the one calculated by the camera, we proceeded to run an ICP color registration and the one calculated by the camera, we proceeded to run an ICP color registration algorithm [30] with which we could estimate the Euclidean point distance between the algorithm [30] with which we could estimate the Euclidean point distance between the target and the 3D reconstruction surface. target and the 3D reconstruction surface. (a) (b) Figure 6. 3D Reconstruction evaluation: (a) target point cloud; (b) the result of ICP color registration Figure 6. 3D Reconstruction evaluation: (a) target point cloud; (b) the result of ICP color registration between target and 3D. between target and 3D. 4. Results 4. Results 4.4.1. 1. RRealSense ealSense D4 D435 35 3D3D ReReconstruction construction PerPerformance formance The RealSense D435 camera was evaluated following the methodology described in The RealSense D435 camera was evaluated following the methodology described in Section 3.3. Figure 6 shows the result of the registration between both point clouds that Section 3.3. Figure 6 shows the result of the registration between both point clouds that provide the distance between the points of the 3D reconstruction to the closest point on provide the distance between the points of the 3D reconstruction to the closest point on the the target surface. Three tests were carried out, and the results are shown in Table 2, in target surface. Three tests were carried out, and the results are shown in Table 2, in which which the computed average distance and standard deviation are listed. the computed average distance and standard deviation are listed. Table 2. RealSense D435 evaluation to perform a 3D reconstruction. Table 2. RealSense D435 evaluation to perform a 3D reconstruction. Average Standard Deviation Average Standard Deviation Test 1 0.704 mm 0.378 mm Test 1 0.704 mm 0.378 mm Test 2 1.053 mm 0.623 mm Test 2 1.053 mm 0.623 mm Test 3 1.284 mm 0.738 mm Test 3 1.284 mm 0.738 mm 4.2. Trajectory Extraction of the Weld Bead by Colorimetry Point Cloud Segmentation 4.2. Trajectory Extraction of the Weld Bead by Colorimetry Point Cloud Segmentation The RealSense sensor, by default, provides the color information of the scene in an The RealSense sensor, by default, provides the color information of the scene in an RGB RGB color space (RED, GREEN, BLUE), in the range of 0 to 255. To carry out this color space (RED, GREEN, BLUE), in the range of 0 to 255. To carry out this experiment, experiment, the color markers used in this color segmentation study were made in these the color markers used in this color segmentation study were made in these primary colors. primary colors. However, as mentioned before, the segmentation was performed by However, as mentioned before, the segmentation was performed by applying a threshold Automation 2021, 2, FOR PEER REVIEW 9 Automation 2021, 2 260 applying a threshold in the HSV color space channels. Table 3 shows the thresholds in the HSV color space channels. Table 3 shows the thresholds applied to achieve the applied to achieve the segmentation of each color marker. segmentation of each color marker. Table 3. Table 3.Color Color threshold for threshold for point cloud point cloud segmentation segmentation by colorimetry. by colorimetry. Hue Hue SaturationSaturation Red 160–180 100–255 Red 160–180 100–255 Green 30–50 100–255 Green 30–50 100–255 Blue 110–120 50–255 Blue 110–120 50–255 Figure 7 shows the result of generating the point cloud of the test piece to which a Figure 7 shows the result of generating the point cloud of the test piece to which a red color marker was applied in the weld zone—on the left is the target point cloud with red color marker was applied in the weld zone—on the left is the target point cloud with color information in HSV color space, while the image on the right shows the result of the color information in HSV color space, while the image on the right shows the result of the segmentation of the weld bead by applying the color filter to the point cloud. segmentation of the weld bead by applying the color filter to the point cloud. (a) (b) (c) (d) Figure 7. Color Segmentation: (a) RGB image; (b) image with HSV transformation; (c) point cloud Figure 7. Color Segmentation: (a) RGB image; (b) image with HSV transformation; (c) point cloud with HSV data; (d) points of seam filter by color segmentation. with HSV data; (d) points of seam filter by color segmentation. 4.3. Testing Trajectory Extraction of a V-Type Butt Joint 4.3. Testing Trajectory Extraction of a V-Type Butt Joint In this stage, the experimentation of the algorithm was implemented in its totality, as In this stage, the experimentation of the algorithm was implemented in its totality, as presented in Section 3.2, in which once the intended zone for applying the weld bead was presented in Section 3.2, in which once the intended zone for applying the weld bead was captured, the algorithm implements a spline cubic interpolation, which calculates the path captured, the algorithm implements a spline cubic interpolation, which calculates the path that smoothest the planning of the welding points that would require a robotic manipulator. that smoothest the planning of the welding points that would require a robotic manipulator. Figure 8 shows the smooth computed trajectory over the reconstructed surface. Figure 8 shows the smooth computed trajectory over the reconstructed surface. To evaluate the calculated trajectory, the points of the target path were obtained from the test piece designed in SolidWorks software and then compared with the trajectory calculated, using the ICP algorithm and filtering the target points that have the smallest Euclidean distance between the two trajectories. Finally, the RMSE error of each of the points between the target trajectory and the computed trajectory were calculated to verify the fitting results. Both trajectories are shown in Figure 9. Automation 2021, 2, FOR PEER REVIEW 10 Automation 2021, 2 261 Automation 2021, 2, FOR PEER REVIEW 10 (a) (b) Figure 8. V-type butt joint trajectory extraction: (a) point cloud with HSV data; (b) surface with the computed path. To evaluate the calculated trajectory, the points of the target path were obtained from the test piece designed in SolidWorks software and then compared with the trajectory calculated, using the ICP algorithm and filtering the target points that have the smallest (a) (b) Euclidean distance between the two trajectories. Finally, the RMSE error of each of the Figure 8. V-type butt joint trajectory extraction: (a) point cloud with HSV data; (b) surface with the Figure 8. V-type butt joint trajectory extraction: (a) point cloud with HSV data; (b) surface with the points between the target trajectory and the computed trajectory were calculated to verify computed path. computed path. the fitting results. Both trajectories are shown in Figure 9. To evaluate the calculated trajectory, the points of the target path were obtained from the test piece designed in SolidWorks software and then compared with the trajectory calculated, using the ICP algorithm and filtering the target points that have the smallest Euclidean distance between the two trajectories. Finally, the RMSE error of each of the points between the target trajectory and the computed trajectory were calculated to verify the fitting results. Both trajectories are shown in Figure 9. Figure 9. Workpiece target path vs. computed trajectory. Figure 9. Workpiece target path vs. computed trajectory. Table 4 shows the RMSE values in three tests that were conducted, where an offset Table 4 shows the RMSE values in three tests that were conducted, where an offset with the surface Z axis can be observed. Comparing these results with the work of Yang with the surface Z axis can be observed. Comparing these results with the work of Yang et al. [21], we can infer that in some tests, we have comparable results in the Z error; et al. [21], we can infer that in some tests, we have comparable results in the Z error; Figure 9. however Workpiece t , the errorarange rget path vs. computed is higher, oscillating trajectory. between 1.15 and 0.75 mm. however, the error range is higher, oscillating between 1.15 and 0.75 mm. Table 4 shows the RMSE values in three tests that were conducted, where an offset Table 4. Trajectory RMSE error for V-type butt joint. with the surfa Table 4. Trajectory RMSE err ce Z axis can be observed. Comp or for V-type butt joint. aring these results with the work of Yang X Y Z et al. [21], we can infer that in some tests, we have comparable results in the Z error; X Y Z Test 1 0.063 mm 0. 184 mm 0.952 mm however, the error range is higher, oscillating between 1.15 and 0.75 mm. Test 2 0.046 mm 0.195 mm 1.059 mm Test 1 0.063 mm 0. 184 mm 0.952 mm Test 3 0.010 mm 0.145 mm 0.739 mm Table 4. Trajectory RMSE error for V-type butt joint. Test 2 0.046 mm 0.195 mm 1.059 mm Test 3 0.010 mm 0.145 mm 0.739 mm X Y Z To have another control parameter in the results, we proceeded to calculate the Test 1 0.063 mm 0. 184 mm 0.952 mm Euclidean distance between the calculated trajectory and the desired CAD model trajectory. To have another control parameter in the results, we proceeded to calculate the Test 2 0.046 mm 0.195 mm 1.059 mm Table 5 shows a dispersion of the points in the trajectory with a standard deviation less Euclidean distance between the calculated trajectory and the desired CAD model Test 3 0.010 mm 0.145 mm 0.739 mm than 0.5 mm. To have another control parameter in the results, we proceeded to calculate the Euclidean distance between the calculated trajectory and the desired CAD model Automation 2021, 2, FOR PEER REVIEW 11 Automation 2021, 2 262 trajectory. Table 5 shows a dispersion of the points in the trajectory with a standard deviation less than 0.5 mm. Table 5. Average and standard deviation between CAD and computed trajectory for V-type butt joint. Table 5. Average and standard deviation between CAD and computed trajectory for V-type butt joint. Average Standard Deviation Average Standard Deviation Test 1 0.70 mm 0.30 mm Test 1 0.70 mm 0.30 mm Test 2 0.80 mm 0.30 mm Test 2 0.80 mm 0.30 mm Test 3 0.80 mm 0.30 mm Test 3 0.80 mm 0.30 mm 4.4. Testing Trajectory Extraction of a Straight Butt Joint 4.4. Testing Trajectory Extraction of a Straight Butt Joint Straight shape is a basic welding joint type commonly used in the industry, so a Straight shape is a basic welding joint type commonly used in the industry, so a straight butt joint was constructed with a length of 20 cm and an inclination of 3° above straight butt joint was constructed with a length of 20 cm and an inclination of 3 above the the surface to demonstrate the flexibility of the system. Applying the previous algorithms, surface to demonstrate the flexibility of the system. Applying the previous algorithms, it was also possible to extract this trajectory. Figure 10 shows the tested surface reconstruction, it was also possible to extract this trajectory. Figure 10 shows the tested surface to which a straight blue line was applied, and the trajectory calculated over the point cloud reconstruction, to which a straight blue line was applied, and the trajectory calculated surface. over the point cloud surface. (a) (b) (c) Figure 10. Straight butt joint trajectory extraction: (a) workpiece; (b) point cloud with HSV data; (c) surface with the Figure 10. Straight butt joint trajectory extraction: (a) workpiece; (b) point cloud with HSV data; (c) surface with the computed path. computed path. Table 6 shows the RMSE values, the average, and standard deviation between the Table 6 shows the RMSE values, the average, and standard deviation between the calculated trajectory and the desired line model trajectory shown in Figure 11, within three calculated trajectory and the desired line model trajectory shown in Figure 11, within three tests that were conducted. Similar findings to previous results in RMSE and standard tests that were conducted. Similar findings to previous results in RMSE and standard deviation show the flexibility of the system as a global acquisition system regardless of the deviation show the flexibility of the system as a global acquisition system regardless of workpiece. the workpiece. Table 6. Trajectory RMSE error for V-type butt joint. Table 6. Trajectory RMSE error for V-type butt joint. X Y Z Average Standard Deviation X Y Z Average Standard Deviation Test 1 0.142 mm 0.075 mm 0.683 mm 0.60 mm 0.20 mm Test 2 0.124 mm 0.072 mm 0.530 mm 0.50 mm 0.20 mm Test 1 0.142 mm 0.075 mm 0.683 mm 0.60 mm 0.20 mm Test 3 0.180 mm 0.069 mm 0.494 mm 0.50 mm 0.20 mm Test 2 0.124 mm 0.072 mm 0.530 mm 0.50 mm 0.20 mm Test 3 0.180 mm 0.069 mm 0.494 mm 0.50 mm 0.20 mm Automation 2021, 2 263 Automation 2021, 2, FOR PEER REVIEW 12 Figure 11. Target straight butt joint trajectory vs. computed trajectory. Figure 11. Target straight butt joint trajectory vs. computed trajectory. 5. Conclusions 5. Conclusions To improve the efficiency of programming welding robots, this study proposed a To improve the efficiency of programming welding robots, this study proposed a color point cloud segmentation system to extract 3D paths. The major conclusions are color point cloud segmentation system to extract 3D paths. The major conclusions are generalized as follows: generalized as follows: (1) A welding robot sensor based on stereo vision and RGB sensor was implemented in (1) A welding robot sensor based on stereo vision and RGB sensor was implemented in this paper that could finish the 3D color reconstruction task of welding workpiece, this paper that could finish the 3D color reconstruction task of welding workpiece, with a reconstruction standard deviation less than 1 mm, which is a parameter with a reconstruction standard deviation less than 1 mm, which is a parameter comparable to that shown by Carfagni [33] for similar devices. comparable to that shown by Carfagni [33] for similar devices. (2) In order to achieve quick and robust weld 3D path extraction, a color segmentation (2) In order to achieve quick and robust weld 3D path extraction, a color segmentation based on color point cloud reconstruction was performed, with thresholds in HSV based on color point cloud reconstruction was performed, with thresholds in HSV color space and an interpolation of the segmented points. The trajectory extraction color space and an interpolation of the segmented points. The trajectory extraction results show errors close to or below 1.1 mm for V-type butt joint and below 0.6 mm results show errors close to or below 1.1 mm for V-type butt joint and below 0.6 mm for a straight butt joint, comparable with other stereo vision studies; for example, for a straight butt joint, comparable with other stereo vision studies; for example, Yang et al. [20] show that the measurement resolution is less than 0.7 mm for V-type Yang et al. [20] show that the measurement resolution is less than 0.7 mm for V-type butt joint, and in contrast, Zhou et al. [23] show a pose accuracy RMSE of 0.8 mm for butt joint, and in contrast, Zhou et al. [23] show a pose accuracy RMSE of 0.8 mm for a cylinder butt joint using a RealSense D415 sensor. a cylinder butt joint using a RealSense D415 sensor. (3) In addition to the above, the adaptability of the proposed trajectory extraction system, (3) In addition to the above, the adaptability of the proposed trajectory extraction system, due to being a global capture system, shows results that encourage experimentation due to being a global capture system, shows results that encourage experimentation in in V-type welding as one of the more studied in the literature, but also in other types V-type welding as one of the more studied in the literature, but also in other types of of welding that would give a differential over most of the proposals found in the welding that would give a differential over most of the proposals found in the literature. literature. In the future, we aim to improve and complete our work. Firstly, we plan to conduct In the future, we aim to improve and complete our work. Firstly, we plan to conduct experiments experiments on on diffe different rent t test est piece pieces s and and dem demonstrate onstrate that that the proposed meth the proposed method od is is also also suitable suitable for for d difi fffer erent ent weld weld b beads. eadsIn . In addition, addition, we we seek seek to to analyze analyze and and extract extract the the t trajectory rajectory without without applying applyinga a co color lor marker marker ,, looking looking fo for r th the e sh shadows adows or or sh shines ines that that ar ar e ege generated nerated in inthe weld the welding ing region. region.Fin Finally ally, the , the me measur asurement precision ement precision need needss to to be be impr improve oveddwith with a a quality quality test of the proposed test of the proposed method method against a against a laser laser sensor. sensor. Author Contributions: Conceptualization, J.B.R.-S., E.C.-U. and A.G.-E.; methodology, J.B.R.-S., Author Contributions: Conceptualization, J.B.R.-S., E.C.-U., and A.G.-E.; methodology, J.B.R.-S., E.C.-U., J.A.E.C., R.L.S. and A.G.-E.; software, J.B.R.-S., J.A.E.C. and R.L.S.; validation J.B.R.-S.; formal E.C.-U., J.A.E.C., R.L.S., and A.G.-E.; software, J.B.R.-S., J.A.E.C., and R.L.S.; validation J.B.R.-S.; analysis, formal analysis, J J.B.R.-S., E.C.-U., .B.R.-S., J.A.E.C., E.C.-U., R.L.S. J.A.E. and C., A.G.-E.; R.L.S., and investigation, A.G.-E.; in J.B.R.-S.; vestigati ro esour n, J.B. ces, R.-S. J.B.R.-S. ; resourc and es, J.A.E.C.; data curation, J.B.R.-S.; writing—original draft preparation, J.B.R.-S.; writing—review and J.B.R.-S. and J.A.E.C.; data curation, J.B.R.-S.; writing—original draft preparation, J.B.R.-S.; editing, writing—review and e J.B.R.-S., E.C.-U., diting, J.A.E.C., J.B.R R.L.S. .-S., E.C and .-U A.G.-E.; ., J.A.E.C. visualization, , R.L.S., and A.G J.B.R.-S.; .-E.; vsupervision, isualization, J. E.C.-U. B.R.-S.; and supervision, A.G.-E.; pr Eoject .C.-U. and administration, A.G.-E.; project a J.B.R.-S. dministra and A.G.-E.; tion, J funding .B.R.-S. and acquisition, A.G.-E.;A.G.-E. funding acqu All authors isition, have A.Gr .-E ead . All and au agr thors have read eed to the published and agreed to version th of e publishe the manuscript. d version of the manuscript. Funding: Funding: This This r esear research rece ch received ived no no external external funding. funding. Automation 2021, 2 264 Data Availability Statement: The study did not report any data. Acknowledgments: Authors would like to acknowledge the support of Tecnologico de Monterrey and the financial support from CONACyT for the MSc studies of one of the authors (J.B.R.-S.). Conflicts of Interest: The authors declare no conflict of interest. References 1. Ogbemhe, J.; Mpofu, K. Towards achieving a fully intelligent robotic arc welding: A review. Ind. Robot Int. J. 2015, 42, 475–484. [CrossRef] 2. Pan, Z.; Polden, J.; Larkin, N.; Van Duin, S.; Norrish, J. Recent progress on programming methods for Industrial Robots. Robot. Comput. Integr. Manuf. 2012, 28, 87–94. [CrossRef] 3. Pérez, L.; Rodríguez, Í.; Rodríguez, N.; Usamentiaga, R.; García, D.F. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review. Sensors 2016, 16, 335. [CrossRef] 4. Lei, T.; Rong, Y.; Wang, H.; Huang, Y.; Li, M. A review of vision-aided robotic welding. Comput. Ind. 2020, 123, 103326. [CrossRef] 5. Kiddee, P.; Fang, Z.; Tan, M. Visual recognition of the initial and end points of lap joint for welding robots. In 2014 IEEE International Conference on Information and Automation (ICIA); IEEE: Piscataway, NJ, USA, 2014. [CrossRef] 6. Ye, Z.; Fang, G.; Chen, S.; Dinham, M. A robust algorithm for weld seam extraction based on prior knowledge of weld seam. Sens. Rev. 2013, 33, 125–133. [CrossRef] 7. Yang, L.; Li, E.; Long, T.; Fan, J.; Mao, Y.; Fang, Z.; Liang, Z. A welding quality detection method for arc welding robot based on 3D reconstruction with SFS algorithm. Int. J. Adv. Manuf. Technol. 2017, 94, 1209–1220. [CrossRef] 8. Villan, A.F.; Acevedo, R.G.; Alvarez, E.A.; Campos-Lopez, A.M.; Garcia-Martinez, D.F.; Fernandez, R.U.; Meana, M.J.; Sanchez, J.M.G. Low-cost system for weld tracking based on artificial vision. IEEE Trans. Ind. Appl. 2011, 47, 1159–1167. [CrossRef] 9. Liu, F.Q.; Wang, Z.Y.; Ji, Y. Precise initial weld position identification of a fillet weld seam using laser vision technology. Int. J. Adv. Manuf. Technol. 2018, 99, 2059–2068. [CrossRef] 10. Li, X.; Li, X.; Ge, S.S.; Khyam, M.O.; Luo, C. Automatic welding Seam tracking and identification. IEEE Trans. Ind. Electron. 2017, 64, 7261–7271. [CrossRef] 11. Fan, J.; Jing, F.; Yang, L.; Teng, L.; Tan, M. A precise initial weld point guiding method of micro-gap weld based on structured light vision sensor. IEEE Sens. J. 2019, 19, 322–331. [CrossRef] 12. Zeng, J.; Chang, B.; Du, D.; Wang, L.; Chang, S.; Peng, G.; Wang, W. A Weld Position Recognition Method Based on Directional and Structured Light Information Fusion in Multi-Layer/Multi-Pass Welding. Sensors 2018, 18, 129. [CrossRef] [PubMed] 13. Guo, J.; Zhu, Z.; Sun, B.; Yu, Y. A novel multifunctional visual sensor based on combined laser structured lights and its anti-jamming detection algorithms. Weld. World 2018, 63, 313–322. [CrossRef] 14. Kos, M.; Arko, E.; Kosler, H.; Jezeršek, M. Remote laser welding with in-line adaptive 3D seam tracking. Int. J. Adv. Manuf. Technol. 2019, 103, 4577–4586. [CrossRef] 15. Zhang, K.; Yan, M.; Huang, T.; Zheng, J.; Li, Z. 3D reconstruction of complex spatial weld seam for autonomous welding by laser structured light scanning. J. Manuf. Process. 2019, 39, 200–207. [CrossRef] 16. Chen, X.Z.; Chen, S.B. The autonomous detection and guiding of start welding position for arc welding robot. Ind. Robot Int. J. 2010, 37, 70–78. [CrossRef] 17. Dinham, M.; Fang, G. Autonomous weld seam identification and localisation using eye-in-hand stereo vision for robotic arc welding. Robot. Comput. Integr. Manuf. 2013, 29, 288–301. [CrossRef] 18. Ma, H.; Wei, S.; Lin, T.; Chen, S.; Li, L. Binocular vision system for both weld pool and root gap in robot welding process. Sens. Rev. 2010, 30, 116–123. [CrossRef] 19. Xiao, R.; Xu, Y.; Hou, Z.; Chen, C.; Chen, S. An adaptive feature extraction algorithm for multiple typical seam tracking based on vision sensor in robotic arc welding. Sens. Actuators A Phys. 2019, 297, 111533. [CrossRef] 20. Yang, L.; Li, E.; Long, T.; Fan, J.; Liang, Z. A novel 3-d path extraction method for arc welding robot based on stereo structured light sensor. IEEE Sens. J. 2019, 19, 763–773. [CrossRef] 21. Yang, L.; Liu, Y.; Peng, J.; Liang, Z. A novel system for off-line 3D seam extraction and path planning based on point cloud segmentation for arc welding robot. Robot. Comput. Integr. Manuf. 2020, 64, 101929. [CrossRef] 22. Maiolino, P.; Woolley, R.; Branson, D.; Benardos, P.; Popov, A.; Ratchev, S. Flexible robot sealant dispensing cell using RGB-D sensor and off-line programming. Robot. Comput. Integr. Manuf. 2017, 48, 188–195. [CrossRef] 23. Zhou, P.; Peng, R.; Xu, M.; Wu, V.; Navarro-Alarcon, D. Path planning with automatic seam extraction over point cloud models for robotic arc welding. IEEE Robot. Autom. Lett. 2021, 6, 5002–5009. [CrossRef] 24. Tippetts, B.; Lee, D.J.; Lillywhite, K.; Archibald, J. Review of stereo vision algorithms and their suitability for resource-limited systems. J. Real-Time Image Process. 2013, 11, 5–25. [CrossRef] 25. Ke, F.; Liu, H.; Zhao, D.; Sun, G.; Xu, W.; Feng, W. A high precision image registration method for measurement based on the stereo camera system. Optik 2020, 204, 164186. [CrossRef] 26. Zhang, S. High-speed 3D shape measurement with structured light methods: A review. Opt. Lasers Eng. 2018, 106, 119–131. [CrossRef] 27. Geng, J. Structured-light 3D surface imaging: A tutorial. Adv. Opt. Photonics 2011, 3, 128. [CrossRef] Automation 2021, 2 265 28. Bi, Z.M.; Wang, L. Advances in 3D data acquisition and processing for industrial applications. Robot. Comput. Integr. Manuf. 2010, 26, 403–413. [CrossRef] 29. Laganiere, R.; Gilbert, S.; Roth, G. Robust object pose estimation from feature-based stereo. IEEE Trans. Instrum. Meas. 2006, 55, 1270–1280. [CrossRef] 30. Park, J.; Zhou, Q.-Y.; Koltun, V. Colored point cloud registration revisited. In 2017 IEEE International Conference on Computer Vision (ICCV); IEEE: Piscataway, NJ, USA, 2017. [CrossRef] 31. Huang, X.; Zhang, J.; Wu, Q.; Fan, L.; Yuan, C. A coarse-to-fine algorithm for matching and registration in 3d CROSS-SOURCE point clouds. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 2965–2977. [CrossRef] 32. Grunnet-Jepsen, A.; Tong, D. Depth Post-Processing FOR Intel®REALSENSE™ Depth Camera D400 Series. Available online: https://dev.intelrealsense.com/docs/depth-post-processing (accessed on 12 September 2021). 33. Carfagni, M.; Furferi, R.; Governi, L.; Santarelli, C.; Servi, M.; Uccheddu, F.; Volpe, Y. Metrological and Critical Characterization of the Intel D415 Stereo Depth Camera. Sensors 2019, 19, 489. [CrossRef]

Journal

AutomationMultidisciplinary Digital Publishing Institute

Published: Nov 5, 2021

Keywords: path model; 3D reconstruction; seam extraction; RGB-D; color segmentation; stereo structured light

There are no references for this article.