Innovative Methodology of On-Line Point Cloud Data Compression for Free-Form Surface Scanning Measurement

Yan Li 1, Yuyong Ma 2, Ye Tao 1,3,* and Zhengmeng Hou 3

1 School of Manufacturing Science and Engineering, Sichuan University, Chengdu 610065, China; liyan@scu.edu.cn
2 Aerospace Research Institute of Materials and Processing Technology, China Academy of Launch Vehicle Technology, Beijing 100076, China; myy007391@casctx-1.net.cn
3 Energy Research Center of Lower Saxony (EFZN), 38640 Goslar, Germany; hou@tu-clausthal.de
* Correspondence: yetao@scu.edu.cn; Tel.: +49-0152-5155-4369

Received: 23 October 2018; Accepted: 3 December 2018; Published: 10 December 2018

Featured Application: On-line point cloud data compression for 3D free-form surface contact or non-contact scanning measuring equipment.

Abstract: In order to obtain a highly accurate profile of a measured three-dimensional (3D) free-form surface, a scanning measuring device has to produce extremely dense point cloud data at a high sampling rate. Bottlenecks arise from the inefficiency of manipulating, storing and transferring these data, and parametric modelling from them is quite time-consuming. In order to effectively compress the dense point cloud data obtained from a 3D free-form surface during the real-time scanning measuring process, this paper presents an innovative on-line point cloud data compression algorithm for 3D free-form surface scanning measurement, which is able to identify and eliminate the data redundancy caused by geometric feature similarity between adjacent scanning layers.
At first, the new algorithm adopts the bi-Akima method to compress the initial point cloud data; next, the data redundancy existing in the compressed point cloud is further identified and eliminated, which yields the final compressed point cloud data. Finally, experiments were conducted, and the results demonstrate that the proposed algorithm is capable of obtaining high-quality data compression results with higher data compression ratios than other existing on-line point cloud data compression/reduction methods.

Keywords: data compression; data reduction; free-form surface; point cloud; scanning measurement; redundancy identifying; redundancy eliminating; geometric feature similarity

Appl. Sci. 2018, 8, 2556; doi:10.3390/app8122556; www.mdpi.com/journal/applsci

1. Introduction

With the rapid development of modern industry, three-dimensional (3D) free-form surface parts are being used ever more widely, in areas including, but not limited to, the aviation, aerospace, shipbuilding, automotive, biomedical and home appliance industries [1,2]. Recently, the automated 3D digitization of free-form surface objects has been widely applied in many areas, such as additive manufacturing (3D printing), rapid prototyping, reverse engineering, civil buildings, medical prosthetics and clinical diagnosis [3–13]. Scanning measurement is one of the key technologies for digitizing 3D physical models with free-form surfaces [14–17]. Unfortunately, in order to obtain a high-quality profile of a measured surface, scanning measuring devices have to produce massive amounts of point cloud data at high sampling rates, and not all of these points are indispensable [18–20]. Bottlenecks arise from the inefficiency of storing, manipulating and transferring them [21]. Furthermore, parametric modelling from this massive amount of point cloud data is a time-consuming task [22–24].
For this reason, compressing the measured point data while maintaining the required accuracy is a crucial task during the scanning measuring process [25]. Herein, the required accuracy is a distance threshold, which is preset to a constant positive value before scanning measurement begins. The accuracy of a given data compression algorithm is characterized by the distance from each sampled point in the initial dense point cloud to the surface generated from the compressed point cloud. Describing a measured surface with the fewest possible points while guaranteeing a given data compression accuracy is a long-standing goal [26,27]. Therefore, high-quality point cloud data compression algorithms for 3D free-form surface scanning measurement are constantly being pursued [28]. Experts and scholars around the world have paid increasing attention to this issue, and a number of point cloud data compression/reduction algorithms for free-form/irregular surface scanning measurement have been developed. Lee et al. [29] proposed an algorithm for processing point cloud data obtained by laser scanning devices; it adopts a one-directional (1D) or bi-directional (2D) non-uniform grid to reduce the amount of point cloud data. Chen et al. [5] presented a data compression method based on a bi-directional point cloud slicing strategy for reverse engineering, which can preserve local details (geometric features in both parametric directions) when performing data compression. Ma and Cripps [30] proposed a new data compression algorithm for surface points that aims to preserve the original surface points. The error metric is defined as the relative Hausdorff distance between two principal curvature vector sets for surface shape comparison; after comparison, the difference between the compressed data points and the original data points can be obtained.
Therefore, redundant points are repeatedly removed until the induced difference exceeds the specified tolerance. Smith, Petrova, and Schaefer [31] presented a progressive encoding and compression method for surfaces generated from point cloud data. First, an octree is built whose nodes contain planes constructed as the least-squares fit of the data within each node. Then, this octree is pruned to remove redundant data while avoiding the topological changes created by merging disjoint linear pieces. Morell et al. [32] presented a geometric 3D point cloud lossy compression system based on plane extraction, which represents the points of each scene plane as a Delaunay triangulation and a set of points/area information. This compression system can be customized to achieve different data compression or accuracy ratios. The above methods have focused on optimizing data compression quality based on building and processing polyhedral models or on iterative numerical calculations. Nevertheless, they are all off-line data compression algorithms and can only compress the point cloud data of a whole measured surface after data acquisition. In other words, they cannot perform on-line data compression during real-time measurement: data acquisition and data compression are completely separate processes. A large amount of redundant point cloud data therefore occupies a great deal of storage space in scanning measuring devices, and the transmission and processing of the point cloud data still take up a significant amount of time and hardware resources. This problem has attracted the attention of many scholars and engineers, who have proposed quite a number of on-line point cloud data compression/reduction methods. Lu et al. [33] adopted the chordal method to compress point cloud data, and realized the on-line data compression of point cloud data during real-time scanning measurement for the first time.
ElKott and Veldhuis [34] presented an automatic surface sampling approach based on scanning isoparametric lines. The sampling locations are determined by the deviations between the alternative geometry and the sampled model, and the location of each sampling line is determined by the curvature of the sampled surface model. Wozniak, Balazinski, and Mayer [35] presented a point cloud data compression method based on fuzzy logic and the geometric solution of an arc at each measured point; it is an on-line data compression method and can be used in the surface scanning measuring process of coordinate measuring machines (CMMs). Jia et al. [36] proposed an on-line data compression method based on the equal-error chordal method and isochronous sampling; to solve the problem of massive data storage, dual-buffer and dual-thread dynamic storage is adopted. Tao et al. [37] found that the essence of all the above on-line point cloud data compression methods is the chordal method, which specifies that all discrete dense point sets are connected by straight segments. Therefore, the surface reconstructed from the compressed point cloud is full of cusp points, and a smooth interpolated surface cannot be obtained. In view of this limitation, they presented an on-line point cloud data extraction algorithm using bi-Akima spline interpolation.
Although the above methods implement on-line point cloud data compression, they can only eliminate the data redundancy of the current scanning line. Nevertheless, most 3D surface scanning measuring devices adopt a layer-by-layer scanning path (e.g., contact scanning probes [38], laser triangle displacement sensors [39], linear structured light systems [40], industrial computed tomography (CT) systems [41], etc.), and adjacent scanning lines are extremely similar in shape. The geometric feature similarity between such scanning layers is bound to result in data redundancy, which makes it possible to further compress the point cloud data during the scanning measuring process. Therefore, this study focuses on identifying and eliminating this kind of data redundancy caused by geometric feature similarity between adjacent scanning layers. After that, the massive amount of point cloud data can be further compressed during the 3D free-form surface measuring process. The contents of this paper consist of four sections.
In Section 2, the innovative methodology of the on-line point cloud data compression algorithm for 3D free-form surface scanning measurement is described in detail. In Section 3, the proposed algorithm is tested in a real-time scanning measuring process and compared with existing methods. Finally, some conclusions are drawn in Section 4.

2. Innovative Methodology

As shown in Figure 1, the overall process of on-line point cloud data compression in this work consists of four steps. In Step 1, the initial point cloud flow is obtained by 3D scanning measuring devices using an isochronous [42] or equidistant sampling method, and a layer-by-layer scanning path is adopted (Figure 2). In Step 2, the initial point cloud data flow is immediately compressed by the chordal method [36] or bi-Akima method [37], both of which compress the point cloud data based on the data redundancy in the current single scanning layer.
In Step 3, the data redundancy in the compressed point cloud obtained in the previous step is further identified. In Step 4, the identified redundant point data is eliminated, and then we obtain the final compressed point cloud. At last, the final compressed data flow is transmitted to the storage space of the measurement system.

Figure 1. The overall process of on-line data compression for 3D free-form surface scanning measurement.

Herein, the real-time performance of the proposed data compression algorithm needs to be further analyzed and described. The path planning is performed before the start of the scanning measurement in Step 1. As shown in Figure 2, a layer-by-layer scanning path is adopted. The distance between adjacent scanning layers is determined by the preset measuring accuracy. The measured surface is cut by the scanning layers to form a number of corresponding scanning lines.
As shown in Figure 2, there are two planning modes for the scanning direction: (i) the progressive scanning mode, and (ii) the S-type scanning mode. Regardless of the scanning mode, the measuring device in Step 1 continuously transmits the initial point cloud data flow to the data compressor in Step 2. The compressor performs data compression immediately after receiving all the initial point data of a single scanning layer, rather than waiting for the entire surface to be scanned before performing data compression. That is, each time the point cloud data in the current scanning layer is completely transmitted to the compressor, the subsequent data compression algorithm is executed immediately. Therefore, the proposed data compression algorithm is essentially a quasi-real-time method, which we call an on-line data compression method.

Figure 2. Layer-by-layer scanning path for 3D free-form surface scanning measurement.
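To make the data flow concrete, the four-step process above can be sketched as a per-layer loop. This is our own illustrative sketch, not the authors' code; the function names and the two callables (the intra-layer compressor of Step 2 and the inter-layer redundancy filter of Steps 3 and 4) are hypothetical placeholders:

```python
def online_compress(scan_layers, compress_layer, filter_redundancy):
    """Sketch of the four-step on-line pipeline.

    scan_layers       -- iterable of point lists, one per scanning layer (Step 1)
    compress_layer    -- intra-layer compressor, e.g. chordal or bi-Akima (Step 2)
    filter_redundancy -- inter-layer redundancy identification/elimination,
                         given the new layer and all previously stored layers
                         (Steps 3 and 4)
    """
    stored = []  # final compressed point cloud, accumulated layer by layer
    for layer in scan_layers:                          # data arrives layer by layer
        compressed = compress_layer(layer)             # compress the current layer
        final = filter_redundancy(compressed, stored)  # drop inter-layer redundancy
        stored.append(final)                           # transmit/store the result
    return stored
```

In a real system the loop body would run as each layer's data arrives from the device, which is what makes the method quasi-real-time rather than strictly real-time.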
The flow chart of this algorithm is illustrated in Figure 3, and its principle is described in detail as follows.

Figure 3. The flow chart of the point cloud data redundancy identification and elimination algorithm.

2.1. Data Redundancy Identification

In order to identify redundant data points in the compressed point cloud data flow from Step 2, it is first necessary to predict the current scanning line in the unmeasured area. Herein, the prediction is realized by Hermite extrapolation [43], and a predicted curve is created. The data redundancy identification algorithm is detailed as follows:
Figure 4 shows the schematic diagram of the data redundancy identification algorithm, in which line i is the current scanning line during the on-line measuring process, and P_{i,j} represents the j-th point in scanning line i. If j ≥ 2, a shape-preserving piecewise bicubic Hermite curve can be built to predict the shape and direction of the current scanning line; here, we name this the predicted curve, as shown in Figure 4. After that, suppose k is a positive integer, let 1 ≤ k < j, and let the coordinates of point P_{i,k} be (x_k, y_k, z_k); then, a series of specific Hermite interpolation polynomials can be determined by

\[
\begin{cases}
H_y(x) = y_k \alpha_k(x) + y_{k+1} \alpha_{k+1}(x) + y'_k \beta_k(x) + y'_{k+1} \beta_{k+1}(x) \\
H_z(x) = z_k \alpha_k(x) + z_{k+1} \alpha_{k+1}(x) + z'_k \beta_k(x) + z'_{k+1} \beta_{k+1}(x)
\end{cases} \tag{1}
\]

where

\[
\begin{cases}
\alpha_k(x) = \left(1 + 2\,\dfrac{x - x_k}{x_{k+1} - x_k}\right)\left(\dfrac{x - x_{k+1}}{x_k - x_{k+1}}\right)^2 \\
\alpha_{k+1}(x) = \left(1 + 2\,\dfrac{x - x_{k+1}}{x_k - x_{k+1}}\right)\left(\dfrac{x - x_k}{x_{k+1} - x_k}\right)^2 \\
\beta_k(x) = (x - x_k)\left(\dfrac{x - x_{k+1}}{x_k - x_{k+1}}\right)^2 \\
\beta_{k+1}(x) = (x - x_{k+1})\left(\dfrac{x - x_k}{x_{k+1} - x_k}\right)^2
\end{cases} \tag{2}
\]

and the first derivatives y'_k, y'_{k+1}, z'_k, z'_{k+1} can be estimated by the following formulas.

Figure 4. The schematic diagram of data redundancy identification.
When 1 < k < j:

\[
y'_k = f'_y(x_k) =
\begin{cases}
0, & \text{if } \dfrac{y_{k+1} - y_k}{x_{k+1} - x_k} \cdot \dfrac{y_k - y_{k-1}}{x_k - x_{k-1}} < 0 \\
\dfrac{1}{2}\left(\dfrac{y_{k+1} - y_k}{x_{k+1} - x_k} + \dfrac{y_k - y_{k-1}}{x_k - x_{k-1}}\right), & \text{if } \dfrac{y_{k+1} - y_k}{x_{k+1} - x_k} \cdot \dfrac{y_k - y_{k-1}}{x_k - x_{k-1}} \ge 0
\end{cases} \tag{3}
\]

\[
z'_k = f'_z(x_k) =
\begin{cases}
0, & \text{if } \dfrac{z_{k+1} - z_k}{x_{k+1} - x_k} \cdot \dfrac{z_k - z_{k-1}}{x_k - x_{k-1}} < 0 \\
\dfrac{1}{2}\left(\dfrac{z_{k+1} - z_k}{x_{k+1} - x_k} + \dfrac{z_k - z_{k-1}}{x_k - x_{k-1}}\right), & \text{if } \dfrac{z_{k+1} - z_k}{x_{k+1} - x_k} \cdot \dfrac{z_k - z_{k-1}}{x_k - x_{k-1}} \ge 0
\end{cases} \tag{4}
\]

When k = 1:

\[
y'_1 =
\begin{cases}
0, & \text{if } d_y \cdot \dfrac{y_2 - y_1}{x_2 - x_1} < 0 \\
3\,\dfrac{y_2 - y_1}{x_2 - x_1}, & \text{if } d_y > 3\,\dfrac{y_2 - y_1}{x_2 - x_1} \text{ and } \dfrac{(y_2 - y_1)(y_3 - y_2)}{(x_2 - x_1)(x_3 - x_2)} < 0 \\
d_y, & \text{otherwise}
\end{cases} \tag{5}
\]

\[
z'_1 =
\begin{cases}
0, & \text{if } d_z \cdot \dfrac{z_2 - z_1}{x_2 - x_1} < 0 \\
3\,\dfrac{z_2 - z_1}{x_2 - x_1}, & \text{if } d_z > 3\,\dfrac{z_2 - z_1}{x_2 - x_1} \text{ and } \dfrac{(z_2 - z_1)(z_3 - z_2)}{(x_2 - x_1)(x_3 - x_2)} < 0 \\
d_z, & \text{otherwise}
\end{cases} \tag{6}
\]

in which

\[
\begin{cases}
d_y = \dfrac{(x_3 + x_2 - 2x_1)(y_2 - y_1)}{(x_2 - x_1)(x_3 - x_1)} - \dfrac{(x_2 - x_1)(y_3 - y_2)}{(x_3 - x_2)(x_3 - x_1)} \\
d_z = \dfrac{(x_3 + x_2 - 2x_1)(z_2 - z_1)}{(x_2 - x_1)(x_3 - x_1)} - \dfrac{(x_2 - x_1)(z_3 - z_2)}{(x_3 - x_2)(x_3 - x_1)}
\end{cases} \tag{7}
\]

When k = j:

\[
y'_j =
\begin{cases}
0, & \text{if } e_y \cdot \dfrac{y_j - y_{j-1}}{x_j - x_{j-1}} < 0 \\
3\,\dfrac{y_j - y_{j-1}}{x_j - x_{j-1}}, & \text{if } e_y > 3\,\dfrac{y_j - y_{j-1}}{x_j - x_{j-1}} \text{ and } \dfrac{(y_j - y_{j-1})(y_{j-1} - y_{j-2})}{(x_j - x_{j-1})(x_{j-1} - x_{j-2})} < 0 \\
e_y, & \text{otherwise}
\end{cases} \tag{8}
\]

\[
z'_j =
\begin{cases}
0, & \text{if } e_z \cdot \dfrac{z_j - z_{j-1}}{x_j - x_{j-1}} < 0 \\
3\,\dfrac{z_j - z_{j-1}}{x_j - x_{j-1}}, & \text{if } e_z > 3\,\dfrac{z_j - z_{j-1}}{x_j - x_{j-1}} \text{ and } \dfrac{(z_j - z_{j-1})(z_{j-1} - z_{j-2})}{(x_j - x_{j-1})(x_{j-1} - x_{j-2})} < 0 \\
e_z, & \text{otherwise}
\end{cases} \tag{9}
\]

in which

\[
\begin{cases}
e_y = \dfrac{(2x_j - x_{j-1} - x_{j-2})(y_j - y_{j-1})}{(x_j - x_{j-1})(x_j - x_{j-2})} - \dfrac{(x_j - x_{j-1})(y_{j-1} - y_{j-2})}{(x_{j-1} - x_{j-2})(x_j - x_{j-2})} \\
e_z = \dfrac{(2x_j - x_{j-1} - x_{j-2})(z_j - z_{j-1})}{(x_j - x_{j-1})(x_j - x_{j-2})} - \dfrac{(x_j - x_{j-1})(z_{j-1} - z_{j-2})}{(x_{j-1} - x_{j-2})(x_j - x_{j-2})}
\end{cases} \tag{10}
\]
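For illustration, Equations (1)–(3) can be evaluated directly. The following is a minimal sketch of ours (not the authors' implementation), covering the Hermite basis functions, the resulting interpolant, and the interior shape-preserving slope rule:

```python
def hermite_basis(x, xk, xk1):
    """Cubic Hermite basis functions alpha_k, alpha_{k+1}, beta_k, beta_{k+1}
    of Equation (2) on the interval [xk, xk1]."""
    ak  = (1 + 2 * (x - xk)  / (xk1 - xk)) * ((x - xk1) / (xk - xk1)) ** 2
    ak1 = (1 + 2 * (x - xk1) / (xk - xk1)) * ((x - xk)  / (xk1 - xk)) ** 2
    bk  = (x - xk)  * ((x - xk1) / (xk - xk1)) ** 2
    bk1 = (x - xk1) * ((x - xk)  / (xk1 - xk)) ** 2
    return ak, ak1, bk, bk1

def hermite_eval(x, xk, xk1, yk, yk1, dyk, dyk1):
    """H_y(x) of Equation (1): interpolates (xk, yk) and (xk1, yk1)
    with end derivatives dyk and dyk1."""
    ak, ak1, bk, bk1 = hermite_basis(x, xk, xk1)
    return yk * ak + yk1 * ak1 + dyk * bk + dyk1 * bk1

def interior_slope(x0, x1, x2, y0, y1, y2):
    """Shape-preserving derivative estimate at the interior point x1
    (Equation (3)): zero at a local extremum, otherwise the mean of the
    two adjacent chord slopes."""
    left  = (y1 - y0) / (x1 - x0)
    right = (y2 - y1) / (x2 - x1)
    return 0.0 if left * right < 0 else 0.5 * (left + right)
```

By construction H_y(x_k) = y_k and H_y(x_{k+1}) = y_{k+1}, and zeroing the tangent where the adjacent chord slopes change sign keeps the piecewise curve from overshooting local extrema.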
Herein, based on the compressed point cloud data flow from Step 2, the shape-preserving piecewise bicubic Hermite polynomials can be created according to the above algorithm. Then, Hermite extrapolation is performed to create a predicted curve, which is marked in blue in Figure 4; its analytical formula can be described as follows:

\[
\begin{cases}
H_y(x) = y_{j-1} \alpha_{j-1}(x) + y_j \alpha_j(x) + y'_{j-1} \beta_{j-1}(x) + y'_j \beta_j(x) \\
H_z(x) = z_{j-1} \alpha_{j-1}(x) + z_j \alpha_j(x) + z'_{j-1} \beta_{j-1}(x) + z'_j \beta_j(x)
\end{cases} \tag{11}
\]

After that, an estimated point P_{est} is created to move along the predicted curve with a stepping distance of l; P_{i,j} is the starting point of P_{est}. Meanwhile, a bounding sphere is built with point P_{est} as the center. The radius of the sphere is

\[
R_{sph} = k_{ls} h_{ls}, \tag{12}
\]

in which k_{ls} ∈ [1, 2] is the radius adjustment coefficient, and h_{ls} is the distance between two parallel scanning layers. As shown in Figure 4, the predicted curve with the estimated point P_{est} is used to search for the neighbor point P_{nb} in the previous scanning line i − 1. The necessary and sufficient condition for a point P_{nb} to be the neighbor point of P_{est} is |P_{nb}P_{est}| ≤ R_{sph}, which means that P_{nb} is inside the bounding sphere with point P_{est} as its center. At the very beginning, P_{est} coincides with P_{i,j}. At this point, there are two possibilities: (i) P_{i−1,u} is inside the bounding sphere (i.e., |P_{i−1,u}P_{i,j}| ≤ R_{sph}), or (ii) P_{i−1,u} is outside the bounding sphere. In case (i), P_{i−1,u} is the first found neighbor point. As P_{est} moves along the scanning direction with a stepping distance of l, if |P_{i−1,u}P_{est}| < |P_{i−1,u}P_{i,j}|, then P_{i−1,u} is the neighbor point of P_{est}; otherwise, discard point P_{i−1,u}, as it is not the neighbor point of P_{est} but of P_{i,j}. In case (ii), no operation is performed, because no neighbor point has been found.
After case (i) or case (ii) is completed, point P_{est} continues to move forward along the scanning direction until the neighbor point P_{nb} of P_{est} is found; if no neighbor point can be found, the search is stopped.

If the neighbor point P_{nb} is found in line i − 1 (e.g., P_{i−1,u+1} in Figure 4), then a new bounding sphere is built with P_{i−1,u+1} as the center and with radius R_{sph}. After that, we use this new bounding sphere to search for the neighbor point of P_{i−1,u+1} in line i − 2; if no neighbor point can be found, we stop searching. Next, we take the new neighbor point in line i − 2 (e.g., P_{i−2,v+1}) as a new center to build a bounding sphere, and repeat the above process until we find three neighbor points in different scanning lines (e.g., P_{i−1,u+1}, P_{i−2,v+1}, P_{i−3,w+1} in Figure 4).

Based on the neighbor point set {P_{i−1,u+1}, P_{i−2,v+1}, P_{i−3,w+1}}, the coordinates of the estimated point P_{est} can be fixed uniquely. As shown in Figure 4, a bi-cubic Hermite curve is built, and it can be expressed as

\[
\begin{cases}
H_x(y) = x_{i-2} \alpha_{i-2}(y) + x_{i-1} \alpha_{i-1}(y) + x'_{i-2} \beta_{i-2}(y) + x'_{i-1} \beta_{i-1}(y) \\
H_z(y) = z_{i-2} \alpha_{i-2}(y) + z_{i-1} \alpha_{i-1}(y) + z'_{i-2} \beta_{i-2}(y) + z'_{i-1} \beta_{i-1}(y)
\end{cases} \tag{13}
\]

in which y is an independent variable; α_{i−1}(y), α_{i−2}(y), β_{i−1}(y), β_{i−2}(y) are obtained by Equation (2); x'_{i−2}, z'_{i−2} are acquired by Equations (3) and (4); and x'_{i−1}, z'_{i−1} are obtained by Equations (8)–(10). Obviously, the bicubic Hermite curve must lie in the curved surface with the equation

\[
H_x(y) = x_{i-2} \alpha_{i-2}(y) + x_{i-1} \alpha_{i-1}(y) + x'_{i-2} \beta_{i-2}(y) + x'_{i-1} \beta_{i-1}(y), \tag{14}
\]

and the predicted curve will pass through this curved surface. Therefore, the estimated point P_{est} can be fixed at the intersection of the predicted curve and the curved surface described by Equation (14). That is, the coordinates of the estimated point P_{est}(x_{est}, y_{est}, z_{est}) can be determined by Equations (11) and (14).
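The bounding-sphere neighbor search of this section can be sketched as follows. This is our simplified illustration (the names are ours): the previous scanning line is given as a plain point list and scanned exhaustively, whereas the paper steps the sphere along the predicted curve:

```python
import math

def sphere_radius(k_ls, h_ls):
    """Equation (12): R_sph = k_ls * h_ls, with k_ls in [1, 2] and h_ls the
    spacing between adjacent scanning layers."""
    return k_ls * h_ls

def find_neighbor(p_est, prev_line, r_sph):
    """Return the point of the previous scanning line lying inside the
    bounding sphere centred at the estimated point (the closest one if
    several), or None if the sphere contains no point."""
    best, best_d = None, r_sph
    for q in prev_line:
        d = math.dist(p_est, q)   # Euclidean distance |q p_est|
        if d <= best_d:
            best, best_d = q, d
    return best
```

Repeating this search with each found neighbor as the new sphere center, against lines i − 2 and i − 3, yields the three neighbor points used to fix P_{est}.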
2.2. Data Redundancy Elimination

After the coordinates of the estimated point P_{est} are determined, we use P_{est} to replace P_{i,j+1} in scanning line i. Afterwards, the new point set that contains P_{est} is used for bi-Akima interpolation, and there is a deviation h_{i,k} between the interpolated curve and each initial sampled point Q_{i,k}, where i is the scanning line number and k is the serial number of the initial point cloud in line i. As mentioned earlier, the initial point cloud is obtained by 3D scanning measuring devices using the isochronous or equidistant sampling method in Step 1, as shown in Figure 1. The deviation h_{i,k} can be obtained by

\[
h_{i,k} = \min_{x \in (X_j, X_{j+1})} (s) = \min_{x \in (X_j, X_{j+1})} \sqrt{(x - x_k)^2 + (y - y_k)^2 + (z - z_k)^2}, \tag{15}
\]

where point Q_k(x_k, y_k, z_k) is an initial sampled point between P_{i,j}(X_j, Y_j, Z_j) and P_{i,j+1}(X_{j+1}, Y_{j+1}, Z_{j+1}), and P(x, y, z) is the point on the interpolated curve that makes the distance s shortest. Then, the maximum deviation d_{max} of the whole curve (i.e., from P_{i,1} to P_{est}) can be calculated by the following formula:

\[
d_{max} = \max_k (h_{i,k}), \tag{16}
\]

which is compared with the required accuracy ε. If d_{max} > ε, discard point P_{est}. If d_{max} < ε, delete the current compressed point P_{i,j+1}, which is input from Step 2. Next, create an estimative flag F_{i,j+1} = 1 to replace point P_{i,j+1}; this flag takes up only one bit of data storage space. After completing the above process, output the final compressed point cloud data flow, which contains the point coordinate and estimative flag information, to the data storage devices. Afterwards, set j = j + 1, build a new shape-preserving piecewise bicubic Hermite curve to predict the shape and direction of the current scanning line, and create a new estimated point P_{est} to loop through the above data redundancy identification and elimination process until P_{i,j} is the end point of the current scanning line i or the data sampling is over.
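The accept/reject test of Equations (15) and (16) can be sketched as below. This is again an illustration of ours, with the closest-point computation of Equation (15) delegated to a caller-supplied function (in practice a numerical minimization over the interpolated bi-Akima curve):

```python
import math

def max_deviation(samples, closest_on_curve):
    """d_max of Equation (16): the largest deviation h_{i,k} from each initial
    sampled point Q_{i,k} to the interpolated curve, where closest_on_curve(q)
    returns the curve point nearest to q (the minimizer of Equation (15))."""
    return max(math.dist(q, closest_on_curve(q)) for q in samples)

def keep_estimate(samples, closest_on_curve, eps):
    """True if the estimated point P_est may stand in for P_{i,j+1}, i.e. the
    interpolated curve stays within the required accuracy eps."""
    return max_deviation(samples, closest_on_curve) < eps
```

When the test passes, storing the one-bit estimative flag instead of the full coordinates of P_{i,j+1} is exactly where the extra compression over the per-layer methods comes from.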
In addition, when P_{i,j} is the end point of line i, set i = i + 1 and continue to loop the above algorithm until the measurement is completed.

3. Experimental Results

In order to verify the feasibility of the proposed methodology, several experiments were performed, as described in this section.

3.1. Test A

The on-line point cloud data compression algorithm was tested in an industrial real-time measuring process and compared with existing methods (the chordal method and the bi-Akima method).
The measuring system consists of a contact 3D scanning probe, a vertical lathe and a commercial computer numerical control (CNC) system, the SINUMERIK 840D (Munich, Bayern, Germany), as shown in Figure 5. The proposed algorithm is integrated in the original equipment manufacturer (OEM) application that runs on the host computer of the CNC system. The product model of the contact 3D scanning probe is DIGIT-02 (Dalian, Liaoning Province, China). More detailed technical characteristics of the measuring instrument are shown in Table 1.

Figure 5. The measuring system and measured 3D free-form surface: (a) vertical lathe; (b) computer numerical control (CNC) system; (c) scanning probe; (d) half-ellipsoidal measured part.

Table 1. Detailed technical characteristics of the measuring system.
Technical Characteristics                  Values
Scope of X axis                            2400 mm
Positioning accuracy of X axis             0.019 mm/1000 mm
Repeatability of X axis                    0.016 mm/1000 mm
Scope of Z axis                            1200 mm
Positioning accuracy of Z axis             0.010 mm/1000 mm
Repeatability of Z axis                    0.003 mm/1000 mm
Positioning accuracy of C axis             6.05″
Repeatability of C axis                    2.22″
Measuring range of scanning probe          ±1 mm
Accuracy of scanning probe                 ±8 μm
Repeatability of scanning probe            ±4 μm
Stylus length of probe                     100 mm/150 mm/200 mm
Contact force (with stylus of 200 mm)      1.6 N/mm
Weight of scanning probe                   1.8 kg

The measured part is a half-ellipsoidal surface which is welded together by seven pieces of thin-walled aluminum alloy sheet, as shown in Figure 5d, with a semi-major axis of 1450 mm and a semi-minor axis of 950 mm. A rotational progressive scanning mode is adopted, and the layer spacing is 7 mm. Figure 6 shows the spatial distribution of the initial point cloud data.
The isochronous sampling method is adopted and the number of initial sampling points is 272,638.

Figure 6. Spatial distribution of initial point cloud data.

Using the same initial point cloud data set as shown in Figure 6, the comparison of data compression performance is made between the proposed method, chordal method and bi-Akima method under different required accuracies (i.e., from 0.001 mm to 1 mm). Table 2 summarizes the results of the data compression performance, including the number of points and data compression ratio, where the compression ratio is defined as the ratio between the uncompressed size and compressed size:

Compression Ratio = Uncompressed Size / Compressed Size = Number of Initial Points / Number of Compressed Points. (17)

Obviously, the proposed method has a higher data compression ratio than the chordal method and bi-Akima method, and the chordal method obtains the lowest data compression ratio under the same required accuracy.
The number of data points obtained by the proposed method is about half of that obtained by the bi-Akima method under the same required accuracy.

Table 2. Compression performance under different required accuracies.

Required         Number of Points                    Compression Ratio
Accuracy (mm)    Chordal    Bi-Akima   Proposed     Chordal   Bi-Akima   Proposed
0.001            237,363    122,929    67,448       1.15      2.22       4.04
0.002            189,824    120,952    67,121       1.44      2.25       4.06
0.005            152,674    110,175    63,813       1.79      2.47       4.27
0.01             136,027    93,588     51,062       2.00      2.91       5.34
0.02             123,891    71,629     41,862       2.20      3.81       6.51
0.05             103,205    44,072     28,837       2.64      6.19       9.45
0.1              87,008     27,894     15,974       3.13      9.77       17.07
0.2              61,124     12,191     7102         4.46      22.36      38.39
0.5              28,473     5594       3140         9.58      48.74      86.83
1                9029       3969       2217         30.20     68.69      122.99
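The ratios in Table 2 follow directly from Equation (17); a minimal check in Python (the helper name is illustrative, not part of the measuring software):

```python
def compression_ratio(n_initial: int, n_compressed: int) -> float:
    """Compression ratio per Equation (17): initial points / compressed points."""
    return n_initial / n_compressed

# Test A: 272,638 initial sampling points; row for required accuracy 0.001 mm
ratio_proposed = compression_ratio(272_638, 67_448)
ratio_chordal = compression_ratio(272_638, 237_363)
print(round(ratio_proposed, 2))  # 4.04, matching Table 2
print(round(ratio_chordal, 2))   # 1.15
```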
Figure 7 provides the comparison of the compression ratios between the three methods under the different required accuracies. With the decrease in accuracy requirements, the compression ratio increases for all methods; however, for all levels of required accuracy, our proposed compression method achieves a higher compression ratio than the other two methods. Obviously, the chordal method has the lowest data compression ratio. Therefore, we focus on comparing our proposed method with the bi-Akima method in the subsequent experiments.

Figure 7. Data compression ratios under different required accuracies.

To make the comparison more vivid and intuitive, Figure 8 visually illustrates the difference between the proposed method and bi-Akima method by displaying spatial distributions of compressed point sets under different required accuracies. Subfigures a, d, g and j show the point cloud distribution compressed by the bi-Akima method, while subfigures b, e, h and k give the point cloud distribution after data redundancy identification by the proposed method, with the identified redundant points marked in red. In subfigures c, f, i and l, the identified redundant points are eliminated.
These subfigures show the distributions of the final compressed point cloud data. By contrast, we can clearly observe the difference in point cloud density between these two methods under the same required accuracy. Take subfigures g–i, for example: when using the bi-Akima method, we can observe that there are many curves roughly along the welded region (Figure 8g), because the bi-Akima method can only deal with the point set in the current scanning line and the data redundancy outside the current scanning line cannot be eliminated. With the involvement of our proposed method, redundant data points are identified and marked in red (Figure 8h), the data redundancy in the adjacent scanning layers is eliminated and the final compressed point cloud data is obtained (Figure 8i).

To verify the accuracy of the proposed algorithm, Figure 9 analyzes the spatial distribution of deviation between each initial sampled point and the interpolated surface obtained from the final compressed point cloud data under different required accuracies.
As can be seen, all the deviations are within the allowable range of required accuracy. Our method can tightly control the deviation within the error tolerance range (i.e., the deviation between each initial sampled point and the interpolation curve is less than or equal to the required accuracy). In addition, deviations are far lower than the required accuracy in most of the measured area. In Figure 9d, there is an interesting and noteworthy phenomenon: the upper right sector has a higher deviation. As mentioned earlier, the measured part is a large thin-walled surface which is welded together by seven pieces of aluminum alloy sheet (Figure 5d).
The aluminum alloy sheet has a thickness of only 0.8 mm, but its size is very large (the semi-major axis of the ellipse is 1450 mm). The part has undergone great deformation after welding. There is a large and random deviation between each welded part and the original design size. According to past experience, the maximum deviation in a local section can even reach 3 mm. Consequently, we infer that the upper right sector has a higher deviation because of deformation in this area. In the case where the required accuracy is on the order of millimeters (e.g., required accuracy ε = 1 mm in Figure 9d), the compressed point cloud data is very sparse. Therefore, this phenomenon is formed in a region where the point cloud density is low and the local deformation is large. However, in any case, the proposed method can tightly control the deviation within the preset range.
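The deviation analysis behind Figure 9 can be sketched as follows. This is a simplified stand-in that measures each sampled point against piecewise-linear interpolation through the compressed points, rather than the Hermite/Akima curves used by the actual algorithm; the function name and toy data are illustrative.

```python
import bisect

def max_deviation(samples, knots):
    """Largest vertical deviation between sampled (x, z) points and the
    piecewise-linear curve through the compressed knot points.
    A simplified stand-in for the paper's curve interpolation."""
    xs = [x for x, _ in knots]
    d_max = 0.0
    for x, z in samples:
        # Locate the segment [x_k, x_k+1] containing x, clamped to range.
        k = min(max(bisect.bisect_right(xs, x) - 1, 0), len(knots) - 2)
        (x0, z0), (x1, z1) = knots[k], knots[k + 1]
        z_interp = z0 + (z1 - z0) * (x - x0) / (x1 - x0)
        d_max = max(d_max, abs(z - z_interp))
    return d_max

# Toy scanning line: dense samples of z = x^2 on [0, 2] vs. 3 compressed knots
samples = [(x / 10, (x / 10) ** 2) for x in range(21)]
knots = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]
print(max_deviation(samples, knots) <= 0.5)  # True: within a 0.5 mm tolerance
```

A check of this kind, applied to every initial sampled point, is what produces the deviation maps of Figure 9 and confirms that no point exceeds the required accuracy ε.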
Figure 8. Spatial distributions of compressed point cloud data under different required accuracies ε: (a) bi-Akima compression, ε = 0.001 mm; (b) redundancy identification, ε = 0.001 mm; (c) redundancy elimination, ε = 0.001 mm; (d) bi-Akima compression, ε = 0.01 mm; (e) redundancy identification, ε = 0.01 mm; (f) redundancy elimination, ε = 0.01 mm; (g) bi-Akima compression, ε = 0.1 mm; (h) redundancy identification, ε = 0.1 mm; (i) redundancy elimination, ε = 0.1 mm; (j) bi-Akima compression, ε = 1 mm; (k) redundancy identification, ε = 1 mm; (l) redundancy elimination, ε = 1 mm.

Figure 9. Spatial distributions of deviation under different required accuracies ε: (a) ε = 0.001 mm; (b) ε = 0.01 mm; (c) ε = 0.1 mm; (d) ε = 1 mm.

3.2. Test B
The overall structure of the model in Test A is relatively simple. In order to further verify the universality and adaptability of the proposed method, we chose a more complex surface model with a large number of details, edges and sharp features for experimentation. As shown in Figure 10, the tested model is a piece of jewelry, which is inlaid with 30 diamonds of different sizes.

Figure 10. The tested complex surface model: jewelry.

Figure 11 shows the initial point cloud data acquisition result. The progressive scanning mode and equidistant sampling mode were adopted. Scanning lines are along the X-direction (horizontal direction). The distance between two adjacent scanning layers is 0.1 mm, and the distance between adjacent points is 0.05 mm in each scanning layer. The initial point number is 63,376.

Figure 11. Spatial distribution of initial point cloud data.

The comparison is made between the proposed method and bi-Akima method under different required accuracies (i.e., from 0.001 mm to 1 mm). Table 3 gives the results of data compression performance, including the number of points and data compression ratio. Obviously, the proposed method has a higher data compression ratio than the bi-Akima method. The number of points obtained by the proposed method is about half of that obtained by the bi-Akima method under the same required accuracy.
Table 3. Compression performance under different required accuracies.

Required         Number of Points          Compression Ratio
Accuracy (mm)    Bi-Akima   Proposed       Bi-Akima   Proposed
0.001            18,906     8516           3.35       7.44
0.002            16,857     7609           3.76       8.33
0.005            14,323     6563           4.42       9.66
0.01             12,432     5743           5.10       11.04
0.02             10,720     5007           5.91       12.66
0.05             8767       4232           7.23       14.98
0.1              7190       3535           8.81       17.93
0.2              5892       2974           10.76      21.31
0.5              4625       2412           13.70      26.28
1                4204       2213           15.08      28.64

Figure 12 provides the comparison of the compression ratios between these two methods under different required accuracies.
With the decrease in accuracy requirements, the compression ratio increases for both methods; however, for all levels of required accuracy, our proposed compression method achieves a higher compression ratio than the bi-Akima method.

Figure 12. Data compression ratios under different required accuracies.

Figure 13 visually illustrates the difference between the proposed method and bi-Akima method by displaying spatial distributions of the compressed point sets under different required accuracies. Subfigures a, d, g and j show the point cloud distribution compressed by the bi-Akima method, while subfigures b, e, h and k give the point cloud distribution after data redundancy identification by the proposed method, with the identified redundant points marked in red. In subfigures c, f, i and l, the identified redundant points are eliminated. These subfigures show the distributions of the final compressed point cloud data. By contrast, we can clearly observe the difference in point cloud density between these two methods under the same required accuracy. Take subfigures j, k and l, for example: when using the bi-Akima method, we can observe that there are many curves roughly along the vertical direction (Figure 13j). This is because the bi-Akima method can only deal with the point set in the current single scanning line, which is along the horizontal direction, and the data redundancy outside the current scanning line cannot be eliminated. With the involvement of our proposed method, redundant data points are identified and marked in red (Figure 13k), the data redundancy in adjacent scanning layers is eliminated and the final compressed point cloud data is obtained (Figure 13l).
Figure 13. Spatial distributions of compressed point cloud data under different required accuracies ε: (a) bi-Akima compression, ε = 0.001 mm; (b) redundancy identification, ε = 0.001 mm; (c) redundancy elimination, ε = 0.001 mm; (d) bi-Akima compression, ε = 0.01 mm; (e) redundancy identification, ε = 0.01 mm; (f) redundancy elimination, ε = 0.01 mm; (g) bi-Akima compression, ε = 0.1 mm; (h) redundancy identification, ε = 0.1 mm; (i) redundancy elimination, ε = 0.1 mm; (j) bi-Akima compression, ε = 1 mm; (k) redundancy identification, ε = 1 mm; (l) redundancy elimination, ε = 1 mm.

In order to verify the accuracy of the proposed algorithm, Figure 14 analyzes the spatial distribution of deviation between each initial sampled point and the interpolated surface obtained from the final compressed point cloud data under different required accuracies.
As can be seen, all the deviations are within the allowable range of required accuracy, which proves that the proposed method can tightly control the deviation within the error tolerance range (i.e., the deviation between each initial sampled point and the interpolation curve is less than or equal to the required accuracy). In addition, deviations are far less than the required accuracy in most of the measured area.

Figure 14. Spatial distributions of deviation under different required accuracies ε: (a) ε = 0.001 mm; (b) ε = 0.01 mm; (c) ε = 0.1 mm; (d) ε = 1 mm.

4. Discussion

The experimental results in Section 3 indicate that the proposed on-line point cloud data compression algorithm for free-form surface scanning measurement has the following features:

• It can further compress point cloud data and obtain a higher data compression ratio than the existing methods under the same required accuracy.
Its compression performance is obviously superior to the bi-Akima and chordal methods;
• It is capable of tightly controlling the deviation within the error tolerance range, and deviations in most of the measured area are far less than the required accuracy;
• Test A preliminarily verifies the application feasibility of the proposed method in an industrial environment. Test B demonstrates that the method is equally effective for complex surfaces with a large number of details, edges and sharp features, and it has stable performance;
• The proposed method has the potential to be applied in industrial environments to replace traditional on-line point cloud data compression methods (bi-Akima and chordal methods). Its potential applications may be in the real-time measurement processes of scanning devices such as contact scanning probes, laser triangle displacement sensors, mobile laser scanners, linear structured light systems, industrial CT systems, etc.
The application feasibility of this method needs to be further confirmed in subsequent case studies.

However, the proposed method is not perfect and still has the following limitations. In future work, the following aspects need to be further developed:

• This method can only handle 3D point cloud data streams and is not suitable for processing point cloud data containing additional high-dimensional information (e.g., 3D point cloud data with grayscale or color information). We will try to solve this problem in our future research work;
• This method can only compress point cloud data streams that are scanned layer by layer. If the 3D point cloud is randomly sampled and there are no regular scan lines (e.g., 3D measurement with speckle-structured light), our method cannot perform effective data compression. It is a huge challenge to solve this problem.

5. Conclusions

In an attempt to effectively compress dense point cloud data obtained from a 3D free-form surface during the real-time scanning measuring process, this paper presents a novel on-line point cloud data compression algorithm which has the ability to identify and eliminate data redundancy caused by geometric feature similarity between adjacent scanning layers. At first, the new algorithm adopts the bi-Akima method to compress the initial point cloud data obtained by 3D scanning measuring devices. Next, the data redundancy in the compressed point cloud obtained in the previous stage is further identified and eliminated, and then we can obtain the final compressed point cloud data. Finally, the proposed on-line point cloud data compression algorithm was tested in the real-time scanning measuring process and compared with existing methods (the chordal method and bi-Akima method).
The experimental results have preliminarily verified the application feasibility of our proposed method in an industrial environment, and shown that it is capable of obtaining high-quality compressed point cloud data with a higher compression ratio than other existing methods. In particular, it can tightly control the deviation within the error tolerance range, which demonstrates the superior performance of the proposed algorithm. This algorithm could be used in the data acquisition process of 3D free-form surface scanning measurement to replace other existing on-line point cloud data compression/reduction methods.

Author Contributions: All work in relation to this paper has been accomplished by the efforts of all authors. Conceptualization, Y.L. and Y.T.; methodology, Z.H.; software, Y.T.; validation, Y.M. and Z.H.; formal analysis, Y.T.; investigation, Y.M.; resources, Y.M.; data curation, Y.M.; writing—original draft preparation, Y.T.; writing—review and editing, Y.T.; visualization, Z.H.; supervision, Y.L.; project administration, Y.L.; funding acquisition, Y.L.

Funding: This research was funded by the National Natural Science Foundation of China (Grant Nos. 51505310, 51435011), the Key Research and Development Program of Sichuan Province of China (Grant No. 2018GZ0282) and the Key Laboratory for Precision and Non-traditional Machining of Ministry of Education, Dalian University of Technology (Grant Nos. JMTZ201802, B201802).

Conflicts of Interest: The authors declare no conflict of interest.

References
1. Galetto, M.; Vezzetti, E. Reverse engineering of free-form surfaces: A methodology for threshold definition in selective sampling. J. Mach. Tools Manuf. 2006, 46, 1079–1086. [CrossRef]
2. Han, Z.H.; Wang, Y.M.; Ma, X.H.; Liu, S.G.; Zhang, X.D.; Zhang, G.X. T-spline based unifying registration procedure for free-form surface workpieces in intelligent CMM. Appl. Sci. 2017, 7, 1092. [CrossRef]
3. Ngo, T.D.; Kashani, A.; Imbalzano, G.; Nguyen, K.T.Q.; Hui, D.
Additive manufacturing (3D printing): A review of materials, methods, applications and challenges. Compos. Pt. B Eng. 2018, 143, 172–196. [CrossRef]
4. Liu, J.; Bai, D.; Chen, L. 3-D point cloud registration algorithm based on greedy projection triangulation. Appl. Sci. 2018, 8, 1776. [CrossRef]
5. Chen, L.; Jiang, Z.D.; Li, B.; Ding, J.J.; Zhang, F. Data reduction based on bi-directional point cloud slicing for reverse engineering. Key Eng. Mater. 2010, 437, 492–496. [CrossRef]
6. Budak, I.; Hodolic, J.; Sokovic, M. Development of a programme system for data-point pre-processing in Reverse Engineering. J. Mater. Process. Technol. 2005, 162, 730–735. [CrossRef]
7. Yan, R.J.; Wu, J.; Lee, J.Y.; Khan, A.M.; Han, C.S.; Kayacan, E.; Chen, I.M. A novel method for 3D reconstruction: Division and merging of overlapping B-spline surfaces. Comput. Aided Des. 2016, 81, 14–23. [CrossRef]
8. Pal, P.; Ballav, R. Object shape reconstruction through NURBS surface interpolation. Int. J. Prod. Res. 2007, 45, 287–307. [CrossRef]
9. Calì, M.; Ambu, R. Advanced 3D Photogrammetric Surface Reconstruction of Extensive Objects by UAV Camera Image Acquisition. Sensors 2018, 18, 2815. [CrossRef]
10. Zanetti, E.; Aldieri, A.; Terzini, M.; Calì, M.; Franceschini, G.; Bignardi, C. Additively manufactured custom load-bearing implantable devices. Australas. Med. J. 2017, 10. [CrossRef]
11. Cavas-Martinez, F.; Fernandez-Pacheco, D.G.; Canavate, F.J.F.; Velazquez-Blazquez, J.S.; Bolarin, J.M.; Alio, J.L. Study of Morpho-Geometric Variables to Improve the Diagnosis in Keratoconus with Mild Visual Limitation. Symmetry 2018, 10, 306. [CrossRef]
12. Manavella, V.; Romano, F.; Garrone, F.; Terzini, M.; Bignardi, C.; Aimetti, M. A novel image processing technique for 3D volumetric analysis of severely resorbed alveolar sockets with CBCT. Minerva Stomatol. 2017, 66, 81–89. [CrossRef] [PubMed]
13.
Aldieri, A.; Terzini, M.; Osella, G.; Priola, A.M.; Angeli, A.; Veltri, A.; Audenino, A.L.; Bignardi, C. Osteoporotic Hip Fracture Prediction: Is T-Score-Based Criterion Enough? A Hip Structural Analysis-Based Model. J. Biomech. Eng. Trans. ASME 2018, 140, 111004. [CrossRef] [PubMed]
14. Jia, Z.Y.; Lu, X.H.; Yang, J.Y. Self-learning fuzzy control of scan tracking measurement in copying manufacture. Trans. Inst. Meas. Control 2010, 32, 307–318. [CrossRef]
15. Wang, Y.Q.; Tao, Y.; Nie, B.; Liu, H.B. Optimal design of motion control for scan tracking measurement: A CMAC approach. Measurement 2013, 46, 384–392. [CrossRef]
16. Li, W.L.; Zhou, L.P.; Yan, S.J. A case study of blade inspection based on optical scanning method. Int. J. Prod. Res. 2015, 53, 2165–2178. [CrossRef]
17. Khameneifar, F.; Feng, H.Y. Extracting sectional contours from scanned point clouds via adaptive surface projection. Int. J. Prod. Res. 2017, 55, 4466–4480. [CrossRef]
18. Budak, I.; Sokovic, M.; Barisic, B. Accuracy improvement of point data reduction with sampling-based methods by Fuzzy logic-based decision-making. Measurement 2011, 44, 1188–1200. [CrossRef]
19. Shi, B.Q.; Liang, J.; Liu, Q. Adaptive simplification of point cloud using k-means clustering. Comput. Aided Des. 2011, 43, 910–922. [CrossRef]
20. Feng, C.; Taguchi, Y. FasTFit: A fast T-spline fitting algorithm. Comput. Aided Des. 2017, 92, 11–21. [CrossRef]
21. Meng, X.L.; He, W.T.; Liu, J.Y. An investigation of the high efficiency estimation approach of the large-scale scattered point cloud normal vector. Appl. Sci. 2018, 8, 454. [CrossRef]
22. Song, H.; Feng, H.Y. A progressive point cloud simplification algorithm with preserved sharp edge data. Int. J. Adv. Manuf. Technol. 2009, 45, 583–592. [CrossRef]
23. Chen, L.C.; Hoang, D.C.; Lin, H.I.; Nguyen, T.H. Innovative methodology for multi-view point cloud registration in robotic 3D object scanning and reconstruction. Appl. Sci. 2016, 6, 132. [CrossRef]
24.
Macher, H.; Landes, T.; Grussenmeyer, P. From point clouds to building information models: 3D semi-automatic reconstruction of indoors of existing buildings. Appl. Sci. 2017, 7, 1030. [CrossRef]
25. Han, H.; Han, X.; Sun, F.; Huang, C. Point cloud simplification with preserved edge based on normal vector. Optik 2015, 126, 2157–2162. [CrossRef]
26. Wang, Y.Q.; Tao, Y.; Zhang, H.J.; Sun, S.S. A simple point cloud data reduction method based on Akima spline interpolation for digital copying manufacture. Int. J. Adv. Manuf. Technol. 2013, 69, 2149–2159. [CrossRef]
27. Arpaia, P.; Buzio, M.; Inglese, V. A two-domain real-time algorithm for optimal data reduction: A case study on accelerator magnet measurements. Meas. Sci. Technol. 2010, 21. [CrossRef]
28. Wang, D.; He, C.; Li, X.; Peng, J. Progressive point set surface compression based on planar reflective symmetry analysis. Comput. Aided Des. 2015, 58, 34–42. [CrossRef]
29. Lee, K.H.; Woo, H.; Suk, T. Data reduction methods for reverse engineering. Int. J. Adv. Manuf. Technol. 2001, 17, 735–743. [CrossRef]
30. Ma, X.; Cripps, R.J. Shape preserving data reduction for 3D surface points. Comput. Aided Des. 2011, 43, 902–909. [CrossRef]
31. Smith, J.; Petrova, G.; Schaefer, S. Progressive encoding and compression of surfaces generated from point cloud data. Comput. Graph. 2012, 36, 341–348. [CrossRef]
32. Morell, V.; Orts, S.; Cazorla, M.; Garcia-Rodriguez, J. Geometric 3D point cloud compression. Pattern Recognit. Lett. 2014, 50, 55–62. [CrossRef]
33. Lu, J.C.; Yang, J.K.; Mu, L.C. Automatic tracing measurement and close data collection system of the free-form surfaces. J. Dalian Univ. Technol. 1986, 24, 55–59. (In Chinese)
34. ElKott, D.F.; Veldhuis, S.C. Isoparametric line sampling for the inspection planning of sculptured surfaces. Comput. Aided Des. 2005, 37, 189–200. [CrossRef]
35. Wozniak, A.; Balazinski, A.; Mayer, R.
Application of fuzzy knowledge base for corrected measured point determination in coordinate metrology. In Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society, San Diego, CA, USA, 24–27 June 2007.
36. Jia, Z.Y.; Lu, X.H.; Wang, W.; Yang, J.Y. Data sampling and processing for contact free-form surface scan-tracking measurement. Int. J. Adv. Manuf. Technol. 2010, 46, 237–251. [CrossRef]
37. Tao, Y.; Li, Y.; Wang, Y.Q.; Ma, Y.Y. On-line point cloud data extraction algorithm for spatial scanning measurement of irregular surface in copying manufacture. Int. J. Adv. Manuf. Technol. 2016, 87, 1891–1905. [CrossRef]
38. Li, R.J.; Fan, K.C.; Huang, Q.X.; Zhou, H.; Gong, E.M. A long-stroke 3D contact scanning probe for micro/nano coordinate measuring machine. Precis. Eng. 2016, 43, 220–229. [CrossRef]
39. Wang, Y.Q.; Liu, H.B.; Tao, Y.; Jia, Z.Y. Influence of incident angle on distance detection accuracy of point laser probe with charge-coupled device: Prediction and calibration. Opt. Eng. 2012, 51, 083606. [CrossRef]
40. Valkenburg, R.J.; McIvor, A.M. Accurate 3D measurement using a structured light system. Image Vis. Comput. 1998, 16, 99–110. [CrossRef]
41. Carmignato, S. Accuracy of industrial computed tomography measurements: Experimental results from an international comparison. CIRP Ann. Manuf. Technol. 2012, 61, 491–494. [CrossRef]
42. Lamberty, A.; Schimmel, H.; Pauwels, J. The study of the stability of reference materials by isochronous measurements. Anal. Bioanal. Chem. 1998, 360, 359–361. [CrossRef]
43. Li, W.D.; Zhou, H.X.; Hong, W. A Hermite inter/extrapolation scheme for MoM matrices over a frequency band. IEEE Antennas Wirel. Propag. Lett. 2009, 8, 782–785. [CrossRef]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
For this reason, compressing the measured point data while maintaining the required accuracy is a crucial task during the scanning measuring process [25]. Herein, the required accuracy is a threshold of distance, which is preset to a constant positive value before the beginning of scanning measurement.
The accuracy of a certain data compression algorithm is characterized by the distance from each sampled point in the initial dense point cloud data to the surface generated by the compressed point cloud. Describing a measured surface with the least point data while guaranteeing a certain data compression accuracy is always an expectation [26,27]. Therefore, a high-quality point cloud data compression algorithm for 3D free-form surface scanning measurement is being pursued constantly [28]. Experts and scholars around the world have been paying more and more attention to this issue, and a number of point cloud data compression/reduction algorithms for free-form/irregular surface scanning measurement have been developed. Lee et al. [29] proposed an algorithm for processing point cloud data obtained by laser scanning devices. This algorithm adopts a one-directional (1D) or bi-directional (2D) non-uniform grid to reduce the amount of point cloud data. Chen et al. [5] presented a data compression method based on a bi-directional point cloud slicing strategy for reverse engineering. This method can preserve local details (geometric features in both two parametric directions) when performing data compression. Ma and Cripps [30] proposed a new data compression algorithm for surface points to preserve the original surface points. The error metric is defined as the relative Hausdorff distance between two principal curvature vector sets for surface shape comparison. After comparison, the difference between the compressed data points and original data points can be obtained. Therefore, redundant points are repeatedly removed until the difference induced exceeds the specified tolerance. Smith, Petrova, and Schaefer [31] presented a progressive encoding and compression method for surfaces generated from point cloud data. At first, an octree is built whose nodes contain planes that are constructed as the least square fit of the data within that node. 
Then, this octree is pruned to remove redundant data while avoiding topological changes created by merging disjointed linear pieces. Morell et al. [32] presented a geometric 3D point cloud lossy compression system based on plane extraction, which represents the points of each scene plane as a Delaunay triangulation and a set of points/area information. This compression system can be customized to achieve different data compression or accuracy ratios. The above methods have focused on optimizing data compression quality based on building and processing polyhedral models or numerical iterative calculations. Nevertheless, they are all off-line data compression algorithms and can only compress the point cloud data of a whole measured surface after data acquisition. In other words, they cannot perform online data compression during real-time measurement. Data acquisition and data compression processes are completely separate. A large amount of redundant point cloud data occupies a great deal of storage space in scanning measuring devices. Moreover, the transmission and processing of point cloud data still takes up a significant amount of time and hardware resources. This problem has attracted the attention of many scholars and engineers, and they have proposed quite a number of on-line point cloud data compression/reduction methods. Lu et al. [33] adopted the chordal method to compress point cloud data, and realized the on-line data compression of point cloud data during real-time scanning measurement for the first time. ElKott and Veldhuis [34] presented an automatic surface sampling approach based on scanning isoparametric lines. The sampling locations are confirmed by the deviations between the alternative geometry and sampled model, and the location of each sampling line is confirmed by the curvature of the sampled surface model. 
Wozniak, Balazinski, and Mayer [35] presented a point cloud data compression method based on fuzzy logic and the geometric solution of an arc at each measured point. This is an on-line data compression method and can be used in the surface scanning measuring process of coordinate measuring machines (CMMs). Jia et al. [36] proposed an on-line data compression method based on the equal-error chordal method and isochronous sampling. In order to solve the problem of massive data storage, dual-buffer and dual-thread dynamic storage is adopted. Tao et al. [37] found that the essence of all the above on-line point cloud data compression methods is the chordal method, which specifies that all discrete dense point sets are connected by straight segments. Therefore, the surface reconstructed by the compressed point cloud will be full of cusp points, and so we cannot obtain a smooth interpolated surface. In view of this limitation, they presented an on-line point cloud data extraction algorithm using bi-Akima spline interpolation. Although the above methods implement on-line point cloud data compression, they can only eliminate data redundancy of the current scanning line.
Nevertheless, most surface 3D scanning measuring devices adopt a layer-by-layer scanning path (e.g., contact scanning probes [38], laser triangle displacement sensors [39], linear structured light systems [40], industrial computed tomography (CT) systems [41], etc.), and adjacent scanning lines are extremely similar in shape. The geometric feature similarity between such scanning layers is bound to result in data redundancy, which makes it possible to further compress the point cloud data during the scanning measuring process. Therefore, this study focuses on identifying and eliminating this kind of data redundancy caused by geometric feature similarity between adjacent scanning layers. After that, the massive amount of point cloud data can be further compressed during the 3D free-form surface measuring process.

The contents of this paper consist of four sections. In Section 2, the innovative methodology of the on-line point cloud data compression algorithm for 3D free-form surface scanning measurement is described in detail.
In Section 3, the proposed algorithm was tested in the real-time scanning measuring process and compared with existing methods. Finally, some conclusions are drawn from this paper in Section 4.

2. Innovative Methodology

As shown in Figure 1, the overall process of on-line point cloud data compression in this work consists of four steps. In Step 1, the initial point cloud flow is obtained by 3D scanning measuring devices using an isochronous [42] or equidistant sampling method, and the layer-by-layer scanning path is adopted (Figure 2). In Step 2, the initial point cloud data flow is immediately compressed by the chordal method [36] or bi-Akima method [37], both of which compress the amount of point cloud data based on the data redundancy in the current single scanning layer. In Step 3, the data redundancy in the compressed point cloud which is obtained in the previous step is further identified.
In Step 4, the identified redundant point data is eliminated, and then we can obtain the final compressed point cloud. At last, the final compressed data flow is transmitted to the storage space of the measurement system.

Figure 1. The overall process of on-line data compression for 3D free-form surface scanning measurement.

Herein, the real-time performance of the proposed data compression algorithm needs to be further analyzed and described. The path planning is performed before the start of the scanning measurement in Step 1. As shown in Figure 2, a layer-by-layer scanning path is adopted. The distance between the adjacent scanning layers is determined by the preset measuring accuracy. The measured surface is cut by the scanning layers to form a number of corresponding scanning lines. As shown in Figure 2, there are two planning modes for scanning directions: (i) the progressive scanning mode, and (ii) the S-type scanning mode.
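The four-step flow above can be sketched as a simple streaming loop in which each layer is compressed as soon as it arrives. This is an illustrative sketch only; all names (`online_compress`, `compress_layer`, `remove_interlayer_redundancy`) are hypothetical stand-ins, not code from the paper.

```python
from typing import Callable, Iterator, List, Tuple

Point = Tuple[float, float, float]  # a sampled (x, y, z) point

def online_compress(
    scan_layers: Iterator[List[Point]],
    compress_layer: Callable[[List[Point]], List[Point]],
    remove_interlayer_redundancy: Callable[[List[Point], List[List[Point]]], List[Point]],
) -> List[List[Point]]:
    """Quasi-real-time pipeline: each scanning layer is processed as soon
    as it is fully received, instead of waiting for the whole surface."""
    stored: List[List[Point]] = []            # final compressed cloud (Step 4 output)
    for layer in scan_layers:                 # Step 1: layer-by-layer acquisition
        compressed = compress_layer(layer)    # Step 2: in-layer compression (e.g., bi-Akima)
        # Steps 3-4: identify and drop points made redundant by previous layers
        thinned = remove_interlayer_redundancy(compressed, stored)
        stored.append(thinned)
    return stored
```

Passing identity functions for the two compressors shows the loop simply forwards each layer, which makes the data flow between the four steps explicit.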
Regardless of the scanning mode, the measuring device in Step 1 will continuously transmit the initial point cloud data flow to the data compressor in Step 2. The compressor performs data compression immediately after receiving all initial point data of a single scanning layer, rather than waiting for the entire surface to be scanned before performing data compression. That is, each time the point cloud data in the current scanning layer is completely transmitted to the compressor, the subsequent data compression algorithm is executed immediately. Therefore, the proposed data compression algorithm is essentially a quasi-real-time method, which we call an on-line data compression method.

Figure 2. Layer-by-layer scanning path for 3D free-form surface scanning measurement.

The flow chart of this algorithm is illustrated in Figure 3, and its principle is described in detail as follows:
The flow chart of point cloud data redundancy identification and elimination algorithm. Figure 3. The flow chart of point cloud data redundancy identification and elimination algorithm. 2.1. Data Redundancy Identification In order to identify redundant data points in the compressed point cloud data flow from Step 2, it is first necessary to predict the current scan line in the unmeasured area. Herein, the prediction is realized by Hermite extrapolation [43], and a predicted curve is created. The data redundancy identification algorithm is detailed as follows: Figure 4 shows the schematic diagram of the data redundancy identification algorithm, in which line is the current scanning line during the on-line measuring process, and P represents ij , the j th point in scanning line i . If , a shape-preserving piecewise bicubic Hermite curve j ≥ 2 can be built to predict the shape and direction of the current scanning line; here, we name this the predicted curve, as shown in Figure 4. After that, suppose k is a positive integer and let 1≤<kj and the coordinates of point P be (, xy ,z ) ; then, a series of specific Hermite interpolation ik , kk k polynomials can be determined by Appl. Sci. 2018, 8, 2556 5 of 18 2.1. Data Redundancy Identification In order to identify redundant data points in the compressed point cloud data flow from Step 2, it is first necessary to predict the current scan line in the unmeasured area. Herein, the prediction is realized by Hermite extrapolation [43], and a predicted curve is created. The data redundancy identification algorithm is detailed as follows: Appl. Sci. 2018, 8, x FOR PEER REVIEW 6 of 20 Figure 4 shows the schematic diagram of the data redundancy identification algorithm, in which line i is the current scanning line during the on-line measuring process, and P represents the jth point i,j Hx ()=+ yαα ()x y ()x+y'β (x)+y' β ()x ykk k++ 11k k k k+1 k+1 in scanning line i. 
If j ≥ 2, a shape-preserving piecewise bicubic Hermite curve can be built to predict the shape and direction of the current scanning line; here, we name this the predicted curve, as shown in Figure 4. Suppose k is a positive integer, let 1 ≤ k < j, and let the coordinates of point P_{i,k} be (x_k, y_k, z_k); then, a series of specific Hermite interpolation polynomials can be determined by

\[
\begin{cases}
H_y(x) = y_k\,a_k(x) + y_{k+1}\,a_{k+1}(x) + y'_k\,b_k(x) + y'_{k+1}\,b_{k+1}(x) \\
H_z(x) = z_k\,a_k(x) + z_{k+1}\,a_{k+1}(x) + z'_k\,b_k(x) + z'_{k+1}\,b_{k+1}(x)
\end{cases}
\tag{1}
\]

where

\[
\begin{cases}
a_k(x) = \left(1 + 2\,\frac{x - x_k}{x_{k+1} - x_k}\right)\left(\frac{x - x_{k+1}}{x_k - x_{k+1}}\right)^2 \\[1ex]
a_{k+1}(x) = \left(1 + 2\,\frac{x - x_{k+1}}{x_k - x_{k+1}}\right)\left(\frac{x - x_k}{x_{k+1} - x_k}\right)^2 \\[1ex]
b_k(x) = (x - x_k)\left(\frac{x - x_{k+1}}{x_k - x_{k+1}}\right)^2 \\[1ex]
b_{k+1}(x) = (x - x_{k+1})\left(\frac{x - x_k}{x_{k+1} - x_k}\right)^2
\end{cases}
\tag{2}
\]

and the first derivatives y'_k, y'_{k+1}, z'_k, z'_{k+1} can be estimated by the following formulas.

Figure 4. The schematic diagram of data redundancy identification.

When 1 < k < j:

\[
y'_k = f'_y(x_k) =
\begin{cases}
0, & \text{if } \frac{y_{k+1} - y_k}{x_{k+1} - x_k}\cdot\frac{y_k - y_{k-1}}{x_k - x_{k-1}} < 0 \\[1.5ex]
\frac{1}{2}\left(\frac{y_{k+1} - y_k}{x_{k+1} - x_k} + \frac{y_k - y_{k-1}}{x_k - x_{k-1}}\right), & \text{if } \frac{y_{k+1} - y_k}{x_{k+1} - x_k}\cdot\frac{y_k - y_{k-1}}{x_k - x_{k-1}} \ge 0
\end{cases}
\tag{3}
\]

\[
z'_k = f'_z(x_k) =
\begin{cases}
0, & \text{if } \frac{z_{k+1} - z_k}{x_{k+1} - x_k}\cdot\frac{z_k - z_{k-1}}{x_k - x_{k-1}} < 0 \\[1.5ex]
\frac{1}{2}\left(\frac{z_{k+1} - z_k}{x_{k+1} - x_k} + \frac{z_k - z_{k-1}}{x_k - x_{k-1}}\right), & \text{if } \frac{z_{k+1} - z_k}{x_{k+1} - x_k}\cdot\frac{z_k - z_{k-1}}{x_k - x_{k-1}} \ge 0
\end{cases}
\tag{4}
\]

When k = 1:

\[
y'_1 =
\begin{cases}
0, & \text{if } d_y\cdot\frac{y_2 - y_1}{x_2 - x_1} < 0 \\[1.5ex]
3\,\frac{y_2 - y_1}{x_2 - x_1}, & \text{if } |d_y| > 3\left|\frac{y_2 - y_1}{x_2 - x_1}\right| \text{ and } \frac{(y_2 - y_1)(y_3 - y_2)}{(x_2 - x_1)(x_3 - x_2)} < 0 \\[1.5ex]
d_y, & \text{otherwise}
\end{cases}
\tag{5}
\]

\[
z'_1 =
\begin{cases}
0, & \text{if } d_z\cdot\frac{z_2 - z_1}{x_2 - x_1} < 0 \\[1.5ex]
3\,\frac{z_2 - z_1}{x_2 - x_1}, & \text{if } |d_z| > 3\left|\frac{z_2 - z_1}{x_2 - x_1}\right| \text{ and } \frac{(z_2 - z_1)(z_3 - z_2)}{(x_2 - x_1)(x_3 - x_2)} < 0 \\[1.5ex]
d_z, & \text{otherwise}
\end{cases}
\tag{6}
\]

in which

\[
\begin{cases}
d_y = \frac{(x_3 + x_2 - 2x_1)(y_2 - y_1)}{(x_2 - x_1)(x_3 - x_1)} - \frac{(x_2 - x_1)(y_3 - y_2)}{(x_3 - x_2)(x_3 - x_1)} \\[1.5ex]
d_z = \frac{(x_3 + x_2 - 2x_1)(z_2 - z_1)}{(x_2 - x_1)(x_3 - x_1)} - \frac{(x_2 - x_1)(z_3 - z_2)}{(x_3 - x_2)(x_3 - x_1)}
\end{cases}
\tag{7}
\]

When k = j:

\[
y'_j =
\begin{cases}
0, & \text{if } e_y\cdot\frac{y_j - y_{j-1}}{x_j - x_{j-1}} < 0 \\[1.5ex]
3\,\frac{y_j - y_{j-1}}{x_j - x_{j-1}}, & \text{if } |e_y| > 3\left|\frac{y_j - y_{j-1}}{x_j - x_{j-1}}\right| \text{ and } \frac{(y_j - y_{j-1})(y_{j-1} - y_{j-2})}{(x_j - x_{j-1})(x_{j-1} - x_{j-2})} < 0 \\[1.5ex]
e_y, & \text{otherwise}
\end{cases}
\tag{8}
\]

\[
z'_j =
\begin{cases}
0, & \text{if } e_z\cdot\frac{z_j - z_{j-1}}{x_j - x_{j-1}} < 0 \\[1.5ex]
3\,\frac{z_j - z_{j-1}}{x_j - x_{j-1}}, & \text{if } |e_z| > 3\left|\frac{z_j - z_{j-1}}{x_j - x_{j-1}}\right| \text{ and } \frac{(z_j - z_{j-1})(z_{j-1} - z_{j-2})}{(x_j - x_{j-1})(x_{j-1} - x_{j-2})} < 0 \\[1.5ex]
e_z, & \text{otherwise}
\end{cases}
\tag{9}
\]

in which

\[
\begin{cases}
e_y = \frac{(2x_j - x_{j-1} - x_{j-2})(y_j - y_{j-1})}{(x_j - x_{j-1})(x_j - x_{j-2})} - \frac{(x_j - x_{j-1})(y_{j-1} - y_{j-2})}{(x_{j-1} - x_{j-2})(x_j - x_{j-2})} \\[1.5ex]
e_z = \frac{(2x_j - x_{j-1} - x_{j-2})(z_j - z_{j-1})}{(x_j - x_{j-1})(x_j - x_{j-2})} - \frac{(x_j - x_{j-1})(z_{j-1} - z_{j-2})}{(x_{j-1} - x_{j-2})(x_j - x_{j-2})}
\end{cases}
\tag{10}
\]

Herein, based on the compressed point cloud data flow from Step 2, the shape-preserving piecewise bicubic Hermite polynomials can be created according to the above algorithm. Then, Hermite extrapolation is performed to create a predicted curve, which is marked in blue in Figure 4, and its analytical formula can be described as follows:

\[
\begin{cases}
H_y(x) = y_{j-1}\,a_{j-1}(x) + y_j\,a_j(x) + y'_{j-1}\,b_{j-1}(x) + y'_j\,b_j(x) \\
H_z(x) = z_{j-1}\,a_{j-1}(x) + z_j\,a_j(x) + z'_{j-1}\,b_{j-1}(x) + z'_j\,b_j(x)
\end{cases}
\tag{11}
\]

After that, an estimated point P_est is created to move along the predicted curve with a stepping distance of l. P_{i,j} is the starting point of P_est. Meanwhile, a bounding sphere is built with point P_est as the center. The radius of the sphere is

\[
R_{sph} = k\,h_{ls},
\tag{12}
\]

in which k ∈ [1, 2] is the radius adjustment coefficient, and h_ls is the distance between two parallel scanning layers. As shown in Figure 4, the predicted curve with the estimated point P_est is used to search for the neighbor point P_nb in the previous scanning line i − 1. The necessary and sufficient condition for a point P_nb to be the neighbor point of P_est is ‖P_est P_nb‖ ≤ R_sph, which means that P_nb is inside the bounding sphere with point P_est as its center. At the very beginning, P_est coincides with P_{i,j}. At this point, there are two possibilities: (i) P_{i−1,u} is inside the bounding sphere (i.e., ‖P_{i,j} P_{i−1,u}‖ ≤ R_sph), or (ii) P_{i−1,u} is outside the bounding sphere. In case (i), P_{i−1,u} is the first found neighbor point. As P_est moves along the scanning direction with a stepping distance of l, if ‖P_est P_{i−1,u}‖ < ‖P_{i,j} P_{i−1,u}‖, then P_{i−1,u} is the neighbor point of P_est; otherwise, discard point P_{i−1,u}, as it is not the neighbor point of P_est but of P_{i,j}. In case (ii), there is no operation because no neighbor point has been found. After case (i) or case (ii) is completed, point P_est continues to move forward along the scanning direction until the neighbor point P_nb of P_est is found; if the neighbor point cannot be found, the search is stopped.

If the neighbor point P_nb is found in line i − 1 (e.g., P_{i−1,u+1} in Figure 4), then a new bounding sphere is built with P_{i−1,u+1} as the center and the radius R_sph. After that, we use this new bounding sphere to search for the neighbor point of P_{i−1,u+1} in line i − 2; if the neighbor point cannot be found, we stop searching.
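As an illustration, the shape-preserving slope rules above (a zero tangent where adjacent chord slopes change sign, Equations (3) and (4); a clamped non-centered three-point estimate at the endpoints, Equations (5)–(10)) can be sketched in Python for one coordinate. This is our own minimal re-implementation under these assumptions; the function names and sample data are not from the paper.

```python
# Illustrative sketch of the shape-preserving slope rules (Equations (3)-(10)).
# Function names (interior_derivative, endpoint_derivative) are ours.

def interior_derivative(x0, x1, x2, y0, y1, y2):
    """Interior rule: zero at a local extremum, else the mean of adjacent slopes."""
    s_left = (y1 - y0) / (x1 - x0)
    s_right = (y2 - y1) / (x2 - x1)
    if s_left * s_right < 0:        # adjacent chord slopes disagree -> flat tangent
        return 0.0
    return 0.5 * (s_left + s_right)

def endpoint_derivative(x1, x2, x3, y1, y2, y3):
    """Endpoint rule: non-centered three-point estimate d, clamped for shape."""
    s1 = (y2 - y1) / (x2 - x1)      # first chord slope
    s2 = (y3 - y2) / (x3 - x2)      # second chord slope
    d = ((x3 + x2 - 2 * x1) * (y2 - y1) / ((x2 - x1) * (x3 - x1))
         - (x2 - x1) * (y3 - y2) / ((x3 - x2) * (x3 - x1)))
    if d * s1 < 0:                  # estimate opposes the first chord -> flat
        return 0.0
    if abs(d) > 3 * abs(s1) and s1 * s2 < 0:
        return 3 * s1               # clamp to avoid overshoot near a turn
    return d

# Samples taken from y = x**2 at x = 0, 1, 2:
xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]
print(interior_derivative(*xs, *ys))  # -> 2.0 (mean of chord slopes 1 and 3)
print(endpoint_derivative(*xs, *ys))  # -> 0.0 (exact for y = x**2 at x = 0)
```

Note how the endpoint estimate reproduces the true tangent of the parabola at x = 0, while the interior rule would return 0 for data with a sign change in the chord slopes, which is what keeps the predicted curve free of spurious oscillations.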
Next, we take the new neighbor point in line i − 2 (e.g., P_{i−2,v+1}) as a new center to build a bounding sphere and repeat the above process until we find three neighbor points in different scanning lines (e.g., P_{i−1,u+1}, P_{i−2,v+1}, P_{i−3,w+1} in Figure 4).

Based on the neighbor point set {P_{i−1,u+1}, P_{i−2,v+1}, P_{i−3,w+1}}, the coordinates of the estimated point P_est can be fixed uniquely. As shown in Figure 4, a bicubic Hermite curve is built, and it can be expressed as

\[
\begin{cases}
H_x(y) = x_{i-2}\,a_{i-2}(y) + x_{i-1}\,a_{i-1}(y) + x'_{i-2}\,b_{i-2}(y) + x'_{i-1}\,b_{i-1}(y) \\
H_z(y) = z_{i-2}\,a_{i-2}(y) + z_{i-1}\,a_{i-1}(y) + z'_{i-2}\,b_{i-2}(y) + z'_{i-1}\,b_{i-1}(y)
\end{cases}
\tag{13}
\]

in which y is an independent variable; a_{i−2}(y), a_{i−1}(y), b_{i−2}(y), b_{i−1}(y) are obtained by Equation (2); x'_{i−2}, z'_{i−2} are acquired by Equations (3) and (4); and x'_{i−1}, z'_{i−1} are obtained by Equations (8)–(10). Obviously, the bicubic Hermite curve must lie in the curved surface with the equation

\[
H_x(y) = x_{i-2}\,a_{i-2}(y) + x_{i-1}\,a_{i-1}(y) + x'_{i-2}\,b_{i-2}(y) + x'_{i-1}\,b_{i-1}(y),
\tag{14}
\]

and the predicted curve will pass through this curved surface. Therefore, the estimated point P_est can be fixed at the intersection of the predicted curve and the curved surface described by Equation (14). That is, the coordinates of the estimated point P_est(x_est, y_est, z_est) can be determined by Equations (11) and (14).

2.2. Data Redundancy Elimination

After the coordinates of the estimated point P_est are determined, we use P_est to replace P_{i,j+1} in scanning line i. Afterwards, the new point set that contains P_est is used for bi-Akima interpolation, and there is a deviation h_{i,k} between the interpolated curve and each initial sampled point Q_{i,k}, where i is the scanning line number and k is the serial number of the initial point cloud in line i. As mentioned earlier, the initial point cloud is obtained by 3D scanning measuring devices using the isochronous or equidistant sampling method in Step 1, as shown in Figure 1.
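The acceptance test behind this elimination step can be sketched as follows: compute each initial sampled point's minimum distance to the interpolated curve, take the maximum over the segment, and keep P_est only if that maximum stays within the required accuracy ε. In this toy sketch the point-to-curve minimum is approximated by densely sampling a parametric curve; the curve, the sample data and all names are our own illustrative choices, not the paper's implementation.

```python
import math

# Toy sketch of the redundancy-elimination acceptance test: approximate the
# minimum distance from each sampled point to the interpolated curve by
# densely sampling the curve parameter, take the maximum over all points,
# and accept the estimated point only if that maximum is within eps.
# Curve and sample data below are hypothetical.

def point_to_curve_deviation(q, curve, n=1000, t0=0.0, t1=1.0):
    """Approximate min distance from point q=(x,y,z) to curve(t)->(x,y,z)."""
    best = math.inf
    for i in range(n + 1):
        t = t0 + (t1 - t0) * i / n
        best = min(best, math.dist(q, curve(t)))
    return best

def accept_estimated_point(samples, curve, eps):
    """Keep P_est only if the max deviation over all samples is within eps."""
    d_max = max(point_to_curve_deviation(q, curve) for q in samples)
    return d_max <= eps

# A straight-line "interpolated curve" and samples lying almost on it:
line = lambda t: (t, 2.0 * t, 0.0)
samples = [(0.2, 0.4, 0.001), (0.5, 1.0, 0.0), (0.8, 1.6, 0.002)]
print(accept_estimated_point(samples, line, eps=0.01))    # True:  d_max ~ 0.002
print(accept_estimated_point(samples, line, eps=0.0001))  # False: 0.002 > eps
```

A production version would evaluate the deviation on the actual bi-Akima interpolant and solve the closest-point problem analytically or by local minimization rather than by brute-force sampling; the accept/reject logic is the same.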
The deviation h_{i,k} can be obtained by

\[
h_{i,k} = \min_{x \in (X_j,\,X_{j+1})} s = \min_{x \in (X_j,\,X_{j+1})} \sqrt{(x - x_k)^2 + (y - y_k)^2 + (z - z_k)^2},
\tag{15}
\]

where point Q_{i,k}(x_k, y_k, z_k) is an initial sampled point between P_{i,j}(X_j, Y_j, Z_j) and P_{i,j+1}(X_{j+1}, Y_{j+1}, Z_{j+1}), and P_curv(x, y, z) is the point on the interpolated curve that makes the distance s shortest. Then, the max deviation d_max of the whole curve (i.e., from P_{i,1} to P_est) can be calculated by the following formula:

\[
d_{max} = \mathrm{MAX}(h_{i,k}),
\tag{16}
\]

which is compared with the required accuracy ε. If d_max > ε, discard point P_est. If d_max < ε, delete the current compressed point P_{i,j+1}, which is input from Step 2. Next, create an estimative flag F_{i,j+1} = 1 to replace point P_{i,j+1}. This flag takes up only one bit of data storage space. After completing the above process, output the final compressed point cloud data flow, which contains the point coordinate and estimative flag information, to the data storage devices. Afterwards, make j = j + 1, build a new shape-preserving piecewise bicubic Hermite curve to predict the shape and direction of the current scanning line, and create a new estimated point P_est to loop through the above data redundancy identification and elimination process until P_{i,j} is the end point of the current scanning line i or the data sampling is over. In addition, when P_{i,j} is the end point of line i, make i = i + 1 and continue to loop the above algorithm until the measurement is completed.

3. Experimental Results

In order to verify the feasibility of the proposed methodology, several experiments were performed, as described in this section.

3.1. Test A

The on-line point cloud data compression algorithm was tested in an industrial real-time measuring process and compared with existing methods (the chordal method and the bi-Akima method). The measuring system consists of a contact 3D scanning probe, a vertical lathe and a commercial computer numerical control (CNC) system of SINUMERIK 840D (Munich, Bayern, Germany), as shown in Figure 5. The proposed algorithm is integrated in the original equipment manufacturer (OEM) application that runs on the host computer of the CNC system. The product model of the contact 3D scanning probe is DIGIT-02 (Dalian, Liaoning Province, China). More detailed technical characteristics of the measuring instrument are shown in Table 1.

Figure 5. The measuring system and measured 3D free-form surface: (a) vertical lathe; (b) computer numerical control (CNC) system; (c) scanning probe; (d) half-ellipsoidal measured part.

Table 1. Detailed technical characteristics of the measuring system.
Technical Characteristics                  Values
Scope of X axis                            2400 mm
Positioning accuracy of X axis             0.019 mm/1000 mm
Repeatability of X axis                    0.016 mm/1000 mm
Scope of Z axis                            1200 mm
Positioning accuracy of Z axis             0.010 mm/1000 mm
Repeatability of Z axis                    0.003 mm/1000 mm
Positioning accuracy of C axis             6.05″
Repeatability of C axis                    2.22″
Measuring range of scanning probe          ±1 mm
Accuracy of scanning probe                 ±8 μm
Repeatability of scanning probe            ±4 μm
Stylus length of probe                     100 mm/150 mm/200 mm
Contact force (with stylus of 200 mm)      1.6 N/mm
Weight of scanning probe                   1.8 kg

The measured part is a half-ellipsoidal surface which is welded together from seven pieces of thin-walled aluminum alloy sheet, as shown in Figure 5d, with a semi-major axis of 1450 mm and a semi-minor axis of 950 mm. A rotational progressive scanning mode is adopted, and the layer spacing is 7 mm. Figure 6 shows the spatial distribution of the initial point cloud data. The isochronous sampling method is adopted, and the number of initial sampling points is 272,638.
Figure 6. Spatial distribution of initial point cloud data.

Using the same initial point cloud data set as shown in Figure 6, the comparison of data compression performance is made between the proposed method, the chordal method and the bi-Akima method under different required accuracies (i.e., from 0.001 mm to 1 mm). Table 2 summarizes the results of the data compression performance, including the number of points and the data compression ratio, where the compression ratio is defined as the ratio between the uncompressed size and the compressed size:

\[
\text{Compression Ratio} = \frac{\text{Uncompressed Size}}{\text{Compressed Size}} = \frac{\text{Number of Initial Points}}{\text{Number of Compressed Points}}.
\tag{17}
\]

Obviously, the proposed method has a higher data compression ratio than the chordal method and the bi-Akima method, and the chordal method obtains the lowest data compression ratio under the same required accuracy.
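As a quick numeric check of Equation (17), the compression ratios can be recomputed from the reported Test A point counts (272,638 initial points and the compressed counts for the proposed method); the helper below is our own illustrative code, not part of the measuring software.

```python
# Compression ratio per Equation (17): initial points / compressed points.
# Point counts are the Test A values reported for the proposed method.
def compression_ratio(n_initial: int, n_compressed: int) -> float:
    return n_initial / n_compressed

N_INITIAL = 272_638  # initial sampling points in Test A

# required accuracy (mm) -> points kept by the proposed method
proposed = {0.001: 67_448, 0.01: 51_062, 0.1: 15_974}

for eps, n in proposed.items():
    ratio = compression_ratio(N_INITIAL, n)
    print(f"eps = {eps:>5} mm -> ratio = {ratio:.2f}")
```

For these accuracies the recomputed ratios (4.04, 5.34 and 17.07) agree with Table 2 to two decimal places.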
The number of data points obtained by the proposed method is about half of that obtained by the bi-Akima method under the same required accuracy.

Table 2. Compression performance under different required accuracies.

Required Accuracy (mm)   Number of Points                       Compression Ratio
                         Chordal    Bi-Akima    Proposed        Chordal   Bi-Akima   Proposed
0.001                    237,363    122,929     67,448          1.15      2.22       4.04
0.002                    189,824    120,952     67,121          1.44      2.25       4.06
0.005                    152,674    110,175     63,813          1.79      2.47       4.27
0.01                     136,027    93,588      51,062          2.00      2.91       5.34
0.02                     123,891    71,629      41,862          2.20      3.81       6.51
0.05                     103,205    44,072      28,837          2.64      6.19       9.45
0.1                      87,008     27,894      15,974          3.13      9.77       17.07
0.2                      61,124     12,191      7102            4.46      22.36      38.39
0.5                      28,473     5594        3140            9.58      48.74      86.83
1                        9029       3969        2217            30.20     68.69      122.99
Figure 7 provides the comparison of the compression ratios between the three methods under different required accuracies. With the decrease in accuracy requirements, the compression ratio increases for all methods; however, at all levels of required accuracy, our proposed compression method manifests a compression ratio superior to the other two methods. Obviously, the chordal method has the lowest data compression ratio. Therefore, we focus on comparing our proposed method with the bi-Akima method in the subsequent experiments.

Figure 7. Data compression ratios under different required accuracies.

To make the comparison more vivid and intuitive, Figure 8 visually illustrates the difference between the proposed method and the bi-Akima method by displaying the spatial distributions of the compressed point sets under different required accuracies. Subfigures a, d, g and j show the point cloud distribution compressed by the bi-Akima method, while subfigures b, e, h and k give the point cloud distribution after data redundancy identification by the proposed method, with the identified redundant points marked in red. In subfigures c, f, i and l, the identified redundant points are eliminated; these subfigures show the distributions of the final compressed point cloud data. By contrast, we can clearly observe the difference in point cloud density between these two methods under the same required accuracy. Take subfigures g–i, for example: when using the bi-Akima method, we can observe that there are many curves roughly along the welded region (Figure 8g), because the bi-Akima method can only deal with the point set in the current scanning line, and the data redundancy outside the current scanning line cannot be eliminated. With the involvement of our proposed method, redundant data points are identified and marked in red (Figure 8h), the data redundancy in the adjacent scanning layers is eliminated, and the final compressed point cloud data is obtained (Figure 8i).

To verify the accuracy of the proposed algorithm, Figure 9 analyzes the spatial distribution of the deviation between each initial sampled point and the interpolated surface obtained from the final compressed point cloud data under different required accuracies.
As can be seen, all the deviations are within the allowable range of required accuracy. Our method can tightly control the deviation within the error tolerance range (i.e., the deviation between each initial sampled point and the interpolation curve is less than or equal to the required accuracy). In addition, the deviations are far lower than the required accuracy in most of the measured area. In Figure 9d, there is an interesting and noteworthy phenomenon: the upper right sector has a higher deviation. As mentioned earlier, the measured part is a large thin-walled surface which is welded together from seven pieces of aluminum alloy sheet (Figure 5d). The aluminum alloy sheet has a thickness of only 0.8 mm, but its size is very large (the semi-major axis of the ellipse is 1450 mm). The part has undergone great deformation after welding, and there is a large and random deviation between each welded part and the original design size. According to past experience, the maximum deviation in a local section can even reach 3 mm. Consequently, we infer that the upper right sector has a higher deviation because of deformation in this area. In the case where the required accuracy is on the order of millimeters (e.g., required accuracy ε = 1 mm in Figure 9d), the compressed point cloud data is very sparse. Therefore, this phenomenon is formed in a region where the point cloud density is low and the local deformation is large. However, in any case, the proposed method can tightly control the deviation within the preset range.

Figure 8. Spatial distributions of compressed point cloud data under different required accuracies ε: (a) bi-Akima compression, ε = 0.001 mm; (b) redundancy identification, ε = 0.001 mm; (c) redundancy elimination, ε = 0.001 mm; (d) bi-Akima compression, ε = 0.01 mm; (e) redundancy identification, ε = 0.01 mm; (f) redundancy elimination, ε = 0.01 mm; (g) bi-Akima compression, ε = 0.1 mm; (h) redundancy identification, ε = 0.1 mm; (i) redundancy elimination, ε = 0.1 mm; (j) bi-Akima compression, ε = 1 mm; (k) redundancy identification, ε = 1 mm; (l) redundancy elimination, ε = 1 mm.

Figure 9. Spatial distributions of deviation under different required accuracies ε: (a) ε = 0.001 mm; (b) ε = 0.01 mm; (c) ε = 0.1 mm; (d) ε = 1 mm.

3.2. Test B

The overall structure of the model in Test A is relatively simple.
In order to further verify the The overall structure of the model in Test A is relatively simple. In order to further verify the universality and adaptability of the proposed method, we chose a more complex surface model with universality and adaptability of the proposed method, we chose a more complex surface model with a large number of details, edges and sharp features for experimentation. As shown in Figure 10, a large number of details, edges and sharp features for experimentation. As shown in Figure 10, the the tested model is a piece of jewelry, which is inlaid with 30 diamonds of different sizes. tested model Figure is 11a p shows iece o the f jeinitial welry, point which cloud is inl data aid wit acquisition h 30 diam result. ondsThe of d pr iff ogr erent essive sizes scanning . mode and equidistant sampling mode were adopted. Scanning lines are along the X-direction (horizontal direction). The distance between two adjacent scanning layers is 0.1 mm, and the distance between adjacent points is 0.05 mm in each scanning layer. The initial point number is 63,376. The comparison is made between the proposed method and bi-Akima method under different required accuracies (i.e., from 0.001 mm to 1 mm). Table 3 gives the results of data compression Figure 9. Spatial distributions of deviation under different required accuracies ε : (a) ε =0.001 mm ; performance, including the number of points and data compression ratio. Obviously, the proposed ε =0.01 mm ε =0.1 mm ε =1 mm (b) ; (c) ; (d) . method has a higher data compression ratio than the bi-Akima method. The number of points obtained by the proposed method is about half of that obtained by bi-Akima method under the same 3.2. Test B required accuracy. Figure 12 provides the comparison of the compression ratios between these two methods under The overall structure of the model in Test A is relatively simple. In order to further verify the different required accuracies. 
With the decrease in accuracy requirements, the compression ratio universality and adaptability of the proposed method, we chose a more complex surface model with Figure 10. The tested complex surface model: jewelry. increases for all methods; however, for all levels of required accuracy, our proposed compression a large number of details, edges and sharp features for experimentation. As shown in Figure 10, the method manifests a superior compression ratio than the bi-Akima method. tested model is a piece of jewelry, which is inlaid with 30 diamonds of different sizes. Figure 11 shows the initial point cloud data acquisition result. The progressive scanning mode and equidistant sampling mode were adopted. Scanning lines are along the X-direction (horizontal direction). The distance between two adjacent scanning layers is 0.1 mm, and the distance between adjacent points is 0.05 mm in each scanning layer. The initial point number is 63,376. Figure 10. The tested complex surface model: jewelry. Figure 10. The tested complex surface model: jewelry. Figure 11 shows the initial point cloud data acquisition result. The progressive scanning mode Figure 13 visually illustrates the difference between the proposed method and bi-Akima method and equidistant sampling mode were adopted. Scanning lines are along the X-direction (horizontal by displaying spatial distributions of the compressed point sets under different required accuracies. direction). The distance between two adjacent scanning layers is 0.1 mm, and the distance between Subfigures a, d, g and j show the point cloud distribution compressed by the bi-Akima method, adjacent points is 0.05 mm in each scanning layer. The initial point number is 63,376. Figure 11. Spatial distribution of initial point cloud data. Figure 11. Spatial distribution of initial point cloud data. Appl. Sci. 2018, 8, x FOR PEER REVIEW 14 of 20 ε =0.001 mm Figure 9. 
Spatial distributions of deviation under different required accuracies ε : (a) ; (b) ε =0.01 mm ; (c) ε =0.1 mm ; (d) ε =1 mm . 3.2. Test B The overall structure of the model in Test A is relatively simple. In order to further verify the universality and adaptability of the proposed method, we chose a more complex surface model with a large number of details, edges and sharp features for experimentation. As shown in Figure 10, the tested model is a piece of jewelry, which is inlaid with 30 diamonds of different sizes. Appl. Sci. 2018, 8, 2556 13 of 18 while subfigures b, e, h and k give the point cloud distribution after data redundancy identification by the proposed method, with the identified redundant points marked in red. In subfigures c, f, i and l, the identified redundant points are eliminated. These subfigures show the distributions of the final compressed point cloud data. By contrast, we can clearly observe the difference in point cloud density between these two methods under the same required accuracy. Take subfigures j, k and l, for example: when using the bi-Akima method, we can observe that there are many curves roughly along the Figure 10. The tested complex surface model: jewelry. vertical direction (Figure 13j). This is because the bi-Akima method can only deal with the point set in the current single scanning line, which is along the horizontal direction, and the data redundancy Figure 11 shows the initial point cloud data acquisition result. The progressive scanning mode outside the current scanning line cannot be eliminated. With the involvement of our proposed method, and equidistant sampling mode were adopted. Scanning lines are along the X-direction (horizontal redundant data points are identified and marked in red (Figure 13k), the data redundancy in adjacent direction). The distance between two adjacent scanning layers is 0.1 mm, and the distance between Appl. Sci. 
2018, 8, x FOR PEER REVIEW 15 of 20 scanning layers is eliminated and the final compressed point cloud data is obtained (Figure 13l). adjacent points is 0.05 mm in each scanning layer. The initial point number is 63,376. The comparison is made between the proposed method and bi-Akima method under different required accuracies (i.e., from 0.001 mm to 1 mm). Table 3 gives the results of data compression performance, including the number of points and data compression ratio. Obviously, the proposed method has a higher data compression ratio than the bi-Akima method. The number of points obtained by the proposed method is about half of that obtained by bi-Akima method under the same required accuracy. Table 3. Compression performance under different required accuracies. Required Number of Points Compression Ratio Accuracy Bi-Akima Proposed Bi-Akima Proposed (mm) Method Method Method Method 0.001 18,906 8516 3.35 7.44 0.002 16,857 7609 3.76 8.33 0.005 14,323 6563 4.42 9.66 Figure 11. Spatial distribution of initial point cloud data. Figure 11. Spatial distribution of initial point cloud data. 0.01 12,432 5743 5.10 11.04 Table 3. Compression performance under different required accuracies. 0.02 10,720 5007 5.91 12.66 0.05 8767 4232 7.23 14.98 Number of Points Compression Ratio Required Accuracy (mm) 0.1 7190 3535 8.81 17.93 Bi-Akima Method Proposed Method Bi-Akima Method Proposed Method 0.2 5892 2974 10.76 21.31 0.001 18,906 8516 3.35 7.44 0.002 16,857 7609 3.76 8.33 0.5 4625 2412 13.70 26.28 0.005 14,323 6563 4.42 9.66 1 4204 2213 15.08 28.64 0.01 12,432 5743 5.10 11.04 0.02 10,720 5007 5.91 12.66 0.05 8767 4232 7.23 14.98 Figure 12 provides the comparison of the compression ratios between these two methods under 0.1 7190 3535 8.81 17.93 different required accuracies. 
With the decrease in accuracy requirements, the compression ratio increases for both methods; however, at every level of required accuracy, the proposed compression method achieves a higher compression ratio than the bi-Akima method.

Figure 12. Data compression ratios under different required accuracies.

Figure 13 visually illustrates the difference between the proposed method and the bi-Akima method by displaying spatial distributions of the compressed point sets under different required accuracies. Subfigures a, d, g and j show the point cloud distribution compressed by the bi-Akima method, while subfigures b, e, h and k give the point cloud distribution after data redundancy identification by the proposed method, with the identified redundant points marked in red. In subfigures c, f, i and l, the identified redundant points have been eliminated; these subfigures show the distributions of the final compressed point cloud data. By contrast, we can clearly observe the difference in point cloud density between the two methods under the same required accuracy. Take subfigures j, k and l, for example: when using the bi-Akima method, many curves roughly along the vertical direction remain (Figure 13j). This is because the bi-Akima method can only deal with the point set in the current single scanning line, which runs along the horizontal direction, so the data redundancy outside the current scanning line cannot be eliminated. With the involvement of our proposed method, redundant data points are identified and marked in red (Figure 13k), the data redundancy in adjacent scanning layers is eliminated, and the final compressed point cloud data are obtained (Figure 13l).

Figure 13. Spatial distributions of compressed point cloud data under different required accuracies ε: (a) bi-Akima compression, ε = 0.001 mm; (b) redundancy identification, ε = 0.001 mm; (c) redundancy elimination, ε = 0.001 mm; (d) bi-Akima compression, ε = 0.01 mm; (e) redundancy identification, ε = 0.01 mm; (f) redundancy elimination, ε = 0.01 mm; (g) bi-Akima compression, ε = 0.1 mm; (h) redundancy identification, ε = 0.1 mm; (i) redundancy elimination, ε = 0.1 mm; (j) bi-Akima compression, ε = 1 mm; (k) redundancy identification, ε = 1 mm; (l) redundancy elimination, ε = 1 mm.

In order to verify the accuracy of the proposed algorithm, Figure 14 analyzes the spatial distribution of deviation between each initial sampled point and the interpolated surface obtained from the final compressed point cloud data under different required accuracies.
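The deviation criterion referenced around Figure 14 (each initial sampled point must lie within ε of the surface interpolated from the retained points) is also the test behind cross-layer redundancy identification. The sketch below is a simplified illustration written for this article, not the authors' implementation: linear interpolation between the two neighbouring retained scan lines stands in for the paper's Akima-based scheme, and the helper names are invented for this example:

```python
# Simplified sketch of cross-layer redundancy identification. Assumption:
# linear interpolation replaces the paper's Akima-based interpolation.
# A point in the middle scan line is flagged redundant when the z-value
# predicted from the two neighbouring scan lines agrees with it within eps.

def interp_z(line, x):
    """Linearly interpolate z at abscissa x from a scan line [(x, z), ...] sorted by x."""
    for (x0, z0), (x1, z1) in zip(line, line[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return z0 + t * (z1 - z0)
    raise ValueError("x outside scan line range")

def redundant_points(prev_line, curr_line, next_line, eps):
    """Indices of curr_line points reproduced, within eps, by the
    z-value predicted from the neighbouring scan lines."""
    flagged = []
    for i, (x, z) in enumerate(curr_line):
        z_pred = 0.5 * (interp_z(prev_line, x) + interp_z(next_line, x))
        if abs(z - z_pred) <= eps:
            flagged.append(i)
    return flagged

# Toy example: three parallel scan lines sampled from the plane z = 0.1 * y,
# spaced 0.1 mm apart in y, with 0.05 mm point spacing along x.
prev_line = [(k * 0.05, 0.00) for k in range(5)]  # layer at y = 0.0
curr_line = [(k * 0.05, 0.01) for k in range(5)]  # layer at y = 0.1
next_line = [(k * 0.05, 0.02) for k in range(5)]  # layer at y = 0.2
print(redundant_points(prev_line, curr_line, next_line, eps=0.001))  # prints [0, 1, 2, 3, 4]
```

Because the three toy scan lines lie on a common plane, every point of the middle line is flagged as redundant: discarding the whole line would not violate the ε = 0.001 mm tolerance.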
As can be seen, all the deviations are within the allowable range of required accuracy, which proves that the proposed method can tightly control the deviation within the error tolerance range (i.e., the deviation between each initial sampled point and the interpolation curve is less than or equal to the required accuracy). In addition, deviations are far less than the required accuracy in most of the measured area.

Figure 14. Spatial distributions of deviation under different required accuracies ε: (a) ε = 0.001 mm; (b) ε = 0.01 mm; (c) ε = 0.1 mm; (d) ε = 1 mm.

4. Discussion

The experimental results in Section 3 indicate that the proposed on-line point cloud data compression algorithm for free-form surface scanning measurement has the following features:

• It can further compress point cloud data and obtain a higher data compression ratio than the existing methods under the same required accuracy.
Its compression performance is obviously superior to the bi-Akima and chordal methods;
• It is capable of tightly controlling the deviation within the error tolerance range, and deviations in most of the measured area are far less than the required accuracy;
• Test A preliminarily verifies the application feasibility of the proposed method in an industrial environment. Test B demonstrates that the method is equally effective for complex surfaces with a large number of details, edges and sharp features, and that it has stable performance;
• The proposed method has the potential to be applied in industrial environments to replace traditional on-line point cloud data compression methods (the bi-Akima and chordal methods). Its potential applications include the real-time measurement processes of scanning devices such as contact scanning probes, laser triangle displacement sensors, mobile laser scanners, linear structured light systems, industrial CT systems, etc. The application feasibility of this method needs to be further confirmed in subsequent case studies.

However, the proposed method is not perfect and still has the following limitations. In future work, the following aspects need to be further developed:

• This method can only handle 3D point cloud data streams and is not suitable for processing point cloud data containing additional high-dimensional information (e.g., 3D point cloud data with grayscale or color information). We will try to solve this problem in our future research work;
• This method can only compress point cloud data streams that are scanned layer by layer. If the 3D point cloud is randomly sampled and there are no regular scan lines (e.g., 3D measurement with speckle-structured light), our method cannot perform effective data compression. It remains a huge challenge to solve this problem.

5. Conclusions

In an attempt to effectively compress dense point cloud data obtained from a 3D free-form surface during the real-time scanning measuring process, this paper presents a novel on-line point cloud data compression algorithm which has the ability to identify and eliminate data redundancy caused by geometric feature similarity between adjacent scanning layers. At first, the new algorithm adopts the bi-Akima method to compress the initial point cloud data obtained by 3D scanning measuring devices. Next, the data redundancy in the compressed point cloud obtained in the previous stage is further identified and eliminated, yielding the final compressed point cloud data. Finally, the proposed on-line point cloud data compression algorithm was tested in the real-time scanning measuring process and compared with existing methods (the chordal method and bi-Akima method).
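For concreteness, the chordal baseline mentioned above can be sketched roughly as follows. This is an illustrative stand-in written for this article, not the authors' implementation: it greedily reduces a single scan line so that every discarded point stays within ε (measured here as vertical deviation, an assumption) of the chord joining its kept neighbours:

```python
# Minimal sketch of a chordal data-reduction pass over one scan line
# (an illustrative stand-in for the "chordal method" baseline; the exact
# formulation used in the paper may differ).

def chordal_reduce(line, eps):
    """Greedily keep points of a scan line [(x, z), ...] so every dropped
    point lies within eps (vertical deviation) of the chord between its
    kept neighbours."""
    kept = [0]
    anchor = 0
    for j in range(2, len(line)):
        x0, z0 = line[anchor]
        xj, zj = line[j]
        # check deviation of all intermediate points from the chord anchor -> j
        ok = all(
            abs(z - (z0 + (zj - z0) * (x - x0) / (xj - x0))) <= eps
            for x, z in line[anchor + 1:j]
        )
        if not ok:
            kept.append(j - 1)  # chord broke: keep the previous point
            anchor = j - 1
    kept.append(len(line) - 1)  # always keep the last point
    return [line[i] for i in kept]

# A flat run followed by a step: the interior point of the flat run is dropped.
line = [(0, 0.0), (1, 0.0), (2, 0.0), (3, 1.0), (4, 1.0)]
print(chordal_reduce(line, eps=0.01))  # prints [(0, 0.0), (2, 0.0), (3, 1.0), (4, 1.0)]
```

As the paper's comparison shows, such a pass only removes redundancy along one scan line; redundancy between adjacent scan lines, which the proposed method targets, is untouched.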
The experimental results have preliminarily verified the application feasibility of our proposed method in an industrial environment, and have shown that it is capable of obtaining high-quality compressed point cloud data with a higher compression ratio than other existing methods. In particular, it can tightly control the deviation within the error tolerance range, which demonstrates the superior performance of the proposed algorithm. This algorithm could be used in the data acquisition process of 3D free-form surface scanning measurement to replace other existing on-line point cloud data compression/reduction methods.

Author Contributions: All work related to this paper was accomplished through the efforts of all authors. Conceptualization, Y.L. and Y.T.; methodology, Z.H.; software, Y.T.; validation, Y.M. and Z.H.; formal analysis, Y.T.; investigation, Y.M.; resources, Y.M.; data curation, Y.M.; writing—original draft preparation, Y.T.; writing—review and editing, Y.T.; visualization, Z.H.; supervision, Y.L.; project administration, Y.L.; funding acquisition, Y.L.

Funding: This research was funded by the National Natural Science Foundation of China (Grant Nos. 51505310, 51435011), the Key Research and Development Program of Sichuan Province of China (Grant No. 2018GZ0282) and the Key Laboratory for Precision and Non-traditional Machining of Ministry of Education, Dalian University of Technology (Grant Nos. JMTZ201802, B201802).

Conflicts of Interest: The authors declare no conflict of interest.

References
1. Galetto, M.; Vezzetti, E. Reverse engineering of free-form surfaces: A methodology for threshold definition in selective sampling. J. Mach. Tools Manuf. 2006, 46, 1079–1086. [CrossRef]
2. Han, Z.H.; Wang, Y.M.; Ma, X.H.; Liu, S.G.; Zhang, X.D.; Zhang, G.X. T-spline based unifying registration procedure for free-form surface workpieces in intelligent CMM. Appl. Sci. 2017, 7, 1092. [CrossRef]
3. Ngo, T.D.; Kashani, A.; Imbalzano, G.; Nguyen, K.T.Q.; Hui, D.
Additive manufacturing (3D printing): A review of materials, methods, applications and challenges. Compos. Pt. B Eng. 2018, 143, 172–196. [CrossRef]
4. Liu, J.; Bai, D.; Chen, L. 3-D point cloud registration algorithm based on greedy projection triangulation. Appl. Sci. 2018, 8, 1776. [CrossRef]
5. Chen, L.; Jiang, Z.D.; Li, B.; Ding, J.J.; Zhang, F. Data reduction based on bi-directional point cloud slicing for reverse engineering. Key Eng. Mater. 2010, 437, 492–496. [CrossRef]
6. Budak, I.; Hodolic, J.; Sokovic, M. Development of a programme system for data-point pre-processing in Reverse Engineering. J. Mater. Process. Technol. 2005, 162, 730–735. [CrossRef]
7. Yan, R.J.; Wu, J.; Lee, J.Y.; Khan, A.M.; Han, C.S.; Kayacan, E.; Chen, I.M. A novel method for 3D reconstruction: Division and merging of overlapping B-spline surfaces. Comput. Aided Des. 2016, 81, 14–23. [CrossRef]
8. Pal, P.; Ballav, R. Object shape reconstruction through NURBS surface interpolation. Int. J. Prod. Res. 2007, 45, 287–307. [CrossRef]
9. Calì, M.; Ambu, R. Advanced 3D Photogrammetric Surface Reconstruction of Extensive Objects by UAV Camera Image Acquisition. Sensors 2018, 18, 2815. [CrossRef]
10. Zanetti, E.; Aldieri, A.; Terzini, M.; Calì, M.; Franceschini, G.; Bignardi, C. Additively manufactured custom load-bearing implantable devices. Australas. Med. J. 2017, 10. [CrossRef]
11. Cavas-Martinez, F.; Fernandez-Pacheco, D.G.; Canavate, F.J.F.; Velazquez-Blazquez, J.S.; Bolarin, J.M.; Alio, J.L. Study of Morpho-Geometric Variables to Improve the Diagnosis in Keratoconus with Mild Visual Limitation. Symmetry 2018, 10, 306. [CrossRef]
12. Manavella, V.; Romano, F.; Garrone, F.; Terzini, M.; Bignardi, C.; Aimetti, M. A novel image processing technique for 3D volumetric analysis of severely resorbed alveolar sockets with CBCT. Minerva Stomatol. 2017, 66, 81–89. [CrossRef] [PubMed]
13. Aldieri, A.; Terzini, M.; Osella, G.; Priola, A.M.; Angeli, A.; Veltri, A.; Audenino, A.L.; Bignardi, C. Osteoporotic Hip Fracture Prediction: Is T-Score-Based Criterion Enough? A Hip Structural Analysis-Based Model. J. Biomech. Eng. Trans. ASME 2018, 140, 111004. [CrossRef] [PubMed]
14. Jia, Z.Y.; Lu, X.H.; Yang, J.Y. Self-learning fuzzy control of scan tracking measurement in copying manufacture. Trans. Inst. Meas. Control 2010, 32, 307–318. [CrossRef]
15. Wang, Y.Q.; Tao, Y.; Nie, B.; Liu, H.B. Optimal design of motion control for scan tracking measurement: A CMAC approach. Measurement 2013, 46, 384–392. [CrossRef]
16. Li, W.L.; Zhou, L.P.; Yan, S.J. A case study of blade inspection based on optical scanning method. Int. J. Prod. Res. 2015, 53, 2165–2178. [CrossRef]
17. Khameneifar, F.; Feng, H.Y. Extracting sectional contours from scanned point clouds via adaptive surface projection. Int. J. Prod. Res. 2017, 55, 4466–4480. [CrossRef]
18. Budak, I.; Sokovic, M.; Barisic, B. Accuracy improvement of point data reduction with sampling-based methods by Fuzzy logic-based decision-making. Measurement 2011, 44, 1188–1200. [CrossRef]
19. Shi, B.Q.; Liang, J.; Liu, Q. Adaptive simplification of point cloud using k-means clustering. Comput. Aided Des. 2011, 43, 910–922. [CrossRef]
20. Feng, C.; Taguchi, Y. FasTFit: A fast T-spline fitting algorithm. Comput. Aided Des. 2017, 92, 11–21. [CrossRef]
21. Meng, X.L.; He, W.T.; Liu, J.Y. An investigation of the high efficiency estimation approach of the large-scale scattered point cloud normal vector. Appl. Sci. 2018, 8, 454. [CrossRef]
22. Song, H.; Feng, H.Y. A progressive point cloud simplification algorithm with preserved sharp edge data. Int. J. Adv. Manuf. Technol. 2009, 45, 583–592. [CrossRef]
23. Chen, L.C.; Hoang, D.C.; Lin, H.I.; Nguyen, T.H. Innovative methodology for multi-view point cloud registration in robotic 3D object scanning and reconstruction. Appl. Sci. 2016, 6, 132. [CrossRef]
24. Macher, H.; Landes, T.; Grussenmeyer, P. From point clouds to building information models: 3D semi-automatic reconstruction of indoors of existing buildings. Appl. Sci. 2017, 7, 1030. [CrossRef]
25. Han, H.; Han, X.; Sun, F.; Huang, C. Point cloud simplification with preserved edge based on normal vector. Optik 2015, 126, 2157–2162. [CrossRef]
26. Wang, Y.Q.; Tao, Y.; Zhang, H.J.; Sun, S.S. A simple point cloud data reduction method based on Akima spline interpolation for digital copying manufacture. Int. J. Adv. Manuf. Technol. 2013, 69, 2149–2159. [CrossRef]
27. Arpaia, P.; Buzio, M.; Inglese, V. A two-domain real-time algorithm for optimal data reduction: A case study on accelerator magnet measurements. Meas. Sci. Technol. 2010, 21. [CrossRef]
28. Wang, D.; He, C.; Li, X.; Peng, J. Progressive point set surface compression based on planar reflective symmetry analysis. Comput. Aided Des. 2015, 58, 34–42. [CrossRef]
29. Lee, K.H.; Woo, H.; Suk, T. Data reduction methods for reverse engineering. Int. J. Adv. Manuf. Technol. 2001, 17, 735–743. [CrossRef]
30. Ma, X.; Cripps, R.J. Shape preserving data reduction for 3D surface points. Comput. Aided Des. 2011, 43, 902–909. [CrossRef]
31. Smith, J.; Petrova, G.; Schaefer, S. Progressive encoding and compression of surfaces generated from point cloud data. Comput. Graph. 2012, 36, 341–348. [CrossRef]
32. Morell, V.; Orts, S.; Cazorla, M.; Garcia-Rodriguez, J. Geometric 3D point cloud compression. Pattern Recognit. Lett. 2014, 50, 55–62. [CrossRef]
33. Lu, J.C.; Yang, J.K.; Mu, L.C. Automatic tracing measurement and close data collection system of the free-form surfaces. J. Dalian Univ. Technol. 1986, 24, 55–59. (In Chinese)
34. ElKott, D.F.; Veldhuis, S.C. Isoparametric line sampling for the inspection planning of sculptured surfaces. Comput. Aided Des. 2005, 37, 189–200. [CrossRef]
35. Wozniak, A.; Balazinski, A.; Mayer, R. Application of fuzzy knowledge base for corrected measured point determination in coordinate metrology. In Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society, San Diego, CA, USA, 24–27 June 2007.
36. Jia, Z.Y.; Lu, X.H.; Wang, W.; Yang, J.Y. Data sampling and processing for contact free-form surface scan-tracking measurement. Int. J. Adv. Manuf. Technol. 2010, 46, 237–251. [CrossRef]
37. Tao, Y.; Li, Y.; Wang, Y.Q.; Ma, Y.Y. On-line point cloud data extraction algorithm for spatial scanning measurement of irregular surface in copying manufacture. Int. J. Adv. Manuf. Technol. 2016, 87, 1891–1905. [CrossRef]
38. Li, R.J.; Fan, K.C.; Huang, Q.X.; Zhou, H.; Gong, E.M. A long-stroke 3D contact scanning probe for micro/nano coordinate measuring machine. Precis. Eng. 2016, 43, 220–229. [CrossRef]
39. Wang, Y.Q.; Liu, H.B.; Tao, Y.; Jia, Z.Y. Influence of incident angle on distance detection accuracy of point laser probe with charge-coupled device: Prediction and calibration. Opt. Eng. 2012, 51, 083606. [CrossRef]
40. Valkenburg, R.J.; McIvor, A.M. Accurate 3D measurement using a structured light system. Image Vis. Comput. 1998, 16, 99–110. [CrossRef]
41. Carmignato, S. Accuracy of industrial computed tomography measurements: Experimental results from an international comparison. CIRP Ann. Manuf. Technol. 2012, 61, 491–494. [CrossRef]
42. Lamberty, A.; Schimmel, H.; Pauwels, J. The study of the stability of reference materials by isochronous measurements. Anal. Bioanal. Chem. 1998, 360, 359–361. [CrossRef]
43. Li, W.D.; Zhou, H.X.; Hong, W. A Hermite inter/extrapolation scheme for MoM matrices over a frequency band. IEEE Antennas Wirel. Propag. Lett. 2009, 8, 782–785. [CrossRef]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
