Visual error correction of continuous aerobics action images based on graph difference function

1 Introduction

With the development of society and growing attention to personal health, fitness and sports activity have become a new fashion, and aerobics is among the most popular choices. Aerobics not only improves physical fitness, cardiopulmonary function and muscular endurance, bringing the body towards an optimal state, but also differs from other forms of aerobic exercise: its movements are easy and graceful, its difficulty coefficient is low, and men and women, old and young, can practise it anytime and anywhere given a free space, while also enjoying it as an art form. As a result, more and more people watch aerobics videos. However, because some aerobics videos were shot long ago with outdated equipment, the video or GIF people watch is often blurred; this is what we call the visual error of continuous aerobics action images.

The visual error of continuous aerobics action images [1] arises when the camera axis is not perpendicular to the plane of the photographed object; improving the perpendicularity between the camera and the object reduces the visual error. Nonlinear distortion error is the geometric distortion caused by the optical error of the camera lens: the larger the distance between an image point and the axis, the greater the distortion.

In recent years, computer vision technology has developed rapidly, making it possible to recognise continuous aerobics action images in a computer environment, which has important value in the technical analysis of aerobics [2]. 
Traditional methods usually correct the visual errors of moving images through a parameter statistical feature reconstruction model or a geometric regular contour reconstruction model, enhancing the signal-to-noise ratio of the corrected image [3, 4]. These methods are relatively simple to implement, but the corrected image easily shows blurring or jagged artefacts, so the correction effect is poor. This paper analyses a new approach to the visual error correction of continuous aerobics action images: image collection is completed by a holographic projection method, the collected images are processed by fractal coding, and visual error correction is realised using the graph difference function. Experimental results show that the proposed method can effectively correct visual errors and ensure the quality of continuous aerobics action images.

2 Steps of the visual error correction method for continuous aerobics action images

2.1 Holographic projection processing of continuous aerobics action images

To correct the visual errors of continuous aerobics action images, the acquired images were first processed by holographic infrared vision scanning technology [5], the imaging projection of the images was realised by an image-space scanning method, and the template feature combination equation of the three-dimensional imaging of aerobics action images was calculated:

(1) I(x,y) = h(x,y) * f(x,y) + δ(x,y)

In Equation (1), h(x,y) is the parallax function, * denotes convolution, f(x,y) describes the edge contour pixels of the continuous aerobics action image and δ(x,y) describes its sub-pixel features.

The parallax function of the image pixel set was registered according to the visual error, sub-pixel template matching of the image was achieved through vector quantisation technology [6], and the pixel output after template matching was obtained:

(2) I(x,y) = f(x,y) + δ(x,y)

The estimated pixel value of the edge contour of the continuous aerobics action image is then

(3) f̂(x,y) = αF(x,y) + (1 − α)A + σ²

In Equation (3), F(x,y) describes the statistical features of the strong-texture parameter set obtained after the holographic infrared vision scanning projection of the continuous aerobics action image, α is the background differential pixel characteristic factor, A is the background differential pixel distribution texture set and σ² is the local variance.

Holographic projection technology thus yields continuous aerobics action images that provide reliable original images for visual error analysis and correction.

2.2 Image fractal coding

Fractal coding is carried out on the processed continuous aerobics action images to provide a basis for visual error correction [7]. For an aerobics action image I(x,y) of size M × M, the coded R (range) blocks have size m × m, so the number of R blocks to encode is M_R = (M/m)². The D (domain) blocks in the code book Φ have size 2m × 2m, and the code book capacity is Φ = 8[(M − 2m)/λ + 1]² [8], where λ is the code book step size. If the best matching block of each R block is found by global search, the whole code book must be traversed and M_RD = M_R · Φ matches are required during coding; because the code book capacity is large, this raises the computational complexity and hurts efficiency, so the search space needs to be reduced [9]. Image block features are therefore defined next, and the full search at matching time is converted into a neighbourhood search over those features [10]. 
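As a quick arithmetic check of the block counts above, a minimal sketch (the function and variable names are illustrative, not taken from the paper):

```python
def fractal_code_book_stats(M, m, lam):
    """Block counts for fractal coding of an M x M image with m x m range (R)
    blocks, 2m x 2m domain (D) blocks and code-book step size lam (Section 2.2)."""
    n_R = (M // m) ** 2                       # number of R blocks, M_R = (M/m)^2
    phi = 8 * ((M - 2 * m) // lam + 1) ** 2   # code-book capacity, Phi = 8[(M-2m)/lam + 1]^2
    n_matches_full = n_R * phi                # full-search matches, M_RD = M_R * Phi
    return n_R, phi, n_matches_full
```

For example, a 256 × 256 image with 8 × 8 R blocks and step size λ = 8 gives 1024 R blocks, a code book of 7688 D blocks and nearly 8 million matches under full search, which illustrates why the neighbourhood search is needed.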
First, the image block U = [u_{i,j}] ∈ R^{m×m} is shrunk to a block of size (m/2) × (m/2) by averaging each four-pixel neighbourhood; the self-similar feature of an image block is then

(4) W(U) = ‖V̄ − v̄·I‖ / ‖V − v̄·I‖

In Equation (4), V denotes the image block, V̄ the shrunken image block and v̄ the block mean, so that ‖V̄ − v̄·I‖ is the 2-norm of V̄ with the mean removed and ‖V − v̄·I‖ is the 2-norm of V with the mean removed. When the similarity between an R block and a D block is high, they can form a matching pair.

The self-similar eigenvalues are sorted in ascending order to order the D blocks in the code book; then, in the ascending code book Φ, binary search is used to find the initial matching block D_init whose self-similar eigenvalue differs least from W(R) of the coding block R:

(5) D_init = { D ∈ Φ | min |W(R) − W(D)| }

The coding search is completed in the k-neighbourhood of D_init. For each coded R block, the search code book Φ_W is obtained through Equation (6):

(6) Φ_W = { D ∈ Φ, D ∈ M(D_init, k) }

The D block with the minimum matching error against the R block is found within Φ_W, giving the fractal code of the R block; since the capacity of Φ_W is far lower than the full code book capacity, this effectively improves the encoding speed [11].

2.3 Visual error correction

There is inevitably an error between the original continuous aerobics action image I and the visual image Î, that is,

(7) I = Î + E

In Equation (7), E is the error image. 
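The shrink-and-compare feature of Eq. (4) and the neighbourhood search of Eqs (5) and (6) can be sketched as follows. This is a minimal illustration assuming a code book already sorted by its W values; all function and parameter names are invented for the example:

```python
import numpy as np
from bisect import bisect_left

def shrink(block):
    """Halve a square block with even side by averaging each 2x2 neighbourhood."""
    m = block.shape[0]
    return block.reshape(m // 2, 2, m // 2, 2).mean(axis=(1, 3))

def self_similarity(block):
    """W(U) = ||Vbar - vbar I|| / ||V - vbar I||: ratio of the mean-removed
    2-norms of the shrunken and original block (Eq. (4))."""
    vbar = block.mean()
    num = np.linalg.norm(shrink(block) - vbar)
    den = np.linalg.norm(block - vbar)
    return num / den if den else 0.0

def neighbourhood_candidates(r_block, sorted_feats, k):
    """Binary-search the ascending list of W(D) values for the entry closest to
    W(R) (Eq. (5)), then return the k-neighbourhood of indices (Eq. (6))."""
    w = self_similarity(r_block)
    i = bisect_left(sorted_feats, w)
    # step back if the left neighbour is at least as close
    if i > 0 and (i == len(sorted_feats) or abs(sorted_feats[i - 1] - w) <= abs(sorted_feats[i] - w)):
        i -= 1
    lo, hi = max(0, i - k), min(len(sorted_feats), i + k + 1)
    return list(range(lo, hi))
```

Only the 2k + 1 candidate D blocks returned here need to be compared pixel-wise against the R block, instead of the whole code book.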
Applying the interpolation operator g to both sides of Equation (7) gives

(8) I_K = g(Î + E) = g(Î) + g(E) = Î_K + e_k

In Equation (8), Î_K describes the fractal part with local self-similarity and e_k describes the error between I_K and its estimate Î_K; e_k can itself be estimated as

(9) ê_k = g_k(E) ≈ e_k

In Equation (9), g_k represents the bicubic interpolation operator. To reduce the interpolation error, ê_k is added to Î_K as compensation, and Î_K + ê_k is taken as the optimal estimate of I_K.

On the basis of the above analysis, the visual error correction process for continuous aerobics action images is as follows:

(1) Establish the covering set {S_i} of the continuous aerobics action image; each sub-block S_i has size m₀ × m₀, S_j denotes a similar aerobics block, and S_i ∩ S_j = ∅.
(2) Mark all sub-blocks as unencoded and add them to the encoding task queue P.
(3) Select an unencoded sub-block R′ from P; by exhaustive search, find the optimal block D′ among all blocks of size 2m × 2m in the image that maximises the similarity with R′, so that the error e_k is minimised.
(4) If R′ is no larger than m_e × m_e, record the size of R′ and its fractal code Φ and mark R′ as encoded; otherwise, divide R′ into smaller sub-blocks, mark them as unencoded, add them to P and remove the original sub-block R′ from P.
(5) Repeat Step (3) to encode the sub-blocks in P until all sub-blocks have been traversed.
(6) Obtain the fractal code set {Φ} through the above process and decode the continuous aerobics action image.
(7) Meanwhile, the error between the original image and the decoded continuous aerobics action image yields the error image E, which is interpolated to obtain the error compensation ê_k; the compensated image Î_K + ê_k is taken as the final correction result of the continuous aerobics action image.

3 Experiments on visual error using the graph phase difference function

3.1 Generation of the parallax map

Stereoscopic matching error detection and correction [12] can be demonstrated on a three-dimensional image of a pentagonal object. The boundaries are obtained by a linear feature extraction system, and the parallax at each boundary point is obtained by the intra-scanline matching algorithm. For display, the obtained parallax is interpolated along the epipolar line to generate a dense parallax map.

3.2 Detection and correction of errors

Since the parallax of line segments varies linearly along a line, if all boundaries matched correctly, almost all of the marked parallax values would fall on a straight line, as shown in Figure 1. In practice this is not the case, as shown in Figure 2.

Fig. 1 Ideal relationship between parallax and line-segment length
Fig. 2 Actual relationship between parallax and line-segment length

If the line segments can be interpolated with the data corresponding to the correct parallax, we can not only detect and correct false matches but also fill in the parallax information of unmatched and occluded parts. Before interpolating the parallax values, points marked as incorrect matches must be discarded. If more points match the correct line segment than any single wrong segment (that is, the error is local), then thin bands at all orientations and positions can be fitted to the marked parallax-length data to detect the errors. By the definition of local error, the thin band containing the largest number of points carries the parallax values corresponding to correct matches, and the remaining points are wrong. 
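The thin-band voting described above can be sketched as a simplified Hough-style search: for each candidate slope, histogram the intercepts and keep the band holding the most points. The slope grid, band width and all names are illustrative assumptions, not details from the paper:

```python
import numpy as np

def best_thin_band(positions, disparities, slopes, width=1.0):
    """Fit thin bands (lines with tolerance `width`) to (position, disparity)
    samples along a segment and keep the band holding the most points;
    points outside it are flagged as mismatches (Section 3.2)."""
    positions = np.asarray(positions, float)
    disparities = np.asarray(disparities, float)
    best_inliers, best_params = np.zeros(len(positions), bool), (0.0, 0.0)
    for s in slopes:
        # for each slope, every point proposes an intercept d - s*x;
        # the densest cluster of intercepts defines the band
        intercepts = disparities - s * positions
        for c in intercepts:
            inliers = np.abs(intercepts - c) <= width / 2
            if inliers.sum() > best_inliers.sum():
                best_inliers, best_params = inliers, (s, c)
    return best_params, best_inliers
```

Points left outside the winning band are the locally mismatched boundary points; their parallax can then be refilled by interpolating along the band.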
Thin bands are used instead of straight lines because boundary locations are discrete and boundary positions carry errors, so even correctly matched points do not all fall exactly on one line. Using an improved Hough transform technique, we can not only fit the thin bands to the parallax data but also correct the parallax of horizontal line segments parallel to the image plane as well as of non-horizontal segments.

3.3 Comparison of the phase difference function algorithms

Based on a stereo algorithm over typical boundary points, we use dynamic programming to compute the best-matched boundary points along the epipolar line, that is, the set of matched boundary pairs with minimum total cost. This makes it possible to compare the performance of four different matching valuation cost functions (C.F.). These functions (except the first) do not correspond to any standard stereo matching algorithm and were chosen only to demonstrate the evaluation methodology. Tables 1 and 2 quantitatively compare the performance of the dynamic-programming-based matching algorithm on two stereo image pairs using the four graph phase difference functions. 
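The dynamic-programming matching just described can be sketched as a classic alignment recurrence over the boundary points of one scanline. The fixed skip penalty and all names are illustrative assumptions, not details from the paper; `cost(a, b)` stands for any of the four C.F. valuations:

```python
def match_boundaries(left_pts, right_pts, cost):
    """Minimise the total cost of matched boundary pairs along one scanline,
    with a fixed penalty for leaving a point unmatched (Section 3.3)."""
    SKIP = 1.0  # illustrative penalty for an unmatched boundary point
    n, m = len(left_pts), len(right_pts)
    # dp[i][j] = minimal cost of aligning the first i left and j right points
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * SKIP
    for j in range(1, m + 1):
        dp[0][j] = j * SKIP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(dp[i - 1][j - 1] + cost(left_pts[i - 1], right_pts[j - 1]),
                           dp[i - 1][j] + SKIP,
                           dp[i][j - 1] + SKIP)
    return dp[n][m]
```

Swapping in a different `cost` function changes the matching behaviour without touching the recurrence, which is exactly what allows the four C.F. valuations to be compared on equal footing.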
The quantities computed during measurement are as follows:

(1) Total number of selected line segments: segments longer than a fixed threshold are used to complete error detection. Total number of processed line segments: among the selected segments, those with more than a fixed number of correctly matched boundary points are used for error statistics and correction to ensure reliability.
(2) Percentage of processed boundary points: the percentage of matched boundary points lying on processed line segments out of all matched boundary points.
(3) Percentage of errors: the percentage of mismatched boundary points among the processed matched points. Percentage of corrected boundary points: the number of corrected or filled boundary points relative to all boundary points on the processed line segments.

We analyse and compare the errors generated by the following four matching valuation algorithms.

Graph phase difference function I: the cost is built from the pooled mean and variance of the intensity runs to the left of the two boundary points:

(10) m = (1/2) [ (1/k) Σ_{i=1..k} a_i + (1/l) Σ_{j=1..l} b_j ]

where a_1, a_2, …, a_k and b_1, b_2, …, b_l are the pixel intensity values to the left of the two matching boundary points.

(11) β² = (1/2) [ (1/k) Σ_{i=1..k} (a_i − m)² + (1/l) Σ_{j=1..l} (b_j − m)² ]

(12) cost = β² × (k² + l²)^(1/2)

Graph phase difference function II: a constraint is imposed on the graph difference function so that matching only takes place between boundary points whose orientations differ from each other by less than 30 degrees. 
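Cost function I, Eqs (10) to (12), translates directly into code; a minimal sketch with illustrative names:

```python
def cost_function_I(a, b):
    """Graph phase difference function I: pooled mean m of the intensity runs
    a (length k) and b (length l) beside the boundary pair (Eq. (10)), pooled
    variance beta^2 about m (Eq. (11)), scaled by (k^2 + l^2)^(1/2) (Eq. (12))."""
    k, l = len(a), len(b)
    m = 0.5 * (sum(a) / k + sum(b) / l)
    beta2 = 0.5 * (sum((x - m) ** 2 for x in a) / k
                   + sum((x - m) ** 2 for x in b) / l)
    return beta2 * (k ** 2 + l ** 2) ** 0.5
```

Two identical intensity runs cost nothing, while runs with different mean levels are penalised through the pooled variance, with longer runs weighted up by the (k² + l²)^(1/2) factor.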
The direction used is that of the line segment to which the boundary point belongs rather than the local boundary direction.

Graph phase difference function III: this function is formulated to favour matching between boundary points with similar directions and similar interval lengths:

(13) cost = ( C_1 exp(|o_e − o_i| / C_2) + C_3 |α_e − α_i| / (α_1 − α_2)² ) (α_e + α_i)

In the formula, o is the boundary direction, α is the interval length and the C are constants.

Graph phase difference function IV: this function uses the intensity interval to the left of the two boundary points; the cost of matching two intervals is the best value obtained by matching them with a region-based stereo algorithm.

4 Result analysis

4.1 Feasibility of the visual error correction method for continuous aerobics action images

(1) The scene is converted into video signals by a JAI line-by-line scanning CCD camera, and the signals are collected by an image acquisition card and converted into digital signals so that they can be processed by a PC.
(2) The acquired images are processed by holographic infrared vision scanning technology, and the image projection of the continuous aerobics action images is realised by the image-space scanning method.
(3) The parallax function of the image pixel set is registered according to the visual error, and sub-pixel template matching is achieved by vector quantisation technology.
(4) Fractal coding is carried out on the processed continuous aerobics action images to provide the basis for visual error correction.
(5) The continuous aerobics action image is encoded and decoded, the error with respect to the original image yields the error image, the error compensation result is obtained by interpolation, and the visual error correction is completed.

4.2 Feasibility of the graph difference function algorithm for correcting visual errors

Tables 1 and 2 list the performance of the four algorithms on two complicated three-dimensional scenes. From the quantitative results of the four matching algorithms on the two scenes, it can be seen that the performance of graph difference functions I and II is very similar, while the performance of functions III and IV differs markedly from them. To obtain a stable assessment from this evaluation methodology, various stereo image pairs and algorithms must be considered; comparable values for the two different scenes of this paper are listed in the tables.

Table 1 Test image ①: performance comparison of the four graph phase difference functions

Percentage /%                                  C.F. I   C.F. II  C.F. III  C.F. IV
Percentage of matched boundary points          88.834   88.382   59.073    78.891
Percentage of processed matched boundaries     83.851   78.293   68.902    53.074
Percentage of errors                            3.852    3.833    8.041    31.923
Percentage of corrected boundary points        14.911   15.134   40.612    45.823
Minimum percentage error of line segments      30.001   27.273   50.004    61.642

Table 2 Test image ②: performance comparison of the four graph phase difference functions

Percentage /%                                  C.F. I   C.F. II  C.F. III  C.F. IV
Percentage of matched boundary points          78.521   74.913   56.952    70.534
Percentage of processed matched boundaries     70.382   70.771   54.163    33.692
Percentage of errors                            8.013    7.520   17.014    20.451
Percentage of corrected boundary points        14.894   16.113   41.112    42.953
Minimum percentage error of line segments      46.881   44.444   58.821    52.174

5 Conclusion

First, a JAI charge-coupled device camera (Denmark) scans the scene for video signals, an image acquisition card collects the acquired signals, and the signals are transformed into digital signals processed by the computer; the continuous aerobics action images are then obtained by holographic projection. 
According to the visual error, the parallax function of the image pixel set is registered, sub-pixel template matching of the image is realised by vector quantisation technology, and the processed continuous aerobics action images are fractal-coded, which provides the basis for visual error correction.

Visual error correction of continuous aerobics images is important not only for improving the clarity of aerobics images; it is also an important premise for improving aerobics technique and plays a positive role in promoting the development of aerobics, and the method is equally applicable to improving the clarity of other images, promoting technological upgrading and sustainable development in other imaging applications. The experiments with the proposed graph difference functions and their formulas show that the graph difference function can quantitatively process and analyse a large number of matched boundary elements, detect local and global errors along boundary contours under the constraint of parallax continuity and then correct them, while filling in the parallax of unmatched boundary elements. This capability is important for surface interpolation, surface reconstruction and 3D object recognition in vision, and it also provides an accurate and novel method for the study of visual matching algorithms and the evaluation of algorithm performance.

Applied Mathematics and Nonlinear Sciences (de Gruyter)

Publisher
de Gruyter
Copyright
© 2021 Dali Yin et al., published by Sciendo
eISSN
2444-8656
DOI
10.2478/amns.2021.1.00071
Publisher site
See Article on Publisher Site

Abstract

1IntroductionWith the development of the society and people's attention to their health, many people intensely focus and demand for better health fitness and sports activity has become a kind of new fashion, including aerobics, which not only improve the physical quality, cardiopulmonary function and muscular endurance, make human body to achieve the optimal state, more importantly, aerobic exercise is different from others. Aerobics movement is easy and beautiful, the difficulty coefficient is not big, regardless of men and women, old and young as long as there is a free place to exercise at anytime and anywhere for achieving fitness, and at the same time it brings to people the enjoyment of art and becomes more and more popular with the masses. It is a fact that more and more people watch aerobic videos. However, as some aerobics videos were shot early and the equipment was too old, the video or GIF people watched was always blurred, which is what we call continuous aerobics action image visual error.The visual error of continuous aerobics action image [1] is caused by the fact that the camera axis is not perpendicular to the plane of the photographed object. Enhancing the vertical accuracy of the camera and the photographed object can reduce the visual error. The nonlinear distortion error is the geometric distortion caused by the optical error of the camera lens. The larger the distance between the image and the axis, the higher the image distortion degree.In recent years, computer intelligent vision technology has developed rapidly, realising continuous aerobics movement image recognition in the computer environment, which has important value in aerobics technical analysis [2]. The traditional method usually corrects the visual errors of moving images through parameter statistical feature reconstruction model and geometric regular contour reconstruction model, and enhances the signal-to-noise ratio of the corrected image [3, 4]. 
The implementation process of these methods is relatively simple, but the image after correction is easy to appear blurring or jagged effect, as the visual error correction effect is not good. In this paper, a new method is analysed to describe the problem of visual error correction of continuous aerobics action images. The image collection of continuous aerobics action images is completed by holographic projection method, the collected images are processed by fractal coding, and the visual error correction is realised by using the graph difference function. The experimental results show that the proposed method can effectively correct the visual errors and ensure the quality of continuous aerobics motion images.2Steps analysis of visual error correction method for continuous aerobics dynamic image2.1Holographic projection processing of continuous aerobics action imagesIn order to realise the correction of visual errors of continuous aerobics action images, the acquired images were first processed by holographic infrared vision scanning technology [5] and the imaging projection of continuous aerobics action images was realised by image space scanning method, and then the template feature combination equation of three-dimensional imaging of aerobics action images was calculated:(1)I(x,y) = h(x,y)*f(x,y) + δ(x,y)I(x,y) = h(x,y)*f(x,y) + \delta (x,y)In Equation (1), H (x,y) is used to describe the parallax function; * is used to describe convolution; F (x,y) is used to describe the edge contour pixels of continuous aerobics action images; δ(x,y) is used to describe the sub-pixel features of aerobics action images.The parallax function of the image pixel set was registered according to the visual error, and then the sub-pixel template matching of the image was achieved through vector quantisation technology [6], and the pixel output after template matching was obtained:(2)I(x,y)=f(x,y)+δ(x,y)I(x,y) = f(x,y) + \delta (x,y)The estimated pixel value of the edge contour of 
the continuous aerobics action image is obtained:(3)f∧(x,y)=αF(x,y)+(1−α)A+σ2f \wedge (x,y) = \alpha F(x,y) + (1 - \alpha )A + {\sigma ^2}In Equation (3), F(x,y) is used to describe the statistical features of strong texture set parameters obtained after the holographic infrared vision scanning projection of continuous aerobics action images. α is used to describe the background differential pixel characteristic factor; A describes the background differential pixel distribution texture set; σ2 is used to describe local variances.Holographic projection technology is used to obtain continuous calisthenics motion images, which can provide reliable original images for visual error analysis and correction.2.2Image fractal codingFractal coding processing was carried out on the processed continuous aerobics action images to provide a basis for visual error correction [7]. For the aerobics action image I(x, y) with the size of M × M, the coded R block size is m × m, the number of R blocks required to encode is MR = (M/m)2, the size of D block in the code book Φ is 2 m × 2 m, Φ represents code book capacity [8], Φ = 8[(M − 2 m)/λ + 1]2, where λ is used to describe the code book step size. If the best matching block of each R block is obtained according to the global search method, the whole codebook needs to be traversed, MRD = MRΦ matches are required in the coding process; due to the large size of code book capacity, the computational complexity will be increased and the efficiency will be affected, so the search space needs to be reduced [9]. The image block features are defined next and the full search at matching time is transformed into neighbourhood search by the features [10]. 
First, the image block U = [UI,j] ∈ Rm × m is reduced to an image block U = [UI,j] ∈ Rm/2 × m/2 by means of the mean value of four neighbourhood pixels, the calculation formula for self-similar features of image blocks is as follows:(4)W(U)=‖V¯−v¯•I‖/‖V−v¯•I‖{\rm{W(U)}} = \left\| {\overline V - \overline v \bullet I} \right\|/\left\| {V - \overline v \bullet I} \right\|In Equation (4), V is used to describe image blocks, V̅ is used to describe a shrinking image block, is used to describe the 2-norm of V̅ without the mean, is used to describe the 2-norm of V without the mean. Under the condition of high similarity between R block and D block, they can form a matching pair.The self-similar eigenvalues are arranged in order from small to large to complete the sorting of block D in the code book, then, in the ascending code book Φ, the binary method is used to search the initial matching block D with the smallest difference from the self-similar eigenvalue of W (R) the coding block R:(5)Dinit = {D∈ΦminW(R)−W(D)}Dinit = \{ D \in \Phi \min W(R) - W(D)\} Coding search is completed in the k neighbourhood of Dinit. For each coded R block, search codebook ΦW can be obtained through Equation (6):(6)ΦW = {D∈Φ,D∈M(Dinit,k)}\Phi W = \{ D \in \Phi ,D \in M(Dinit,k)\} The D block corresponding to the minimum matching error of ΦW and R block is obtained, and the fractal code of R block is obtained, as ΦW capacity is lower than that of bit capacity, it can effectively improve the encoding speed [11].2.3Visual error correctionThere is inevitably a certain error between the original continuous calisthenics action image I and visual image I∧, that is (7)I=I^+EI = \hat I + EIn Equation (7), E is used to describe the error image. 
The interpolation operator g is applied to both sides of Equation (7), so (8)IK=g(I^+E)=g(I^)+g(e)=I^K+ek{I_K} = g(\hat I + E) = g(\hat I) + g(e) = {\hat I_K} + {e_k}In Equation (8), is used to describe the fractal partial error of local self-similarity, is used to describe the error between and its estimate, at the same time, it can be estimated by Equation (9)(9)e^k=gk(e)≈ek{\hat e_k} = {g_k}(e) \approx {e_k}In Equation (9), gb represents the two-cube interpolation operator. In order to reduce the interpolation error, is added to as compensation, and it is taken as the optimal estimate of.On the basis of the above analysis, the visual error correction process of continuous aerobics action images is given:(1)Establish the covering set {Si} of continuous aerobics action images, each subblock Si has size m0 × m0, Sj is used to describe similar aerobics blocks, Si ∩ Sj = Φ at the same time.(2)All sub-blocks are marked as unencoded and added to the encoding task queue P.(3)Select the unencoded subblock R′ from P, through exhaustive technique, then optimal block D′ is found in all blocks of 2 m × 2 m size in the image to maximise the similarity between it and R′, error ek is minimised.(4)If R′ is smaller than me × me, then the size of R′ and the fractal code Φ are recorded and marked R′ as encoded. Instead, if I divide R into smaller pieces, mark it as bit code, and add it to P and then remove the original subblock R from P.(5)Repeat Step (3) and continue to code the P neutron block until all the sub-blocks in the traversal are traversed.(6)The fractal code {Φ} is obtained through the above process and the continuous aerobics action image is decoded. 
Meanwhile, the error between the original image and the continuous aerobics action image is obtained, and thus the error image E is obtained and interpolation is performed to obtain the error compensation result êk.(7)ÎK = IK +êk was taken as the final correction result of continuous aerobics motion image.3Experiment of visual error using graph phase difference function3.1Generation of parallax diagramStereoscopic matching error detection and correction [12] can be proved by a three-dimensional image of a pentagon object. The boundary can be obtained by the linear feature extraction system, and the parallax at each boundary point can be obtained by the Intra-Scanline Matching Algorithm. For display, the obtained parallax is interpolated along the polarisation line to generate a dense parallax graph.3.2Detection and correction errorsSince the parallax of the line segments along the line varies linearly and if all the boundaries match correctly, almost all the marked visual impairments will fall on a straight line, as shown in Figure 1. This is not the case, as shown in Figure 2.Fig. 1Ideal parallax and line segment length relationshipFig. 2Relationship between actual parallax and segment lengthIf the line segment can be interpolated with the data corresponding to the correct parallax, we can not only detect and correct the false match, but also fill the parallax information in the unmatched part and the occluded part. Before interpolating the parallax value, the marked point has to be discarded due to an incorrect match. If more lines match the correct line segment than a single wrong line segment, that is, the local error, then thin strips in all directions and positions can be fitted to the marked parallax length data to detect the error. According to the definition of local error, it can be concluded that the thin band with the largest number of points has the parallax value corresponding to the correct match, and the rest are wrong. 
The use of thin strips instead of straight lines is due to the discrete location of the boundary and the error at the boundary position so that all correctly matched points do not completely fall on a line.By using the improved Hough transform technique, we can not only attach the thin tape to the parallax data, but also correct the parallax of the horizontal line segment which is parallel to the image plane and the non-horizontal line segment.3.3Comparison of phase difference function algorithmsBased on the stereo algorithm of typical boundary points, we use dynamic programming to calculate the best matched boundary points along the polarisation line, that is, the minimum total value with the matched boundary pair. Thus, it is possible to compare the performance of four different matching valuation functions of Cost Functions (C.F.). These functions (except the first one) do not correspond to any standard stereo matching algorithm and have been chosen only to prove such a valuation algorithm.Tables 1 and 2 quantitatively compare the performance of the dynamic programme-based matching algorithms when testing two stereo image pairs using four different matching graph phase difference functions. 
The measures used in the evaluation are computed as follows:

(1) Total number of line segments selected: line segments longer than a fixed threshold are used for error detection. Total number of line segments processed: among the selected segments, those with more than a fixed number of matched boundary points and a correct match are used for error statistics and correction, to ensure reliability.
(2) Percentage of processed boundary points: the percentage of matched boundary points lying on processed segments among all matched boundary points.
(3) Percentage of error: the percentage of mismatched boundary points among the processed matched points.
(4) Percentage of corrected boundary points: the number of corrected or filled boundary points, obtained by processing all boundary points on the segment to assign their parallax.

We analyse and compare the errors produced by the following four matching cost functions.

Graph phase difference function I is defined by

(10) m = \frac{1}{2}\left(\frac{1}{k}\sum_{i=1}^{k} a_i + \frac{1}{l}\sum_{j=1}^{l} b_j\right)

where a_1, a_2, ..., a_k and b_1, b_2, ..., b_l are the pixel intensity values to the left of the two matched boundary points;

(11) \beta^2 = \frac{1}{2}\left(\frac{1}{k}\sum_{i=1}^{k} (a_i - m)^2 + \frac{1}{l}\sum_{j=1}^{l} (b_j - m)^2\right)

(12) \mathrm{cost} = \beta^2 (k^2 + l^2)^{1/2}

Graph phase difference function II: the same cost as function I, with the constraint that matching takes place only between boundary points whose orientations differ by no more than 30 degrees.
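Graph phase difference function I (Eqs. (10)-(12)) translates directly into code; the function name `cost_function_1` and the sample intensity intervals below are hypothetical.

```python
import numpy as np

def cost_function_1(a, b):
    """Graph phase difference function I: pooled mean m (Eq. 10) and pooled
    variance beta^2 (Eq. 11) of the intensity intervals a (length k) and
    b (length l) to the left of the two candidate boundary points, scaled
    by the interval lengths (Eq. 12).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    k, l = len(a), len(b)
    m = 0.5 * (a.mean() + b.mean())                                # Eq. (10)
    beta2 = 0.5 * (((a - m) ** 2).mean() + ((b - m) ** 2).mean())  # Eq. (11)
    return beta2 * np.sqrt(k ** 2 + l ** 2)                        # Eq. (12)

# Similar intensity intervals give a low cost; dissimilar intervals a high one.
print(cost_function_1([100, 102, 101], [101, 99, 100]))
print(cost_function_1([100, 102, 101], [30, 32, 31]))
```

The pooled variance is small only when both intervals cluster around the common mean, so well-matched boundary points receive a low cost.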
The direction used is the direction of the line segment to which the boundary point belongs, rather than the local boundary direction.

Graph phase difference function III: this function is formulated to favour matching between boundary points with similar directions and similar interval lengths:

(13) \mathrm{cost} = \left(C_1 e^{\left|o_e - o_i\right|/C_2} + C_3 \frac{\left|\alpha_e - \alpha_i\right|}{(\alpha_1 - \alpha_2)^2}\right)(\alpha_e + \alpha_i)

where o is the boundary direction, α is the interval length and the C are constants.

Graph phase difference function IV: this function uses the intensity interval to the left of the two boundary points. The cost of matching the two boundary points is the best value obtained by matching the two intervals with a region-based stereo algorithm.

4 Result analysis

4.1 Feasibility of the visual error correction method for continuous aerobics action images

(1) The scene is converted into video signals by a JAI line-scan CCD camera; the signals are collected by an image acquisition card and converted into digital signals so that they can be processed by a PC.
(2) The acquired images are processed by holographic infrared vision scanning, and the image projection of continuous aerobics action images is realised by the image space scanning method.
(3) The parallax function of the image pixel set is registered according to the visual error, and sub-pixel template matching is achieved by vector quantisation.
(4) Fractal coding is applied to the processed continuous aerobics motion images to provide the basis for visual error correction.
(5) The continuous aerobics action image is encoded and decoded, the error with respect to the original image is computed to obtain the error image, the error compensation result is obtained by interpolation, and the visual error correction is completed.

4.2 Feasibility of the image difference function algorithm for correcting visual error

Tables 1 and 2 list the performance of the four algorithms on two complicated three-dimensional scenes. From the quantitative results of the four matching algorithms on the two scenes, it can be seen that the performance of graph difference functions I and II is very similar, while the performance of functions III and IV differs markedly from them. To establish the stability of this performance evaluation, a variety of stereo image pairs and algorithms must be considered; the comparable values for the two scenes used in this paper are listed in the tables.

Table 1 Comparison of the four graph phase difference functions on test image ①

Percentage /%                        C.F. I    C.F. II   C.F. III  C.F. IV
Matched boundary points              88.834    88.382    59.073    78.891
Processed matched boundary points    83.851    78.293    68.902    53.074
Error                                 3.852     3.833     8.041    31.923
Corrected boundary points            14.911    15.134    40.612    45.823
Minimum error of line segments       30.001    27.273    50.004    61.642

Table 2 Comparison of the four graph phase difference functions on test image ②

Percentage /%                        C.F. I    C.F. II   C.F. III  C.F. IV
Matched boundary points              78.521    74.913    56.952    70.534
Processed matched boundary points    70.382    70.771    54.163    33.692
Error                                 8.013     7.520    17.014    20.451
Corrected boundary points            14.894    16.113    41.112    42.953
Minimum error of line segments       46.881    44.444    58.821    52.174

5 Conclusion

First, a JAI (Denmark) line-scan charge-coupled device camera converts the scene into video signals; an image acquisition card collects the signals and transforms them into digital signals that are processed by the computer. Finally, the continuous aerobics action images are obtained by holographic projection.
According to the visual error, the parallax function of the image pixel set is registered, sub-pixel template matching is realised by vector quantisation, and the processed continuous aerobics action images are fractal-coded, which provides the basis for visual error correction.

Visual error correction of continuous aerobics action images is important not only for improving the clarity of aerobics images; it is also an important precondition for improving aerobics technique and plays a positive role in promoting the development of the sport. The method is equally applicable to improving the clarity of other kinds of images and can promote the technological upgrading and sustainable development of other imaging industries. The experiments with the proposed graph difference functions and their formulas show that graph difference functions can quantitatively process and analyse large numbers of matched boundary elements; under the constraint of parallax continuity along boundary contours, they detect local and global errors and then correct them, while also filling in the parallax of unmatched boundary elements. This capability is important for surface interpolation, surface reconstruction and 3D object recognition in vision, and it provides an accurate and novel approach to the study of visual matching algorithms and the evaluation of their performance.
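The error-compensation step recapped above (interpolating the error image to get ê and forming the correction Î_K = I_K + ê_k of Eq. (7)) can be sketched as follows; the function name `compensate`, the stride-2 sampling grid, and the synthetic image with a smooth error field are all illustrative assumptions.

```python
import numpy as np

def compensate(original, decoded, step=2):
    """Sketch of Eq. (7): sample the coding error E = original - decoded on a
    coarse grid, linearly interpolate it back to full resolution to obtain
    e_hat, and return the corrected image I_hat = decoded + e_hat.

    `step` (the assumed sampling stride) trades correction accuracy against
    the amount of error data stored alongside the coded image.
    """
    coarse = (original - decoded)[::step, ::step]      # coarse error samples
    ys = np.arange(0, original.shape[0], step)
    xs = np.arange(0, original.shape[1], step)
    # separable linear interpolation of the coarse error field
    rows = np.array([np.interp(np.arange(original.shape[1]), xs, r) for r in coarse])
    e_hat = np.array([np.interp(np.arange(original.shape[0]), ys, c) for c in rows.T]).T
    return decoded + e_hat

y = np.arange(8.0)
original = 100.0 + 10.0 * np.add.outer(y, y)   # simple synthetic test image
smooth_error = 0.5 * np.add.outer(y, y)        # slowly varying coding error
decoded = original - smooth_error
corrected = compensate(original, decoded)
print(np.abs(original - corrected).mean() < np.abs(original - decoded).mean())
```

Because the error field here varies linearly, the interpolated ê reproduces it exactly away from the image border, and the residual after compensation is much smaller than the raw coding error.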

Journal: Applied Mathematics and Nonlinear Sciences (de Gruyter)

Published: Jan 1, 2022

Keywords: Continuity; Aerobics; Action image; Visual error; Correction; Function algorithm; 39B99
