Realization of Internet of vehicles technology integrated into an augmented reality system

Abstract

The goal of this study is to develop an internet of vehicles system with augmented reality technology. The system deals mainly with three subjects: lane departure warning, forward collision detection and warning, and the internet of vehicles. First, for lane departure warning, the Hough transform is used to extract candidate lane-line positions from the region of interest of an image. The Kalman filter is then employed to remove noise and estimate the actual positions of the lane lines, and the lane departure decision determines whether a lane departure has occurred. Second, the Sobel edge detector and a taillight detection method are used to locate the hypothetical region of a vehicle. The characteristic parameters within the hypothetical region are obtained with the Harris corner detector, and a support vector machine verifies the hypothetical region and identifies the vehicle. The collision decision is then applied to determine whether the distance between two vehicles is too short, thus fulfilling the goal of forward collision detection and warning. In addition, a secure and easy-to-use internet of vehicles is achieved with the Rivest–Shamir–Adleman encryption algorithm, which uses public and secret keys to encrypt and decrypt messages for user identification. Upon obtaining control of the vehicle, the driver has full access to the most up-to-date information provided by the driver assistance system. Finally, internet of vehicles applications incorporating the aforementioned methods, smart glasses, and augmented reality are implemented in this study. The smart glasses give drivers easy access to vehicle information and warnings, which considerably enhances driver convenience and safety.

Keywords

Augmented reality, Internet of vehicles, Hough transform, Kalman filter, Sobel edge detector, Harris corner detector, support vector machine, Rivest–Shamir–Adleman algorithm

Introduction

In recent years, industrial giants such as Facebook, Microsoft, Samsung, and Google have launched a new revolution in visual technology. In 2016, its major bright spot was the expansion of augmented and virtual reality technology, and the numbers of device users and software developers in this revolution keep increasing. Augmented reality (AR) is the combination of computer-generated and real-world information, so that users can obtain relevant information at the right place and time. Virtual reality (VR) is an ideal artificial environment on a computer, in which a virtual environment is created in a seemingly real or physical manner; in VR, the operator interacts with a controller inside the virtual environment. Meanwhile, in AR, the operator interacts with the real environment and augments it with information. To date, AR and VR still lack a fully immersive feeling, but developers are not bothered by this, as the purpose of AR and VR is to provide information.
The intelligent transportation system (ITS) coordinates and integrates advanced electronic, computer, communication, and control technologies into the transportation system. The reliability and characteristics of the system have been tested in various areas at different levels, enhancing its safety and efficiency in promptly solving the problems often encountered in traffic. The ITS can be subdivided into five directions: the advanced traffic management system, the advanced traveler information system, the advanced vehicle control and safety system (AVCSS), the advanced public transportation system, and commercial vehicle operation. This study takes the AVCSS as an example; the system covers the lane departure warning system and the forward collision warning system with a front-mounted detector, together called the advanced driver assistance system (ADAS), as shown in Figure 1.

Figure 1. Advanced driver assistance system.

Lane departure warning is an important function of the ADAS. This study uses a camera to capture the driving image and sets an image region of interest (ROI). An edge detection method extracts the edge characteristics within the ROI, or scan lines are used to scan the lane-line characteristics within the set ROI range, to avoid the vehicle drifting out of its lane while the driver is unaware. Lane lines can be divided into two types, straight lines [2,3] and curves [4], and the left and right lane-line information can be used to determine whether the vehicle is drifting and whether a warning should be issued.

For the front vehicle detection system, a previous study collected and classified image-based vehicle detection systems over the years and proposed a framework for vehicle detection; since then, many studies have been conducted on the basis of this framework. Front vehicle detection can be divided into two phases: hypothesis generation (HG) and hypothesis verification (HV). HG identifies possible target vehicles, whereas HV verifies the correctness of each target. Two methods summarized in the literature are the template-based and the appearance-based prediction methods. The template-based prediction method [6,7] is used for vehicles with obvious characteristics, such as clear horizontal and vertical edges, the shadow at the bottom of the vehicle, and the U-shaped feature formed between the bottom and rear of the vehicle. This method has the advantages of quick target detection and a low computation load; however, it is susceptible to interference from the surrounding environment and obstacles. The appearance-based prediction method [8–10] uses machine learning to determine whether a front vehicle is in the hypothetical area. This method trains a classifier with a large number of training samples, which contain positive samples of vehicle images and negative samples of non-vehicle images.
In recent years, because the Internet of things (IoT) has only just begun to develop its communication transmission security and identity authentication, it has become an easy target for hacking owing to its simplicity compared with an ordinary computer. Therefore, to prevent this situation, this study also utilizes the Rivest–Shamir–Adleman (RSA) encryption algorithm [11]. In this application, before the owner gets into his/her car and receives vehicle information, he/she needs to scan the vehicle identification (ID). The RSA encryption and decryption methods then pair the control command; after successful matching, the owner obtains full authority to control the vehicle.

System structure

The system architecture can be divided into three main parts: lane departure warning, front vehicle detection with anti-collision warning, and the onboard Internet of Vehicles (IoV). Figure 2 shows the flowchart of the onboard IoV system, in which the smart glasses scan the vehicle's ID to identify the owner. Once the owner has been identified, the owner can log in to the server, use the different vehicle functions, such as the lane departure warning system and the front vehicle anti-collision warning system, and access the current vehicle information.

Figure 2. System architecture.

Lane departure detection warning

First, the search range, which is either the full frame or a local region of interest, is set, and the color information in the ROI is converted from the RGB to the HSV color space to obtain a grayscale image. After grayscale processing, an edge detection method, which reduces the computational complexity, is used to extract the lane-line characteristics. The Hough transform is then applied to identify the starting points of the left and right lane lines. After these two points are recorded, their coordinates are fed into the Kalman filter [12–14] to estimate the actual lane-line positions. Finally, the lane departure decision determines whether the vehicle is drifting out of its lane. The lane departure warning process is shown in Figure 3.

Figure 3. Lane departure warning flow chart.
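As a rough illustration of this pipeline, the sketch below combines OpenCV's probabilistic Hough transform with a small Kalman filter per lane line. The ROI bounds, Canny thresholds, Hough parameters, and slope cut-offs are illustrative assumptions rather than the paper's values, and the paper's own preprocessing (HSV conversion and oblique edge operators, described in the following sections) is replaced by a Canny edge map for brevity.

```python
import cv2
import numpy as np

def make_kalman():
    """Constant-velocity Kalman filter tracking one lane line's start x-position."""
    kf = cv2.KalmanFilter(2, 1)  # state [x, dx], measurement [x]
    kf.transitionMatrix = np.array([[1., 1.], [0., 1.]], np.float32)
    kf.measurementMatrix = np.array([[1., 0.]], np.float32)
    kf.processNoiseCov = 1e-3 * np.eye(2, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(1, dtype=np.float32)
    return kf

def hough_lane_candidates(frame_bgr):
    """Hough stage: candidate start x-positions of the left/right lane lines."""
    h, w = frame_bgr.shape[:2]
    roi = frame_bgr[h // 2:, :]                     # assumed ROI: lower half of the frame
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)                # assumed edge-detector thresholds
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    left, right = [], []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            if x1 == x2:
                continue
            slope = (y2 - y1) / (x2 - x1)
            start_x = x1 if y1 > y2 else x2         # lower endpoint = line start point
            if slope < -0.3:                        # left lane line (image coordinates)
                left.append(start_x)
            elif slope > 0.3:                       # right lane line
                right.append(start_x)
    return left, right

def track(kf, candidates):
    """Kalman update: predict, then correct with the median candidate so that
    isolated noisy detections are suppressed."""
    kf.predict()
    if candidates:
        kf.correct(np.array([[np.median(candidates)]], np.float32))
    return float(kf.statePost[0])
```

Per frame, track(kf_left, left) and track(kf_right, right) yield smoothed start positions L and R that feed the departure decision described below.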
Region of interest and conversion of color space

The purpose of defining the image region of interest (ROI) is to reduce the amount of computation and improve the efficiency of lane detection. In this paper, the horizon in the input image is used as the upper boundary of the ROI, because the area above the horizon is the sky, a non-road region in which vehicles cannot appear. The lower boundary is set approximately five meters ahead of the current vehicle; apart from traffic congestion, the vehicle in front will not appear in this area under normal driving conditions.

At present, most images use the RGB color space, which is easily affected by the light source. Therefore, this paper converts the RGB images to the HSV color space before processing. In the HSV color space, H (hue) represents the hue, S (saturation) the saturation, and V (value) the brightness. Because HSV separates these three kinds of image information, image processing performs better in the presence of background noise. Any pixel in the RGB image corresponds to an H_p value in HSV [15], computed as follows:

H_p = \cos^{-1}\left( \frac{0.5\,[(R-G)+(R-B)]}{\sqrt{(R-G)^2 + (R-B)(G-B)}} \right)    (1)

H = \begin{cases} H_p, & \text{if } B \le G \\ 360^{\circ} - H_p, & \text{otherwise} \end{cases}    (2)

where R, G, and B represent the RGB values of the pixel. The actual H value is adjusted by comparing B and G, as in equation (2): when B is less than or equal to G, H equals H_p; when B is greater than G, H equals 360° minus H_p.

S = \frac{\max(R,G,B) - \min(R,G,B)}{\max(R,G,B)}    (3)

V = \max(R,G,B)    (4)

The values of S and V are then calculated using equations (3) and (4), where max and min denote the maximum and minimum of the three values R, G, and B. After obtaining the H, S, and V values, this paper uses V (brightness) as the grayscale image for the subsequent image processing, which reduces the image dimensionality and improves the efficiency of the system.

Edge detection

In a typical highway driving image, the angle between the left and right lane lines is approximately 25°. Nevertheless, this study applies oblique edge detection operators at 45° and 135° to the left and right halves of the image ROI. The detection results are shown in Figure 4. The advantages of the oblique edge operator are that its structure is simple and that an edge point is obtained by subtracting only two pixels in the mask, which saves computation time. Figure 4(a) shows the detection result for the right lane line using the 45° edge operator, whereas Figure 4(b) shows the detection result for the left lane line using the 135° edge operator.

Figure 4. The detection results of the left and right lane edges.

Lane departure decision

In the lane departure decision, this study rules out situations in which a turn signal is used. When the vehicle does not travel straight in its lane but keeps shifting slowly toward the left or right, a lane departure is considered to have occurred. This situation affects not only the driver of the vehicle behind but also the driver him/herself, and can even endanger lives. The departure decision is expressed in equation (5), where L is the starting-point position of the left lane line, R is the starting-point position of the right lane line, and width is the image width:

\text{Lane Warning} = \begin{cases} \text{true}, & \text{if } L > \frac{width}{3} \text{ or } R < \frac{2\,width}{3} \\ \text{false}, & \text{otherwise} \end{cases}    (5)
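Equation (5), together with the one-third and two-thirds thresholds reported in the experimental results section, reduces to a one-line predicate. A minimal sketch, assuming left_x and right_x are the Kalman-filtered start x-positions from the sketch above:

```python
def lane_departure_warning(left_x: float, right_x: float, width: int) -> bool:
    """Equation (5): warn when the left lane line drifts past one third of the
    image width or the right lane line falls below two thirds of it.
    Turn-signal use is assumed to be ruled out by the caller."""
    return left_x > width / 3 or right_x < 2 * width / 3
```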
Front vehicle detection and crash warning

For front vehicle detection, the traffic image and the lane-line detection result of the previous sections are taken as the inputs of the vehicle crash warning. First, vertical and horizontal edge detection is applied to the ROI of the traffic image and HG is conducted. Then, a support vector machine (SVM) verifies the HG result so that the system automatically determines whether there is a vehicle in front; this step is the HV. After the front vehicle is detected, the collision prevention decision determines whether the vehicle ahead is too close; if so, the system issues a warning to remind the driver to maintain an appropriate driving distance. The forward collision warning flowchart is presented in Figure 5.

Figure 5. Forward collision warning flow chart.

Vanishing point detection and lane masks

The lane-line position coordinates and line slopes are obtained with the method presented in the previous sections, i.e., the Hough transform combined with the Kalman filter. Equation (6) then gives the intersection point of the two infinitely extended lines, also known as the vanishing point:

V_y = \frac{I_l\,m_r - I_r\,m_l}{m_r - m_l}    (6)

where I_l and m_l are the intercept and slope of the left lane line, I_r and m_r are the intercept and slope of the right lane line, and V_y is the y-coordinate of the intersection of the two lane lines. The lane mask is established from the two lane-line positions and the vanishing point, which together form a triangular region; the region is searched by scanning from the center of the image toward the left and right sides. In this study, a lane pixel is set to 255 while its edge has not yet been scanned within the ROI, and the remaining lane pixels are set to 0 once the lane edge has been scanned; the result is subsequently used to create the triangular lane mask.
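The vanishing point and lane mask can be sketched as follows, assuming each lane line is written as y = m·x + I and that the base points are the lane-line start coordinates on the lower ROI boundary; cv2.fillConvexPoly stands in for the paper's center-outward scanning as a quick way to rasterize the triangle.

```python
import cv2
import numpy as np

def vanishing_point(i_l, m_l, i_r, m_r):
    """Intersection of y = m_l*x + i_l and y = m_r*x + i_r;
    the y-coordinate is V_y of equation (6)."""
    x = (i_r - i_l) / (m_l - m_r)
    y = (i_l * m_r - i_r * m_l) / (m_r - m_l)
    return x, y

def triangular_lane_mask(img_shape, left_base, right_base, vp):
    """Binary mask: 255 inside the triangle spanned by the two lane-line
    base points and the vanishing point, 0 elsewhere."""
    mask = np.zeros(img_shape[:2], dtype=np.uint8)
    tri = np.array([left_base, right_base, vp], dtype=np.int32)
    cv2.fillConvexPoly(mask, tri, 255)
    return mask
```

The resulting mask is the one used in the AND operations of the day and night foreground-capture steps below.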
Day foreground target capture

As a vehicle is a regularly shaped object, it exhibits strong edge characteristics when driving on the road, and this study uses this feature as the basis for detecting and locating the front vehicle. In general, the rear of a vehicle has obvious horizontal and vertical characteristics. Thus, this study first uses the Sobel edge detection operator to obtain a gray image of the target contour. Then, Otsu's threshold method (Otsu's method) separates the target contour from the background to obtain the foreground target. Sobel edge detection enhances the horizontal and vertical characteristics, as expressed in equation (7):

S(x, y) = \sum_{i=-N_a}^{N_a} \sum_{j=-N_b}^{N_b} w_s(i, j)\, P(x+i,\, y+j)    (7)

where S(x, y) is the Sobel edge detection output image, P is the input image of size f_l × f_w, w_s(i, j) is the filter mask of size m × n, and the center position of the mask moves along the x- and y-axes of the image.

Otsu's method is an adaptive binarization method. Although it takes more time to compute the required statistics, it adapts well: an appropriate threshold can be obtained from the changes in brightness and image background, with the aim of separating the foreground of interest from the background. Histogram statistical analysis assigns pixels with values greater than the threshold to the foreground and pixels with smaller values to the background; the threshold that minimizes the summed within-class variance of the foreground and background groups is taken as the best threshold.

Pavement noise filtering

To eliminate pavement noise, we first observe the changes in the gray values of the pavement. After sampling the road image, we calculate the average grayscale value and use it as the threshold for binarizing the image. Finally, we dilate the binary image to highlight the high-luminance region. The dilation operation used in this study is widely employed in morphology; it can remove noise from an image, bridge gaps, and repair broken image regions. The purpose of using dilation when capturing the daytime foreground target, however, is not to connect or repair broken regions but to expand the high-brightness area so that the pavement noise can be removed.

Finally, the image obtained with Otsu's method and the dilated image undergo conditional image subtraction, as expressed in equation (8), where O is the image binarized with Otsu's method, D is the dilated image, and NRI is the noise-removed output:

NRI(x, y) = \begin{cases} 255, & \text{if } O(x, y) = 255 \text{ and } D(x, y) = 0 \\ 0, & \text{otherwise} \end{cases}    (8)

Hence, the vehicle can be isolated from the road noise, reducing the interference of the road area and ensuring that the obvious edges of the vehicle are easily detected. Figure 6 shows the result of the conditional subtraction of the two images: the subtraction operation filters out nearly all of the pavement noise and leaves only the contour features of the vehicle's tail.

Figure 6. The result of lane noise removed by the subtraction operation.

Finally, this study applies the AND operation with the lane mask image to remove the remaining pavement noise and obtain the contour of the front vehicle in the lane, as shown in Figure 7.

Figure 7. Using the AND operation to obtain the forward vehicle edge contour.
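The daytime pipeline can be condensed into a short sketch under stated assumptions: a Sobel magnitude image instead of separate horizontal and vertical passes, an assumed 5 × 5 dilation kernel, equation (8) written as a NumPy mask, and the lane-mask AND at the end.

```python
import cv2
import numpy as np

def day_foreground(gray, lane_mask):
    """Daytime foreground capture: Sobel -> Otsu (O), mean-threshold +
    dilation (D), conditional subtraction (equation (8)), lane-mask AND."""
    # Horizontal and vertical Sobel responses combined into one edge image.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

    # O: adaptive (Otsu) binarization of the edge image.
    _, O = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # D: binarize with the average pavement gray value, then dilate to grow
    # the high-luminance road region (the 5x5 kernel is an assumption).
    _, road = cv2.threshold(gray, float(gray.mean()), 255, cv2.THRESH_BINARY)
    D = cv2.dilate(road, np.ones((5, 5), np.uint8))

    # Equation (8): keep edge pixels that do not fall on the bright road area.
    nri = np.where((O == 255) & (D == 0), 255, 0).astype(np.uint8)

    # AND with the triangular lane mask to keep only in-lane contours.
    return cv2.bitwise_and(nri, lane_mask)
```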
Night foreground target capture

When the vehicle travels at night, the vehicle edge characteristics cannot be detected by the edge detection methods because the light is not as adequate as in the daytime. However, the most obvious vehicle characteristic at night is the rear lights. Therefore, this study captures the foreground target by detecting the vehicle's rear lights. First, the method described in the section "Region of interest and conversion of color space" is used to establish the ROI image and convert it from the RGB to the HSV color space. Then, because the red rear lights and their red halo are easily identified, value ranges are set for the three HSV components so that only red targets are detected. Subsequently, dilation and erosion are applied to connect broken foreground targets and remove unnecessary noise; in this manner, the red-light blocks are detected. Finally, the AND operation is applied to the detected light blocks and the lane mask generated previously. The detection results of the lights are shown in Figure 8.

Figure 8. Foreground target captured at night.
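A sketch of the night pipeline, assuming OpenCV's 0–179 hue scale and illustrative saturation/value bounds for the red taillights; dilation followed by erosion implements the connect-then-clean step described above.

```python
import cv2
import numpy as np

def night_foreground(frame_bgr, lane_mask):
    """Night foreground capture: red taillight segmentation in HSV,
    morphological clean-up, then AND with the lane mask."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0 on OpenCV's 0-179 scale, so two ranges are
    # combined; the S and V bounds are illustrative assumptions.
    low_red = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    high_red = cv2.inRange(hsv, (170, 100, 100), (179, 255, 255))
    red = cv2.bitwise_or(low_red, high_red)
    # Dilate to connect broken light blobs, then erode to drop small noise.
    kernel = np.ones((5, 5), np.uint8)
    red = cv2.erode(cv2.dilate(red, kernel), kernel)
    return cv2.bitwise_and(red, lane_mask)
```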
Extraction of vehicle edge features

In this study, the edges of the image are scanned using an edge-point scanning approach; the scanned vertical edge points of the output images are shown in Figures 7 and 8. The scanning direction is from the center of the image toward the left and right sides along the x-axis, as shown in Figure 9. First, a vertical array with an initial value of 0 is set, and the red points are assumed to be vertical edge features of the vehicle. When the scan encounters a vertical edge feature and the number of edge points in that column reaches a certain percentage, the array value changes from 0 to 1, and the corresponding x-coordinate is considered an edge of the vehicle.

Figure 9. Scan the vehicle's vertical edges.

However, edge scanning alone is unsatisfactory because of interference from environmental factors. Therefore, this study checks whether the width between the left and right sides of the vehicle lies within a reasonable range. Taking the national road lanes as an example, the width of a vehicle is approximately 0.45–0.75 times the lane width; if the left and right vehicle edge points do not meet this limit, no hypothesis is generated. The reasonable vehicle width is determined using equations (9) and (10):

C_{width} = E_{right} - E_{left}    (9)

0.45\, W_{lane} < C_{width} < 0.75\, W_{lane}    (10)

where E_{right} is the x-coordinate of the right edge of the vehicle, E_{left} is the x-coordinate of the left edge of the vehicle, W_{lane} is the lane width, and C_{width} is the width of the bottom edge of the vehicle obtained by subtracting the two edge coordinates. In general, the aspect ratio of a small car is 1:0.8, so the width of the upper edge of the vehicle region can be obtained by multiplying the bottom edge width by 0.8. After the vehicle image region is obtained in this way, hypothesis verification can be performed on it.

Hypothesis verification (HV)

Support vector machines (SVM) [19–24]. The main idea of the SVM is to identify the best hyperplane, on the basis of the eigenvalues of the classes, in a data set composed of different categories; the data can then be separated, and the margin between the data and the hyperplane is maximized. SVM problems can be divided into three categories: linearly separable, linearly inseparable, and nonlinear. The linearly separable SVM is analyzed as a linear system; for the linearly inseparable and nonlinear cases, the samples in the low-dimensional space are mapped to a high-dimensional space to make them linearly separable, which allows a nonlinear system to be analyzed in the same manner as a linear one. This study uses the SVM as a classifier: after training, it can judge the input and determine whether there is a vehicle in the hypothetical area. This study uses the radial basis function, the kernel most commonly used with the SVM.

Front vehicle collision decision. In this study, a hazardous area is used so that a general user can operate the system on a collision-warning basis. According to the front vehicle detection result of the previous section, when the front vehicle is close, the bottom of the vehicle appears near the bottom of the image; by contrast, when the front vehicle is far away, its bottom edge lies far from the bottom of the screen. This feature yields the decision rule in equation (11):

\text{Car Warning} = \begin{cases} \text{true}, & \text{if } car_y > \frac{height_y - V_y}{2} + V_y \\ \text{false}, & \text{otherwise} \end{cases}    (11)

The system issues a warning when the bottom position of the front vehicle enters the hazardous area, where car_y is the y-coordinate of the bottom of the front vehicle, height_y denotes the image height, and V_y denotes the y-coordinate of the intersection of the left and right lane lines.

Internet of things information security: RSA

The RSA cryptographic algorithm is a public-key encryption system published by Professors Rivest, Shamir, and Adleman of the Massachusetts Institute of Technology in 1978 [11]; it uses integer factorization as the basis for the design of the encryption system. The main RSA encryption and decryption operations are expressed in equations (12) and (13):

C = M^E \bmod N    (12)

M = C^D \bmod N    (13)

where M is the data to be transmitted or the control instruction and C is the encrypted control instruction. Before the encryption operation can be conducted, however, the public and private keys must be generated, as expressed in equations (14)–(17):

1. Arbitrarily select two distinct prime numbers N_{p1} and N_{p2}, and calculate N_{multi}:

N_{multi} = N_{p1} \times N_{p2}, \quad N_{p1} \neq N_{p2}    (14)

where N_{multi} is the product of the two primes.

2. Calculate Euler's totient:

\varphi(N_{multi}) = (N_{p1} - 1)(N_{p2} - 1)    (15)

where \varphi(N_{multi}) is the number of positive integers less than N_{multi} that are coprime to N_{multi}.

3. Randomly choose an integer E less than \varphi(N_{multi}) that satisfies

\gcd(E, \varphi(N_{multi})) = 1    (16)

4. Finally, obtain D from E and \varphi(N_{multi}), where D must satisfy

D \times E \equiv 1 \pmod{\varphi(N_{multi})}    (17)

Following these steps, the RSA algorithm yields a pair of keys: the public key (E, N) and the private key (D, N). The public key (E, N) may be known to both the sender and the receiver, whereas the private key (D, N) is known only to the receiver. Only after the key pairing is complete can the data be encrypted and decrypted.
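To make equations (12)–(17) concrete, here is a toy numeric walk-through (Python 3.8+ for the three-argument pow modular inverse). The tiny primes and message are for illustration only; a real deployment uses primes of hundreds of digits and padded messages.

```python
from math import gcd

N_p1, N_p2 = 61, 53                # two distinct primes, equation (14)
N = N_p1 * N_p2                    # N_multi = 3233
phi = (N_p1 - 1) * (N_p2 - 1)      # Euler's totient, equation (15): 3120

E = 17                             # public exponent with gcd(E, phi) = 1, equation (16)
assert gcd(E, phi) == 1
D = pow(E, -1, phi)                # modular inverse: D*E = 1 (mod phi), equation (17)

M = 1234                           # control instruction encoded as an integer < N
C = pow(M, E, N)                   # encryption, equation (12)
assert pow(C, D, N) == M           # decryption, equation (13), recovers M
print(f"public key ({E}, {N}), private key ({D}, {N}), ciphertext {C}")
```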
Experimental results

Results of lane line detection with the Hough transform

The lane line detection experiments cover four environments: sunny, cloudy, rainy, and night. As shown in Figure 10, Hough-transform lane detection produces good results only on sunny days; in the other environments, the Hough transform is susceptible to environmental factors, which makes the detection results unsatisfactory.

Figure 10. Hough transform lane detection results. (a) Sunny, (b) cloudy, (c) raining, (d) night.

Lane line detection results with the Kalman filter

To mitigate the vulnerability of Hough-transform lane detection to environmental factors, the Kalman filter was added. The experimental results shown in Figure 11 indicate that the Hough transform combined with the Kalman filter detects lane lines better than the Hough transform alone, which can be attributed to the fact that the Kalman filter filters out road noise and predicts positions near the actual lane.

Figure 11. The lane detection results of the Kalman filter. (a) Sunny, (b) cloudy, (c) raining, (d) night.

As shown in Figure 12, the Kalman filter not only performs well in the four environments but also accurately predicts the actual lane-line location when facing other noise sources, improving on the environmental vulnerability of Hough-transform-only lane detection.

Figure 12. Lane detection results of the Kalman filter under different noise. (a) Entering a gateway, (b) rain and wipers, (c) bridge shadow, (d) forward vehicle shadow.

Table 1 shows the number of test samples used for lane line detection. Situations with a vehicle directly in front are excluded, and each category contains 300 frames.

Table 1. Number of test samples for lane line detection.

Test sample category | Number of test samples
Sunny lane samples | 300
Cloudy lane samples | 300
Night lane samples | 300
Raining lane samples | 300

Table 2 shows that the Kalman filter retains its advantages of filtering out noise and predicting the lane line over the Hough transform even in the rainy-day test; unlike the Hough transform, which is vulnerable to the environment, the Kalman filter significantly improves the correctness of the detection results.

Table 2. The correct rate of lane detection.

Test category | Hough transform | Kalman filter
Sunny test | 87.667% | 96.667%
Cloudy test | 88.000% | 96.333%
Night test | 83.333% | 92.233%
Raining test | 74.333% | 91.000%

Results of the lane departure warning

The system determines whether the current vehicle is drifting on the basis of the positions of the left and right lane lines in the image; the advantages of this method are that the judgment is simple and the computational complexity is low. After Kalman-filter lane detection, the x-coordinates of the left and right lane lines are known. When the x-coordinate of the right lane line is less than two thirds of the image width, the current vehicle is shifting to the right; conversely, when the x-coordinate of the left lane line is greater than one third of the image width, the current vehicle is shifting to the left. When the vehicle continues to move to the right or left, the system displays a warning message on the screen of the smart glasses, as shown in Figure 13, to warn the driver to pay attention to the current traffic conditions.

Figure 13. The results of lane departure warning. (a) Sunny, (b) cloudy, (c) raining, (d) night departure warning.
Experimental results of SVM front vehicle identification and collision avoidance warning

In this study, front vehicle identification is based on the LIBSVM open-source machine learning library by C. C. Chang and C. J. Lin [25] for SVM training and prediction. To increase the adaptability and robustness of the SVM prediction, this study uses vehicle and non-vehicle images of Taiwanese streets taken during the day and at night. Harris corner detection is applied to all images to obtain the eigenvectors, and the training samples are normalized to 50 × 45 pixels. Figure 14 shows the vehicle training samples used in this study; they are divided into daytime and nighttime sets, with positive samples containing vehicle images and negative samples containing non-vehicle images. Table 3 lists the numbers of positive and negative samples.

Figure 14. Vehicle training samples.

Table 3. Number of vehicle training samples.

Training sample category | Number of training samples
Sunny car samples | 1000
Sunny no-car samples | 500
Night car samples | 1000
Night no-car samples | 500

The SVM type used in this study is C-support vector classification, the kernel function is the radial basis function, the penalty parameter C is set to 2, and the kernel parameter γ is set to 0.000488. The prediction results on the test sample sets are shown in Table 4, with a daytime identification accuracy of 92.258% and a nighttime identification accuracy of 95.598%.

Table 4. The correct rate of SVM prediction.

Test category | Correct rate
Sunny test | 92.258%
Night test | 95.598%

The system decides whether the front vehicle is close from the y-coordinate of the vehicle in the image. When the SVM identifies a vehicle in front, the system checks the y-coordinate of the vehicle's bottom edge: when the front vehicle is far away, this y-coordinate is small, and if it keeps increasing, the front vehicle is getting close. The system then displays a warning message on the screen to remind the driver to maintain an appropriate driving distance. Figure 15 shows the daytime and nighttime forward collision warning results.

Figure 15. The results of forward collision warning. (a) Daytime warning message and (b) warning message at night.
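For reference, a minimal training sketch with the stated settings. The paper uses LIBSVM directly; scikit-learn's SVC, a LIBSVM wrapper, is substituted here, and the Harris-corner feature extractor is a stub, since the paper does not spell out its exact descriptor layout.

```python
import cv2
import numpy as np
from sklearn.svm import SVC  # scikit-learn's SVC wraps LIBSVM

def harris_features(patch_gray):
    """Stub descriptor: Harris corner response of a 50x45 patch, flattened.
    The paper's actual eigenvector layout is an assumption here."""
    patch = cv2.resize(patch_gray, (50, 45))  # normalize to 50 x 45 pixels
    response = cv2.cornerHarris(np.float32(patch), blockSize=2, ksize=3, k=0.04)
    return response.flatten()

# X: stacked feature vectors; y: 1 = vehicle, 0 = non-vehicle, e.g.
#   X = np.stack([harris_features(p) for p in patches]); y = labels
clf = SVC(kernel="rbf", C=2, gamma=0.000488)  # C-SVC, RBF kernel, paper's parameters
# clf.fit(X, y)
# is_vehicle = clf.predict(harris_features(candidate).reshape(1, -1))[0] == 1
```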
Test results of the onboard Internet of things

This study develops an app on Android. When the program starts, the owner must first enable the Bluetooth connection to the Arduino so that owner identification can be performed. After successful identification, the owner obtains the authority to control the vehicle, connects to the server to access the vehicle assistance systems, and obtains the current vehicle information. Figure 16 shows the system successfully identifying the owner, after which the app interface informs the owner that the vehicle has been unlocked.

Figure 16. The door has been unlocked.

The blind spot detection system helps the driver overcome blind spots in the field of vision; the detection range on the rear side is 3 m. The system can prevent accidents caused by vehicles in the rear blind spots that the driver cannot see when changing lanes. Figure 17 shows the simulation used to help the driver detect blind spots at the rear of the car; the smart-glasses screen reminds the driver to pay attention to the rear of the car.

Figure 17. The results of the simulated rear-side detection.

Conclusion

In this study, an IoV system with advanced driver assistance functions is designed on the basis of the AR concept of augmenting reality with information. The system can be divided into three parts. The first part is lane departure warning: the Hough transform identifies the location of the lane lines within the image ROI, the Kalman filter is added to overcome the Hough transform's vulnerability to environmental factors when identifying the actual lane lines, and the lane departure decision determines whether the vehicle is drifting. The experimental results show that the accuracy of the Kalman-filter-based detection is above 90%.

The second part is the anti-collision warning. In daytime vehicle detection, the Sobel edge detection result and the lane mask generated by lane-line detection are combined with the AND operation; the unnecessary noise in the image is filtered out, and the vehicle's hypothetical area is determined by scanning the vertical edges. Nighttime front vehicle detection is achieved by detecting the red rear-light area of the front car and filtering noise with the same lane mask to obtain the vehicle's hypothetical area. Finally, Harris corner detection is applied to the hypothetical area to obtain the vehicle's characteristic parameters, and the SVM identification model trained with a large number of samples identifies the vehicle. The prediction accuracy is 92.258% on the daytime vehicle test set and 95.598% on the nighttime vehicle test set.

The last part is the IoV. This study adds the RSA algorithm to enhance information security on the Internet, using public- and private-key pairing to identify the owner. After obtaining the authority to control the vehicle, the owner can connect to the server side and access the various sensors for vehicle information from the smart glasses. Finally, this study implements the image recognition part on an industrial computer and uses an Arduino Yun to establish the Internet of things connection; through network transmission, the image identification results and vehicle information are displayed on the smart glasses, achieving the goal of an AR-enabled Internet of things.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iD

Pi-Yun Chen http://orcid.org/0000-0002-1460-7116

References

1. Intelligent Transportation Society of America, http://www.itsa.org/, 2018.
2. Chellappa R, Qian G and Zheng Q. Vehicle detection and tracking using acoustic and video sensors. In: Proceedings of the IEEE international conference on acoustics, speech, and signal processing, 2004, vol. 3, pp. 793–796.
3. He J, Rong H, Gong J, et al. A lane detection method for lane departure warning system. In: Proceedings of the 2010 international conference on optoelectronics and image processing, 2011, pp. 28–31.
4. Sun Z, Bebis G and Miller R. On-road vehicle detection: a review. IEEE Trans Pattern Anal Mach Intell 2006; 28: 694–711.
5. Chang HY, Fu CM and Huang CL. Real-time vision-based preceding vehicle tracking and recognition. In: Proceedings of the IEEE intelligent vehicles symposium, 2005, pp. 514–519.
6. Sun Z, Miller R, Bebis G, et al. A real-time precrash vehicle detection system. In: Proceedings of the IEEE workshop on applications of computer vision, 2002, pp. 171–176.
7. Parodi P and Piccioli G. A feature-based recognition scheme for traffic scenes. In: Proceedings of the IEEE intelligent vehicles symposium, 1995, pp. 229–234.
8. Sun Z, Bebis G and Miller R. Monocular precrash vehicle detection: features and classifiers. IEEE Trans Image Process 2006; 15: 2019–2034.
9. Vapnik VN. An overview of statistical learning theory. IEEE Trans Neural Netw 1999; 10: 988–999.
10. Cristianini N and Shawe-Taylor J. An introduction to support vector machines and other kernel-based learning methods. Cambridge, UK: Cambridge University Press, 2000.
11. Rivest RL, Shamir A and Adleman L. A method for obtaining digital signatures and public-key cryptosystems. Commun ACM 1978; 21: 120–126.
12. Ludeman LC. Random processes: filtering, estimation, and detection. New York: John Wiley, 2003.
13. Lim KH, Seng KP, Ang LM, et al. Lane detection and Kalman-based linear-parabolic lane tracking. In: Proceedings of the international conference on intelligent human-machine systems and cybernetics, 2009, vol. 2, pp. 351–354.
14. Cuevas E, Zaldivar D and Rojas R. Kalman filter for vision tracking. Technical Report B 05-12, Freie Universität Berlin, Berlin, 2005.
15. Li P, Huang Y and Yao K. Multi-algorithm fusion of RGB and HSV color spaces for image enhancement. In: Proceedings of the 37th Chinese control conference (CCC), 2018, pp. 9584–9589.
16. Chen JG. Lane departure and forward collision warning systems based on video processing technology for handheld devices. Master's thesis, Department of Electronic Engineering, National Kaohsiung University of Applied Sciences, 2012.
17. Chen HW. Vision-based all-day vehicle detection using embedded system. Master's thesis, Department of Electrical Engineering, National Chin-Yi University of Technology, 2010.
18. Chen JQ. Lane-based front vehicle detection and its acceleration. Master's thesis, Department of Computer Science and Engineering, National Sun Yat-sen University, 2013.
19. Wu JK. Design and application of specific person tracking and posture recognition. Master's thesis, Department of Electrical Engineering, National Chin-Yi University of Technology, 2015.
20. Rifkin R and Klautau A. In defense of one-vs-all classification. J Mach Learn Res 2004; 5: 101–141.
21. Kressel UHG. Pairwise classification and support vector machines. In: Advances in kernel methods: support vector learning. Cambridge, MA: MIT Press, 1999, pp. 255–268.
22. Cortes C and Vapnik V. Support-vector networks. Mach Learn 1995; 20: 273–297.
23. Boser BE, Guyon IM and Vapnik VN. A training algorithm for optimal margin classifiers. In: Proceedings of the fifth annual workshop on computational learning theory, 1992, pp. 144–152.
24. Schölkopf B and Smola AJ. Learning with kernels: support vector machines, regularization, optimization, and beyond. Cambridge, MA: MIT Press, 2001.
25. Chang CC and Lin CJ. LIBSVM: a library for support vector machines, http://www.csie.ntu.edu.tw/~cjlin/libsvm (accessed 4 March 2013).

Realization of Internet of vehicles technology integrated into an augmented reality system:

Loading next page...
 
/lp/sage/realization-of-internet-of-vehicles-technology-integrated-into-an-1KoWkx3aNS

References (26)

Publisher
SAGE
Copyright
Copyright © 2022 by SAGE Publications Ltd unless otherwise noted. Manuscript content on this site is licensed under Creative Commons Licenses
ISSN
0263-0923
eISSN
2048-4046
DOI
10.1177/1461348419835054
Publisher site
See Article on Publisher Site

Abstract

The goal of this study is to develop an internet of vehicles system with augmented reality technology. The system deals mainly with three subjects, namely, lane departure warning, forward collision detection and warning, and internet of vehicles. First, to deal with the subject of lane departure warning, the Hough transform is used in this study to extract the possible positions of lane lines from the region of interest of an image. The Kalman filter is further employed to remove noises and estimate the actual positions of car lane lines. The lane departure decision is then used to determine whether a lane departure situation occurs. Second, the Sobel edge detector and taillight detection method are used to locate the hypothetical region of the vehicle. The characteristic parameters within the hypothetical region can also be obtained through the Harris corner detection method. To verify the hypothetical region and identify the vehicle, the support vector machine algorithm is used. The collision decision is then applied to determine whether the distance between two vehicles is short, thus fulfilling the goal of forward collision detection and warning. In addition, a secure and easy-to-use internet of vehicles is achieved with the use of the Rivest–Shamir–Adleman encryption algorithm, which uses public and secret keys to encrypt and decrypt messages to achieve the task of user identification. Upon obtaining control of the vehicle, the driver has full access to the most up-to-date information provided by the driver assistance system. Finally, internet of vehicles applications incorporating the previously mentioned methods, smart glasses, and augmented reality are implemented in this study. Smart glasses provide the drivers easy access to information about the vehicle and warnings, which helps enhance driver convenience and safety considerably. Keywords Augmented reality, Internet of vehicles, Hough transform, Kalman filter, Sobel edge detector, Harris corner detector, support vector machine, Rivest–Shamir–Adleman algorithm Introduction In recent years, industrial giants, such as Facebook, Microsoft, Samsung, and Google, have launched a new revolution in visual technology. In 2016, the major bright spot is the expansion of the augmented and virtual realities of science and technology. The numbers of participants using devices and the developers of software in this revolution are also increasing. Augmented reality (AR) is the combination of computer and real-world information; thus, users can obtain relevant information at the right place and time. Virtual reality (VR) is the ideal artificial environment on a computer wherein a virtual environment is created in a seemingly real or physical manner. In the VR, the operator can interact with the controller in the virtual environment. Meanwhile, in the Department of Electrical Engineering, National Chin-Yi University of Technology, Taiping Dist, Taichung, Taiwan Corresponding author: Pi-Yun Chen, Department of Electrical Engineering, National Chin-Yi University of Technology, No. 57, Sec 2, Zhongshan Rd, Taiping Dist, Taichung 41170, Taiwan. Email: chenby@ncut.edu.tw Creative Commons CC BY: This article is distributed under the terms of the Creative Commons Attribution 4.0 License (http://www. 
creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage). Pai et al. 705 Figure 1. Advanced driver assistance system. AR, the operator can interact with the real environment and expand information. Until now, AR and VR still lack the immersive feeling, but developers are not bothered by this as the purpose of AR and VR is to provide information. The intelligent transportation system (ITS) coordinates and integrates advanced electronic, computer, com- munication, and control technologies into the transportation system. The reliability and characteristics of the system have been tested in various areas at different levels, enhancing its safety and efficiency in instantly and neatly solving the problems often encountered in traffic. The ITS can be subdivided into five directions, namely, advanced traffic management system, advanced traveler information system, advanced vehicle control and safety system (AVCSS), advance public transportation system, and commercial vehicle operation. This study takes the AVCSS as an example, where the system covers the lane departure warning system and front-mounted detector of the anti-collision warning system, which is called the advanced driver assistance system (ADAS), as shown in Figure 1. The lane departure warning is an important function of the ADAS. This study uses a camera to capture the driving image and sets the image region of interest (ROI). The edge detection method determines the ROI and edge characteristics or uses scan lines to scan the lane line characteristics within the set ROI range to avoid driving in an offset lane situation under unconscious conditions. The lane line can be divided into two types, namely, 2,3 4 straight line and curve, indicating that the lane line can use the left and right lane information to determine whether the vehicle may or may not offset to issue a warning. For the front vehicle detection system, a previous study collected and classified the vehicle detection system based on image processing over the years and proposed the framework for the vehicle detection system. Since then, many studies have been conducted on the basis of this framework. Front vehicle detection can be divided into two phases, namely, hypothesis generation (HG) and hypothesis verification (HV). HG is used to identify the possible target vehicle, whereas HV is used to verify the correctness of the target. Two methods summarized in the literature, namely, template-based and appearance-based prediction methods, are also used. The template-based 6,7 prediction method is used for vehicles with obvious characteristics, such as the clear horizontal and vertical edges of the vehicle, the shadow at the bottom of the vehicle, and the U-shaped features formed between bottom and rear of the vehicle. This method has the advantages of quick detection of the target and low computation amount; however, it is susceptible to interference from surrounding environment and obstacles. The appearance- 8–10 based prediction method involves learning through a machine to determine whether the front vehicle is in hypothetical area. This method trains a classifier with a large number of training materials. The training materials contain positive samples for vehicle images and negative samples for non-vehicle images. 
In recent years, as the Internet of things (IoT) has just started to develop its communication transmission security and identity authentication, it has become an easy target for hacking because of its simplicity as compared with the ordinary computer. Therefore, to prevent this situation, this study also utilized the Rivest–Shamir– Adleman (RSA) encryption algorithm. In this application, before the owner gets into his/her car and receives 706 Journal of Low Frequency Noise, Vibration and Active Control 39(3) Smart Glasses Side Start Vehicle Control Side Scan the Encrypted Decrypted the Instruction Instruction If decrypted If decrypted N successfully successfully? Identify Error Send the Obtain the Encrypted Control Access Instruction Log in Server Obtain the Lane Departure Forward Collision Sensor Data Warning System Warning System End Figure 2. System architecture. vehicle information, he/she needs to scan the vehicle identification (ID). Then, the RSA encryption and decryption methods will pair the control command. After successful matching, the owner can obtain full authority to control the vehicle. System structure The system architecture can be divided into three main parts, namely, lane detection collision prevention warning, front vehicle detection anti-collision warning, and onboard Internet of Vehicles (IoV). Figure 2 shows the flow- chart of the use of smart glasses in the onboard IoV system to scan the vehicle’s ID and identify the owner. When the owner has been identified, the owner can log in to the server, use different vehicle functions, such as lane departure detection warning system and front vehicle detection anti-collision warning system, and access the current vehicle’s information. Lane departure detection warning In the beginning of this study, the range of the search route, which is either the full screen or the local picture of interest, is set and the information on the color space in the ROI is transformed into HSV from RGB through grayscale image processing. After grayscale image processing, the edge detection method, which can reduce the computational complexity, is used to determine the characteristics of the lane line. However, in this study, after determining the characteristics of the lane line, Hough transform is used to identify the starting point of the left Pai et al. 707 Region of Image Pre- Edge Image Input Interest processing Detection Hough Kalman Departure Warning Transform Filter Decision Figure 3. Lane departure warning flow chart. Figure 4. The detection results of left and right lane edge. and right lane lines. After recording these two points, the information of these coordinates is inputted into the 12–14 Kalman filter to conduct the test to determine the actual lane line position. Finally, the lane departure decision is used in this study to determine whether the vehicle is offset or not. The lane line offset warning process is shown in Figure 3. Region of interest and conversion of color space The purpose of defining the image region of interest (ROI) is to reduce the amount of system operations and improve the efficiency of lane detection. In this paper, the horizon in the input image is used as the upper boundary of the ROI because the area above the horizon is the sky which represents the non-road region. In reality, the vehicles will not appear here. The lower boundary is measured for about five meters away from the vehicle in front. 
In addition to traffic congestion, under the normal driving condition, the vehicle in front will not appear in this area. At present, most of the images are using RGB color space. It is easily affected by the light source when the input image is RGB color space. Therefore, this paper has needed to convert RGB images to HSV color space when processing the images. In HSV color space, H (Hue) represents hue, S (Saturation) represents saturation, and V (Value) represents brightness. The HSV has separated the hue, saturation, and brightness which are the three information of the image. So, when processing the images, it will have better performance when there is interference with background noise. Any pixel in the RGB image corresponds to the Hp value of HSV [15], and its formula is as follows: (1): 8 9 < ½ ðÞ ðÞ = 0:5 RG þ RB H ¼ cos qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi (1) : ; ðÞ ðÞðÞ RG þ RB GB H if BG H ¼ (2) 360H ; otherwise p 708 Journal of Low Frequency Noise, Vibration and Active Control 39(3) Vertical and Hypothesis Hypothesis Horizontal Image Input Generation Verification Edge Detection Collision Warning Decision N Y Figure 5. Forward collision warning flow chart. where R,G, and B represents the RGB values of the pixels, and the actual H value can be adjusted by comparing the size of B and G using equation (2). When B is less than or equal to G, H is equal to the Hp value; otherwise, when B is greater than G, H value is equal to 360 minus the Hp value of equation (3). MaxðÞ R; G; B MinðÞ R; G; B S ¼ (3) MaxðÞ R; G; B MaxðÞ R; G; B V ¼ (4) Then, the values of S and V in HSV are calculated by using equations (3) and (4), where Max is the maximum value of the three values of R, G, and B, while Min is the minimum value of the three values of R, G, and B. After obtaining the H, S, V values of the HSV color space, this paper used V (Brightness), which is the gray scale image, to perform the subsequent image processing. Thus, it can reduce the calculation of the image dimension and also improve the efficiency of the system operation. Edge detection In the general highway driving screen, the difference between left and right sides of the lane line in the image is approximately 25 . Nevertheless, this study applies the oblique edge detection operator at 45 and 135 to conduct the edge detection operation on the left and right of the image ROI. The edge detection operator is shown in Figure 4. Several advantages of the oblique edge operator are that the structure is simple and the two pixels in the mask can be subtracted to obtain the edge point, which saves time for image calculation. Figure 4(a) shows the detection result of the right lane line using the 45 edge operator, whereas Figure 4(b) shows the detection result of the left lane line using the 135 edge operator. Lane departure decision In the lane departure decision, this study ruled out the situation in which a signal light is used. When the vehicle is not going in a straight direction while on the road but continues to shift slowly toward the left or right, this action is considered an occurrence of lane departure. This situation will affect not only the rear vehicle’s driver but also the driver him/herself and even endanger life. In this study, the departure decision is expressed in equation (1), where L is the starting point position of the left lane line, R is the starting point position of the right lane line, and width is the image width. 
width > true; if L > Lane Warning ¼ (5) width if R < 2 > 3 false; otherwise Pai et al. 709 Front vehicle detection and crash warning When the front vehicle is detected, the result of the vehicle traffic image taken in the third chapter and the detection of the lane line are taken as the inputs of vehicle crash warning. First, the ROI of the vehicle traffic image is detected vertically and horizontally and HG is conducted. Then, the support vector machine (SVM) is used to verify the HG part, so that the system automatically determines whether there is a vehicle in front or not. This step is called the HV. After detecting the vehicle in front, the collision prevention decision is made to determine whether there is a situation in which the vehicle is close. If such a situation occurs, then the system will issue a warning to remind the driver to maintain the appropriate driving distance. The front vehicle collision warning flowchart is presented in Figure 5. Vanishing point detection and lane masks In this paper, the lane line position coordinate and the slope of the straight line can be obtained through the calculation method presented in the previous chapter, i.e., Hough transform combined with the Kalman filter. Then, equation (6) is used to identify the intersection point of the two lines of infinite extension, which is also known as the vanishing point I m  I m l r r l V ¼ (6) I  I l r where I and m represent the intercept and slope of the left lane, respectively, I and m represent the intercept and l l r r slope of the left lane, respectively, and V is the Y coordinate of the intersection of two lanes. The lane mask is established to identify the two lane line positions and vanishing point to form a triangular region, which is searched by scanning from the center of the image to the left and right sides. In this study, the lane pixel is set to 255 for the edge that has not yet been scanned in the ROI. The remaining lane pixel is set to 0 after the lane edge is scanned, which will be subsequently used to create a triangular lane mask. Day foreground target capture As the vehicle is a regular-shaped object, it has a strong edge characteristic when driving on the road. This study uses this feature as a basis for the detection of the front vehicle and location. In general, the rear of the vehicle has obvious horizontal and vertical characteristics. Thus, this study first uses the Sobel edge detection operator to obtain the gray contour image of the target contour. Then, Otsu’s threshold method (also called Otsu’s method) is used to separate the target contour from the background to obtain the foreground goal. Sobel edge detection is used to enhance the horizontal and vertical characteristics, as expressed in equation (7) N N a b X X Sx; y ¼ wðÞ i; j PxðÞ þ i; y þ j (7) ðÞ s i¼N j¼N a b where Sx; y is the image of the Sobel edge detection output, P is the image of the size f  f , w ði; jÞ is the filter ðÞ l w s mask of the size m  n, and the center position of the mask moves with the x- and y-axes of the image. Otsu’s method is an adaptive binary method. Although this method takes more time to calculate and generate statistics, this method has good adaptability. The appropriate threshold can be obtained on the basis of the changes in the brightness and image background. The aim is to separate the background from the foreground of the image to achieve the goal of our interest. 
Day foreground target capture

As a vehicle is a regularly shaped object, it exhibits strong edge characteristics when driving on the road. This study uses this feature as the basis for detecting and locating the front vehicle. In general, the rear of a vehicle has obvious horizontal and vertical features. Thus, this study first uses the Sobel edge detection operator to obtain a gray image of the target contour. Then, Otsu's threshold method (also called Otsu's method) is used to separate the target contour from the background and obtain the foreground target. Sobel edge detection enhances the horizontal and vertical characteristics, as expressed in equation (7):

S(x, y) = Σ_{i=-N_a}^{N_a} Σ_{j=-N_b}^{N_b} w_s(i, j)·P(x + i, y + j)   (7)

where S(x, y) is the Sobel edge detection output image, P is the input image of size f_l × f_w, w_s(i, j) is the filter mask of size m × n, and the center position of the mask moves along the x- and y-axes of the image. Otsu's method is an adaptive binarization method. Although it takes more time to compute the required statistics, it has good adaptability: an appropriate threshold can be obtained on the basis of changes in the brightness and image background, separating the background from the foreground so that the target of interest is isolated. Histogram statistical analysis assigns pixels with values greater than the threshold to the foreground and pixels with values less than the threshold to the background; the threshold that minimizes the combined within-class variance of the two groups is taken as the optimal threshold.

Pavement noise filtering

To eliminate pavement noise, we first observe the variation in the gray values of the pavement. After sampling the road image, we calculate the average grayscale value and use it as the threshold for binarization. Finally, we dilate the binary image to highlight the high-luminance region. The dilation (expansion) operation used in this study is widely employed in morphology; it can remove noise from an image, bridge gaps, and repair broken image regions. The purpose of using the dilation operation when capturing the daytime foreground target, however, is not to connect or repair broken regions but to expand the high-brightness area so that pavement noise can be removed.

Finally, the images obtained using Otsu's method and the dilation operation are subjected to conditional image subtraction, as presented in equation (8), where O(x, y) is the Otsu binary image, D(x, y) is the dilated pavement image, and NRI(x, y) is the noise-removed result:

NRI(x, y) = 255, if O(x, y) == 255 && D(x, y) == 0; NRI(x, y) = 0, otherwise   (8)

Hence, the vehicle can be isolated from the noise on the road, which reduces the interference of the road area and ensures that the obvious edges of the vehicle are easily detected. Figure 6 shows the result of the conditional subtraction of the two images: the operation filters out nearly all of the pavement noise and leaves only the contour features of the vehicle tail.

Figure 6. The result of removing lane noise by the subtraction operation.

Finally, this study applies the AND operation with the lane mask image to remove the remaining pavement noise and obtain the contour of the front vehicle in the lane, as shown in Figure 7.

Figure 7. Using the AND operation to obtain the forward vehicle edge contour.

Night foreground target capture

When the vehicle travels at night, the vehicle edge characteristics cannot be detected by the edge detection methods because the lighting is not as adequate as in the daytime. However, the most obvious vehicle characteristic at night is the rear lights. Therefore, this study uses a method that detects the vehicle's rear lights to capture the foreground target. First, the method described in the Region of interest and conversion of color space section is used to establish the ROI image and convert its RGB color space to the HSV color space. Then, the red rear lights and the red halo they produce are used as easily identified characteristics, and value ranges are set for the three HSV components so that only red targets are detected. Subsequently, dilation and erosion are applied to connect broken foreground targets and remove unnecessary noise. In this manner, the red-light blocks can be detected. Finally, the AND operation is conducted between the detected light blocks and the lane mask generated from the lane lines.
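The night-time pipeline maps naturally onto a few OpenCV calls. A sketch under stated assumptions: the HSV ranges for the red taillights and their halo are illustrative, since the paper does not report its component values, and OpenCV stores hue in [0, 180):

```python
import cv2
import numpy as np

def night_foreground(roi_bgr, lane_mask):
    """Capture the night foreground: threshold red in HSV, apply
    dilation then erosion, and AND the result with the lane mask."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0, so two ranges are combined (assumed values).
    low = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))
    high = cv2.inRange(hsv, (170, 80, 80), (180, 255, 255))
    red = cv2.bitwise_or(low, high)
    # Dilate to connect broken light blobs, then erode residual noise.
    kernel = np.ones((5, 5), np.uint8)
    red = cv2.erode(cv2.dilate(red, kernel), kernel)
    # Keep only the light blocks that fall inside the triangular lane mask.
    return cv2.bitwise_and(red, lane_mask)
```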
The detection results of the lights are shown in Figure 8.

Figure 8. Foreground target captured at night.

Extraction of vehicle edge features

In this study, the edges of the image are scanned using an edge-point scanning approach; the scanned vertical edge points of the output images are shown in Figures 7 and 8. The scanning direction is from the center of the image toward the left and right sides along the x-axis, as shown in Figure 9. First, a vertical array with initial values of 0 is created, and the red dots are taken to be the vertical edge features of the vehicle. When the scan encounters a vertical edge feature, the corresponding array value changes from 0 to 1. When the proportion of 1s in a column reaches a certain percentage, the x-axis coordinate of that column is considered an edge of the vehicle.

Figure 9. Scanning the vehicle's vertical edges.

However, the scanning result is unsatisfactory when environmental factors interfere with the edges. Therefore, this study further checks whether the width between the left and right edges of the vehicle falls within a reasonable range. Taking the national road lane as an example, the width of a vehicle is approximately 0.45–0.75 times the width of the lane. Thus, if the left and right vehicle edge points do not satisfy this constraint, no vehicle hypothesis is made. The reasonable vehicle width is determined using equations (9) and (10):

C_width = E_right - E_left   (9)

0.45·W_lane < C_width < 0.75·W_lane   (10)

where E_right is the x-axis coordinate of the right edge of the vehicle, E_left is the x-axis coordinate of the left edge of the vehicle, W_lane is the lane width, and C_width is the width of the bottom edge of the vehicle, obtained by subtracting the left edge coordinate from the right one. In general, the aspect ratio of a small car is 1:0.8, so the width of the upper edge of the vehicle can be obtained by multiplying the bottom edge width by 0.8. After the vehicle image region is obtained in this way, the HV region is determined.
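The column scan and the width plausibility test of equations (9) and (10) might look like the following sketch; the fraction of edge pixels needed for a column to count as a vehicle edge ('ratio') is an assumed parameter, as the paper does not state its value:

```python
import numpy as np

def find_vehicle_edges(edge_img, lane_width, ratio=0.2):
    """Scan a binary contour image (255 = edge pixel) from the centre
    outwards and return (E_left, E_right) if the width passes the
    0.45-0.75 lane-width test of equation (10), else None."""
    h, w = edge_img.shape
    centre = w // 2
    # Mark columns whose share of edge pixels reaches the threshold.
    col_hit = (edge_img == 255).sum(axis=0) / h >= ratio

    e_left = next((x for x in range(centre, -1, -1) if col_hit[x]), None)
    e_right = next((x for x in range(centre, w) if col_hit[x]), None)
    if e_left is None or e_right is None:
        return None
    c_width = e_right - e_left                           # equation (9)
    if 0.45 * lane_width < c_width < 0.75 * lane_width:  # equation (10)
        return e_left, e_right
    return None  # implausible width: no vehicle hypothesis
```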
Hypothesis verification (HV)

Support vector machines (SVM) [19–24]. The main idea of the SVM is to identify the best hyperplane, on the basis of the eigenvalues of the classes, in a data set composed of different categories. In this manner, different data can be separated while the margin between the data and the separating hyperplane is maximized. SVMs can be divided into three categories, namely, linearly separable, linearly inseparable, and nonlinear. The linearly separable SVM is analyzed as a linear system. For the linearly inseparable and nonlinear SVMs, the samples in the low-dimensional space are mapped to a high-dimensional space to make them linearly separable, which allows a nonlinear system to be analyzed in the same manner as a linear one. This study uses the SVM as a classifier: after training through machine learning, the SVM can classify the input and determine whether there is a vehicle in the hypothetical area. This study uses the radial basis function, the kernel most commonly used with SVMs.

Front vehicle collision decision. In this study, a hazardous area is defined so that a general user can operate the system on a collision warning basis. According to the front vehicle detection result of the previous section, when the front vehicle is close, the bottom of that vehicle appears near the bottom of the image. By contrast, when the front vehicle is far away, the position of its bottom edge is close to the vanishing point, higher in the image. On the basis of this feature, decision rule (11) is established:

Car Warning = true, if car_y > (height - V_y)/2 + V_y; false, otherwise   (11)

The system issues a warning when the bottom position of the front vehicle falls in the hazardous area, where car_y is the y-axis coordinate of the bottom position of the front vehicle, height denotes the image height, and V_y denotes the y-axis coordinate of the intersection of the left and right lane lines.

Internet of things information security

RSA

The RSA cryptographic algorithm is a public-key encryption system published by Professors Rivest, Shamir, and Adleman of the Massachusetts Institute of Technology in 1978; the system uses integer factorization as the basis for the design of its encryption scheme. The main RSA encryption and decryption operations are expressed in equations (12) and (13):

C = M^E mod N   (12)

M = C^D mod N   (13)

where M is the data to be transmitted or the control instruction and C is the encrypted control instruction. Before the encryption operation can be conducted, however, the public and private keys must be generated, as expressed in equations (14)–(17):

1. Arbitrarily select prime numbers N_p1 and N_p2 and calculate the value N_multi:

N_multi = N_p1 × N_p2, N_p1 ≠ N_p2   (14)

where N_multi is the product of the two primes, which must not be equal.

2. Calculate Euler's totient:

φ(N_multi) = (N_p1 - 1)(N_p2 - 1)   (15)

where φ(N_multi) is the number of positive integers less than N_multi that are coprime to N_multi.

3. Randomly find an integer E smaller than φ(N_multi) that satisfies the condition

gcd(E, φ(N_multi)) = 1   (16)

4. Finally, obtain D from E and φ(N_multi), where D must satisfy

D·E ≡ 1 (mod φ(N_multi))   (17)

According to the steps presented above, the RSA algorithm yields a pair of keys, where the public key is (E, N) and the private key is (D, N). The public key (E, N) may be known to both the sender and the receiver, whereas the private key (D, N) is known only to the receiver. Only after the key pair has been generated can the data be encrypted and decrypted.
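A toy run of equations (12)–(17) with deliberately small primes; a real IoV deployment would rely on a vetted cryptographic library and primes hundreds of digits long:

```python
from math import gcd

def rsa_keypair(n_p1, n_p2, e=17):
    """Generate an RSA key pair following equations (14)-(17)."""
    n_multi = n_p1 * n_p2                 # equation (14)
    phi = (n_p1 - 1) * (n_p2 - 1)         # equation (15)
    while gcd(e, phi) != 1:               # equation (16)
        e += 2
    d = pow(e, -1, phi)                   # equation (17): D*E = 1 (mod phi)
    return (e, n_multi), (d, n_multi)     # public key, private key

public, private = rsa_keypair(61, 53)
m = 42                                      # a control instruction, say
c = pow(m, public[0], public[1])            # equation (12): C = M^E mod N
assert pow(c, private[0], private[1]) == m  # equation (13): M = C^D mod N
```

(`pow(e, -1, phi)` computes the modular inverse and requires Python 3.8 or later.)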
Experimental results

Results of lane line detection through the Hough transform

In this study, the lane line detection experiments cover four different environments, namely, sunny, cloudy, rainy, and night. As shown in Figure 10, Hough transform lane line detection yields good results only on sunny days; in the other environments, the Hough transform is susceptible to environmental factors, which makes the detection results unsatisfactory.

Figure 10. Hough transform lane detection results. (a) Sunny, (b) Cloudy, (c) Raining, (d) Night.

Kalman filter lane line detection results

To reduce the vulnerability of Hough transform lane line detection to environmental factors, the Kalman filter was added. The experimental results shown in Figure 11 indicate that using the Hough transform plus the Kalman filter for lane line detection provides better results than using the Hough transform alone. This finding can be attributed to the fact that the Kalman filter filters out the noise on the road and predicts a location near the actual lane.

Figure 11. Lane detection results of the Kalman filter. (a) Sunny, (b) Cloudy, (c) Raining, (d) Night.

As shown in Figure 12, the Kalman filter not only performs well in the four standard cases but can also accurately predict the actual location of the lane line in the presence of other noise sources. This performance remedies the vulnerability to environmental factors that affected lane line detection with the Hough transform alone.

Figure 12. Lane detection results of the Kalman filter under different noise. (a) Detection result when the car enters a gateway, (b) Detection result with rain and wipers, (c) Detection result with bridge shadow noise, (d) Detection result with a forward vehicle shadow.

Table 1 shows the number of test samples used for lane line detection; frames in which a vehicle appears directly in front are excluded, and the number of test samples is set to 300 frames per environment.

Table 1. Number of test samples for lane line detection.

Test sample category      Number of test samples
Sunny lane samples        300
Cloudy lane samples       300
Night lane samples        300
Raining lane samples      300

Table 2 shows that, owing to its ability to filter out noise and predict the lane line, the Kalman filter outperforms the Hough transform even in the rainy-day test. Unlike the Hough transform, which is vulnerable to environmental effects, the Kalman filter significantly improves the correctness of the detection results.

Table 2. The correct rate of lane detection.

Test category      Hough transform      Kalman filter
Sunny test         87.667%              96.667%
Cloudy test        88.000%              96.333%
Night test         83.333%              92.233%
Raining test       74.333%              91.000%

Results of the lane departure warning

This system determines whether the current vehicle is drifting on the basis of the positions of the left and right lane lines in the image. The advantages of this method are that the decision is easy to make and the computational complexity of the system is reduced. After the image passes through Kalman filter lane detection, the x-direction coordinates of the left and right lane lines are known. When the x-axis coordinate of the right lane line is less than two thirds of the image width, the current vehicle is shifting to the right. Conversely, when the x-axis coordinate of the left lane line is greater than one third of the image width, the current vehicle is shifting to the left. When the vehicle continues to move to the right or left, the system displays a warning message on the screen of the smart glasses, as shown in Figure 13, to warn the driver to pay attention to the current traffic conditions.

Figure 13. The results of lane departure warning. (a) Sunny departure warning, (b) Cloudy departure warning, (c) Raining departure warning, (d) Night departure warning.
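For readers who wish to reproduce the smoothing effect seen in these experiments, a one-dimensional Kalman filter of the kind applied here to each lane-line parameter can be sketched as follows; the static state model and the noise variances q and r are illustrative assumptions, as the paper does not report its filter settings:

```python
class ScalarKalman:
    """Smooth one lane-line parameter (e.g. slope or intercept) across
    frames with the model x_k = x_{k-1} + process noise."""

    def __init__(self, x0, p0=1.0, q=0.01, r=0.5):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        p_pred = self.p + self.q            # predict: near-static lane line
        k = p_pred / (p_pred + self.r)      # Kalman gain
        self.x = self.x + k * (z - self.x)  # correct with Hough measurement z
        self.p = (1 - k) * p_pred
        return self.x                       # filtered estimate

# Noisy per-frame Hough slopes; the outlier at 0.40 is strongly damped.
kf = ScalarKalman(x0=0.70)
estimates = [kf.update(z) for z in [0.72, 0.95, 0.69, 0.71, 0.40, 0.70]]
```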
Experimental results of SVM front vehicle identification and collision avoidance warning

In this study, front vehicle identification is based on LIBSVM, the open-source machine learning library developed by C. C. Chang and C. J. Lin [25], which is used for SVM training and prediction. To increase the adaptability and robustness of SVM forecasting, this study collects vehicle and non-vehicle images of the streets of Taiwan during the day and at night. Harris corner detection is applied to all of the images to obtain the eigenvectors, and the training samples are normalized to 50 × 45 pixels. Figure 14 shows the vehicle training samples used in this study. The training samples are divided into two types, namely, cars in the day and cars at night, with positive samples being vehicle images and negative samples being non-vehicle images. Table 3 shows the numbers of positive and negative samples used in this study.

Figure 14. Vehicle training samples.

Table 3. Number of vehicle training samples.

Training sample category      Number of training samples
Sunny car samples             1000
Sunny no-car samples          500
Night car samples             1000
Night no-car samples          500

The type of SVM used in this study is C-support vector classification, the kernel function is the radial basis function, the penalty parameter C is set to 2, and the kernel parameter γ is set to 0.000488. The prediction results on the test sample set are shown in Table 4, with a daytime identification accuracy of 92.258% and a nighttime identification accuracy of 95.598%.

Table 4. The correct rate of SVM prediction.

Test category      Correct rate
Sunny test         92.258%
Night test         95.598%

The decision method that the system uses to determine whether the front vehicle is close is based on the y-direction coordinate of the detected vehicle in the image. When the SVM identifies a vehicle in front, the system tracks the y-axis coordinate of that vehicle: when the front vehicle is far away, its y-axis coordinate is small, whereas a continually increasing y-axis coordinate means the front vehicle is getting close. The system then displays a warning message on the screen to tell the driver to maintain an appropriate driving distance. Figure 15 shows the daytime and nighttime front anti-collision warning results.

Figure 15. The results of forward collision warning. (a) Daytime warning message, (b) Warning message at night.
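The reported classifier settings translate directly into a short training sketch. scikit-learn's SVC, which wraps LIBSVM, stands in here, and the random feature vectors are mere placeholders for the Harris corner descriptors of the 50 × 45 patches:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.random((200, 64))        # placeholder corner eigenvectors
y_train = np.repeat([1, 0], 100)       # 1 = car sample, 0 = no-car sample

# C-SVC with an RBF kernel and the paper's penalty and gamma values.
clf = SVC(kernel="rbf", C=2, gamma=0.000488)
clf.fit(X_train, y_train)

# 1 means the hypothesis region is verified as a vehicle (the HV step).
has_vehicle = clf.predict(rng.random((1, 64)))[0]
```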
Test results of the on-board Internet of things

This study develops an app on Android. When the program is launched, the owner must first turn on the Bluetooth and Arduino connections so that the owner's identity can be verified. After identification succeeds, the owner obtains the authority to control the vehicle, connect to the server to access the vehicle assistance system, and obtain the current vehicle information. Figure 16 shows that the system has successfully identified the owner; the app interface then informs the owner that the vehicle has been unlocked.

Figure 16. The door has been unlocked.

The blind spot detection system helps the driver overcome blind spots in the field of vision, with a detection range of 3 m at the rear side. It can prevent the accidents that occur when changing lanes because of blind spots behind the vehicle to which the driver cannot pay attention. Figure 17 shows the simulation employed to help the driver detect blind spots at the rear of the car; the smart glasses screen reminds the driver to pay attention to the rear of the car.

Figure 17. The result of the rear blind spot detection simulation.

Conclusion

In this study, an IoV system with advanced driving support functions is designed on the basis of the concept of AR. The system can be divided into three parts. The first part involves lane departure warning. The Hough transform is used to identify the location of the lane lines from the ROI of the image, and the Kalman filter is added to reduce the vulnerability of the Hough transform to environmental factors when identifying the actual lane lines. The lane departure decision is then employed to determine whether the vehicle is drifting. The experimental results show that the accuracy of the Kalman filter test exceeds 90% in all environments. The second part involves anti-collision warning. In the daytime vehicle detection stage, the Sobel edge detection result and the lane mask generated by lane mark detection are combined through the AND operation; the unnecessary noise in the image is filtered out, and the vehicle hypothesis area is determined by scanning the vertical edges. Nighttime front vehicle detection is achieved by detecting the red rear-light area of the front car and using the same lane mask to filter noise and identify the vehicle hypothesis area. Harris corner detection is then applied to the hypothesis area to obtain the vehicle characteristic parameters, and the SVM identification model trained on a large number of training samples is used to identify the vehicle. The prediction accuracy for the daytime vehicle test sample set is 92.258%, and that for the nighttime set is 95.598%. The last part is the IoV. This study adds the RSA algorithm to enhance information security on the Internet: public and private key pairing is used to authenticate the owner. After obtaining the authority to control the vehicle, the owner can connect to the server side and access the various sensors to view the vehicle information on the smart glasses. Finally, this study implements the image recognition part on an industrial computer and uses an Arduino Yun to establish the Internet of things. Through network transmission, the image identification results and vehicle information are displayed on the smart glasses, achieving the goal of an AR-based Internet of vehicles.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iD

Pi-Yun Chen http://orcid.org/0000-0002-1460-7116

References

1. Intelligent Transportation Society of America, http://www.itsa.org/ (2018).
2. Chellappa R, Qian G and Zheng Q. Vehicle detection and tracking using acoustic and video sensors. Proc IEEE Int'l Conf Acoust Speech Signal Process 2004; 3: 793–796.
3. He J, Rong H, Gong J, et al. A lane detection method for lane departure warning system. In: Proceedings of the 2010 International Conference on Optoelectronics and Image Processing, pp. 28–31, 2010.
4. Sun Z, Bebis G and Miller R. On-road vehicle detection: a review. IEEE Trans Patt Anal Mach Intellig 2006; 28: 694–711.
5. Chang HY, Fu CM and Huang CL. Real-time vision-based preceding vehicle tracking and recognition. In: Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 514–519, 2005.
6. Sun Z, Miller R, Bebis G, et al. A real-time precrash vehicle detection system. In: Proceedings of the IEEE Workshop on Applications of Computer Vision, pp. 171–176, 2002.
7. Parodi P and Piccioli G. A feature-based recognition scheme for traffic scenes. In: Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 229–234, 1995.
8. Sun Z, Bebis G and Miller R. Monocular precrash vehicle detection: features and classifiers. IEEE Trans Image Process 2006; 15: 2019–2034.
9. Vapnik VN. An overview of statistical learning theory. IEEE Trans Neural Netw 1999; 10: 988–999.
10. Cristianini N and Shawe-Taylor J. An introduction to support vector machines and other kernel-based learning methods. Cambridge, UK: Cambridge University Press, 2000.
11. Rivest RL, Shamir A and Adleman L. A method for obtaining digital signatures and public-key cryptosystems. Commun ACM 1978; 21: 120–126.
12. Ludeman LC. Random processes: filtering, estimation, and detection. NY: John Wiley, 2003.
13. Lim KH, Seng KP, Ang LM, et al. Lane detection and Kalman-based linear-parabolic lane tracking. In: Proceedings of the International Conference on Intelligent Human-Machine Systems and Cybernetics, Vol. 2, pp. 351–354, 2009.
14. Cuevas E, Zaldivar D and Rojas R. Kalman filter for vision tracking. Technical Report B 05-12, Freie Universität Berlin, Berlin, 2005.
15. Li P, Huang Y and Yao K. Multi-algorithm fusion of RGB and HSV color spaces for image enhancement. In: Proceedings of the 37th Chinese Control Conference (CCC), pp. 9584–9589, 2018.
16. Chen JG. Lane departure and forward collision warning systems based on video processing technology for handheld devices. Master Thesis, Department of Electronic Engineering, National Kaohsiung University of Applied Sciences, 2012.
17. Chen HW. Vision-based all-day vehicle detection using embedded system. Master Thesis, Department of Electrical Engineering, National Chin-Yi University of Technology, 2010.
18. Chen JQ. Lane-based front vehicle detection and its acceleration. Master Thesis, Department of Computer Science and Engineering, National Sun Yat-sen University, 2013.
19. Wu JK. Design and application of specific person tracking and posture recognition. Master Thesis, Department of Electrical Engineering, National Chin-Yi University of Technology, 2015.
20. Rifkin R and Klautau A. In defense of one-vs-all classification. J Mach Learn Res 2004; 5: 101–141.
21. Kreßel UHG. Pairwise classification and support vector machines. In: Advances in kernel methods: support vector learning. Cambridge, MA: MIT Press, 1999, pp. 255–268.
22. Cortes C and Vapnik V. Support-vector networks. Mach Learn 1995; 20: 273–297.
23. Boser BE, Guyon IM and Vapnik VN. A training algorithm for optimal margin classifiers. In: Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144–152, 1992.
24. Schölkopf B and Smola AJ. Learning with kernels: support vector machines, regularization, optimization, and beyond. Cambridge, MA: MIT Press, 2001.
25. Chang CC and Lin CJ. LIBSVM: a library for support vector machines, http://www.csie.ntu.edu.tw/~cjlin/libsvm (accessed 4 March 2013).
