Fixed Wing Aircraft Automatic Landing with the Use of a Dedicated Ground Sign System
Brukarczyk, Bartłomiej;Nowak, Dariusz;Kot, Piotr;Rogalski, Tomasz;Rzucidło, Paweł
2021-06-16
Bartłomiej Brukarczyk, Dariusz Nowak, Piotr Kot, Tomasz Rogalski * and Paweł Rzucidło

Avionics and Control Department, Faculty of Mechanical Engineering and Aeronautics, Rzeszow University of Technology, Aleja Powstancow Warszawy 12, 35-959 Rzeszow, Poland; b.brukarczyk@prz.edu.pl (B.B.); darnow@prz.edu.pl (D.N.); p_kot@prz.edu.pl (P.K.); pawelrz@prz.edu.pl (P.R.)
* Correspondence: orakl@prz.edu.pl; Tel.: +48-178-544-319

Abstract: The paper presents automatic control of an aircraft in the longitudinal channel during automatic landing. There are two crucial components of the system presented in the paper: a vision system and an automatic landing system. The vision system processes pictures of dedicated on-ground signs which appear to an on-board video camera to determine a glide path. Image processing algorithms used by the system were implemented into an embedded system and tested under laboratory conditions according to the hardware-in-the-loop method. An output from the vision system was used as one of the input signals to an automatic landing system. The major components are control algorithms based on the fuzzy logic expert system. They were created to imitate pilot actions while landing the aircraft. Both systems were connected with one another for cooperation and to control an aircraft model in a simulation environment. Selected results of tests presenting control efficiency and precision are shown in the final section of the paper.

Keywords: aircraft control; automatic landing; airfield and runway ground signs; automatic flight

Citation: Brukarczyk, B.; Nowak, D.; Kot, P.; Rogalski, T.; Rzucidło, P. Fixed Wing Aircraft Automatic Landing with the Use of a Dedicated Ground Sign System. Aerospace 2021, 8, 167. https://doi.org/10.3390/aerospace8060167

1. Introduction
Academic Editor: Mario Innocenti
Received: 26 April 2021; Accepted: 10 June 2021; Published: 16 June 2021
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland.

Systems of various types of Unmanned Aerial Vehicles (UAVs), including fixed-wing [1–3], multirotor [4–6] and other hybrid-type aircraft [7,8], are increasingly being used in both military and civilian applications. An increasing number of flying platforms, greater availability of entire UAV systems and new types of missions make issues related to the full automation of the entire flight, including its terminal phases, more and more important. This is especially true of the landing phase, which is the most critical phase of each flight and a subject of interest for much research. To some degree, it determines the type of functions an automatic flight control system capable of executing this phase should offer. Moreover, both the type and class of the aircraft have an impact on the type of on-board equipment and airfield infrastructure necessary for a safe automatic landing [9,10].

When analyzing the operation of autopilots offering the automatic landing function, the control algorithms supporting this phase can, in general, be considered on two levels [9–11]. The first covers low-level functions responsible for the stabilization of aircraft attitude angles and flight speed. They are used as tools by the functions of the second, master level, which in turn guides the aircraft along the desired landing trajectory.

Currently, for manned aircraft as well as for MALE (Medium Altitude Long Endurance) or HALE (High Altitude Long Endurance) class UAVs, an automatic landing is generally supported by extra aerodrome infrastructure based on ILS (Instrument
Landing System), occasionally MLS (Microwave Landing System), or GBAS (Ground-Based Augmentation System), which is an alternative nowadays [12]. Some vendors develop their own specific systems supporting this phase, e.g., the mobile MAGIC ATOLS system. These are short-range radio-navigation-based systems composed of on-ground and airborne components. Both the technological development of digital optical systems and progress in picture processing algorithms allow vision systems to enter this field efficiently. Optical systems have recently become available for helicopters and multirotors, e.g., systems capable of identifying the letter 'H' sign on the ground while hovering over it. They were a catalyst for the research presented in this paper.

In fact, the automatic landing of mini or even MALE-class fixed-wing aircraft can be performed with relatively primitive or simple dedicated infrastructure. The only condition is that proper information on the aircraft's position relative to the runway must be available [13,14]. From this perspective, it seems reasonable to develop solutions which could support the automatic landing of such aircraft with reduced ground facilities. A simple and easy-to-install ground sign system could transform any agricultural or other area into an airfield capable of hosting automatically landing fixed-wing UAVs.

An automatic landing requires proper information about the position of the aircraft in relation to the desired landing trajectory, or at the very least to the theoretical touchdown point [15].
These data are usually provided by landing assist systems (e.g., ILS, satellite systems), or could be obtained from visual signals coming from systems which typically assist pilots of manned aircraft [16–20]. In this case, it is obvious that to extract proper information from visual signals, some image processing methods must be employed. It should be mentioned here that image processing has already been successfully applied both for the identification of some aircraft flight parameters and the determination of the aircraft attitude [21], and also during the landing phase [22]. One of the systems whose visual signals have already been used to determine an aircraft's relative position to the landing trajectory and deployed by a control system is the PAPI (Precision Approach Path Indicator) light system [23].

However, small aircraft most often operate at airfields which are not equipped with any ground infrastructure or marked with any airfield ground signs. Additionally, these areas only play the role of airfields occasionally. It therefore seems reasonable to develop a system which could both determine an aircraft's relative position to the desired glide path and touchdown point, and provide signals sufficient for automatic landing to the control algorithms, based on image processing of a simple ground sign system located near the runway threshold.

The paper presents a concept for a longitudinal channel aircraft flight control system which uses functions of a conventional autopilot in combination with information obtained from dedicated ground sign image analysis and dedicated control algorithms based on expert knowledge. It is designed to imitate the behavior of a human operator who, while piloting the aircraft, maintains the correct approach path and performs the flare phase, leading the aircraft down to touch the runway (Figure 1a).
Pilot-type control is set as a reference point because of one key feature. The accuracy of pilot control is often lower in comparison with the accuracy of automatic control; nevertheless, its most significant advantage is that it is much more robust and disturbance-proof. The pilot can accomplish the landing even with very residual, dubiously accurate or unreliable data on the aircraft's position relative to the runway.

A method employed for the development of the control algorithms should take into consideration the overall goal of the control, key features of both inputs and outputs, as well as the approach to defining the dependencies expressed in the control rules. Since the algorithms used in the control system are to imitate the pilot's control, the method should provide a means of linking real sample data characterizing the landing, the way human operators assess key flight parameters, and expert knowledge. Because of this, using some kind of expert system seems to be the most reasonable. Additionally, human operators use linguistic terms (variables) to describe the flight parameters and magnitudes of the controls. A natural linkage between linguistic variables and numerical values exists in fuzzy logic or fuzzy neural systems. Unfortunately, the second of those requires a huge training data set, and the results of learning are not always reliable [24,25]. Consequently, the fuzzy logic expert system approach was selected to develop the control algorithms.

Figure 1. (a) General schematic of a typical VFR landing. (b) Schematic of the dedicated runway ground signs configuration. (c) Sample image of the dedicated ground signs seen by the on-board camera.

2.
Ground Signs System

During a VFR (Visual Flight Rules) landing, pilots of manned aircraft observe the runway as well as characteristic objects at the airfield and its surroundings, their relative position and orientation to the aircraft, and indications given by key on-board instruments. In this way, they are able to build a mental picture of the entire landing, correctly catching and maintaining an optimal glide path which runs down to an aiming point inside an aiming zone (Figure 1), while also maintaining the right aircraft flight parameters [26,27].

Since a vision system is assumed to be the main source of data on the aircraft's position on the landing trajectory, some kind of ground signs, serving as a reference indicating the aiming zone as well as the runway, must be defined. In cases of landing at well-equipped runways, classical runway signs or lights can be used. Unfortunately, in cases of operation at runways not equipped with any lights or ground signs—e.g., grass runways or even plain grasslands—an additional, easy-to-install ground sign system contrasting with the immediate vicinity should be considered [28,29] (Figure 1).

The solution discussed in the paper assumes the use of dedicated ground signs whose picture is processed by an on-board image processing system to provide feedback data to the autopilot regarding the aircraft position on the desired landing trajectory. Two circles with radius R = 2 m, in yellow and red, located at the runway threshold along the runway direction, are used as the dedicated sign system (Figure 1). The colors were selected to increase the contrast between the runway and its surroundings. The dimensions were set experimentally to make the signs visible to the employed vision system from a height of 120 m—the height recognized as the proper height at which the final approach should start for the type of aircraft used in the experiments (Figure 2), plus a 20% safety margin.
The aiming point, the point at which the virtual glide path touches the runway, is the center of the yellow circle. The configuration of the circles (yellow first) defines the correct approach direction.

Figure 2. The experimental airplane used in research. (a) General overview and features; (b) orientation of the optical axis of the camera in relation to the aircraft.

3. The Experimental Aircraft

The system presented in the paper is developed for the MINI class of unmanned airplanes in general. Therefore, such an aircraft was also selected as a control plant and for in-flight test campaigns (Figure 2). The aircraft was equipped with an automatic flight control system offering a set of core, low-level flight control functions capable of stabilizing the aircraft at the desired spatial attitude [30,31]. It cooperates with the tested master systems, executing desired functions or forwarding control signals produced by the master system directly to actuators. In addition, it can record a number of flight parameters.

Dynamics of the aircraft in a configuration for the final approach and landing were found and linearized. For a state vector X = [u w q θ]^T and control inputs [δ_E δ_T]^T, the state matrix A and input matrix B are as follows (1):

    A = [  0.14    0.26    0.09    9.81
           1.26   12.70   15.10    0.05
           0.13    7.21    6.80    0.0005
           0       0       1       0      ],

    B = [  0.005    5.06
           8.80     0
         105.0      4.6
           0        0     ],                  (1)

where: u—airspeed along the aircraft X_B axis in m/s; w—airspeed along the aircraft Z_B axis in m/s; q—pitch rate in 1/s; θ—pitch angle in radians; δ_E—elevator position in radians; δ_T—thrust in percent [32].

On the basis of aerodynamic, mass, geometrical, and other characteristics of the aircraft, a model was built in the simulation environment and then used in final simulations.
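For readers who want to experiment with the linearized model, the dynamics (1) can be propagated in discrete time. The matrix entries below are transcribed from (1); the explicit-Euler scheme and the step size are illustrative assumptions, not part of the paper:

```python
import numpy as np

# State x = [u, w, q, theta]; controls u_c = [delta_E, delta_T].
# Entries transcribed from Equation (1).
A = np.array([
    [0.14,  0.26,  0.09,  9.81],
    [1.26, 12.70, 15.10,  0.05],
    [0.13,  7.21,  6.80,  0.0005],
    [0.0,   0.0,   1.0,   0.0],
])
B = np.array([
    [0.005,   5.06],
    [8.80,    0.0],
    [105.0,   4.6],
    [0.0,     0.0],
])

def step(x, u_c, dt=0.01):
    """One explicit-Euler step of x_dot = A x + B u_c (dt is an assumption)."""
    return x + dt * (A @ x + B @ u_c)

# A small elevator deflection from a trimmed (zero) state:
x1 = step(np.zeros(4), np.array([0.01, 0.0]))
```

A fixed-step integration like this is only a sketch; any serious analysis would use a proper ODE solver and the full nonlinear model mentioned in the text.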
The aircraft was also equipped with a vision system capable of analyzing the picture of the ground sign system (Figures 1 and 2) appearing on an on-board video camera (Figure 2). The camera was mounted in the aircraft's vertical plane of symmetry, with the angle ζ between the aircraft's X axis and the camera's optical axis identified and set on the basis of manual landings during flight tests. This aircraft was also used for sensor flights in order to obtain information for the synthesis of the control rules and to gather pictures for the image processing algorithms. Models of its dynamics were used in the synthesis of the control algorithms and in final real-time simulations. According to the approach adopted in this paper, the selection of a specific type of aircraft is not critical for the structure of the developed algorithms. However, the selected aircraft's specific features determine the numerical values of the coefficients and parameters appearing in the developed rules and algorithms.

4. Structure of the System

A general structure of the developed flight control system is shown in Figure 3. A similar one was previously applied to control a MALE class aircraft on the glide path with the use of PAPI light indications [33]. It is composed of a classical measurement system, an autopilot [30,31], and two extra modules expanding its basic functions:

Vision System—provides information on the aircraft's vertical position relative to the selected glide path; this information results from processing of the ground signs image appearing on the on-board camera.

Automatic Landing System—controls the aircraft with the use of fuzzy logic and expert knowledge-based control algorithms. It uses some specific autopilot spatial attitude and speed stabilization functions, or controls actuators directly, to maintain the desired glide path.

Figure 3.
The functional structure of the classical autopilot control system extended with both the Automatic Landing System and the Vision System. θ—pitch angle; q—pitch rate; U—airspeed; dH/dt—vertical speed; H—height above the ground; θ_d—desired pitch angle; δ_E, δ_T—control signals: elevator position and throttle lever position, respectively; d_V—vertical deviation from the desired glide path. Adapted from [33].

The Automatic Landing System is a master control module to the autopilot. It manages the landing from the initial moment, when the aircraft gets on the desired approach path, through the flare phase, to touchdown at the end. It engages the autopilot's pitch angle and airspeed channels. One output, the desired pitch angle θ_d, is stabilized by the autopilot in the pitch angle channel. In the airspeed channel, however, because of its criticality and specific features, the throttle δ_T is controlled directly.

The control algorithms of the Automatic Landing System rely on the set of aircraft flight state data provided by the standard measurement devices, as well as on the Vision System outputs. In particular, information on the aircraft's relative position to the desired glide path, supported by classical sensors and systems, is utilized to control the aircraft on the vertical trajectory, so as to maintain it correctly until the flare phase. It should also make it possible to conduct the flare and touchdown maneuvers safely when visual signals are not readily available.

5. Determination of the Aircraft's Vertical Deviation from the Selected Glide Path

During landing, pilots of manned aircraft observe the runway and runway signs—if present—constantly and very carefully. In this way, they are able to assess the actual deviation from the required glide path angle [26,27,34–37].
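The master/slave split described above can be sketched as a minimal control loop. All class and function names here are illustrative assumptions, with a simple placeholder standing in for the fuzzy expert system the paper develops later:

```python
from dataclasses import dataclass

@dataclass
class VisionOutput:
    d_v: float   # vertical deviation from the glide path, % of image height
    valid: bool  # True while the ground signs are identifiable

@dataclass
class Autopilot:
    """Slave level: stabilizes the commanded pitch angle; throttle passes through."""
    theta_d: float = 0.0
    delta_t: float = 0.0

    def set_pitch(self, theta_d: float) -> None:
        self.theta_d = theta_d

    def set_throttle(self, delta_t: float) -> None:
        self.delta_t = delta_t

def landing_algorithms(vision: VisionOutput, height: float, airspeed: float):
    """Placeholder for the fuzzy rules: returns (theta_d, delta_T)."""
    if not vision.valid:                     # flare: visual signal not used
        return 0.05, 0.0
    return -0.05 - 0.001 * vision.d_v, 0.3   # track the signs on approach

def landing_step(vision, height, airspeed, autopilot):
    """One iteration of the master loop in Figure 3."""
    theta_d, delta_t = landing_algorithms(vision, height, airspeed)
    autopilot.set_pitch(theta_d)             # pitch channel via the autopilot
    autopilot.set_throttle(delta_t)          # airspeed channel: direct throttle
```

The point of the sketch is only the data flow: the master level consumes d_V, H and U, and drives the autopilot's pitch setpoint while commanding the throttle directly.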
It is assumed that the developed system should be capable of estimating a deviation from the desired landing trajectory using the picture of the runway, marked with its signs, just as human pilots do. The goal of the vision system described in this work is to detect and interpret the position of the dedicated ground signs (Figure 1) in the picture appearing on the on-board camera, and to feed the autopilot with any detected deviation from the predefined glide path. It is assumed that the aircraft fixes on the aiming point, keeping it in the center of the picture; together with airspeed stabilization at the desired approach airspeed, this provides the capability of proper glide path angle control. In this way, the vertical position of the signs on the recorded picture is indirectly linked with the vertical deviation of the aircraft's position from the desired glide path. Thus, the entire process is based on the identification of an image of the dedicated signs in relation to the center of the gathered picture (Figure 4).

Figure 4. Samples of pictures gathered by the on-board camera with the dedicated runway signs system marked; (a) incorrect configuration; (b) correct configuration—the aircraft is aiming at the signs system. Crossed dash-dot lines mark the center of the image and the real aiming point.

The ground signs system is revealed on a video image by extraction of red and yellow elliptical areas, highly contrasted with the background. This is achieved by a specific sequence of standard mathematical operations on pixel arrays [38] (Figure 5). It is assumed that the vertical dimension of the picture D_V is given in pixels, the pixel with (0,0) coordinates matches the upper left corner of the picture, and d_V is the vertical deviation of the signs system from the center of the picture as a percentage of the image height. A raw picture captured by an on-board video camera containing both red and yellow circles is first transformed from the RGB to the HSV color model (Figure 6a) [39].
Next, a sixfold binarization (for each of the red and yellow components) with double thresholding of the H, S, V components for yellow and red is performed, as defined by (2). The first three operations extract objects in red, and the next three extract objects in yellow. The results of these transformations are six separate monochromatic pictures, merged into two final pictures (separately for yellow and red) by a binary operation for each pixel (Figure 6b).

Figure 5. The image processing algorithm for a single frame of the picture determining the position of ground signs. H, S, V—pixel color components for yellow (y) and red (r) circles; M—number of identified elliptical objects; P_r(x_r, y_r), P_y(x_y, y_y)—positions of the red and yellow circles on the picture in pixels, respectively; d_V—vertical deviation of the signs system from the center of the picture.

    F_Rbin(x, y) = { 255 for F_Hr(x, y) = 255 ∧ F_Sr(x, y) = 255 ∧ F_Vr(x, y) = 255
                   {   0 for F_Hr(x, y) = 0 ∨ F_Sr(x, y) = 0 ∨ F_Vr(x, y) = 0

    F_Ybin(x, y) = { 255 for F_Hy(x, y) = 255 ∧ F_Sy(x, y) = 255 ∧ F_Vy(x, y) = 255      (2)
                   {   0 for F_Hy(x, y) = 0 ∨ F_Sy(x, y) = 0 ∨ F_Vy(x, y) = 0

    F_bin(x, y)  = { 255 for F_Rbin(x, y) = 255 ∨ F_Ybin(x, y) = 255
                   {   0 for F_Rbin(x, y) = 0 ∧ F_Ybin(x, y) = 0

where x, y—coordinates of a single pixel; F_Hr, F_Sr, F_Vr—monochromatic pictures from binarization of the H, S, V components for red; F_Hy, F_Sy, F_Vy—monochromatic pictures from binarization of the H, S, V components for yellow; F_Rbin, F_Ybin, F_bin—monochromatic pictures representing objects in red, yellow, and red-yellow, respectively.

Figure 6. (a) Picture converted into the HSV color model. (b) Post-binary-operation monochromatic picture for the yellow circle.
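The AND/OR logic of Equation (2) can be sketched directly on HSV channel arrays. The function and parameter names below are assumptions, and the concrete threshold ranges are tuning parameters not given in the paper:

```python
import numpy as np

def band_mask(channel, lo, hi):
    """One of the six threshold operations in (2): 255 inside [lo, hi], else 0."""
    return np.where((channel >= lo) & (channel <= hi), 255, 0)

def extract_signs(H, S, V, red_rng, yellow_rng):
    """Merge the per-channel masks with the AND/OR logic of Equation (2).

    H, S, V: channel arrays of the HSV image; red_rng / yellow_rng: dicts of
    (lo, hi) thresholds per channel."""
    f_rbin = np.minimum.reduce([band_mask(H, *red_rng['H']),
                                band_mask(S, *red_rng['S']),
                                band_mask(V, *red_rng['V'])])   # pixelwise AND
    f_ybin = np.minimum.reduce([band_mask(H, *yellow_rng['H']),
                                band_mask(S, *yellow_rng['S']),
                                band_mask(V, *yellow_rng['V'])])
    f_bin = np.maximum(f_rbin, f_ybin)                          # pixelwise OR
    return f_rbin, f_ybin, f_bin
```

One practical caveat: red hue wraps around the origin of the hue circle, so a real implementation typically forms the red mask as the union of two hue bands.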
Erosion with a square 8 × 8 pixel kernel removes small artefacts from the picture, making it more unambiguous and more convenient for the next step of processing: finding the elliptical shape contours in both red and yellow. The single contour of a sign is defined in the form of a vector of points [38,40].

The geometric zero-order moments m_r0,0 and m_y0,0 of the areas inside the contours (3) [41] make it possible to find the two contours having the largest red and yellow areas, respectively.

    m_p,q = Σ_{i=1}^{N} F(x_i, y_i) · x_i^p · y_i^q    (3)

where: m_p,q—geometric moment of order p, q; N—number of pixels inside the contour; x_i, y_i—coordinates of the i-th pixel.

The next phase concentrates on calculating the coordinates of the geometric center point P for each sign detected in the picture (4) [41]:

    P = (x̄, ȳ);  x̄ = m_10 / m_00,  ȳ = m_01 / m_00    (4)

where: x̄, ȳ—coordinates of the centers of the objects; m_00, m_10, m_01—relevant geometric moments for the largest red and yellow objects, respectively.

The condition y_y > y_r (Figure 7) checks the correctness of the flight direction. The assumed correct direction causes the yellow sign to be closer to the aircraft than the red one, so it should be located lower in the picture.

Figure 7. (a) Identified and contoured ground signs system appearing to the on-board vision system with marked centers of circles; (b) the yellow circle's relative position d_V to the horizontal center of the picture; D_V is the height of the picture in pixels.

6. Automatic Landing Algorithms

The automatic landing algorithms hosted by the Automatic Landing System (Figure 3) make the execution of the landing phase possible from the moment the aircraft is on the desired approach path until it touches the runway. They are designed to imitate a UAV operator's activity and should be able to land the aircraft relying on the same set of data. Thus, the method a human uses to control the plane is a key driver for defining the set of system input signals [42].
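The moment computations (3) and (4), and the derivation of the deviation d_V from the yellow circle's center, can be sketched as follows. The sign convention of d_V (positive when the sign sits below the picture center) is an assumption:

```python
import numpy as np

def centroid(F):
    """Center of a binary object via the moments of (3) and (4):
    m_pq = sum_i F(x_i, y_i) * x_i**p * y_i**q,  P = (m10/m00, m01/m00)."""
    ys, xs = np.nonzero(F)          # pixel coordinates of the object
    w = F[ys, xs].astype(float)     # pixel values (255 inside the object)
    m00 = w.sum()
    m10 = (w * xs).sum()
    m01 = (w * ys).sum()
    return m10 / m00, m01 / m00

def vertical_deviation(y_yellow, D_v):
    """d_V: offset of the yellow sign's center from the picture center,
    as a percentage of the image height D_v (Figure 7b)."""
    return 100.0 * (y_yellow - D_v / 2.0) / D_v
```

The paper's direction check, y_y > y_r, is then a simple comparison of the vertical coordinates of the two centroids.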
The implemented algorithms form a Mamdani-type fuzzy expert system. It is composed of fuzzy sets describing the values inputs and outputs take, a set of rules defining dependencies between inputs and outputs, and logical conditions managing when specific rules are active.

It has three inputs assigned to three specific aircraft flight parameters. The first and most crucial is the position of the ground sign system on the image appearing to the on-board camera, d_V, with the desired value set to zero. It is reduced by the use of the desired pitch angle, simultaneously supported by the thrust control, in the way defined by rules R.1 to R.12. Thus, the d_V signal is used by the Automatic Landing System as a control error. The remaining two inputs are supplementary but important from the point of view of the entire process: the height above the runway H, and the true airspeed U. There are also two outputs from the system: the desired pitch angle θ_d and the throttle control signal δ_T.

Using the information about airplane control during landing applied by the human, a linguistic space as well as the linguistic values it contains (i.e., fuzzy sets) were defined for all linguistic variables:

linguistic spaces of the input variables:
X_dV (vertical deviation of the yellow circle from the center of the picture) = {extremely low, very low, low, correct, high, very high, extremely high},
X_H (height above the runway) = {very low, low, high},
X_U (airspeed) = {very low, low, correct, high, very high};

linguistic spaces of the output variables:
X_θd (desired pitch angle) = {negative big, negative, negative small, zero, positive small, positive, positive big},
X_δT (desired throttle position) = {very small, small, medium, big, very big}.

The next step in designing the expert system was the assignment of the membership functions of linguistic variables to fuzzy sets [43,44].
Triangular and trapezoidal forms of membership functions were used for practical reasons, facilitating the final tuning of the control algorithms. For each fuzzy set, the values of the membership function parameters were determined based on measurement data recorded during real flights performed by the operator, as well as on information obtained from experts skilled in manual remote-control UAV operations [45].

In the first step, the vertical deviation of the yellow sign from the center of the picture d_V and the height of the plane above the runway H were linked to proper fuzzy sets (Figures 8 and 9, respectively).

Figure 8. Membership degree of the input variable d_V, the vertical deviation (the control error in this case) of the yellow circle from the center of the picture, associated with the fuzzy sets: extremely low, very low, low, correct, high, very high, extremely high. To be maintained as correct, it is to be led to zero.

Figure 9. Membership degree of the input variable height H associated with the fuzzy sets: very low, low, high.

Due to both the ground proximity and the relatively low flight speed, it was necessary to pay increased attention to maintaining the proper airspeed at all times. For this reason, the control algorithms also take into account the airspeed U, associated with the fuzzy sets: very low, low, correct, high, very high (Figure 10). The label "correct" is associated with the correct final approach airspeed defined for the aircraft used in the experiments (Figure 2). It results from the aircraft mass, configuration, and aerodynamic characteristics, and is recognized as safe for all actions during the approach.

Figure 10. Membership degree of the input variable airspeed U associated with the fuzzy sets: very low, low, correct, high, very high.
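Triangular and trapezoidal membership functions of this kind are easy to express in code. The breakpoints below are assumptions for illustration; only the 16 m/s "correct" approach airspeed comes from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trap(x, a, b, c, d):
    """Trapezoidal membership function, flat (degree 1.0) between b and c."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Illustrative fuzzy sets for airspeed U in m/s (breakpoints are assumptions).
U_SETS = {
    'very low':  lambda u: trap(u, -1e9, -1e9, 10.0, 13.0),
    'low':       lambda u: tri(u, 10.0, 13.0, 16.0),
    'correct':   lambda u: tri(u, 13.0, 16.0, 19.0),
    'high':      lambda u: tri(u, 16.0, 19.0, 22.0),
    'very high': lambda u: trap(u, 19.0, 22.0, 1e9, 1e9),
}
```

Adjacent sets overlap so that every airspeed belongs to at least one set with a nonzero degree, which is what lets the rule base blend smoothly between control actions.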
There are two output signals from the automatic landing algorithms controlling the aircraft (Figure 3): the desired pitch angle θ_d (Figure 11), maintained by the autopilot, and δ_T, directly controlling the throttle lever position (Figure 12).

Figure 11. Membership degree of the output variable desired pitch angle θ_d associated with the fuzzy sets: positive big, positive, positive small, zero, negative small, negative, negative big.

The number of fuzzy sets selected for each linguistic variable and their widths result from expert knowledge supported by data recorded during test flights. Ranges of both input and output data were associated with fuzzy sets in such a way as to obtain the appropriate granularity (accuracy) of the information, securing the proper utilization of expert knowledge and satisfactory precision of the control process [44,46,47].

The fuzzy rule base of the expert system contains fourteen rules defining the aircraft control strategy during particular phases of the landing (Figure 13). Although the study focuses on the use of visual signals by the control rules, which is fully feasible only on the approach segment, the next segment, while not the main objective of this research, is also considered in the rules so as to make the landing complete.

Figure 12. Membership degree of the output variable throttle lever position associated with the fuzzy sets: very small, small, medium, big, very big.

Figure 13. Rules assignment to specific phases of the landing.

It is worth noting that the rules were not generated automatically. Consequently, it is likely that if the set were analyzed literally, some formal cases would not be covered by them. However, when looking at the expert system as a whole, these cases are indirectly covered by other rules, or are not feasible at all.
It is assumed that the system can only work if the following conditions are met:
- the system can be activated only if the aircraft is in a landing configuration on the glide path;
- ground signs are located no further than 25% of the picture height from the center of the picture, which indirectly predefines the real path angle to be maintained;
- ground signs are visible and identifiable until the flare phase;
- only normal procedures are supported; no emergency situations are considered.

Rules R.1 to R.7 are responsible for keeping the airplane on the approach path, aiming at the signs system, when the flight height H above the ground is greater than the defined height of the flare phase. They produce the desired pitch angle θ_d, which should be slightly negative to maintain the proper sink rate when the aircraft follows the glide path correctly. When the aircraft is under or over the desired trajectory, it should be more positive or more negative, respectively, to assure that the aircraft soon recaptures the approach path.

R.1 IF (d_V IS correct) AND (H IS high) THEN (θ_d IS negative small)
R.2 IF (d_V IS high) AND (H IS high) THEN (θ_d IS negative)
R.3 IF (d_V IS very high) AND (H IS high) THEN (θ_d IS negative)
R.4 IF (d_V IS extremely high) AND (H IS high) THEN (θ_d IS negative big)
R.5 IF (d_V IS low) AND (H IS high) THEN (θ_d IS zero)
R.6 IF (d_V IS very low) AND (H IS high) THEN (θ_d IS zero)
R.7 IF (d_V IS extremely low) AND (H IS high) THEN (θ_d IS zero)

Rules R.8 to R.12 control the airspeed. They are meant to maintain the desired (constant in an ideal case) approach airspeed, regardless of the actual pitch angle. They are active during the approach phase, when the height above the ground is greater than the assumed flare height and the real aiming point, i.e., the ground signs, appears to the camera.
If the airspeed U is adequate, i.e., equal to the predefined aircraft approach airspeed of 16 m/s (58 km/h), the expert system sets the throttle lever to a position between the middle and the minimum setting. At a negative pitch angle, this allows the aircraft to continue the flight at the appropriate approach airspeed. If the aircraft flies too fast or too slowly, the system decreases or increases the thrust, respectively.

R.8 IF (d_V IS NOT extremely low) AND (U IS correct) AND (H IS NOT very low) THEN (δ_T IS small)
R.9 IF (d_V IS NOT extremely low) AND (U IS low) AND (H IS NOT very low) THEN (δ_T IS medium)
R.10 IF (d_V IS NOT extremely low) AND (U IS high) AND (H IS NOT very low) THEN (δ_T IS very small)
R.11 IF (d_V IS NOT extremely low) AND (U IS very low) AND (H IS NOT very low) THEN (δ_T IS very big)
R.12 IF (d_V IS NOT extremely low) AND (U IS very high) AND (H IS NOT very low) THEN (δ_T IS very small)

Rules R.1 to R.12 are active in the basic/approach segment, and at this time use the available visual signal d_V as one of their inputs. When the flare height is reached, visual signals are no longer considered because the signs are not visible: the aircraft is moved nose up to reduce the sink rate, so the signs move down beyond the frame of the camera. The information on the aircraft's relative position to the glide path becomes unreliable at the beginning of this phase and totally unavailable at the end of it. Thus, because of the differences in the nature of the approach and flare phases, the set of control rules must be exchanged. The rules controlling the approach are deactivated, and rule R.13 is triggered. This rule does not request any information from the Vision System; it slowly increases the pitch angle (to small positive values) and retracts the throttle lever to the minimum position. These actions imitate the operator's activity during the flare maneuver.
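The Mamdani evaluation of such airspeed rules can be sketched end to end: AND as min, implication as clipping, aggregation as max, and centroid defuzzification. The memberships and output sets below are illustrative assumptions patterned after R.8 and R.9, not the paper's tuned values:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Discretized throttle universe (0 = idle, 1 = full); resolution is arbitrary.
throttle = np.linspace(0.0, 1.0, 101)
small = np.array([tri(t, 0.00, 0.25, 0.50) for t in throttle])
medium = np.array([tri(t, 0.25, 0.50, 0.75) for t in throttle])

def infer(airspeed, h_not_very_low):
    """Two rules patterned after R.8 (U correct -> small throttle) and
    R.9 (U low -> medium throttle); h_not_very_low is the degree to which
    the height is NOT very low."""
    fire_r8 = min(tri(airspeed, 13, 16, 19), h_not_very_low)
    fire_r9 = min(tri(airspeed, 10, 13, 16), h_not_very_low)
    agg = np.maximum(np.minimum(fire_r8, small),   # clip + aggregate
                     np.minimum(fire_r9, medium))
    if agg.sum() == 0.0:
        return 0.0
    return float((throttle * agg).sum() / agg.sum())  # centroid
```

At the correct approach airspeed only the R.8-like rule fires, so the defuzzified throttle sits at the peak of the "small" set, between the minimum and middle settings, matching the behavior described in the text.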
This causes a reduction of the sink rate and a gradual reduction of the airspeed, bringing the aircraft as close to the touchdown point as possible [48]. R.13 IF (H IS very low) THEN (Θ IS negative small) (δT IS very small) Once the aircraft is very near the ground signs and they start moving beyond the frame of the camera, control is given to rule R.14. Its task is to continue maintaining the pitch angle in order to reduce the sink rate to a safe value. R.14 IF (H IS very low) THEN (Θ IS zero) (δT IS very small) 7. Simulation Results The issues discussed theoretically so far were then checked and verified in practical tests. The tests of the Vision System were intended to analyze the work of the algorithm in determining the vertical deviation of the aircraft from the desired flight path. The tests of the Automatic Landing System were intended to verify the control algorithms guiding the aircraft along the glide path with the use of information about deviations obtained from the Vision System. Software in the Loop (SIL) and Hardware in the Loop (HIL) tests, supported by flight tests, were performed. For this purpose, a Cularis aircraft simulation model, a target autopilot, and the Embedded Vision System and Automatic Landing System modules with target software, whose key functions were automatically generated using Matlab/Simulink v.2016b, were integrated into laboratory rigs [30]. 7.1. Tests of Ground Signs Indication and Interpretation Algorithms The Vision System (Figure 3) was tested individually at the beginning of this stage.
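Once the sign has been located in a frame, the deviation signal that the Vision System delivers reduces to simple image geometry. A minimal sketch, assuming a normalized vertical coordinate and the 25% activation window stated earlier (the helper names are hypothetical):

```python
# Sketch of how the vertical deviation could be derived from the detected
# sign position in a video frame. Pixel coordinates grow downward, as is
# conventional in image processing; the function names are assumptions.

def vertical_deviation(sign_y_px, frame_height_px):
    """Signed deviation of the sign from the picture center, in <-1, 1>.
    Negative: sign above the center; positive: sign below the center."""
    center = frame_height_px / 2.0
    return (sign_y_px - center) / center

def system_may_activate(sign_y_px, frame_height_px):
    """Activation condition from the text: the sign lies no further than
    25% of the picture height from the center of the picture."""
    return abs(sign_y_px - frame_height_px / 2.0) <= 0.25 * frame_height_px
```

For a 1080-line frame, a sign detected at row 810 lies exactly at the edge of the 25% window and still permits activation, while a sign near the top of the frame does not.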
Its hardware comprises the following components, whose performance affected the test configuration and campaign scenarios: an embedded computer based on a quad-core ARM processor with a 256-core GPU graphics chip that allows parallel computing, managed by a real-time operating system with a built-in compiler, debugger and system libraries containing basic functions and algorithms for image processing and calculation support (an Nvidia Jetson TX1 computer based on the NVIDIA Maxwell architecture (Nvidia, Santa Clara, CA, USA) was used as the hardware platform, running the Linux Ubuntu 16.04 LTS operating system); and a video camera of 4K resolution with a 155-degree view angle and a picture stabilization mechanism, adapted to variable lighting conditions (FPV Legend 3 from Foxeer, Shenzhen, China). The software implementing the image processing algorithms offered all the functions necessary to extract essential information from the image. These functions identified objects in video frames, generated information about their location, and provided those data for further calculations. The image processing methods used for this purpose were supported by computer vision methods and techniques contained in the OpenCV programming library [38]. Low-level library functions that perform basic operations on images were not developed as part of the research; the goal was only to use them as "bricks" in the development of algorithms interpreting the indicated ground signs. The software analyzing the image of the ground signs was tested in three steps, as shown below, i.e., elements of desktop simulations, SIL/HIL tests and real flights were successively merged (Figure 14): Figure 14. The approach adopted to test the kernel functions of the Vision System. 1.
The operator landed a Cularis aircraft (Multiplex Modellsport GmbH & Co. KG, Bretten, Germany) during dedicated real flights, and movies containing the ground signs image were recorded in *.avi format. 2. The Vision System was fed with the movie off-line, but the real time of the calculations was maintained. 3. Information about the deviation d, being a system output, was compared and verified against manual calculations. 7.2. Tests of the Control Algorithms The next stage of the tests focused on verifying whether the data generated by the image processing algorithm hosted by the Vision System are sufficient for the control algorithms to guide the aircraft to the flare phase safely and possibly land successfully. Because both pieces of the system are mutually dependent, it is obvious that the control algorithms could not be tested in separation from the image-processing component. An overall assessment of the combined work of the Vision System and the Automatic Landing System took into consideration the correct execution of all phases. Special attention was paid to the stabilization of airspeed, the vertical deviation from the desired glide path, and the vertical speed (sink rate) during the approach. During flare and touchdown (because those are only supplementary phases to the final approach, which was the main objective of the research), these flight parameters were given lower priority. The assessment was composed of subjective expert opinions taking all specific phases into consideration: firstly, the overall impression of the entire landing, and secondly, some typical indexes applicable to a control system (e.g., the ranges over which selected flight parameters vary, and the dynamics of these variations). Testing began with SIL-type tests conducted on a dedicated rig [30,31] (Figure 15) consisting of the following components: a computer hosting the software functions of the picture processing and fuzzy control algorithms. As a video input, a USB interface was used.
As an output, an RS-232 interface for the transmission of the calculated deviations of the aircraft position from the glide path was applied; an autopilot, a unit developed by the Avionics and Control Department of Rzeszow University of Technology for the Cularis aircraft; flight simulator software used for the simulation of aircraft dynamics and the generation of the picture of the runway; a video camera closing the loop, which acquired the picture of the runway and transmitted it to the Vision System; and a data recorder connected to the CAN databus. Figure 15. SIL rig configuration for testing picture processing and control algorithms. Adapted from [33]. The main aim of that particular campaign was to verify the work of the major software functions automatically generated from MATLAB/Simulink models and then downloaded onto the simulating computer. These functions were intended to be downloaded into the target hardware in further steps. A HIL test campaign was the next step in testing, carried out with the use of a modified SIL rig (Figure 16). The simulating computer was replaced with two target hardware units: the Vision System, with software realizing the picture processing algorithms (as a video input, a CSI-2 interface was used, and as an output, an RS-232 interface for the transmission of the identified vertical deviation of the aircraft position from the desired glide path was applied); and the Automatic Landing System, with fuzzy logic algorithms capable of controlling the aircraft during the landing phase (this generated the desired pitch angle, stabilized by a real autopilot unit, and the desired throttle lever position). Due to the basic feature of HIL-type tests, i.e., the usage of real hardware modules running real software, the quality of the results is increased and is comparable with real systems. Figures 17–22 present results achieved during one selected, sample test scenario.
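The RS-232 link between the Vision System and the Automatic Landing System needs some serial framing for the deviation value. The paper does not specify the protocol, so the header byte, little-endian float payload, and modulo-256 checksum below are purely illustrative assumptions:

```python
# Hypothetical framing of the deviation value for the RS-232 link.
# The layout (1 header byte + 4-byte little-endian float + 1 checksum byte)
# is an assumption for illustration, not the protocol used in the rigs.
import struct

HEADER = 0xA5  # assumed start-of-frame marker

def pack_deviation(d):
    """Build a 6-byte frame carrying the deviation d."""
    payload = struct.pack("<Bf", HEADER, d)
    checksum = sum(payload) & 0xFF  # simple modulo-256 checksum
    return payload + bytes([checksum])

def unpack_deviation(frame):
    """Validate a frame and recover the deviation value."""
    if len(frame) != 6 or frame[0] != HEADER:
        raise ValueError("malformed frame")
    if sum(frame[:-1]) & 0xFF != frame[-1]:
        raise ValueError("checksum mismatch")
    return struct.unpack("<Bf", frame[:-1])[1]
```

A fixed-length frame with a header and checksum of this kind lets the receiver resynchronize after dropped bytes, which matters on a point-to-point serial link running in a control loop.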
The system was activated when the aircraft reached a height of 125 m above runway level, the initial height of the approach, at a distance resulting in a glide path angle recognized as proper (in the range from 4 deg. to 6 deg.) and with the requested approach speed (from 14 m/s to 17 m/s). These parameters were set according to expert knowledge about the flight characteristics of the aircraft gathered in dedicated flights (Figure 17). Figure 17. The vertical profile of the aircraft flight trajectory. Figure 18. Pitch angle in all phases. Figure 19. Vertical deviation of the position of the aiming point from the center of the picture given in the range <