N. Correal, S. Kyperountas, Qicai Shi, M. Welborn (2003). An UWB relative location system. IEEE Conference on Ultra Wideband Systems and Technologies, 2003.
M. Win, R. Scholtz (1998). Impulse radio: how it works. IEEE Communications Letters, 2.
A. Molisch, Chia-Chin Chong, Shahriar (2006). A Comprehensive Standardized Model for Ultrawideband Propagation Channels. IEEE Transactions on Antennas and Propagation, 54.
Yu Gu, B. Seanor, G. Campa, M. Napolitano, L. Rowe, S. Gururajan, Sheng Wan (2006). Design and Flight Testing Evaluation of Formation Control Laws. IEEE Transactions on Control Systems Technology, 14.
P. Conroy, Daman Bareiss, M. Beall, J. Berg (2014). 3-D Reciprocal Collision Avoidance on Physical Quadrotor Helicopters with On-Board Sensing for Relative Positioning. ArXiv, abs/1411.3794.
M. Schwager, Brian Julian, D. Rus (2009). Optimal coverage for multiple hovering robots with downward facing cameras. 2009 IEEE International Conference on Robotics and Automation.
N. Michael, D. Mellinger, Q. Lindsey, V. Kumar (2010). The GRASP Multiple Micro-UAV Test Bed: Experimental evaluation of multirobot aerial control algorithms. IEEE Robotics & Automation Magazine, 17.
D. Neirynck, Eric Luk, Michael McLaughlin (2016). An alternative double-sided two-way ranging method. 2016 13th Workshop on Positioning, Navigation and Communications (WPNC).
Maximilian Kriegleder, Sundara Digumarti, Raymond Oung, R. D'Andrea (2015). Rendezvous with bearing-only information and limited sensing range. 2015 IEEE International Conference on Robotics and Automation (ICRA).
Xun Zhou, S. Roumeliotis (2008). Robot-to-Robot Relative Pose Estimation From Range Measurements. IEEE Transactions on Robotics, 24.
Matthew Turpin, Nathan Michael, Vijay Kumar (2012). Decentralized formation control with variable shapes for aerial robots. 2012 IEEE International Conference on Robotics and Automation.
S. Hauert, S. Leven, M. Varga, Fabio Ruini, A. Cangelosi, J. Zufferey, D. Floreano (2011). Reynolds flocking in reality with fixed-wing robots: Communication range vs. maximum turning rate. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems.
A. Werner, W. Stürzl, J. Zanker (2016). Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes? PLoS ONE, 11.
Timothy Stirling, James Roberts, J. Zufferey, D. Floreano (2012). Indoor navigation with a swarm of flying robots. 2012 IEEE International Conference on Robotics and Automation.
D. Roetenberg, C. Baten, P. Veltink (2007). Estimating Body Segment Orientation by Applying Inertial and Magnetic Sensing Near Ferromagnetic Materials. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 15.
M. Coppola, K. McGuire, Kirk Scheper, G. de Croon (2016). On-board communication-based relative localization for collision avoidance in Micro Air Vehicle teams. Autonomous Robots, 42.
I. Couzin, N. Franks (2003). Self-organized lane formation and optimized traffic flow in army ants. Proceedings of the Royal Society of London. Series B: Biological Sciences, 270.
M. Saska, Vojtěch Vonásek, J. Chudoba, Justin Thomas, Giuseppe Loianno, Vijay Kumar (2016). Swarm Distribution and Deployment for Cooperative Surveillance by Micro-Aerial Vehicles. Journal of Intelligent & Robotic Systems, 84.
M. Afzal, V. Renaudin, G. Lachapelle (2011). Use of Earth's Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation. Sensors (Basel, Switzerland), 11.
N. Roy, P. Newman, S. Srinivasa (2013). Towards A Swarm of Agile Micro Quadrotors.
Cheng Hui, Yousheng Chen, Wong Shing (2014). Trajectory tracking and formation flight of autonomous UAVs in GPS-denied environments using onboard sensing. Proceedings of 2014 IEEE Chinese Guidance, Navigation and Control Conference.
James Roberts, Timothy Stirling, J. Zufferey, D. Floreano (2012). 3-D relative positioning sensor for indoor flying robots. Autonomous Robots, 33.
(2005). Swarm Robotics, SAB 2004 International Workshop, Santa Monica, CA, USA, July 17, 2004, Revised Selected Papers, 3342.
Steven Quintero, G. Collins, J. Hespanha (2013). Flocking with fixed-wing UAVs for distributed sensing: A stochastic optimal control approach. 2013 American Control Conference.
R. Hermann, A. Krener (1977). Nonlinear controllability and observability. IEEE Transactions on Automatic Control, 22.
L. Merino, F. Caballero, J. Dios, J. Melero, A. Ollero (2006). A cooperative perception system for multiple UAVs: Application to automatic detection of forest fires. Journal of Field Robotics, 23.
Manuele Brambilla, E. Ferrante, M. Birattari, M. Dorigo (2013). Swarm robotics: a review from the swarm engineering perspective. Swarm Intelligence, 7.
Agostino Martinelli, R. Siegwart (2005). Observability analysis for mobile robot localization. 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems.
Xuebing Yuan, Shuai Yu, Shengzhi Zhang, Guoping Wang, Sheng Liu (2015). Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System. Sensors (Basel, Switzerland), 15.
R. Beard, T. McLain (2003). Multiple UAV cooperative search under collision avoidance and limited range communication constraints. 42nd IEEE International Conference on Decision and Control (IEEE Cat. No.03CH37475), 1.
J. Foerster (2001). Ultra-Wideband Technology for Short- or Medium-Range Wireless Communications.
A. Hayes, Parsa Dormiani-Tabatabaei (2002). Self-organized flocking with agent failure: Off-line optimization and demonstration with real robots. Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292), 4.
Hui Liu, H. Darabi, Pat Banerjee, J. Liu (2007). Survey of Wireless Indoor Positioning Techniques and Systems. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 37.
A. Hayes, A. Martinoli, R. Goodman (2003). Swarm robotic odor localization: Off-line optimization and validation with real robots. Robotica, 21.
Alex Kushleyev, Daniel Mellinger, Caitlin Powers, Vijay Kumar (2013). Towards a swarm of agile micro quadrotors. Autonomous Robots, 35.
Alejandro Cornejo, R. Nagpal (2014). Distributed Range-Based Relative Localization of Robot Swarms.
Thien-Minh Nguyen, Abdul Zaini, Kexin Guo, Lihua Xie (2016). An Ultra-Wideband-based Multi-UAV Localization System in GPS-denied environments.
G. Vásárhelyi, Csaba Virágh, G. Somorjai, N. Tarcai, T. Szörényi, T. Nepusz, T. Vicsek (2014). Outdoor flocking and formation flight with autonomous aerial robots. 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems.
Arjun Iyer, Luis Rayas, A. Bennett (2013). Formation control for cooperative localization of MAV swarms (demonstration).
Yash Mulgaonkar, Gareth Cross, Vijay Kumar (2015). Design of small, safe and robust quadrotor swarms. 2015 IEEE International Conference on Robotics and Automation (ICRA).
D. Roetenberg, H. Luinge, C. Baten, P. Veltink (2005). Compensation of magnetic disturbances improves inertial and magnetic sensing of human body segment orientation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 13.
M. Afzal, V. Renaudin, G. Lachapelle (2010). Assessment of Indoor Magnetic Field Anomalies using Multiple Magnetometers.
Jacqueline Degen, A. Kirbach, Lutz Reiter, Konstantin Lehmann, Philipp Norton, Mona Storms, Miriam Koblofsky, Sarah Winter, Petya Georgieva, Hai Nguyen, Hayfe Chamkhi, H. Meyer, Pawan Singh, Gisela Manz, U. Greggers, R. Menzel (2016). Honeybees Learn Landscape Features during Exploratory Orientation Flights. Current Biology, 26.
E. Sahin (2004). Swarm Robotics: From Sources of Inspiration to Domains of Application.
Steven Roelofsen, D. Gillet, A. Martinoli (2015). Reciprocal collision avoidance for quadrotors using on-board visual detection. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Nathan Michael, Daniel Mellinger, Quentin Lindsey, Vijay Kumar (2010). Experimental Evaluation of Multirobot Aerial Control Algorithms.
Markus Achtelik, M. Achtelik, Y. Brunet, M. Chli, S. Chatzichristofis, J. Decotignie, K. Doth, F. Fraundorfer, L. Kneip, Daniel Gurdan, Lionel Heng, E. Kosmatopoulos, L. Doitsidis, Gim Lee, Simon Lynen, Agostino Martinelli, Lorenz Meier, M. Pollefeys, D. Piguet, A. Renzaglia, D. Scaramuzza, R. Siegwart, J. Stumpf, Petri Tanskanen, C. Troiani, S. Weiss (2012). SFly: Swarm of micro flying robots. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems.
Xiaomeng Li, Qiuzhan Zhou, Shaofang Lu, Hao Lu (2006). A New Method of Double Electric Compass for Localization in Automobile Navigation. 2006 International Conference on Mechatronics and Automation.
Kexin Guo, Zhirong Qiu, W. Meng, Lihua Xie, R. Teo (2017). Ultra-wideband based cooperative relative localization algorithm and experiments for multiple unmanned aerial vehicles in GPS denied environments. International Journal of Micro Air Vehicles, 9.
M. Saska, Jan Vakula, L. Preucil (2014). Swarms of micro aerial vehicles stabilized under a visual relative localization. 2014 IEEE International Conference on Robotics and Automation (ICRA).
Tobias Naegeli, C. Conte, A. Domahidi, M. Morari, Otmar Hilliges (2014). Environment-independent formation flight for micro aerial vehicles. 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems.
Soon Chiew, Weihua Zhao, Go Hiong (2015). Swarming Coordination with Robust Control Lyapunov Function Approach. Journal of Intelligent & Robotic Systems, 78.
Nathan Michael, Daniel Mellinger, Quentin Lindsey, Vijay Kumar (2010). The GRASP Multiple Micro-UAV Testbed. IEEE Robotics & Automation Magazine, 17.
M. Schwager, J. McLurkin, J. Slotine, D. Rus (2008). From Theory to Practice: Distributed Coverage Control Experiments with Groups of Robots.
E. Smeur, Q. Chu, G. de Croon (2016). Adaptive Incremental Nonlinear Dynamic Inversion for Attitude Control of Micro Air Vehicles. Journal of Guidance, Control, and Dynamics, 39.
We present a range-based solution for indoor relative localization by micro air vehicles (MAVs), achieving sufficient accuracy for leader–follower flight. Moving forward from previous work, we removed the dependency on a common heading measurement by the MAVs, making the relative localization accuracy independent of magnetometer readings. We found that this restricts the relative maneuvers that guarantee observability, and also that higher accuracy range measurements are required to rectify the missing heading information, yet both disadvantages can be tackled. Our implementation uses ultra wideband, both for range measurements between MAVs and for sharing their velocities, accelerations, yaw rates, and heights with each other. We showcased our implementation on a total of three Parrot Bebop 2.0 MAVs and performed leader–follower flight in a real-world indoor environment. The follower MAVs were autonomous and used only on-board sensors to track the same trajectory as the leader. They could follow the leader MAV in close proximity for the entire duration of the flights.

Keywords: Relative localization · Leader–follower · Micro air vehicles · Autonomous flight · Indoor

Autonomous Robots (2020) 44:415–441

This is one of the several papers published in Autonomous Robots comprising the Special Issue on Multi-Robot and Multi-Agent Systems.

Correspondence: Mario Coppola (m.coppola@tudelft.nl). Authors: Steven van der Helm (stevenhelm@live.nl), Kimberly N. McGuire (k.n.mcguire@tudelft.nl), Guido C.H.E. de Croon (g.c.h.e.decroon@tudelft.nl). Department of Control and Simulation (Micro Air Vehicle Laboratory) and Department of Space Systems Engineering, Faculty of Aerospace Engineering, Delft University of Technology, Kluyverweg 1, 2629HS Delft, The Netherlands.

1 Introduction

Swarm robotics offers to make micro air vehicle (MAV) applications more robust, flexible, and scalable (Şahin 2005; Brambilla et al. 2013). These properties pertain to a group's ability to remain operable under the loss of individual members and to reconfigure for different missions. Furthermore, a cooperating swarm of MAVs could execute tasks faster than any single MAV. The envisioned applications of such multi-agent robotic systems are plentiful. Examples of interest are: cooperative surveillance and/or mapping (Saska et al. 2016; Schwager et al. 2009a; Achtelik et al. 2012), localization of areas of sensory interest (e.g., chemical plumes) (Hayes et al. 2003; Schwager et al. 2009b), the detection of forest fires (Merino et al. 2006), or search missions in hazardous environments (Beard and McLain 2003). In order to deploy a team of MAVs for such applications, there are certain behaviors that the MAVs should be capable of, such as collision avoidance (Coppola et al. 2018; Roelofsen et al. 2015) or leader–follower/formation flight (Vásárhelyi et al. 2014; Hui et al. 2014; Gu et al. 2006). These tasks are accomplished by the MAVs through knowledge of the relative location of (at least) the neighboring MAVs in the group, for which several solutions can be found in literature.

Often used are external systems that provide a global reference frame within which agents can extract both their own and the other MAVs' position. One example is motion capture systems (MCSs) (Schwager et al. 2009b; Mulgaonkar et al. 2015; Kushleyev et al. 2013; Michael et al. 2010; Turpin et al. 2012; Chiew et al. 2015; Hayes and Dormiani-Tabatabaei 2002). MCSs provide highly accurate location data, but only within the limited coverage provided by the system. Alternatively, global navigation satellite systems (GNSS) can be used to provide similar location data (Gu et al. 2006; Saska et al. 2016; Vásárhelyi et al. 2014; Quintero et al. 2013; Hauert et al. 2011).
Although GNSS is widely available, it has relatively low accuracy compared to MCS, and therefore large inter-MAV separation is required to guarantee safe flight (Nägeli et al. 2014). Furthermore, GNSS cannot reliably be used indoors due to signal attenuation (Liu et al. 2007) and can also be subject to multi-path issues in some urban environments or forests (Nguyen et al. 2016).

To increase the versatility of the solution, MAVs should thus use on-board sensors to determine the locations of neighboring MAVs. Often, vision based methods are employed, such as: onboard camera based systems (Nägeli et al. 2014; Iyer et al. 2013; Conroy et al. 2014; Roelofsen et al. 2015), or infrared sensor systems (Kriegleder et al. 2015; Stirling et al. 2012; Roberts et al. 2012). A drawback of these systems is that they have a limited field of view. This issue can be tackled by creating constructs with an array of sensors (Roberts et al. 2012) or by actively tracking neighboring agents (Nägeli et al. 2014) to keep them in the field of view. The first solution introduces a weight penalty, while the second solution severely limits freedom of motion and scalability as a consequence of the need for active tracking of neighbors. Therefore, neither solution is ideal for MAVs. A natively omni-directional sensor would be more advantageous; one such sensor is a wireless radio transceiver.

Fig. 1 Leader–follower flight with 3 Parrot Bebops, equipped with UWB modules. By estimating and communicating their relative range (R) and ego-motion (v), follower 1 (f_1) and follower 2 (f_2) are able to localize the leader and to follow its trajectory with a certain time delay

Guo et al. (2017) recently implemented an ultra wideband (UWB) radio-based system for this. Range measurements are fused with displacement information from each MAV to estimate the relative location between MAVs. However, their method suggests that each MAV must keep track of their own displacement with respect to an initial launching point. If this measurement is obtained through on-board sensors (for example, by integrating velocities) then this measurement can be subject to drift over time.

Alternatively, Coppola et al. (2018) demonstrated a Bluetooth based relative localization method. Rather than using displacement information, the velocities of the MAVs, the orientation, and the height were communicated between each other, and the signal strength was used as a range measurement.

Despite the promising results of range-based solutions, a drawback of the solutions by Coppola et al. (2018) and by Guo et al. (2017) is that the MAVs need knowledge of a common frame orientation. This is established by having each MAV measure their heading with respect to North, which would typically be done with magnetometers. Magnetometers are notoriously susceptible to local disturbances in the magnetic field. In indoor environments, disturbances upwards of 80° can occur (Afzal et al. 2010). The difficulty of establishing a reliable direction towards North in an indoor environment is a well known problem. Solutions are found in the form of complementary filters (Roetenberg et al. 2005, 2007; Afzal et al. 2011; Yuan et al. 2015), or the use of redundant magnetic sensors to compensate the local disturbances (Afzal et al. 2010; Li et al. 2006). However, a shared reference frame is not theoretically necessary for the purposes of range-based relative localization (Zhou and Roumeliotis 2008; Martinelli and Siegwart 2005).

The main contribution of this paper is an analysis of the consequences of removing the heading dependency in range based relative localization, leading to the development and implementation of a heading-independent relative localization and tracking method that is accurate enough for full on-board indoor leader–follower flight, as shown in Fig. 1. The analysis is provided by a formal observability analysis and by performing limit-case simulations. Differently from the work of Zhou and Roumeliotis (2008) and Martinelli and Siegwart (2005), the analysis also considers the inclusion of acceleration information, since this is commonly known by MAVs from their Inertial Measurement Unit (IMU). Furthermore, our analysis specifically focuses on the implications of removing a heading dependency on the performance of the relative localization filters and on the relative maneuvers that the agents can perform in order to guarantee that the filter remains observable. The observability analysis will show that the task of leader–follower flight is especially difficult with range-based relative localization methods, because it does not allow for the MAVs to fly parallel trajectories. We then use the insights gathered for the development and implementation of a heading-independent leader–follower system that we are able to use on-board of autonomous MAVs operating indoors. The MAVs rely only on on-board sensors, using UWB for both communication and relative ranging.

The structure of the paper is as follows. First, in Sect. 2, we compare the theoretical observability of range based relative localization systems both with and without a reliance on a common heading. The findings from Sect. 2 are verified through simulation in Sect. 3, where we also evaluate the difference in performance that can be expected. We carry this information forward in Sect. 4, where a heading-independent system is implemented on real MAVs, and where we show the results of our leader–follower experiments. The results are further discussed in Sect. 5. Finally, the overall conclusions are drawn in Sect. 6. Future work is discussed in Sect. 7.
2 Observability of the relative localization filter

In this section, an observability analysis is performed that specifically focuses on the practical implications of performing range based relative localization both with and without reliance on a common heading reference. Specifically, we will study the case where one MAV (denoted MAV 1) tracks another MAV (denoted MAV 2). Despite our focus on MAVs in particular, the conclusions that follow hold for any general system that can provide the same sensory information. Furthermore, the results can be extrapolated to more than two MAVs, as will be demonstrated in Sect. 4.

2.1 Preliminaries

We will conduct the analysis by studying the local weak observability of the systems (Hermann and Krener 1977). With an analytical test, briefly introduced in the following, local weak observability can be used to extract whether a specific state can be distinguished from other states in its neighborhood.

Consider a generic non-linear state-space system:

    ẋ = f(x, u)    (1)
    y = h(x)    (2)

The system has state vector x = [x_1, x_2, ..., x_n]^T ∈ R^n, an input vector u ∈ R^l, and an output vector y ∈ R^m. The vector function f(x, u) contains the definitions for the time derivatives of all the states in x, and the vector function h(x) contains the observation equations for the system. The Lie derivatives of this system are:

    L^0_f h = h    (3)
    L^1_f h = ∇ ⊗ L^0_f h · f    (4)
    L^i_f h = ∇ ⊗ L^(i-1)_f h · f    (5)

where ⊗ is the Kronecker product and ∇ is the differential operator, defined as ∇ = [∂/∂x_1, ∂/∂x_2, ..., ∂/∂x_n]. Note that, accordingly, ∇ ⊗ h is equivalent to the Jacobian matrix of h. Using these definitions, an observability matrix O can be constructed, as in Eq. 6:

    O = [∇ ⊗ L^0_f h; ∇ ⊗ L^1_f h; ...; ∇ ⊗ L^i_f h],  i ∈ N    (6)

A system is locally weakly observable if the observability matrix is full rank.

2.2 Reference frames

For the analyses that follow, consider the reference frames schematically depicted in Fig. 2. Denoted by I is the Earth-fixed North-East-Down (NED) reference frame, which is assumed to be an inertial frame of reference. Denoted by H_i (i = 1, 2) is a body-fixed reference frame belonging to MAV i. Its origin is coincident with MAV i's centre of gravity, and its location with respect to the I frame is represented by the vector p_i. H_i is a horizontal frame of reference, such that the z-axis of the H_i frame always remains parallel to that of the I frame. The H_i frame is rotated with respect to the I frame only about the positive z-axis by an angle ψ_i, where ψ_i is the heading that MAV i has with respect to North, also referred to as its yaw angle. The rate of change of ψ_i is represented by r_i.

Note that the H_i frame is different from a typical body-fixed frame B_i, which uses the three Euler angles for roll, pitch, and yaw to represent the MAV's physical orientation with respect to the I frame. Using H_i rather than B_i simplifies the kinematic relations without having to impose assumptions on the MAV's flight condition (e.g., being in a near-hover state with small roll and pitch angles).

Fig. 2 Reference frames used in this paper. Frame I in purple is the earth-fixed North East Down frame (assumed to be inertial). Frames H_1 (blue) and H_2 (red) are body fixed reference frames for MAVs 1 and 2, respectively (Color figure online)

2.3 Nonlinear system description

We shall study the case where MAV 1 attempts to estimate the relative position of MAV 2. We use p to denote this relative position, such that p = p_2 − p_1 (see Fig. 2). Furthermore, let v_i and a_i be the linear velocities and accelerations of frame H_i with respect to frame I expressed in frame H_i, respectively. Finally, let Δψ represent the difference in heading between MAVs 1 and 2, such that Δψ = ψ_2 − ψ_1.

Since the horizontal plane of H_i matches the horizontal plane of I, the height of the MAVs from the ground can be treated as a decoupled dimension. This does not affect the observability result as long as the MAVs are both capable of measuring and comparing their own height, which is the case. Therefore, for brevity, height will not be included in the following analysis. The vectors for the relative position p, the velocity v_i, and the acceleration a_i can thus be expanded as 2D vectors: p = [p_x, p_y]^T, v_i = [v_{x,i}, v_{y,i}]^T, a_i = [a_{x,i}, a_{y,i}]^T, i = 1, 2.

The rate of change of Δψ is Δψ̇ = r_2 − r_1. Note that the value for r_i is not equal to the yaw rate as would commonly be measured by an on-board rate gyroscope in the body frame B_i. Instead, r_i is expressed as:

    r_i = (sin(φ_i)/cos(θ_i)) q̃_i + (cos(φ_i)/cos(θ_i)) r̃_i    (7)

where q̃_i and r̃_i represent the true pitch and yaw rate as would be measured by a rate gyroscope, and φ_i and θ_i are the roll and pitch angles of the MAV. However, for the sake of simplicity, r_i will be referred to as the MAV's yaw rate.

Similarly, a_i, which is the value for the linear acceleration of the H_i frame expressed in coordinates of the H_i frame, is not equal to what is measured by the on-board accelerometer. Instead, it is equal to:

    a_i = [c(θ_i), s(φ_i)s(θ_i), c(φ_i)s(θ_i); 0, c(φ_i), −s(φ_i)] s_i^B    (8)

where s_i^B is the specific force measured in the body frame B_i by the accelerometer of MAV i. Furthermore, c(α) and s(α) represent shorthand notation for cos(α) and sin(α), respectively. The matrix in this equation consists of the first two rows of the rotation matrix from the B_i frame to the H_i frame.

In the state differential equations (introduced as Eq. 9 in the following), R denotes the 2D rotation matrix from frame H_2 to H_1:

    R = R(Δψ) = [cos(Δψ), −sin(Δψ); sin(Δψ), cos(Δψ)]    (10)

The matrices S_1 and S_2 are the skew-symmetric matrix equivalent of the cross product, adapted to the 2D case. The matrix S_i is equal to:

    S_i = S(r_i) = [0, −r_i; r_i, 0],  i = 1, 2    (11)

The variables a_i and r_i are inputs into the system, and MAV 1 must thus have knowledge of these values. However, these are typically available from accelerometer and gyroscope data in combination with the appropriate relations given in Eqs. 7 and 8.

Finally, Eq. 9 needs to be complemented with an observation model. The MAVs should be able to measure the relative range between each other, along with their own and the other's velocities. Then, the analysis that follows aims to study the difference between the following two scenarios: a scenario where the above measurements are the only measurements and a scenario where the MAVs are additionally capable of observing each other's headings.
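The frame conventions above can be made concrete with a short sketch implementing Eqs. 7, 8, 10, and 11 in plain Python (an illustration only, not the authors' flight code; the function names `rot`, `skew`, `yaw_rate_h`, and `accel_h` are our own):

```python
import math

def rot(dpsi):
    """2D rotation matrix R(Δψ) from frame H2 to H1 (Eq. 10), as nested lists."""
    c, s = math.cos(dpsi), math.sin(dpsi)
    return [[c, -s], [s, c]]

def skew(r):
    """2D skew-symmetric matrix S(r) (Eq. 11), the planar analogue of the cross product."""
    return [[0.0, -r], [r, 0.0]]

def yaw_rate_h(phi, theta, q_gyro, r_gyro):
    """Heading rate r_i of the horizontal frame H_i (Eq. 7), computed from the
    body-frame gyroscope pitch/yaw rates and the roll/pitch angles phi, theta."""
    return (math.sin(phi) / math.cos(theta)) * q_gyro \
         + (math.cos(phi) / math.cos(theta)) * r_gyro

def accel_h(phi, theta, s_body):
    """Horizontal-frame acceleration a_i (Eq. 8): the first two rows of the
    body-to-horizontal rotation applied to the specific force s_body (3-vector)."""
    c, s = math.cos, math.sin
    row1 = [c(theta), s(phi) * s(theta), c(phi) * s(theta)]
    row2 = [0.0, c(phi), -s(phi)]
    return [sum(m * f for m, f in zip(row, s_body)) for row in (row1, row2)]
```

In level flight (φ = θ = 0), Eq. 7 collapses to r_i = r̃_i and Eq. 8 passes the horizontal specific-force components through unchanged, which is a quick way to sanity-check the conventions.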
The situation where the MAVs can observe a heading is referred to as Σ_A, and the situation where a heading is not observed is referred to as Σ_B.

Following the above, the complete state vector of the system is given by x = [p^T, Δψ, v_1^T, v_2^T]^T, and the input vector is u = [a_1^T, a_2^T, r_1, r_2]^T. The continuous time state differential equations can be written as:

    ẋ = f(x, u) = [−v_1 + R v_2 − S_1 p; r_2 − r_1; a_1 − S_1 v_1; a_2 − S_2 v_2]    (9)

where R and S_i are as defined in Eqs. 10 and 11.

Σ_A: The scenario where ψ_1 and ψ_2 are observed is equivalent to Δψ (the difference in headings) being observed. Therefore, for Σ_A, the observation model is:

    y_A = h_A(x) = [h_A1(x); h_A2(x); h_A3(x); h_A4(x)] = [½ p^T p; Δψ; v_1; v_2]    (12)

Note that the observation equation h_A1(x) is slightly modified with regards to the previously mentioned measurements. Rather than observing the range between the two MAVs (i.e. ||p||), half the squared range is observed (i.e. ½ p^T p). This change makes the observability analysis more convenient without affecting its result. Both ||p|| and ½ p^T p contain the same information as far as observability of the system is concerned (Zhou and Roumeliotis 2008).

Σ_B: In this case, the headings of the MAVs are not measured, and it is thus not possible to observe the difference in heading Δψ directly. For Σ_B, the observation model is:

    y_B = h_B(x) = [h_B1(x); h_B2(x); h_B3(x)] = [½ p^T p; v_1; v_2]    (13)

The effect of the difference in the observation equations is studied in the following sections.

2.4 Observability analysis with a common heading reference

For system Σ_A, which uses the observation model from Eq. 12, the first entry in the observability matrix is equal to:

    ∇ ⊗ L^0 h_A = ∇ ⊗ h_A = [p^T, 0, 0_{1×2}, 0_{1×2}; 0_{1×2}, 1, 0_{1×2}, 0_{1×2}; 0_{2×2}, 0_{2×1}, I_{2×2}, 0_{2×2}; 0_{2×2}, 0_{2×1}, 0_{2×2}, I_{2×2}] = [p^T, 0_{1×5}; 0_{5×2}, I_{5×5}]    (14)

where I_{n×n} represents an identity matrix of size n × n and 0_{m×n} represents a null matrix of size m × n. We can already deduce simplifying information from Eq. 14 that will aid the subsequent analysis. First, note that, for the higher order terms in the observability matrix, the last 5 columns do not contribute to increasing its rank, because these columns are populated with an identity matrix. Furthermore, these higher order terms in the observability matrix (corresponding to the observations of Δψ, v_1, and v_2) only have terms in those last 5 columns, because none of the higher order Lie derivatives corresponding to those observations depend on the state p. For this reason, these need not be computed and we can thus omit them for brevity. The remainder of this analysis considers only the terms corresponding to observation h_A1(x) = ½ p^T p.

The first order Lie derivative corresponding to the observation h_A1(x) = ½ p^T p is equal to:

    L^1 h_A1 = p^T (−v_1 + R v_2 − S_1 p)    (15)

Next, remembering that S_1 is a skew symmetric matrix, such that S_1 + S_1^T = 0_{2×2}, the following identity is obtained:

    ∂/∂p (p^T S_i p) = p^T (S_i + S_i^T) = p^T 0_{2×2} = 0_{1×2}    (16)

Using this identity, it can be verified that the second term in the observability matrix corresponding to h_A1(x) is:

    ∇ ⊗ L^1 h_A1 = [(−v_1 + R v_2)^T, p^T (∂R/∂Δψ) v_2, −p^T, p^T R]    (17)

At this point, it would be possible to continue calculating higher order terms for the observability matrix, but in practice this is not necessary. The first term of the observability matrix as shown in Eq. 14 already presents a matrix of rank 6. Since the state is of size 7, this means that only 1 more linearly independent row needs to be added to the observability matrix to provide local weak observability of the system. Furthermore, it is of practical interest to study the scenarios in which the system is locally weakly observable with a minimum amount of Lie derivatives involved in the analysis. This is due to the fact that in practice all signals are noisy, and the derivative of a noisy signal will be even noisier. It will be demonstrated that the terms presented in Eq. 17 are sufficient, under certain conditions, to make the observability matrix full rank.
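Equation 15 can be sanity-checked numerically against Eq. 9: propagating the state with a small Euler step and finite-differencing h_A1 = ½ p^T p should reproduce the analytic Lie derivative, with the p^T S_1 p contribution vanishing exactly as per Eq. 16. The sketch below is our own illustration (not the authors' code) and assumes the state ordering x = [p_x, p_y, Δψ, v_{x,1}, v_{y,1}, v_{x,2}, v_{y,2}] and input ordering u = [a_{x,1}, a_{y,1}, a_{x,2}, a_{y,2}, r_1, r_2]:

```python
import math

def f(x, u):
    """State derivative of Eq. 9 for the 7-dimensional planar state."""
    px, py, dpsi, v1x, v1y, v2x, v2y = x
    a1x, a1y, a2x, a2y, r1, r2 = u
    c, s = math.cos(dpsi), math.sin(dpsi)
    # pdot = -v1 + R v2 - S1 p, with S1 p = [-r1*py, r1*px]
    pdx = -v1x + c * v2x - s * v2y + r1 * py
    pdy = -v1y + s * v2x + c * v2y - r1 * px
    ddpsi = r2 - r1
    # vidot = ai - Si vi, with Si vi = [-ri*viy, ri*vix]
    v1dx, v1dy = a1x + r1 * v1y, a1y - r1 * v1x
    v2dx, v2dy = a2x + r2 * v2y, a2y - r2 * v2x
    return [pdx, pdy, ddpsi, v1dx, v1dy, v2dx, v2dy]

def lie1(x):
    """Analytic first-order Lie derivative of h_A1 = 0.5 p^T p (Eq. 15).
    The p^T S1 p term is zero by the identity of Eq. 16, so this equals
    p^T (-v1 + R v2)."""
    px, py, dpsi, v1x, v1y, v2x, v2y = x
    c, s = math.cos(dpsi), math.sin(dpsi)
    return px * (-v1x + c * v2x - s * v2y) + py * (-v1y + s * v2x + c * v2y)
```

Finite-differencing ½ p^T p along one Euler step of `f` agrees with `lie1` up to the step size, which confirms that the skew-symmetric term indeed drops out.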
As mentioned, Eq. 14 already shows that the last five columns of the observability matrix are no longer of interest to increase its rank. Furthermore, only the observation h_A1(x) = ½ p^T p provides non-zero terms in the first two columns of the observability matrix. Therefore, the following matrix can be constructed by collecting the terms of the first two columns in the observability matrix belonging to observation h_A1(x):

    M_A = [p^T; (−v_1 + R v_2)^T]    (18)

where the first term is from the zeroth order Lie derivative (see Eq. 14) and the second term from the first order Lie derivative (see Eq. 17). The system is thus observable with a minimum amount of Lie derivatives if the matrix given by Eq. 18 has two linearly independent rows. By the definition of linear independence, this means that the following condition must hold to guarantee local weak observability of the system:

    −v_1 + R v_2 ≠ c p    (19)

where c is an arbitrary constant.

The condition in Eq. 19 essentially tells us that the relative velocity of the two MAVs should not be a multiple of the relative position vector between the two. For more practical insight, we can extract more intuitive conditions that must also be met for Eq. 19 to hold. These conditions are:

    I.   p ≠ 0_{2×1}    (20)
    II.  v_1 ≠ 0_{2×1} or v_2 ≠ 0_{2×1}    (21)
    III. v_1 ≠ R v_2    (22)

The first condition tells us that the x and y coordinates of the relative position of MAV 2 with respect to MAV 1 should not both be equal to 0. In practice, this would only be possible if the MAVs were separated by height, for otherwise their physical dimension would prevent this condition from occurring. The second condition tells us that one of the two MAVs needs to be moving to render the filter observable, and that the observability is indifferent to which of the MAVs is moving (hence the or operator). The third condition tells us that the MAVs should not be moving in parallel at the same speed, where the rotation matrix R transforms v_2 to the H_1 frame.

Whilst these three conditions are easier to consider, it should be noted that they form only a subset of the conditions imposed by Eq. 19. For example, the scenario where MAV 2 is stationary, and MAV 1 flies straight towards MAV 2, does not violate any of these three conditions. It does, however, violate Eq. 19. Therefore, the observability of a state and input combination should be checked against the full condition in Eq. 19.
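The full condition of Eq. 19 amounts to a 2×2 determinant test on M_A: observability with a minimum of Lie derivatives is lost exactly when the relative velocity −v_1 + R v_2 is collinear with p. A minimal sketch of such a check (the function name and tolerance are our own choices, not from the paper):

```python
import math

def observable_min_lie(p, v1, v2, dpsi, tol=1e-9):
    """Check the condition of Eq. 19 for system Sigma_A: returns False when
    -v1 + R(dpsi) v2 is a multiple of p, i.e. when M_A (Eq. 18) is singular."""
    c, s = math.cos(dpsi), math.sin(dpsi)
    wx = -v1[0] + c * v2[0] - s * v2[1]   # relative velocity, H1 frame
    wy = -v1[1] + s * v2[0] + c * v2[1]
    # w = c*p for some scalar c  <=>  det([p w]) = 0
    return abs(p[0] * wy - p[1] * wx) > tol
```

This reproduces the cases discussed above: parallel same-speed flight (v_1 = R v_2) fails the test, and so does the less intuitive head-on approach toward a stationary MAV, since there the relative velocity is aligned with p even though conditions I–III all hold.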
MAV 2 is stationary, and MAV 1 flies straight towards MAV 2, does not violate any of these three conditions. It does, how- ⎡ ⎤ a + Ra 1 2 ever, violate Eq. 19. Therefore, the observability of a state ⎢ ⎥ ∂R ∂R and input combination should be checked against the full ⎢ −2v v + p a ⎥ 1 2 2 ∂Δψ ∂Δψ ⎢ ⎥ ∇L h = (26) B1 condition in Eq. 19. ⎢ ⎥ 2v − 2Rv ⎣ ⎦ 1 2 −2R v + 2v 1 2 2.5 Observability analysis without a common heading reference Just as for , a part of the observation matrix can be extracted for analysis. This time, the first three columns in After determining the conditions under which system is the observation matrix (as opposed to two) are collected for locally weakly observable, we compare it to the system where the observation h (x) = p p. Also, this time the terms B1 the heading measurements are no longer present. We now up to and including the second order Lie derivative are min- consider system , whose observation equation (Eq. 13) imally needed to obtain a full rank observability matrix. The does not include the state Δψ. For this system, the first term following matrix is obtained: in the observability matrix is: ⎡ ⎤ ⎡ ⎤ p 0 p 0 0 0 1×2 1×2 ⎢ ⎥ ⎢ ⎥ ∂R ⎢ ⎥ −v + v R p v M = 1 2 2 (27) ∇⊗ L h =∇ ⊗ h = 0 0 I 0 (23) B B ⎣ 2×2 2×1 2×2 2×2 ⎦ B ∂Δψ ⎣ ⎦ ∂R ∂R 0 0 0 I −a + a R −2v v + p a 1 2 1 2 2 2×2 2×1 2×2 2×2 ∂Δψ ∂Δψ Equation 23 is very similar to Eq. 14, but with the impor- In this case, obtaining the conditions for which this is a tant difference that the row corresponding to the observation full rank matrix is less obvious due to the plethora of terms. of Δψ is null. Consequently, the matrix is only of rank 5, Rather than directly demonstrating linear independence of rather than rank 6. Since the state size is still 7, a minimum the three rows in Eq. 27, the determinant |M | may be com- of two more independent rows must be added to the observ- puted and demonstrated to be non-zero. 
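The rank condition on Eq. 27 can also be probed numerically before any symbolic expansion. The following sketch (Python with NumPy; the function and variable names are our own and purely illustrative, and the matrix layout follows our reading of Eq. 27) assembles the three rows for a given state and input combination and evaluates the determinant; a (near-)zero value flags a locally unobservable combination.

```python
import numpy as np

def rot(dpsi):
    """Planar rotation matrix R(dpsi) and its derivative dR/d(dpsi)."""
    c, s = np.cos(dpsi), np.sin(dpsi)
    R = np.array([[c, -s], [s, c]])
    dR = np.array([[-s, -c], [c, -s]])
    return R, dR

def det_MB(p, v1, v2, a1, a2, dpsi):
    """Determinant of the 3x3 matrix of Eq. 27 for the heading-independent system."""
    R, dR = rot(dpsi)
    row0 = np.hstack([p, 0.0])                      # zeroth order Lie derivative
    row1 = np.hstack([-v1 + R @ v2, p @ dR @ v2])   # first order Lie derivative
    row2 = np.hstack([-a1 + R @ a2,                 # second order Lie derivative
                      -2 * v1 @ dR @ v2 + p @ dR @ a2])
    return np.linalg.det(np.vstack([row0, row1, row2]))

p = np.array([1.0, 1.0])
v2 = np.array([1.0, 0.0])
zero = np.zeros(2)
# Parallel motion at different speeds, no acceleration: determinant vanishes (unobservable).
print(det_MB(p, 2 * v2, v2, zero, zero, 0.0))
# A generic non-parallel, accelerating combination: determinant is non-zero (observable).
print(det_MB(p, np.array([1.0, 0.5]), v2, np.array([0.0, 0.3]), np.array([0.2, 0.0]), 0.1))
```

Evaluating the determinant this way sidesteps the closed-form expansion entirely, at the cost of only checking one state and input combination at a time.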
This is done as follows. Recall that p = [p_x, p_y]^T. Furthermore, suppose −v_1 + R v_2 = [a, b]^T and −a_1 + R a_2 = [c, d]^T. Then, matrix M_B can be written as:

M_B = ⎡ p_x  p_y  0                                          ⎤
      ⎢ a    b    p^T (∂R/∂Δψ) v_2                           ⎥
      ⎣ c    d    −2 v_1^T (∂R/∂Δψ) v_2 + p^T (∂R/∂Δψ) a_2  ⎦    (28)

The determinant of M_B can be computed using a cofactor expansion along the last column of M_B. This results in:

|M_B| = −p^T (∂R/∂Δψ) v_2 (d p_x − c p_y) + (−2 v_1^T (∂R/∂Δψ) v_2 + p^T (∂R/∂Δψ) a_2)(b p_x − a p_y)    (29)

Now, the following identity can be used:

b p_x − a p_y = [a, b] ⎡ −p_y ⎤
                       ⎣  p_x ⎦ = [a, b] A p    (30)

where A = ⎡ 0  −1 ⎤
          ⎣ 1   0 ⎦

Substituting back the original expressions for [a, b], [c, d], and [p_x, p_y], the determinant of M_B becomes:

|M_B| = −p^T (∂R/∂Δψ) v_2 (−a_1 + R a_2)^T A p + (−2 v_1^T (∂R/∂Δψ) v_2 + p^T (∂R/∂Δψ) a_2)(−v_1 + R v_2)^T A p    (31)

This can be simplified and written as:

|M_B| = [ p^T (∂R/∂Δψ)(−a_2 v_1^T + v_2 a_1^T) + 2 (v_1^T (∂R/∂Δψ) v_2)(v_1 − R v_2)^T ] A p    (32)

This system is thus locally weakly observable with a minimum amount of Lie derivatives if |M_B| is non-zero. Due to the specific properties of the A matrix in this determinant (see Eq. 30), the following condition must hold to render the determinant |M_B| non-zero:

p^T (∂R/∂Δψ)(−a_2 v_1^T + v_2 a_1^T) + 2 (v_1^T (∂R/∂Δψ) v_2)(v_1 − R v_2)^T ≠ k p^T    (33)

where k is an arbitrary constant.

Just as for Eq. 19, we can extract a more intuitive subset of conditions for Eq. 33 that also definitely must be met for the system to be observable. These conditions are:

I.   p ≠ 0_{2×1}    (34)
II.  (v_1 ≠ 0_{2×1} or a_1 ≠ 0_{2×1}) and (v_2 ≠ 0_{2×1} or a_2 ≠ 0_{2×1})    (35)
III. v_1 ≠ s R v_2 or (a_1 ≠ 0_{2×1} or a_2 ≠ 0_{2×1})    (36)

where s is an arbitrary constant.¹

Fig. 3 Representations of four unobservable state and input combinations. The relative position p, the velocities v_i, and the accelerations a_i of MAVs 1 and 2 are depicted: (a) intuitive condition 2, (b) intuitive condition 3, (c) unintuitive case 1, (d) unintuitive case 2

The first condition tells us that the determinant |M_B| is zero if the x and y coordinates of the origins of frames H_1 and H_2 coincide. This is the same as for system A. The second condition tells us that both MAVs need to be moving. This movement may be either through having a non-zero velocity, or through having a non-zero acceleration (the violation of which is shown in Fig. 3a). The third condition tells us that the MAVs may not move in parallel, as in Fig. 3b, unless at least one of the MAVs is also accelerating at the same time. Note that this time the MAVs are not allowed to move in parallel regardless of whether they are moving at the same speed or not, hence the scalar multiple s. By comparison, the equivalent condition for system A only specified that the MAVs may not move in parallel at the same speed.

In order to study these intuitive conditions in further detail, we evaluated how the observability of the system is affected once the relative position p between the MAVs changes. By varying the p_x and p_y values of the vector p around the originally set values for p (as in Fig. 3), we analyzed the observability of the system for different relative positions, while keeping the velocities and accelerations constant.

¹ Please check Appendix A for the derivation of Eq. 36 from Eq. 32.
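The three intuitive conditions lend themselves to a direct numeric check. The following sketch (Python with NumPy; the helper names are our own and purely illustrative) evaluates Eqs. 34, 35, and 36 for a given state and input combination. Note that passing all three checks is necessary but not sufficient, since the full condition in Eq. 33 must also hold.

```python
import numpy as np

def cross2(u, v):
    """Scalar cross product of two planar vectors."""
    return u[0] * v[1] - u[1] * v[0]

def intuitive_conditions(p, v1, v2, a1, a2, dpsi, tol=1e-9):
    """Necessary observability conditions for the heading-independent system (Eqs. 34-36)."""
    c, s = np.cos(dpsi), np.sin(dpsi)
    R = np.array([[c, -s], [s, c]])
    moving1 = np.linalg.norm(v1) > tol or np.linalg.norm(a1) > tol
    moving2 = np.linalg.norm(v2) > tol or np.linalg.norm(a2) > tol
    cond_I = np.linalg.norm(p) > tol              # Eq. 34: p may not be zero
    cond_II = moving1 and moving2                 # Eq. 35: both MAVs must be moving
    parallel = abs(cross2(v1, R @ v2)) < tol      # v1 is a scalar multiple of R v2
    accel = np.linalg.norm(a1) > tol or np.linalg.norm(a2) > tol
    cond_III = (not parallel) or accel            # Eq. 36: no parallel flight unless accelerating
    return cond_I and cond_II and cond_III

p = np.array([1.0, 1.0])
v = np.array([1.0, 0.0])
zero = np.zeros(2)
# Parallel flight at different speeds with no acceleration violates condition III.
print(intuitive_conditions(p, 2 * v, v, zero, zero, 0.0))   # False
```

Adding an acceleration to either MAV in this example makes the check pass again, mirroring the discussion of Fig. 3b above.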
The measure for observability was obtained by interpreting the meaning of Eq. 33. It essentially tells us that the left hand side of the equation should not be parallel to the relative position vector p. Therefore, a practical measure of observability is how far away the left hand side of Eq. 33 is from being parallel to p, which can be tested with the cross product. The absolute value of the cross product is then used as a measure of the observability of the system. This paper considers a cross product less than a value of 1 to be unobservable. In theory, only when the cross product is 0 does it actually represent an unobservable condition. However, such a threshold facilitates visibility on the plots and provides insight on what the near-unobservable conditions are and their proportion in relation to the remaining conditions.

For the case of the second (Eq. 35, Fig. 4a) and the third intuitive condition (Eq. 36, Fig. 4c), it can be seen that varying p does not affect the unobservability in the color map. Once an acceleration vector is added to the state of MAV 1 in both cases, specifically a_1 = [0.3, 0.3]^T, the color plots in Fig. 4b, d show that for a set of relative positions, the system does become observable again. However, the chances of the MAVs ending up in an unobservable state are still significant within an operating area of 100 m².

The three intuitive conditions we extracted are only a subset of all conditions imposed by Eq. 33. This means that there exist state and input combinations that satisfy the three intuitive conditions, but that do not satisfy Eq. 33. In order to study what the implications of the full unobservability condition in Eq. 33 are, we used the Nelder–Mead simplex method to find other points in the state and input space that violate the full observability condition. Two examples are shown in Fig. 3c, d. These scenarios do not violate any of the intuitive conditions given by Eqs. 34–36. The relative position is non-zero, both MAVs have non-zero velocities and accelerations, and the velocity vectors are not parallel. Nevertheless, they violate Eq. 33. Based on this, color maps for the unobservable conditions in Fig. 3c, d are given in Fig. 4e, f, respectively.

Both color maps of Fig. 4e, f clearly show a non-linear relationship between the relative position vector p and the observability of the system. Moreover, both maps show a different non-linear relationship. Figure 4e shows more of a hyperbolic relationship, whereas the unobservable region in Fig. 4f looks more elliptical. It can be shown that different conditions show yet other relationships between the observability of the system for different relative positions p. Moreover, these relationships only show what happens in two dimensions, which are for the two entries in the vector p. In reality, the observability condition in Eq. 33 presents an 11-dimensional problem. It is therefore still difficult to deduce general rules from these results. What the latter two color maps do have in common is that the unobservable relative positions are in all cases vastly outnumbered by the observable relative positions. This is different than what was observed for situations that would violate any of the more intuitive conditions in Eqs. 35 and 36.

Fig. 4 Color map of observability for different relative positions. The velocities and accelerations of the MAVs are kept as depicted by Fig. 3 and the values for p = [p_x, p_y]^T are varied over a 10 m range: (a) intuitive condition 2, fully unobservable, (b) intuitive condition 2, partially unobservable, (c) intuitive condition 3, fully unobservable, (d) intuitive condition 3, partially unobservable, (e) unintuitive case 1, (f) unintuitive case 2 (Color figure online)

2.6 Comparison of the two systems

Finally, the results from the observability analysis of both systems will be compared. These will show what the practical implications are when switching from a system that relies on a common heading reference to a system that does not.

A primary result of the analysis is that removing the relative heading measurement results in a system that requires at least one extra Lie derivative in the range observation to make the system locally weakly observable. This is an important result, because it tells us that the heading-independent system B relies more heavily on the range equation than system A. Without a heading observation, the range measurement serves to estimate a total of three states, as opposed to two in system A. Some of this information is contained in the second derivative of the range observation, and it is a well known fact that the derivative of a noisy signal will be even noisier. In practice, this means that any system that wishes to perform range-based relative localization without a heading dependency needs an accurate and low-noise range measurement.

Another important result is that the criteria posed for system B specify that both MAVs must be moving. Contrarily, the criteria for system A specify that only one of the MAVs must be moving. Whilst this result might not be as relevant for MAV teams, as the MAVs will typically be moving anyway, this result can be important for other applications of range-based relative localization. Think, for example, of the case where a single static beacon is used to estimate the position of a flying MAV using only range sensing and communication. The results of our analysis show that system B is not observable in this case, and thus a common heading reference must be known for such a system to work or, alternatively, the MAV must track the beacon and then communicate its estimate back to the beacon. Note that, in the case where one of the participants is not moving, if we were to continue our analysis of system B to higher order Lie derivatives, then it would still not be possible to make the observability matrix full rank, so that the condition holds generally.

A third difference is found in the condition for parallel movement of the two MAVs. System A requires that the MAVs should not move in parallel at the same speed, meaning that there should be a non-zero relative velocity between the two MAVs. Instead, system B requires that the MAVs should not be moving in parallel regardless of speed. Therefore, even if the second MAV were to be moving twice as fast as the first, the filter would not be observable as long as the direction of movement is the same. However, system B can bypass this condition in some cases if either of the MAVs is also simultaneously accelerating. Similarly, it can be shown that system A is able to bypass the parallel motion condition with acceleration, although a second order Lie derivative would be necessary in that case.

3 Verification through Simulations

In this section, we further investigate the conclusions drawn from the analytical observability analysis. At first, a kinematic, noise-free study is performed to verify and confirm the differences in the observability conditions for system A and system B. Afterwards, the influence of noise and disturbances on the filter are studied.

3.1 Filter design

The filter of choice, used throughout the rest of this paper, is an Extended Kalman Filter (EKF), since this type of filter fits intuitively with how the state-space system was described in Sect. 2. The EKF also uses a state differential model and an observation model. The state differential model can thus be kept exactly as the one given earlier in Eq. 9. The observation models for system A and system B are also kept almost the same as given in Eqs. 12 and 13, with the only adjustment that now the full range ||p|| is observed, rather than half the squared range ½ p^T p. Additionally, in line with earlier research on range-based relative localization on real robots (Coppola et al. 2018), we decided to use an EKF on-board of the real-world MAVs because of its low processing and memory requirements.

An EKF has parameters that need to be tuned, namely: the initial state, the system and measurement noise matrices, and the initial state covariance matrix. The initial state is an important setting that will be described where appropriate in the next sections. The matrices are always tuned to correspond to the actual expected values. The measurement noise matrix is tuned based on the expected quality of the measurement variables, and similarly for the system noise matrix. However, since some of the simulations also make use of perfect measurements and since a zero entry in the measurement noise matrix is not possible, the corresponding entries are then given a small value of 0.1 m. We use 0.1 m based on what is eventually used on the EKF on-board of the real MAVs. By using UWB antennas for range measurements, we can expect standard deviations of 0.1–0.3 m around the true value. Our experimental set-up is described in Sect. 4.3.

3.2 Kinematic, noise-free study of unobservable situations

In the first simulated study, the two MAVs that are studied have kinematic trajectories that can be described analytically. The MAVs also have perfect noise-free knowledge of the inputs and measurements. The kinematic and noise-free situation is used to confirm conclusions drawn in the observability analysis performed in Sect. 2.

The two MAVs involved in the EKF are designated MAV 1 and MAV 2. MAV 1 shall be the host of the EKF and shall attempt to track the relative position of MAV 2, a.k.a. the tracked MAV. The latter does not contain an EKF. The following three scenarios are studied:

1. MAV 1 (host) is moving and MAV 2 (tracked) is stationary.
2. MAV 1 (host) is stationary and MAV 2 (tracked) is moving.
3. MAV 1 (host) and MAV 2 (tracked) are both moving in parallel to each other at different speeds.

These scenarios have been chosen because they match the intuitive conditions where system A is observable, but system B is not. These are limit cases and therefore provide valuable verification of the analytically found differences between the two systems.

The simulations will show whether these different scenarios have convergent EKFs or not. The focus of this analysis is on the estimation of the relative position p and the relative heading Δψ. Since the velocities are observed directly, these are observable regardless of the situation, and are thus not shown.

The initial velocities of MAVs 1 and 2 are initialized to their true values, since these are not the variables of interest in this analysis. The initial position and relative heading are initialized with an error, the specifics of which will be given in the respective scenarios. The yaw rates and headings of both MAVs are kept at 0 rad/s and 0 rad, respectively. The EKF runs at a frequency of 50 Hz.

The error measure throughout this paper is the Mean Absolute Error (MAE). The separate x and y errors in the relative location estimate p are combined according to the norm ||p||_2. This choice was made because the separate errors in x and y directions offer little additional insight and are usually identical.

3.2.1 MAV 1 (host) moving, MAV 2 (tracked) stationary

Previous analytical analysis has shown that system A is locally weakly observable, while system B is not observable. This result is therefore expected to be reflected in the simulation as well. In the simulation, MAV 1 (the host) is positioned at p_{1,0} = [0, 0]^T and has a constant velocity v_1 = [1, 0]^T. MAV 2 (the tracked MAV) is positioned at p_{2,0} = [1, 1]^T with no velocity or acceleration. The initial guess of MAV 1 for the relative position and heading of MAV 2 is [p̂_0, Δψ̂_0] = [0.1, 0.1, 1]^T. This means that the initial estimation error in p_x, p_y, and Δψ is thus equal to 0.9, 0.9, and 1, respectively.

As can be seen in Fig. 5, both the relative position p error and the relative heading Δψ error quickly converge to 0. Contrarily, the observability analysis of system B has shown that this scenario is not locally weakly observable, because the second condition is violated, i.e., one of the MAVs is not moving. However, Fig. 6 shows that the ||p||_2 error converges to 0 just as rapidly as for system A. A more thorough inspection shows that the unobservable state of the system is in fact Δψ, which is the one that does not converge. This is a favorable result, since the relative position is typically the variable of interest, rather than the difference in heading.

Fig. 5 EKF convergence for case 1 (system A): MAV 1 (host) moving, MAV 2 (tracked) stationary

Fig. 6 EKF convergence for case 1 (system B): MAV 1 (host) moving, MAV 2 (tracked) stationary

The reason that this occurs lies in the information provided by the first state differential equation. This equation tells us that ṗ = −v_1 + R v_2 − S_1 p. The only dependency that this equation has on the relative heading Δψ is in the rotation matrix R. Therefore, as long as v_2 is equal to 0, the differential equation for ṗ has no dependency on the relative heading between the two MAVs. The convergence of p therefore remains unaffected. The situation changes when it is v_2 that is non-zero and v_1 that is zero. This case will be studied next.

3.2.2 MAV 1 (host) stationary, MAV 2 (tracked) moving

For this case, all of the parameters are the same as for case 1, with the only difference being that now v_1 = 0_{2×1} and v_2 = [1, 0]^T. The analytical observability analysis has shown that this scenario is locally weakly observable for system A. As expected, it can be seen in Fig. 7 that both the errors for p and Δψ converge rapidly to 0. The observability analysis has then shown that system B is not locally weakly observable in this scenario. Indeed, Fig. 8 shows that both ||p||_2 and Δψ do not converge and that ||p||_2 diverges.

Fig. 7 EKF convergence for case 2 (system A): MAV 1 (host) stationary, MAV 2 (tracked) moving

Fig. 8 EKF convergence for case 2 (system B): MAV 1 (host) stationary, MAV 2 (tracked) moving

This time, because v_2 is not equal to 0, the state differential equation for the relative position of MAV 2 has a dependency on the relative heading state Δψ. Since Δψ does not converge to its true value, and eventually settles at an error of approximately 1.5 rad, there is a large inaccuracy in the state differential equation for ṗ. This consequently results in an ever increasing error in p, because MAV 1 essentially 'thinks' that MAV 2 is flying in a different direction than it really is.

This shows the reason as to why it is generally not possible for a stationary vehicle (or beacon) to be tracking a moving vehicle using range-only measurements and velocity information without a common heading reference. Contrarily, it is possible for a moving vehicle to be tracking a stationary vehicle or beacon's position. This is entirely caused by the fact that a vehicle will always be 'aware', in its own body frame, of the direction it is moving in, and hence does not need a convergent estimate of the relative heading with respect to the vehicle it is tracking. However, when the vehicle it is tracking does move, it needs this convergent estimate of the relative heading to know which direction the other is moving in.

3.2.3 MAV 1 (host) and MAV 2 (tracked) moving in parallel at different speeds

Finally, the case where both MAVs are moving in parallel, but at different speeds, is studied. Once more, most of the parameters are kept the same as those presented under case 1. This time, the velocity of MAV 2 is set to v_2 = [1, 0]^T and the velocity of MAV 1 is set in a parallel direction, but with twice the magnitude (v_1 = 2 v_2 = [2, 0]^T).

According to the observability analysis, this is one of the limit cases where system A is still just observable, but system B is not. Indeed, Fig. 9 shows convergent behavior for system A, whereas Fig. 10 shows divergence for system B. Note that the filter for system B has a decreasing error in Δψ. However, the convergence of Δψ is very slow. Furthermore, the error for p continues to rise indefinitely.

Fig. 9 EKF convergence for case 3 (system A): MAV 1 (host) and MAV 2 (tracked) moving in parallel

Fig. 10 EKF convergence for case 3 (system B): MAV 1 (host) and MAV 2 (tracked) moving in parallel

This result concludes the noise-free simulations that compare the performance of the filters for system A and system B. These simulations verify that the differences between the intuitive unobservable conditions that we derived for the two filters in Sect. 2 also hold true when translated to a simulation environment.

3.3 Kinematic noisy range measurements study of observable situation

Whilst a noise-free study demonstrates the feasibility of the proposed filter and can verify the differences between system A and system B, it is also important to study the filter's performance when presented with noisy data. Not only is this more representative of the filter's performance in practice, but it can also be used to verify one of the main conclusions that were drawn in the observability study, namely that system B needs information present in the second derivative of the range data to be observable, compared to only a first derivative for system A. It is consequently expected that, with all other parameters fixed, system B will perform increasingly worse as the range measurement noise increases.

In this study, we steer away from unobservable scenarios. The intent now is to study both filters' performances for the case where the filters are known to be observable, in order to compare their performance. For this reason, the trajectories of MAV 1 (host) and MAV 2 (tracked) are designed so as to stay clear of the unobservable situations and to excite the filter properly through relative motion. The trajectories that we devised for this study are perfectly circular, and we assume that the MAVs fly at the same height.

The trajectories, depicted in Fig. 11, can be described in polar coordinates [ρ, θ]. MAV 1 flies a circular motion at an angular velocity θ̇_1 = ω_1 with radius ρ_1, and MAV 2 flies at angular velocity θ̇_2 = ω_2 with radius ρ_2. To ensure that both MAVs have sufficient relative motion, one MAV flies clockwise and the other counter clockwise, such that ω_1 = −ω_2. Moreover, the radius of MAV 2's trajectory is 1 m larger than MAV 1's trajectory, and is offset by 90° in angle, such that ρ_1 = ρ_2 − 1 and θ_1 = θ_2 + π/2.

Fig. 11 Two circular trajectories for MAV 1 and MAV 2

The radius difference in the trajectories ensures that the situation p = 0 is avoided, and the angle offset ensures that the relative velocities are distributed more or less equally in x and y directions. In these simulations, for simplicity, both MAVs keep a steady heading such that ψ_1 = ψ_2 and r_1 = r_2 = 0.² Switching back to Cartesian coordinates, the trajectories can thus be analytically described as follows. MAV 2's position vector in time is given by:

p_2(t) = ⎡ ρ_2 cos(ω_2 t) ⎤
         ⎣ ρ_2 sin(ω_2 t) ⎦    (37)

MAV 1's position vector in time can be described by:

p_1(t) = ⎡ (ρ_2 − 1) cos(−ω_2 t + π/2) ⎤ = ⎡ −(ρ_2 − 1) sin(−ω_2 t) ⎤
         ⎣ (ρ_2 − 1) sin(−ω_2 t + π/2) ⎦   ⎣  (ρ_2 − 1) cos(−ω_2 t) ⎦    (38)

The equations for v_i(t) and a_i(t) can be obtained by taking the time derivatives of p_i(t), i = 1, 2. Note that this is not true for the general case, since H_1 is a rotating frame of reference, but in this case it is possible because the MAVs keep a constant heading equal to 0 rad.

By setting ρ_2 = 4 m and ω_2 = 2π/20 rad/s, the trajectory of MAV 2 becomes a circle with a radius of 4 m that is traversed in 20 s. To comply with the previously defined constraints, ρ_1 and ω_1 are 3 m and −2π/20 rad/s, respectively. These values are representative of what a real MAV should easily be capable of, and result in relative velocities of about 1 m/s in x and y directions between the two MAVs.

The study will test the performance of the relative localization filter as seen from the perspective of MAV 1, which is tracking MAV 2. The filter is fed perfect information on all state and input values, except for the measurement of the range ||p|| between the two MAVs. The range measurements are artificially distorted with increasingly heavy Gaussian white noise. The measured range fed to the filter is thus ||p||_{2,m} = ||p||_2 + n(σ_R), where n(σ_R) is a Gaussian white noise signal with zero mean and standard deviation σ_R. The standard deviations that are tested are 0 m (noise free), 0.1 m, 0.25 m, 0.5 m, 1 m, 2 m, 4 m, and 8 m. In practice, a standard deviation of 8 m could be considered quite high, but this is intentionally chosen with the intent to observe a significant difference in the error. Since this study keeps all the other measurements and inputs noise free, the noise on the range measurement needs to be higher to get a significant increase in the localization error.

This time the EKF runs at 20 Hz, which is more representative of our real-world set-up, discussed later in Sect. 4. The described flight trajectory is simulated for 20 s each run, which is thus one complete revolution of the circular trajectory. The EKF is initialized to the true state to exclude the effects of initialization.

For each particular noise standard deviation, both the filter for system A and the filter for system B are simulated with 1000 different noise realizations. For each realization the MAE of the estimated p with respect to its true value is computed, again by considering the combined error in the estimate of ||p||_2. After 1000 realizations, the Average MAE (AMAE) is computed to extract the average performance over all noise realizations.

The resulting AMAE values for systems A and B are given in Table 1 and are plotted in Fig. 12. As expected, at very low noise values on the range measurement, both the filters for system A and system B have very similar error performance. With no noise on the range measurements, the difference between the two filters is only 4 mm. However, since the filter for system B is more sensitive to noise on the range measurements, it quickly starts to perform worse than system A as the noise on the range measurement is increased.

This result is in line with the analytical results presented in Sect. 2. However, it also raises the question of whether removing the dependency on a common heading reference poses any advantage, since system A performs consistently better than system B. The reason for this result lies in the fact that the studied scenario uses perfect measurements for all the sensors except for the measured range. As mentioned in the introduction, the heading observation is notoriously troublesome and unreliable, especially in an indoor environment (Afzal et al. 2010). Therefore, it would be valuable to study what would happen to this analysis in the case where the heading estimate is not perfect. This is presented next.

² Note that the approach is also valid when the headings change; this simplification was only done to simplify the trajectory design used in the simulations.
123 Autonomous Robots (2020) 44:415–441 427 Table 1 Average Mean Absolute Error for and over 1000 runs with different noise standard deviation on the range measurement A B Range noise σ (m) 0 0.1 0.25 0.5 1 2 4 8 AMAE (cm) 2.3 3.4 6.2 10.8 19.3 37.7 72.9 118.2 AMAE (cm) 2.7 4.5 8.5 15.1 27.1 52.5 101.8 172.8 2.4 1 2.2 0.8 1.8 0.6 1.6 1.4 0.4 1.2 0.2 0.8 0.6 0.4 0.2 Fig. 13 Disturbance on the relative heading measurement in time, for an amplitude A of 1 rad study, = 1, resulting in a disturbance lasting approximately Fig. 12 AMAE in estimate of ||p|| for and , the error bars B A indicate the standard deviation 4 s, and t = 5 s, such that the disturbance peaks at 5 s into the flight. How such a disturbance looks is presented in Fig. 13 for an amplitude A of 1 rad. 3.4 Kinematic noisy range measurements and Several amplitudes of the disturbance are tested, namely heading disturbance study for observable 0rad,0.25 rad, 0.5 rad, 1 rad, and 1.5 rad. The final amplitude situation of 1.5 rad results in a maximum heading estimate error of almost 85 , which is approximately equal to the amplitude In order to compare the results obtained with an imperfect of the disturbance shown by Afzal et al. (2010). Note that heading measurement to those obtained in the previous sec- the disturbance is introduced directly on the measurement of tion, the same trajectories are simulated (as in Eqs. 38 and Δψ (the difference in headings between two MAVs). This is 37 for MAVs 1 and 2, respectively). All the other simulation the situation that would occur if one of the two MAVs would parameters are also kept the same, with one exception. This fly in a locally perturbed area. time, a disturbance is introduced on the heading measure- Since the parameter of interest is how the filter for ment. 
The simulated disturbance is modeled to look similar compares to the filter for , the results are represented as to how a real local perturbation in the magnetic field would a percentage comparison of the relative localization errors perturb a heading estimate. The actual magnetic perturbation between the two filters. This is visually presented in Fig. 14. and the corresponding heading error are taken from the work In the figure, a positive % means that the filter for per- of Afzal et al. (2010), where indoor magnetic perturbations forms worse than the filter for .At0%, marked by a are studied. It was found that the obtained disturbance on the dotted line, both filters perform equally well. heading estimate looks similar to a Gaussian curve, and in The comparison shows that as the applied disturbance this analysis it is thus modeled as such. amplitude on the heading measurement provided to system The disturbance on the heading estimate in time d(t ) is is increased, the region for which performs better A B modeled as: than expands. In the case of the largest disturbance, with A equal to 1.5 rad, filter even performs better at a range 2 B − (t −t ) ( ) d(t ) = A · e (39) noise σ equal to 8 m. This result reinforces the presumption that it is not always Here, the amplitude of the disturbance (in radians) is given better to include a heading measurement in the filter, provided by A , the parameter controls the width of the Gaussian that the range measurement is of a high enough accuracy. curve, and t controls the location of the curve in time. For this We will use this insight for the real-world implementation. 123 428 Autonomous Robots (2020) 44:415–441 other than the fact that the follower must follow the leader at a non-zero horizontal distance, which typically is the objective. 2. The second conditions (Eq. 35) tells us that both MAVs must be moving. 
As far as leader–follower flight is concerned, this is automatically accomplished as long as the leader is not stationary.

3. The third condition (Eq. 36) is especially impactful for leader–follower flight. It specifies that the MAVs should not be moving in parallel (regardless of speed), unless they are also accelerating. A lot of research on leader–follower flight aims to design control laws that would result in fixed geometrical formations between different agents in the formation. This is typically achieved by specifying desired formation shapes, or desired inter-agent distances for members in the swarm (Turpin et al. 2012; Gu et al. 2006; Chiew et al. 2015; Saska et al. 2014). By the very nature of fixed geometries, that would result in parallel velocity vectors.

Fig. 14 Percentage error comparison between B and A for different disturbance amplitudes A_d. A positive percentage means that B performs worse than A

The third condition requires a different approach to leader–follower flight. Rather than flying in a fixed formation, it is also possible for the follower to fly a delayed version of the leader's trajectory. As long as the leader's trajectory is not a pure straight line for long periods of time, this will result in relative motion between the leader and follower. This is the approach taken in this paper.

This solution should also help to prevent the MAVs from getting stuck in an unobservable situation that is not covered by Eqs. 34 to 36, but that is covered by the full observability condition in Eq. 33. We concluded that, for the scenarios that are numerically found to be unobservable according to Eq. 33, changing the relative position p only slightly can already result in an observable situation. In the proposed method of having the follower fly a time-delayed version of the leader's trajectory, the relative position vector p will naturally change if the leader's trajectory is not a straight line.

(Footnote: Interestingly, such oscillatory behaviors are also found in the insect world, be it for finding the gradient of pheromone trails (Couzin and Franks 2003), recognizing landmarks (Degen et al. 2016), or estimating depth of 3D structure (Werner et al. 2016).)

In the experimental set-up in Sect. 4, we will use Ultra Wide Band (UWB) radio modules to obtain range measurements between MAVs. To give an idea of what type of range noise standard deviations can actually be achieved in practice: in the executed experiments with real MAVs, the UWB modules resulted in ranging errors with standard deviations between 0.1 and 0.3 m. If we assume a normally distributed ranging error, then, based on the results shown in Fig. 14, it is clear that the heading-independent system B would be the preferred choice for all heading disturbance amplitudes (except, trivially, for the situation where there is little to no heading disturbance at all).

4 Leader–follower flight experiment

In this section we demonstrate the heading-independent filter in practice, used for leader–follower flight in an indoor scenario.

4.1 Leader–follower flight considerations

Before designing an actual control method to accomplish leader–follower flight, let us first reflect on the observability analysis results from Sect. 2 and their implications for leader–follower flight. We know that in order to have an observable, heading-independent system, the combined motion of the leader and follower has to meet the observability condition presented in Eq. 33. We further know that, in order to meet this condition, the three intuitive conditions presented by Eqs. 34 to 36 certainly have to be met. Let us first consider these conditions:

1. The first condition (Eq. 34) specifies that the relative position between leader and follower must be non-zero. This condition has little implication for leader–follower flight.

4.2 Leader–follower formation control design

We want to construct a leader–follower control method that results in the follower flying a delayed version of the leader's trajectory. As it turns out, this type of control can be directly accomplished with the information provided by the relative localization filter.

Consider the schematic in Fig. 15. It shows two arbitrary trajectories in dotted lines. MAV 1 tracks and follows MAV 2. At the top, in blue, is the trajectory of MAV 1, represented by its position vector in time p_1(t). On the bottom, in red, is the trajectory of MAV 2, p_2(t). Suppose the desire is for the follower (MAV 1) to follow the leader's trajectory (MAV 2) with a time delay τ. The control problem for MAV 1 can then be expressed as the desire to accomplish p_1(t) = p_2(t − τ).

Fig. 15 Control problem for leader–follower flight. In blue is MAV 1's trajectory in time p_1(t). In red is MAV 2's trajectory in time p_2(t). The desire is for MAV 1 to drive e(t) to 0 for t → ∞ (Color figure online)

Let t_n indicate the current time at which a control input must be calculated. At the current time, MAV 1 has a body-fixed reference frame H_1(t_n), whose origin is p_1(t_n). At time t_n − τ, MAV 1 knows the relative position of the leader in its own body-fixed frame H_1(t_n − τ), since this information is provided by the relative localization filter. However, for this control method to work, MAV 1 must have knowledge of where the leader's old position is at the current time t_n. This value of interest is depicted by the vector e(t_n) in Fig. 15; it is the positional error with respect to the desired follower position at time t_n.

Let R_{H_i(t_1)H_i(t_2)} be the rotation matrix from frame H_i at time t_2 to frame H_i at time t_1, defined as:

R_{H_i(t_1)H_i(t_2)} = [ cos(Δψ_i|_{t_1}^{t_2})  −sin(Δψ_i|_{t_1}^{t_2}) ;  sin(Δψ_i|_{t_1}^{t_2})  cos(Δψ_i|_{t_1}^{t_2}) ]   (40)

where Δψ_i|_{t_1}^{t_2} is the change in heading angle of MAV i from time t_1 to time t_2, which can be calculated as:

Δψ_i|_{t_1}^{t_2} = ∫_{t_1}^{t_2} r_i(t) dt   (41)

The current positional error for the follower MAV 1, depicted in Fig. 15, can be defined as:

e(t_n) = R_{H_1(t_n)H_1(t_n−τ)} p(t_n − τ) − Δp|_{t_n−τ}^{t_n}   (42)

The vector Δp|_{t_n−τ}^{t_n} represents how much the follower has moved from time t_n − τ until t_n, as defined in frame H_1(t_n − τ). This vector can be calculated using information available to the follower:

Δp|_{t_n−τ}^{t_n} = ∫_{t_n−τ}^{t_n} R_{H_1(t_n−τ)H_1(t)} v_1(t) dt   (43)

Finally, one more piece of information is needed in order to be able to design a control law for the follower MAV, namely a model of the follower MAV and how it responds to control inputs. In this paper, it is assumed that the MAV already has stable inner-loop control running on board, such that it can directly take velocity commands. It is further assumed that, with the inner loops in place, the MAV responds like a very simple first-order delay filter to velocity commands, such that the differential equation for the velocity becomes:

v̇_1 = τ^(−1) (v_1c − v_1)   (44)

where τ^(−1) is a diagonal matrix. The values along the diagonal of τ^(−1) are the inverses of the time constants that characterize the delay of the system with respect to a control input v_1c. This is only an approximation of how the actual MAV behaves, but it will be shown to be sufficient to accomplish the desired behavior.

With all this information in place, a control law can be designed, using Nonlinear Dynamic Inversion (NDI) principles. In order to use NDI, a state space model is required for the situation at hand. A state space model very similar to the one used for the relative localization filter can be used. Define the state vector as:

x̄ = [ e^T, Δψ̄, v_1^T, v̄_2^T ]^T   (45)

The state vector is similar to the one defined before for the relative localization filter, with a few small changes. First of all, e = e(t_n) represents the current positional error of the follower MAV 1 with respect to the leader's old position. Secondly, Δψ̄ and v̄_2 again represent the difference in heading between the two MAVs and the velocity of MAV 2, except that now Δψ̄ is the difference in heading between frames H_1(t_n) and H_2(t_n − τ), and v̄_2 is the delayed leader's velocity at time t_n − τ, such that v̄_2 = v_2(t_n − τ).

Similarly, define a new input vector as:

ū = [ v_1c^T, ā_2^T, r_1, r̄_2 ]^T   (46)

where v_1c is the actual control input fed to MAV 1, and ā_2 and r̄_2 represent the same values as a_2 and r_2, except that they are delayed versions. Therefore ā_2 = a_2(t_n − τ) and r̄_2 = r_2(t_n − τ).

Finally, a new set of state differential equations can be defined as:

x̄̇ = f̄(x̄, ū) = [ −v_1 + R̄ v̄_2 − S_1 e ;  r̄_2 − r_1 ;  τ^(−1)(v_1c − v_1) ;  ā_2 − S̄_2 v̄_2 ]   (47)

where R̄ = R(Δψ̄) and S̄_2 = S(r̄_2).

The state that we wish to control is the current positional error that MAV 1 has with respect to the delayed leader's position, i.e. the state e. This state can be represented as:

e = H x̄   (48)

with H given by:

H = [ I_{2×2}  0_{2×5} ]   (49)

The derivative of the control variable with respect to time is equal to:

ė = L_f̄ e = H f̄ = −v_1 + R̄ v̄_2 − S_1 e   (50)

The second derivative of the control variable is:

ë = L_f̄ ė = (∇ ⊗ ė) · f̄
  = [ −S_1   (∂R̄/∂Δψ̄) v̄_2   −I_{2×2}   R̄ ] · f̄
  = −S_1 (−v_1 + R̄ v̄_2 − S_1 e) + (∂R̄/∂Δψ̄) v̄_2 (r̄_2 − r_1) − I_{2×2} τ^(−1) (v_1c − v_1) + R̄ (ā_2 − S̄_2 v̄_2)
  = D v_1c + b(x̄, ū)   (51)

with D equal to:

D = −I_{2×2} τ^(−1)   (52)

and b(x̄, ū) as given by Eqs. 53 and 54.

At this point the following control law can be chosen:

v_1c = D^(−1) (i − b(x̄, ū))   (55)

with i now a virtual control input. This control law results in a fully linearized differential equation for the positional error of the follower, since substitution of the control law from Eq. 55 into Eq. 51 results in the following differential equation:

ë = i   (56)

which can be shown to be exponentially stable if the following virtual control is implemented:

i = −K_p e − K_d ė   (57)

K_p, K_d > 0   (58)

4.3 Experimental set-up

One of the main findings in the observability study and the simulation results is that the localization error scales more steeply with range noise for system B than for A. It is therefore important to use sensors that can provide accurate relative ranging measurements.

In this work, we chose to use Ultra Wide Band (UWB) based radio transceivers. UWB has recently gained attention within the domain of ranging. UWB signals are characterized by their fine temporal and spatial resolution (Correal et al. 2003), which enables UWB based systems to, for example, resolve multipath effects more easily (Win and Scholtz 1998). Ultimately, this leads to accurate ranging performance, which is important when using the heading-independent filter. Another advantage of UWB is its relative robustness to interference from other radio technologies, due to the fact that it operates on an (ultra) wide range of frequencies (Liu et al. 2007; Foerster et al. 2001; Molisch et al. 2006).

The UWB ranging hardware used in the experiments is the ScenSor DWM1000 module sold by Decawave. The ranging algorithm that is employed is a particular implementation of the Two-Way Ranging (TWR) method (Neirynck et al. 2016).
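As an illustration of the TWR principle, the widely cited asymmetric double-sided TWR formula from the line of work of Neirynck et al. (2016) combines two round-trip/reply timestamp pairs into a time-of-flight estimate. This is a sketch, not the on-module implementation; the function name and the example timestamps are assumptions:

```python
C = 299_702_547.0  # approximate speed of light in air, m/s

def ds_twr_distance(t_round1, t_reply1, t_round2, t_reply2):
    """Asymmetric double-sided two-way ranging (sketch).

    t_round* are round-trip times measured by the initiating side of each
    exchange, t_reply* the corresponding reply delays measured by the
    responding side. The formula is exact even for unequal reply delays.
    """
    tof = ((t_round1 * t_round2 - t_reply1 * t_reply2) /
           (t_round1 + t_round2 + t_reply1 + t_reply2))
    return C * tof

# Illustrative 3 m range: each round trip exceeds its reply delay by 2x tof.
tof_true = 3.0 / C
d = ds_twr_distance(2e-3 + 2 * tof_true, 2e-3, 1e-3 + 2 * tof_true, 1e-3)
# d ≈ 3.0 m
```

With t_round = t_reply + 2·tof for both exchanges, the numerator reduces to 2·tof·(t_reply1 + t_reply2 + 2·tof) and the denominator to twice that bracket, so clock-offset-free reply delays drop out of the estimate.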
b(x̄, ū) = −S_1 (−v_1 + R̄ v̄_2 − S_1 e) + (∂R̄/∂Δψ̄) v̄_2 (r̄_2 − r_1) + I_{2×2} τ^(−1) v_1 + R̄ (ā_2 − S̄_2 v̄_2)   (53)

This can further be reduced to:

b(x̄, ū) = −S_1 (−v_1 + R̄ v̄_2 − S_1 e) − (∂R̄/∂Δψ̄) v̄_2 r_1 + I_{2×2} τ^(−1) v_1 + R̄ ā_2   (54)

since the terms (∂R̄/∂Δψ̄) v̄_2 r̄_2 and R̄ S̄_2 v̄_2 cancel.

In order to fuse ranging data with velocity, acceleration, height, and yaw rate data in the localization filter, these variables are also communicated between MAVs by using the UWB devices. The same UWB messages used in the TWR protocol are also used to communicate these variables.

The UWB module transceiver has been installed on the Parrot Bebop 2 platform. The Bebop 2 runs custom autopilot software designed using the open-source autopilot framework Paparazzi UAV. Paparazzi UAV provides the stable inner-loop control loops for the Bebop 2 using Incremental NDI (INDI, Smeur et al. (2015)). This allows us to control the outer loop by giving the computed velocity commands to the INDI inner loops.

(Footnotes: https://www.decawave.com/products/dwm1000-module; https://www.parrot.com/us/drones/parrot-bebop-2)

Velocity and height measurements are also necessary for the relative localization filter. In the initial experiments, they are provided by an overhead motion capture system (MCS) by OptiTrack. In a second iteration of the experiment, they are fully provided by on-board sensors. The velocity data is obtained from the MAVs' on-board bottom-facing camera using Lucas–Kanade optical flow. Height is measured using an on-board ultrasonic sensor that the Bebop 2 is equipped with by default. At all times, the acceleration and yaw rate measurements are obtained from the MAVs' on-board accelerometers and gyroscopes, respectively. The experiments are first conducted with two MAVs (one leader and one follower), detailed in Sect. 4.4, and then performed again with three MAVs (one leader and two followers), detailed in Sect. 4.5.

4.4.1 Leader–follower flight with velocity and height information from a MCS

First, the case where velocity and height information is provided by the MCS is studied. In Fig. 16, the trajectory flown by the follower is compared to the trajectory of the leader. The x and y coordinates are compared separately for part of the flight in Fig. 17a, b. In Fig. 18, a time composition of overhead camera images is given for 5 s of flight as an illustration. The follower's position is shown at seven time instances during these 5 s, and is compared to the leader's trajectory.

A total of 200 s of leader–follower flight were logged and will be analyzed here. During this time, several laps of the designed trajectory were executed. The trajectories in Figs. 16, 17 and 18 indeed show that the follower is successfully tracking a delayed version of the leader's trajectory. The actual error distribution for the norm of the relative location estimate ||p|| is shown in Fig. 19. The errors have a mean value of 18.4 cm and a maximum value of 77.5 cm, at maximum inter-MAV distances up to 5 m.

Since, in this experiment, the velocity and height measurements were provided with high accuracy by the MCS, one would expect the primary source of the localization error to be the ranging error from the UWB modules. However, inspection of the ranging error actually shows a pretty

4.4 Leader–follower flight with one follower

The experiment with one follower MAV consists of one Bebop 2 following another Bebop 2 using the control law presented in Sect. 4.2. At first, right after take-off, the MAVs fly concentric circles just like the ones shown in Fig. 11. This procedure is there to make sure that the EKF running on board the MAVs has time to converge to the correct result, such that by the time the follower MAV is instructed to start following the leader, it has a correct estimate of the relative location of the leader.
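The follow-with-delay objective can be sketched with a simple time-stamped buffer of leader position estimates. This is an illustrative reconstruction, not the authors' implementation: it omits the body-frame rotation and own-displacement correction of Eqs. 42 and 43, and the class and method names are invented for the example:

```python
from collections import deque

class DelayedLeaderTarget:
    """Buffer estimated leader positions and serve the estimate from tau
    seconds ago as the follower's tracking target (sketch only; the paper's
    Eq. 42 additionally rotates the old estimate into the follower's current
    body frame and subtracts the follower's own displacement)."""

    def __init__(self, tau):
        self.tau = tau
        self.buf = deque()          # (timestamp, leader position estimate)

    def update(self, t, p_leader):
        self.buf.append((t, p_leader))

    def target(self, t_now):
        # Discard samples once a newer one is still old enough to serve.
        while len(self.buf) > 1 and self.buf[1][0] <= t_now - self.tau:
            self.buf.popleft()
        return self.buf[0][1]       # estimate closest to time t_now - tau

tracker = DelayedLeaderTarget(tau=5.0)
for t, p in [(0.0, (0.0, 0.0)), (2.5, (1.0, 0.5)), (5.0, (2.0, 1.0))]:
    tracker.update(t, p)
# at t = 5 s the target is the leader's estimated position from t = 0
# tracker.target(5.0) == (0.0, 0.0)
```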
When leader–follower flight is engaged, the trajectory of the leader has been designed to sufficiently excite the relative localization filter during the leader–follower flight and to decrease the likelihood of being stuck in unobservable states. This has been done by introducing frequent turns in the trajectory, to have changing relative velocities and accelerations. The follower is instructed to follow the leader's trajectory with a time delay of τ = 5 s.

It is important to note that, for safety reasons, the norm of the follower's commanded velocity ||v_1c|| during both experiments is saturated at 1.5 m/s. This measure is taken because the MAVs were flying in a relatively small confined area (10 m by 10 m). This change does, however, have consequences for the performance of the follower's tracking, which is discussed further in the next sections.

(Footnotes: http://wiki.paparazziuav.org/wiki/Main_Page; http://optitrack.com/)

Fig. 16 The trajectories of leader and follower during experiment with MCS height and velocity

Fig. 17 The trajectory of the follower in the (a) x- and (b) y-coordinate, compared to the delayed trajectory of the leader for the experiment with MCS height and velocity

Fig. 18 Time composition of overhead camera images of leader and follower MAV in time, for the experiment with MCS height and velocity. Indicated in red and marked by p_2(t), is part of the leader's trajectory. The leader's final position is indicated by p_2(t = 0). Seven points in time of the follower's trajectory are indicated in the image. According to the control objective, p_1(t = 5) should equal p_2(t = 0) (Color figure online)

Fig. 19 Histogram of the localization error for the follower during experiment with MCS height and velocity

Fig. 20 Histogram of the ranging error during experiment with MCS height and velocity

Fig. 21 Histogram of the tracking error ||e|| for the follower during experiment with MCS height and velocity

Fig. 22 Trajectory of leader and follower during experiment with only on-board sensing and processing

favorable error distribution. A histogram of the ranging error throughout the flight is given in Fig. 20. The mean of the ranging error is close to zero (about −6.4 cm) and the errors are well distributed around this mean. The ranging error is therefore not the main cause of the occasionally higher relative localization errors.

The most clearly identifiable cause of the relative localization error is the occasional dropping of frames by the UWB modules. The update rate of the relative localization filter is equivalent to the UWB messaging rate, because the filter is updated every time that the UWB modules produce a new ranging result (using a callback function). For two UWB modules, this corresponds to an update rate of about 25 Hz, corresponding to a time step of approximately 40 ms. However, the modules occasionally drop frames, causing the time step to spike up. Over the flight, 2% of all messages were received following an interval of more than 40 ms, and 1% of all messages were received following an interval of 200 ms. In one instance, the interval reached 470 ms, an order of magnitude larger than the average. It is not hard to imagine the unfavorable effect that such events can have on the relative localization estimate. It is therefore not coincidental that the largest localization error recorded during the flight also corresponds to one of those times where the UWB modules dropped frames, causing the update rate of the relative localization filter to also drop.

We now turn our attention to the tracking error of the follower MAV. The tracking error distribution ||e|| is given in Fig. 21. The mean of the distribution is equal to 46.1 cm and the maximum error is 1.32 m. Of course, part of this error is caused by the relative localization error from the follower's perspective, which will inevitably affect the tracking performance. However, since the relative localization error is considerably lower than the tracking error, there must be more sources to the error.

One source of error is the fact that the follower's response to a velocity command v_1c is modeled as a first-order delay. In reality, the MAV has some overshoot with respect to commands, which is not modeled by this first-order delay. This model mismatch by itself might not be that harmful to the performance, since the control law would respond with more aggressive velocity commands as a reaction to the MAV not behaving as modeled. However, the control law's freedom is severely restricted by the command saturation at 1.5 m/s, which means that the follower cannot move as fast as the control law demands. This argument is further supported by a qualitative analysis of the follower's trajectory with respect to the leader's trajectory in Fig. 16. The trajectory of the follower often seems to take 'shortcuts' with respect to the leader's trajectory. This falls in line with the expected behavior due to the command saturation. The control law is designed not only to track the trajectory of the leader in space, but also in time. As the follower starts lagging behind the leader by more than the desired τ = 5 s, it starts to take shortcuts in the trajectory to catch up with the leader. This error would be less prevalent if the command saturation were increased.
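The dropped-frame statistics reported above can be computed from the message arrival timestamps alone. A minimal sketch (the timestamp list, the function name, and the 40 ms nominal interval are illustrative, not logged data):

```python
def interval_stats(timestamps, nominal=0.040):
    """Return the fraction of inter-message intervals exceeding the nominal
    period, and the worst interval observed; timestamps in seconds."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    late = sum(1 for dt in intervals if dt > nominal + 1e-9)
    return late / len(intervals), max(intervals)

# e.g. three nominal 40 ms gaps and one 200 ms dropout
frac, worst = interval_stats([0.00, 0.04, 0.08, 0.28, 0.32])
# frac == 0.25, worst ≈ 0.2
```

Intervals above the nominal period correspond directly to missed filter updates, which is why the spikes correlate with the larger localization errors.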
4.4.2 Leader–follower flight with only on-board measurements

We now demonstrate the workings of the proposed methods when only on-board sensing is used. In this set-up, the follower MAV does not use any MCS information. Instead, the velocity information comes from Lucas–Kanade optical flow measurements, while the height is derived from the on-board ultrasonic sensor. Similarly, the leader MAV directly communicates optical flow velocities and ultrasonic height measurements (along with accelerations and yaw rate from the IMU) to the follower MAV for use in the relative localization filter. The MCS is only used to log ground truth data and for the leader to safely fly its trajectory. No MCS data is used by the follower at all.

Again, 200 s of leader–follower flight with full on-board sensing took place successfully and will be analyzed here. The trajectory of the follower with respect to the delayed leader's trajectory is compared in Figs. 22 and 23. Furthermore, another time composition for 5 s of flight where the follower is tracking the leader is given in Fig. 24.

Fig. 23 The trajectory of the follower in the (a) x- and (b) y-coordinate, compared to the delayed trajectory of the leader for the experiment with only on-board sensing

Fig. 24 Time composition of overhead camera images of leader and follower MAV in time, for the experiment with only on-board sensing. Indicated in orange and marked by p_2(t), is part of the leader's trajectory. The leader's final position is indicated by p_2(t = 0). Six points in time of the follower's trajectory are indicated in the image. According to the control objective, p_1(t = 5) should equal p_2(t = 0) (Color figure online)

Fig. 25 Histogram of the tracking error ||e|| for the follower during experiment with only on-board sensing and processing

The main qualitative difference with respect to the situation where the MCS was still used for velocity and height information is that the follower's trajectory appears less smooth. Otherwise, the performance seems qualitatively similar. The follower still appears to take 'shortcuts' with respect to the leader's trajectory, although the increased disorder in the follower's trajectory makes this less apparent.

The tracking error distribution for the on-board sensing case is given in Fig. 25. The mean tracking error is 50.8 cm and the maximum error is 1.47 m. The relative localization error is given in Fig. 26. Here, the mean error is 22.6 cm and the maximum error is 75.8 cm, at maximum MAV distances up to 5.2 m.

The performance when using only on-board sensing is very similar to that when using the MCS for height and velocity data. This can mainly be attributed to the fact that the measurements that have been replaced (the height and velocity of both MAVs) are actually also accurately measured on board. The primary reason why the trajectory of the follower with on-board sensors still seems slightly more disordered is that the follower has difficulty accurately controlling its altitude when using only on-board sensing.

Fig. 27 Messaging rate over flights with 2 and 3 MAVs: (a) message intervals; (b) distribution

turns to message their data to the others. This causes a drop in communication rate every time that a new UWB module is introduced. For this reason, the UWB range update rate reduced from about 25 Hz with 2 MAVs to about 16 Hz with 3 MAVs (i.e., with an interval of 62.5 ms). Additionally, the introduction of the additional module was accompanied by an increase in communication outages. 5.8% of all messages recorded during our flight were received with an interval longer than the nominal one of 62.5 ms. 2.9% of all messages were received with an interval of 100 ms or more.
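The text does not specify the slot timing of the round-robin protocol, but assuming equal transmission turns and a slot duration chosen to match the observed ~25 Hz two-MAV rate, the scaling of the per-pair update rate with swarm size can be sketched as:

```python
def update_rate(n_mavs, slot_time):
    """Assumed round-robin schedule: each MAV transmits in turn, so a full
    ranging cycle takes n_mavs slots; returns updates per second. The equal
    slot assumption is illustrative, not the experiments' exact protocol."""
    return 1.0 / (n_mavs * slot_time)

slot = 0.020  # s per module turn, chosen to reproduce the ~25 Hz 2-MAV rate
rates = {n: round(update_rate(n, slot), 1) for n in (2, 3, 4)}
# under these assumptions: {2: 25.0, 3: 16.7, 4: 12.5}
```

The 1/n trend roughly reproduces the observed drop from about 25 Hz to about 16 Hz when a third module was added, and hints at the scalability concern raised in the discussion.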
The messaging intervals for the flights with two MAVs and three MAVs are compared in Fig. 27.

Fig. 26 Histogram of the localization error for the follower during experiment with only on-board sensing and processing

The follower now purely relies on height measurements from its ultrasonic sensor. The update rate of this sensor is low, and in between measurements the follower uses (noisy) accelerometer data to update its height. This sometimes causes the follower to believe its altitude is different than it really is, causing it to rapidly ascend or descend. This takes up thrust, restricting the follower's ability to maneuver accurately in the horizontal plane due to thrust saturation.

This time, due to the lack of space available, there is no initialization flight procedure to give the EKFs of the followers time to converge. Instead, the MAVs are placed in starting positions and orientations that roughly match what the EKFs on board the MAVs are initialized to. Although this placement is done purely by eye, it proves to be sufficient to safely start the leader–follower flight.

The leader flies the same trajectory as before. The first follower follows this trajectory with a τ = 4 s delay, and the second follower follows it with a τ = 8 s delay. Once again, 200 s of successful flight data is logged and analyzed.

An overhead camera image for the flight with MCS height and velocity data is presented in Fig. 28, giving an idea of how the experiment looked. The trajectories for this flight are displayed in Fig. 29 for the leader and two followers. For the flights with only on-board information, the trajectories are shown in Fig. 30.

4.5 Leader–follower flight with two followers

To demonstrate that the methods in this paper can also scale to more than one follower, the leader–follower flight is also performed with two follower MAVs instead of one.
This is As for the case with just one follower, we see that the done both with MCS height and velocity data and with only followers tend to take shortcuts with respect to the leader’s on-board sensing. trajectory. Furthermore, the flights using only on-board infor- For this purpose, The UWB messaging protocol is adapted mation are less smooth than those with MCS height and to allow every MAV to perform ranging with every other velocity information. For the flight with MCS data, follower 1 MAV. The MAVs also communicate a unique (pre-assigned) has a MAE for the relative localization error of only 15.8cm. identification number within the UWB messages. The fol- By comparison, follower 2 has a MAE of 43.9 cm. Further- lowers can use this identification number to determine which messages originate from the leader so that they individually Furthermore, videos of our experiments are available keep track of the leader as before. In the implemented com- at https://www.youtube.com/playlist?list=PL_KSX9GOn2P-- munication protocol, the UWB modules on the MAVs take aEr4JtFl7SV3LO5QZY4q. 123 Autonomous Robots (2020) 44:415–441 435 4.6 Comparison of flights In this section we present the relative localization and track- ing MAE of the various flights that were executed. We also discuss in more detail the most noteworthy differences between experiments. All the errors are presented in Table 2. The first note- worthy observation is the fact that, for the experiment with two followers, the tracking performance of the second fol- lower is worse than for the first follower in both the MCS and fully on-board case. This is a byproduct of the fact that the proposed leader–follower control method inherently relies on integration of velocity information in time. As the delay with which the follower must follow the leader increases, so Fig. 28 Overhead camera image of leader and two followers using does the period of time over which the follower must inte- MCS height and velocity. 
In orange is the leader’s trajectory marked at 0.5 s intervals (Color figure online) grate its velocity. This is subject to drift, which shows in the tracking performance. This effect is more noticeable in the fully on-board case, since the velocity estimates from optical flow methods are less accurate than the ones computed by the MCS. Another result is that the localization error for follower 2 in the MCS case is higher than for the first follower. This can be explained, in part, by the fact that follower 2 has a larger -1 mean range with respect to the leader than follower 1 does leader -2 (4.2 m compared to 2.9 m). To inspect this deeper, we looked follower 1 -3 at the logged range between the MAVs. It was found that fol- follower 2 -4 lower 2 had substantially larger ranging errors with the leader -4 -3 -2 -1 0 1 2 3 4 5 6 than follower 1. This can be appreciated in Fig. 31, where the ranging error distributions are compared. In both cases, Fig. 29 Trajectory of leader and two followers using MCS height and the mean is close to zero, yet the distribution for follower velocity 2 is significantly wider. An investigation of the flight logs revealed that this is most likely associated with a combina- tion of antenna orientation and relative flight trajectory. If the error is analyzed, it can be seen that it is subject to periodic peaks which match the period of the relative bearing between the drones. This can be seen in Fig. 32. However, this effect does not appear to be purely caused by relative bearing, but rather a combination of relative bearing and relative location -1 -2 of the drones, relating to how the antennas were mounted on leader -3 the drones, whereby the presence of the drone itself likely follower 1 -4 compromised the signal. This also explains why the correla- follower 2 -5 tion is most clear during the first 100 s of flight and the last -4 -3 -2 -1 0 1 2 3 4 5 6 50 s, but it is less clear between 90 and 150 s of the log. 
If the trajectory is analyzed, between 100 and 150 s is when the follower 2 trajectory started varying slightly (follower 2 began to take 'shortcuts'), bringing the drones to different relative locations with respect to each other. Such correlations should be investigated further in future work, over a variety of flights with different platforms and scenarios.

Fig. 30 Trajectory of leader and two followers using only on-board information

Furthermore, followers 1 and 2 have tracking MAEs of 42.9 cm and 70.3 cm, respectively. The flight with only on-board sensing resulted in relative localization MAEs of 51.8 cm and 53.6 cm. The tracking MAEs this time were 58.6 cm and 98.4 cm.

A final result that stands out is that both followers 1 and 2 have substantially higher localization errors in the on-board case than was found for the on-board experiment with a single follower. This result appears to be due to a combination of factors. The increased communication traffic caused a decrease in the filter update rate and also resulted in an increase in dropped ranging frames. Follower 2, as mentioned above, showed a worse ranging performance than follower 1. Follower 1, in turn, had slightly less accurate optical flow velocity estimates than were obtained in the single-follower flight (21 cm/s MAE compared to 15 cm/s before), and also slightly higher ranging errors than in the single-follower flight (15 cm MAE compared to 8 cm before). All factors combined, both followers suffered a comparable degradation in localization performance.

Autonomous Robots (2020) 44:415–441

Table 2  Comparison of mean localization (loc.) errors and mean tracking (track.) errors for all performed experimental flights, both for MCS and fully on-board (on-b.) flights

                     1 follower        2 followers
                     MCS    on-b.      MCS 1   MCS 2   on-b. 1   on-b. 2
Loc. error (cm)      18.4   22.6       15.8    43.9    51.8      53.6
Track. error (cm)    46.1   50.8       42.9    70.3    58.6      98.4

5 Discussion

In this section we revisit the observability analysis from Sect. 2 with the obtained experimental data. We also present some remarks on the scalability of this methodology to larger groups of MAVs.

5.1 Remarks on observability

Section 2.5 showed that for a specific set of velocities, accelerations and relative positions of the two MAVs, the system becomes unobservable. Directly integrating the full observability condition into the design of a leader–follower system is difficult due to its high dimensionality. By having followers fly a delayed version of the leader's trajectory, it is possible to naturally vary the relative positions between leader and follower, as long as the leader's velocity changes in time. Given the sparsity of unobservable relative positions, we therefore postulated that this control behavior would be sufficient to limit unobservable situations. Furthermore, even if an unobservable situation were to occur, it would only last for a short period of time, as the relative position continuously changes and the system automatically transitions back to being observable.

Fig. 31 Comparison between ranging error distributions for follower 1 (a) and follower 2 (b) for the flight with MCS height and velocity data

Having performed the experiments and collected all the ground-truth data, it is now possible to test whether this assumption is valid. All the parameters needed to evaluate Eq. 33 were logged during the experiments and can be inserted into Eq. 33 to check the observability of the relative localization filter over time. In line with our previous analysis, the measure of observability of the system is represented by the cross product between the left-hand side of Eq. 33 and the relative position vector p. Once more, we take a threshold of 1. Although theoretically only a value of 0 would indicate an unobservable system, the higher threshold is chosen to account for noise in the data.

Fig. 32 Range error and relative bearing between leader and follower 2 during flight

With the chosen threshold, the unobservable data points for the MCS and the on-board flight amount to 4.76% and 4.75% of all data points, respectively. The unobservable points are spread out in time, giving the system ample observable data in between to recover from the short periods of unobservability. Furthermore, isolated events of unobservability are not expected to cause issues; instead, they can gradually cause an increase in the localization error over time. This has also been confirmed by the simulations in Sect. 3.

Further qualitative inspection of the data does not show a correlation between the unobservable regions of the flight and the relative localization error. To demonstrate this, the localization error is compared to the observability of the filter in Fig. 33 for a small segment of the flight with MCS information.

Fig. 33 Comparison between localization error and the observability of the filter. An unobservable value of '1' means the observability measure is within the threshold of unobservability
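The thresholded observability test described above can be sketched as follows. The exact left-hand-side vector of Eq. 33 is not reproduced here, so the sketch takes it as an input (`lhs`) together with the logged relative position `p`; the 2-D cross product serves as the observability measure, and values below the threshold of 1 are flagged as unobservable.

```python
import numpy as np

def unobservable_fraction(lhs, p, threshold=1.0):
    """Flag time steps whose observability measure falls below a threshold.

    lhs : (N, 2) array, left-hand side of the observability condition
          (Eq. 33) at each logged time step -- assumed to be available
    p   : (N, 2) array, relative position vector at each time step
    The measure is the 2-D cross product |lhs x p|. Theoretically only a
    value of 0 is unobservable; the threshold absorbs sensor noise.
    Returns the binary flags (1 = 'unobservable') and the flagged fraction.
    """
    measure = np.abs(lhs[:, 0] * p[:, 1] - lhs[:, 1] * p[:, 0])
    flags = (measure < threshold).astype(int)
    return flags, flags.mean()

# Toy log: two well-conditioned steps, one aligned (measure = 0) and one
# nearly aligned step that falls under the threshold
lhs = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 3.0], [2.0, 0.0]])
p   = np.array([[0.0, 2.0], [3.0, 0.0], [1.0, 1.0], [0.1, 0.01]])
flags, frac = unobservable_fraction(lhs, p)
```

Running the flagged fraction over an entire flight log is how a number such as the reported 4.76% of unobservable data points can be obtained.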
For easier comparison, the observability has been reduced to a binary value, where a value of '1' indicates that the system is within the threshold of unobservability at that time. It can be seen that there is no apparent correlation between the two parameters.

The relative localization insights in this paper have been aimed at leader–follower flight, yet they extend to other applications of MAVs in the real world. In different scenarios such as area coverage, where the relative motion between MAVs may be more (seemingly) random, it is expected that unobservable conditions would be rarer (Cornejo and Nagpal 2015). Therefore, based on our results, we expect that the relative localization performance would not suffer from unobservable conditions in other tasks either.

5.2 Remarks on scalability

The experimental results in Sect. 4 show that the methods in this paper can successfully scale to two followers that follow a leader in a confined area. Even when full on-board sensing is used by the followers, more than three minutes of successful autonomous flight were demonstrated, with no pilot input. Despite the successful results, analysis of the data does show a substantial rise in localization and tracking errors when scaling up to two MAVs. This raises the question of what would happen if even more MAVs were added to the experiment; would this be viable?

One of the results we found is that there is a correlation between the tracking performance of a follower and the time delay with which it follows the leader's trajectory. The follower that tracked with a time delay of 8 s showed consistently larger tracking errors than the followers with 4 s and 5 s delays. An alternative solution to the two-follower problem is to have one follower follow the leader and the other follow the first follower. With such an arrangement, both followers could follow another MAV with the same time delay. This setup has not been studied in this work, but could prove to be a better alternative to explore in future research.

In our experiments, the update rate dropped when flying with two followers instead of one. It is to be expected that adding more MAVs requires additional data communication, yet a drop from 25 to 16 Hz is quite significant for adding just one more MAV. In this case, the reduction was due to the communication protocol used during the experiments. Future work should determine how to tackle this, which is a necessary step towards solving the scalability issues that will otherwise arise when introducing even more UWB modules.

As an example, it should be possible to significantly increase the messaging rate to allow for more drones. In these experiments we operated the UWB modules at the lowest data rate setting (110 kbps). Furthermore, every message contains a lengthy preamble of 2048 bits, resulting in substantial protocol overhead for every transmitted message, which may not be necessary (the actual payload of the UWB messages is less than 200 bits). The maximum data rate that the UWB modules support is 6.8 Mbps, and the preamble can be as short as 64 bits. These settings would allow for much higher update rates, even with three or more MAVs. One would, however, need to examine what effect such a change would have on ranging accuracy and stability.

6 Conclusion

The work in this paper has shown the feasibility of heading-independent range-based relative localization on MAVs. We now know that removing the dependency on a common heading between MAVs has two main disadvantages: the motion of the agents must meet more stringent conditions to be observable, and the relative localization becomes more susceptible to noise on the range measurements. The clear advantage, on the other hand, is that the filter is no longer affected by local disturbances in the Earth's magnetic field. As shown by our simulations, small magnetic perturbations can already lead to a large negative impact, showing how a heading-independent method can actually perform better than the heading-dependent method.

The results of our observability analysis have shown that leader–follower flight is a difficult task when using the proposed relative localization method, where a simple fixed-geometry formation flight is not possible. Instead, we needed to develop a method that allows one MAV to follow another MAV's trajectory with a certain time delay while the leader flies a curved trajectory. This approach has been shown to stay sufficiently clear of unobservable conditions, which has allowed us to successfully demonstrate leader–follower flight in practice.

Using only on-board sensory information, one MAV can localize another MAV with a mean error of just 22.6 cm over 200 s of leader–follower flight. This consequently allows the MAV to track another MAV's trajectory with a mean error of 50.8 cm. The method has also been demonstrated to work with two followers tracking the same leader.

In a wider context, this work showcases a fundamental connection between relative localization and behavior for teams (or swarms) of robots. We have shown that the constraints included in the observability analysis have to be taken into account when designing the behavior of the robots. This enables the robots to make better use of their sensors, which in turn provides for a better final performance. For example, in our case, the intuitive conditions extracted from the observability analysis informed us that the leader–follower behavior should not be such that the MAVs fly in a fixed geometry. In general, extracting such intuitive conditions can help swarm designers understand, at a higher level, how the behavior of the individual robots should be designed in order to be in harmony with their relative localization sensors.

7 Future work

There are plenty of opportunities for research within the domain of range-based relative localization. Certainly, one such opportunity is the initial convergence behavior of the filter. The initial estimate of the EKF is important to quickly converge to a correct estimate of the relative location of another MAV. If the initial condition is too different from the real situation, the filter has difficulty converging. One primary problem is that there exist spurious states to which the EKF can initially erroneously converge. In the future, it would thus be interesting to research methods to address this problem. Possible solutions could be to use more thorough estimation filters (e.g., particle filters), or to run multiple filters leading to multiple ambiguous states, which would then help to identify the correct estimate more easily.

Furthermore, with an eye on scalability to larger swarms, it would be valuable to explore more thoroughly whether the less intuitive unobservable conditions are, as indicated by our analysis, indeed significantly more unlikely than the observable ones.

It would also be valuable to research alternative control algorithms to enable the leader–follower flight. A weakness of the current controller is that it stores and uses the entire most recent portion of the leader's trajectory in order to replicate it with a certain delay, which is not memory efficient. An alternative solution might be to perform real-time polynomial data fitting on the relative positions of the leader. The resulting polynomial trajectories could be used to obtain the velocities and accelerations through analytical derivation of the polynomials. This might result in less data that needs to be stored on board the MAVs and might also lead to smoother trajectories. Moreover, currently we require the leader to fly an oscillatory trajectory in order to help the followers avoid unobservable states. However, the controller on board the followers could also be such that their trajectory is automatically adapted in order to preemptively avoid unobservable conditions. This would put fewer requirements on the leader, which would then be free to fly any type of trajectory, and would also be a more general and robust solution. To this end, the leader could also communicate additional information, such as its planned trajectory over a time horizon.

Finally, considering the hardware used in the experiments, the importance of consistent, high-frequency communication and ranging has become apparent. It would be valuable to further optimize the frequency and consistency with which ranging messages are exchanged.

Videos  Videos of the experiments can be found at: https://www.youtube.com/playlist?list=PL_KSX9GOn2P--aEr4JtFl7SV3LO5QZY4q

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

A Derivation of intuitive condition 3

Condition Eq. 36 is expressed as:

$$v_1 = sRv_2 \quad \text{or} \quad \left(a_1 = 0_{2\times 1} \ \text{or} \ a_2 = 0_{2\times 1}\right)$$

If $a_1 = a_2 = 0_{2\times 1}$, then the general condition (Eq. 32) reduces to:

$$|M_B| = 2\, v_1^{\top} \frac{\partial R}{\partial \Delta\psi} v_2 \left(v_1 - R v_2\right)^{\top} A p = 0$$

Therefore:

$$2\, v_1^{\top} \frac{\partial R}{\partial \Delta\psi} v_2 \left(v_1 - R v_2\right) = 0_{2\times 1}$$

where $R$ is as in Eq. 10. If this is expanded, we arrive at the following:

$$0_{2\times 1} = \begin{bmatrix} -2\left(v_{1_x} - v_{2_x}\cos\Delta\psi + v_{2_y}\sin\Delta\psi\right)\left[v_{1_x}v_{2_y}\cos\Delta\psi - v_{1_y}v_{2_x}\cos\Delta\psi + v_{1_x}v_{2_x}\sin\Delta\psi + v_{1_y}v_{2_y}\sin\Delta\psi\right] \\ 2\left(v_{2_y}\cos\Delta\psi - v_{1_y} + v_{2_x}\sin\Delta\psi\right)\left[v_{1_x}v_{2_y}\cos\Delta\psi - v_{1_y}v_{2_x}\cos\Delta\psi + v_{1_x}v_{2_x}\sin\Delta\psi + v_{1_y}v_{2_y}\sin\Delta\psi\right] \end{bmatrix}$$

For both elements in the vector above, we can see that the following condition must also be respected, from which we can derive a condition for the ratio between the velocities:

$$0 = v_{1_x}v_{2_y}\cos\Delta\psi - v_{1_y}v_{2_x}\cos\Delta\psi + v_{1_x}v_{2_x}\sin\Delta\psi + v_{1_y}v_{2_y}\sin\Delta\psi$$

$$v_{1_x}\left(v_{2_x}\sin\Delta\psi + v_{2_y}\cos\Delta\psi\right) = v_{1_y}\left(v_{2_x}\cos\Delta\psi - v_{2_y}\sin\Delta\psi\right)$$

$$\frac{v_{1_x}}{v_{1_y}} = \frac{v_{2_x}\cos\Delta\psi - v_{2_y}\sin\Delta\psi}{v_{2_x}\sin\Delta\psi + v_{2_y}\cos\Delta\psi}$$

Therefore, the following two conditions must hold together, for some scalar $s$:

$$v_{1_x} = s\left(v_{2_x}\cos\Delta\psi - v_{2_y}\sin\Delta\psi\right)$$

$$v_{1_y} = s\left(v_{2_x}\sin\Delta\psi + v_{2_y}\cos\Delta\psi\right)$$

This brings us to the final condition:

$$\begin{bmatrix} v_{1_x} \\ v_{1_y} \end{bmatrix} = s \begin{bmatrix} \cos\Delta\psi & -\sin\Delta\psi \\ \sin\Delta\psi & \cos\Delta\psi \end{bmatrix} \begin{bmatrix} v_{2_x} \\ v_{2_y} \end{bmatrix} \quad\Longrightarrow\quad v_1 = sRv_2$$

References

Achtelik, M., Brunet, Y., Chli, M., Chatzichristofis, S., Decotignie, J. D., Doth, K. M., Fraundorfer, F., Kneip, L., Gurdan, D., Heng, L., Kosmatopoulos, E., Doitsidis, L., Lee, G. H., Lynen, S., Martinelli, A., Meier, L., Pollefeys, M., Piguet, D., Renzaglia, A., Scaramuzza, D., Siegwart, R., Stumpf, J., Tanskanen, P., Troiani, C., & Weiss, S. (2012). SFly: Swarm of micro flying robots. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 2649–2650). https://doi.org/10.1109/IROS.2012.6386281.

Afzal, M. H., Renaudin, V., & Lachapelle, G. (2010). Assessment of indoor magnetic field anomalies using multiple magnetometers. In 23rd International Technical Meeting of the Satellite Division of the Institute of Navigation (pp. 525–533).

Afzal, M. H., Renaudin, V., & Lachapelle, G. (2011). Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation. Sensors, 11(12), 11390–11414. https://doi.org/10.3390/s111211390.

Beard, R. W., & McLain, T. W. (2003). Multiple UAV cooperative search under collision avoidance and limited range communication constraints. In 42nd IEEE International Conference on Decision and Control (Vol. 1, pp. 25–30). https://doi.org/10.1109/CDC.2003.1272530.

Brambilla, M., Ferrante, E., Birattari, M., & Dorigo, M. (2013). Swarm robotics: A review from the swarm engineering perspective. Swarm Intelligence, 7(1), 1–41. https://doi.org/10.1007/s11721-012-0075-2.

Chiew, S. H., Zhao, W., & Go, T. H. (2015). Swarming coordination with robust control Lyapunov function approach. Journal of Intelligent and Robotic Systems, 78(3), 499–515. https://doi.org/10.1007/s10846-013-9998-0.

Conroy, P., Bareiss, D., Beall, M., & van den Berg, J. (2014). 3-D reciprocal collision avoidance on physical quadrotor helicopters with on-board sensing for relative positioning. arXiv preprint arXiv:1411.3794.

Coppola, M., McGuire, K. N., Scheper, K. Y. W., & de Croon, G. C. H. E. (2018). On-board communication-based relative localization for collision avoidance in micro air vehicle teams. Autonomous Robots, 42(8), 1787–1805. https://doi.org/10.1007/s10514-018-9760-3.

Cornejo, A., & Nagpal, R. (2015). Distributed range-based relative localization of robot swarms. In H. L. Akin, N. M. Amato, V. Isler, & A. F. van der Stappen (Eds.), Algorithmic Foundations of Robotics XI: Selected contributions of the eleventh international workshop on the Algorithmic Foundations of Robotics (pp. 91–107). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-16595-0_6.

Correal, N. S., Kyperountas, S., Shi, Q., & Welborn, M. (2003). An UWB relative location system. In 2003 IEEE Conference on Ultra Wideband Systems and Technologies (pp. 394–397). https://doi.org/10.1109/UWBST.2003.1267871.

Couzin, I. D., & Franks, N. R. (2003). Self-organized lane formation and optimized traffic flow in army ants. Proceedings of the Royal Society of London B: Biological Sciences, 270(1511), 139–146. https://doi.org/10.1098/rspb.2002.2210.

Degen, J., Kirbach, A., Reiter, L., Lehmann, K., Norton, P., Storms, M., et al. (2016). Honeybees learn landscape features during exploratory orientation flights. Current Biology, 26(20), 2800–2804. https://doi.org/10.1016/j.cub.2016.08.013.

Foerster, J., Green, E., Somayazulu, S., & Leeper, D. (2001). Ultra-wideband technology for short- or medium-range wireless communications. Intel Technology Journal.

Gu, Y., Seanor, B., Campa, G., Napolitano, M. R., Rowe, L., Gururajan, S., et al. (2006). Design and flight testing evaluation of formation control laws. IEEE Transactions on Control Systems Technology, 14(6), 1105–1112. https://doi.org/10.1109/TCST.2006.880203.

Guo, K., Qiu, Z., Meng, W., Xie, L., & Teo, R. (2017). Ultra-wideband based cooperative relative localization algorithm and experiments for multiple unmanned aerial vehicles in GPS denied environments. International Journal of Micro Air Vehicles, 9(3), 169–186. https://doi.org/10.1177/1756829317695564.

Hauert, S., Leven, S., Varga, M., Ruini, F., Cangelosi, A., Zufferey, J. C., & Floreano, D. (2011). Reynolds flocking in reality with fixed-wing robots: Communication range vs. maximum turning rate. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 5015–5020). https://doi.org/10.1109/IROS.2011.6095129.

Hayes, A. T., & Dormiani-Tabatabaei, P. (2002). Self-organized flocking with agent failure: Off-line optimization and demonstration with real robots. In 2002 IEEE International Conference on Robotics and Automation (pp. 3900–3905). https://doi.org/10.1109/ROBOT.2002.1014331.

Hayes, A. T., Martinoli, A., & Goodman, R. M. (2003). Swarm robotic odor localization: Off-line optimization and validation with real robots. Robotica, 21(4), 427–441. https://doi.org/10.1017/S0263574703004946.

Hermann, R., & Krener, A. J. (1977). Nonlinear controllability and observability. IEEE Transactions on Automatic Control, 22(5), 728–740. https://doi.org/10.1109/TAC.1977.1101601.

Hui, C., Yousheng, C., & Shing, W. W. (2014). Trajectory tracking and formation flight of autonomous UAVs in GPS-denied environments using onboard sensing. In 2014 IEEE Chinese Guidance, Navigation and Control Conference (pp. 2639–2645). https://doi.org/10.1109/CGNCC.2014.7007585.

Iyer, A., Rayas, L., & Bennett, A. (2013). Formation control for cooperative localization of MAV swarms (demonstration). In 2013 International Conference on Autonomous Agents and Multi-Agent Systems (pp. 1371–1372).

Kriegleder, M., Digumarti, S. T., Oung, R., & D'Andrea, R. (2015). Rendezvous with bearing-only information and limited sensing range. In 2015 IEEE International Conference on Robotics and Automation (pp. 5941–5947). https://doi.org/10.1109/ICRA.2015.7140032.

Kushleyev, A., Mellinger, D., Powers, C., & Kumar, V. (2013). Towards a swarm of agile micro quadrotors. Autonomous Robots, 35(4), 287–300. https://doi.org/10.1007/s10514-013-9349-9.

Li, X., Zhou, Q., Lu, S., & Lu, H. (2006). A new method of double electric compass for localization in automobile navigation. In 2006 International Conference on Mechatronics and Automation (pp. 514–519). https://doi.org/10.1109/ICMA.2006.257606.

Liu, H., Darabi, H., Banerjee, P., & Liu, J. (2007). Survey of wireless indoor positioning techniques and systems. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 37(6), 1067–1080. https://doi.org/10.1109/TSMCC.2007.905750.

Martinelli, A., & Siegwart, R. (2005). Observability analysis for mobile robot localization. In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1471–1476). https://doi.org/10.1109/IROS.2005.1545153.

Merino, L., Caballero, F., Martínez-de Dios, J., Ferruz, J., & Ollero, A. (2006). A cooperative perception system for multiple UAVs: Application to automatic detection of forest fires. Journal of Field Robotics, 23(3–4), 165–184. https://doi.org/10.1002/rob.20108.

Michael, N., Mellinger, D., Lindsey, Q., & Kumar, V. (2010). The GRASP multiple micro-UAV test bed: Experimental evaluation of multirobot aerial control algorithms. IEEE Robotics & Automation Magazine, 17(3), 56–65. https://doi.org/10.1109/MRA.2010.937855.

Molisch, A. F., Cassioli, D., Chong, C. C., Emami, S., Fort, A., Kannan, B., et al. (2006). A comprehensive standardized model for ultrawideband propagation channels. IEEE Transactions on Antennas and Propagation, 54(11), 3151–3166. https://doi.org/10.1109/TAP.2006.883983.

Mulgaonkar, Y., Cross, G., & Kumar, V. (2015). Design of small, safe and robust quadrotor swarms. In 2015 IEEE International Conference on Robotics and Automation (pp. 2208–2215). https://doi.org/10.1109/ICRA.2015.7139491.

Nägeli, T., Conte, C., Domahidi, A., Morari, M., & Hilliges, O. (2014). Environment-independent formation flight for micro aerial vehicles. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1141–1146). https://doi.org/10.1109/IROS.2014.6942701.

Neirynck, D., Luk, E., & McLaughlin, M. (2016). An alternative double-sided two-way ranging method. In 2016 13th Workshop on Positioning, Navigation and Communications (WPNC) (pp. 1–4). https://doi.org/10.1109/WPNC.2016.7822844.

Nguyen, T. M., Zaini, A. H., Guo, K., & Xie, L. (2016). An ultra-wideband-based multi-UAV localization system in GPS-denied environments. In 2016 International Micro Air Vehicle Competition and Conference (pp. 56–61).

Quintero, S. A. P., Collins, G. E., & Hespanha, J. P. (2013). Flocking with fixed-wing UAVs for distributed sensing: A stochastic optimal control approach. In The American Control Conference (pp. 2025–2031). https://doi.org/10.1109/ACC.2013.6580133.

Roberts, J. F., Stirling, T., Zufferey, J. C., & Floreano, D. (2012). 3-D relative positioning sensor for indoor flying robots. Autonomous Robots, 33(1–2), 5–20. https://doi.org/10.1007/s10514-012-9277-0.

Roelofsen, S., Gillet, D., & Martinoli, A. (2015). Reciprocal collision avoidance for quadrotors using on-board visual detection. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 4810–4817). https://doi.org/10.1109/IROS.2015.7354053.

Roetenberg, D., Luinge, H. J., Baten, C. T. M., & Veltink, P. H. (2005). Compensation of magnetic disturbances improves inertial and magnetic sensing of human body segment orientation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 13(3), 395–405. https://doi.org/10.1109/TNSRE.2005.847353.

Roetenberg, D., Baten, C. T. M., & Veltink, P. H. (2007). Estimating body segment orientation by applying inertial and magnetic sensing near ferromagnetic materials. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 15(3), 469–471. https://doi.org/10.1109/TNSRE.2007.903946.

Şahin, E. (2005). Swarm robotics: From sources of inspiration to domains of application. In E. Şahin & W. Spears (Eds.), Swarm Robotics. SR 2004. Lecture Notes in Computer Science (Vol. 3342, pp. 10–20). Springer. https://doi.org/10.1007/b105069.

Saska, M., Vakula, J., & Preucil, L. (2014). Swarms of micro aerial vehicles stabilized under a visual relative localization. In 2014 IEEE International Conference on Robotics and Automation (pp. 3570–3575). https://doi.org/10.1109/ICRA.2014.6907374.

Saska, M., Vonásek, V., Chudoba, J., Thomas, J., Loianno, G., & Kumar, V. (2016). Swarm distribution and deployment for cooperative surveillance by micro-aerial vehicles. Journal of Intelligent & Robotic Systems, 84(1–4), 469–492. https://doi.org/10.1007/s10846-016-0338-z.

Schwager, M., Julian, B. J., & Rus, D. (2009a). Optimal coverage for multiple hovering robots with downward facing cameras. In 2009 IEEE International Conference on Robotics and Automation (pp. 3515–3522). https://doi.org/10.1109/ROBOT.2009.5152815.

Schwager, M., McLurkin, J., Slotine, J. J. E., & Rus, D. (2009b). From theory to practice: Distributed coverage control experiments with groups of robots. In Experimental Robotics (pp. 127–136). Berlin: Springer. https://doi.org/10.1007/978-3-642-00196-3_15.

Smeur, E. J., Chu, Q., & de Croon, G. C. (2015). Adaptive incremental nonlinear dynamic inversion for attitude control of micro air vehicles. Journal of Guidance, Control, and Dynamics, 38(12), 450–461. https://doi.org/10.2514/1.G001490.

Stirling, T., Roberts, J., Zufferey, J. C., & Floreano, D. (2012). Indoor navigation with a swarm of flying robots. In 2012 IEEE International Conference on Robotics and Automation (pp. 4641–4647). https://doi.org/10.1109/ICRA.2012.6224987.

Turpin, M., Michael, N., & Kumar, V. (2012). Decentralized formation control with variable shapes for aerial robots. In 2012 IEEE International Conference on Robotics and Automation (pp. 23–30). https://doi.org/10.1109/ICRA.2012.6225196.

Vásárhelyi, G., Virágh, C., Somorjai, G., Tarcai, N., Szörényi, T., Nepusz, T., & Vicsek, T. (2014). Outdoor flocking and formation flight with autonomous aerial robots. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 3866–3873). https://doi.org/10.1109/IROS.2014.6943105.

Werner, A., Stürzl, W., & Zanker, J. (2016). Object recognition in flight: How do bees distinguish between 3D shapes? PLOS ONE, 11(2), 1–13. https://doi.org/10.1371/journal.pone.0147106.

Win, M. Z., & Scholtz, R. A. (1998). Impulse radio: How it works. IEEE Communications Letters, 2(2), 36–38. https://doi.org/10.1109/4234.660796.

Yuan, X., Yu, S., Zhang, S., Wang, G., & Liu, S. (2015). Quaternion-based unscented Kalman filter for accurate indoor heading estimation using wearable multi-sensor system. Sensors, 15(5), 10872–10890. https://doi.org/10.3390/s150510872.

Zhou, X. S., & Roumeliotis, S. I. (2008). Robot-to-robot relative pose estimation from range measurements. IEEE Transactions on Robotics, 24(6), 1379–1393. https://doi.org/10.1109/TRO.2008.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Steven van der Helm was born in Zoetermeer, the Netherlands, in 1993. He received the B.Sc. and M.Sc. degrees in aerospace engineering cum laude from Delft University of Technology in 2015 and 2018, respectively. His specialization during his M.Sc. was in autonomous control. He also performed research on engine technology at Yamaha Motor Corporation in 2016.

Kimberly N. McGuire is a PhD candidate at the faculty of Aerospace Engineering of the Delft University of Technology, concentrating on autonomous navigation of lightweight pocket drones at the MAVLab. She has a broad research interest in embodied intelligence for robotics, in both autonomous navigation and cognition. In 2012 she received her B.Sc. degree in Industrial Design Engineering, and in 2014 her M.Sc. degree in the field of Mechanical Engineering at the Delft University of Technology, specialized in biologically inspired robotics.

Mario Coppola is a Ph.D. candidate at the Delft University of Technology, the Netherlands. He is part of the department of Control and Simulation as well as the department of Space Systems Engineering. His research is on the design of autonomous robot swarms, with a focus on methods to develop local agent controllers that reach a global objective. He received his M.Sc. in 2016 in Aerospace Engineering from the same university. His research interests lie in autonomous robotics, artificial intelligence, and swarms.

Guido C. H. E. de Croon received his M.Sc. and Ph.D. in the field of Artificial Intelligence at Maastricht University, the Netherlands. His research interest lies with computationally efficient algorithms for robot autonomy, with a particular focus on computer vision and evolutionary robotics. From 2011 to 2012 he worked as a Research Fellow in Artificial Intelligence at the European Space Agency. Currently, he is an Associate Professor at the Micro Air Vehicle lab of Delft University of Technology, the Netherlands.
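As a back-of-the-envelope check of the protocol-overhead argument in Sect. 5.2, the airtime of a single UWB message can be approximated from the preamble length and data rate quoted there. This is a deliberate simplification: real UWB frames also carry SFD and PHR fields and need inter-frame spacing, and the preamble is transmitted at its own symbol rate, all of which is ignored here.

```python
def airtime_us(preamble_bits, payload_bits, data_rate_bps):
    """Rough single-message airtime in microseconds.

    Simplification: treats preamble and payload as sent at the same data
    rate and ignores SFD/PHR fields and inter-frame spacing.
    """
    return (preamble_bits + payload_bits) / data_rate_bps * 1e6

# Settings used in the experiments vs. the fastest the modules support
slow = airtime_us(2048, 200, 110e3)  # 2048-bit preamble at 110 kbps
fast = airtime_us(64, 200, 6.8e6)    # 64-bit preamble at 6.8 Mbps
speedup = slow / fast
```

Even this crude estimate shows more than two orders of magnitude of headroom per message, which supports the claim that much higher update rates, and therefore more MAVs, should be attainable if ranging accuracy holds up at the faster settings.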
Autonomous Robots – Springer Journals
Published: Mar 4, 2020