Autonomous perching and grasping for micro aerial vehicles

Relaxing the assumptions behind many existing approaches to aerial perching and manipulation requires novel solutions using onboard sensors and processing.
08 March 2017
Justin Thomas, Giuseppe Loianno, Kostas Daniilidis and Vijay Kumar

The ability to maneuver micro aerial vehicles (MAVs) precisely relative to specific targets and to interact with the environment (i.e., aerial manipulation) could benefit society by assisting with dangerous jobs, providing useful information, and improving the efficiency of many tasks. For example, precise relative positioning would allow for close inspections of bridges, cell towers, rooftops, or water towers. Aerial manipulation could improve or enable precision farming, construction, repairing structures, transportation of objects, automated recharging or battery replacement, environmental sampling, or perching to turn off motors and reduce power consumption.

The prevalence of commercially available MAVs has risen rapidly, but platforms are currently limited to sensing and data collection tasks. Indeed, many manufacturers are producing aerial robots equipped with cameras. However, none are able to physically interact with objects. Thus, there is a need for solutions empowering aerial robots to closely track, grasp, perch on, and manipulate specific objects of interest. Here, we present an overview of current approaches and challenges for vision-based perching and aerial manipulation. A more extensive discussion is available elsewhere.1

Many existing perching and grasping methods assume that the states of the robot and target are known,2–9 an assumption that rarely holds in practice and that motivates the search for solutions using onboard sensors. Visual-inertial approaches are appealing because the sensors are lightweight, complement each other well, and are sufficient for navigation in unknown environments.10, 11 In these cases, however, the vehicle is controlled with respect to a fixed reference frame rather than relative to specific objects. A more appropriate approach for manipulation is visual servoing, which uses visual feedback to control a robot relative to a target object.

There is a foundational body of literature covering monocular visual servoing that discusses the differences between position-based visual servoing (PBVS) and image-based visual servoing (IBVS).12–14 With PBVS, the relative pose of the robot is estimated, and the control law is expressed in the 3D Cartesian space. With IBVS, in contrast, the control law is computed directly from features observed in the image.12 Each has its benefits. For example, PBVS systems can use common odometry filters from the MAV literature, while IBVS is more robust to calibration errors, making it appealing for low-cost, lightweight systems.
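
To make the distinction concrete, here is a minimal Python sketch of the two control-law structures, following the notation of the tutorials cited above.12–14 The gain, the rotation-error parameterization, and all names are illustrative choices on our part, not code from any cited system.

```python
import numpy as np

LAMBDA = 0.5  # proportional gain (illustrative value)

def pbvs_twist(t, R, t_des, R_des):
    """PBVS: the error and control law live in 3D Cartesian space.

    t, R: current camera position and rotation matrix; t_des, R_des:
    desired pose. Returns linear and angular velocity commands.
    """
    e_t = t - t_des
    # Axis-angle (log map) of the rotation error gives the angular error.
    # (A complete implementation would also handle the theta = pi case.)
    R_err = R_des.T @ R
    theta = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        e_w = np.zeros(3)
    else:
        W = (R_err - R_err.T) * theta / (2.0 * np.sin(theta))  # skew(theta*axis)
        e_w = np.array([W[2, 1], W[0, 2], W[1, 0]])
    return -LAMBDA * e_t, -LAMBDA * e_w

def ibvs_twist(s, s_des, L):
    """IBVS: the control law acts directly on the image-feature error
    s - s_des, mapped to a camera twist through the interaction matrix L."""
    return -LAMBDA * np.linalg.pinv(L) @ (s - s_des)
```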

In our work,15–17 we explore the coupling between the pose of the robot and the image of a cylinder. The relationship is diffeomorphic, allowing us to relate velocities of the robot in the world frame to velocities of the image features. We can then express the dynamics of the quadrotor in terms of the image features, which allows us to develop an IBVS control law and prove its stability. In addition, we show that the image features are flat outputs of the system, enabling the application of trajectory planning methods for differentially flat systems.18–21 The image sequence in Figure 1 shows sample results.
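
To illustrate what flatness buys in practice, the following sketch plans a rest-to-rest trajectory directly in feature space, fitting a 7th-order polynomial to each feature coordinate in the spirit of minimum-snap planning.21 The feature values, duration, and function name are hypothetical, not taken from our implementation.

```python
import numpy as np

def rest_to_rest_poly(x0, xf, T):
    """Coefficients c_0..c_7 of x(t) = sum_k c_k t^k with x(0) = x0,
    x(T) = xf, and zero velocity, acceleration, and jerk at both ends."""
    A = np.zeros((8, 8))
    b = np.zeros(8)
    for d in range(4):                       # derivative order 0..3
        for k in range(d, 8):
            falling = np.prod(np.arange(k, k - d, -1))  # k!/(k-d)!
            if k == d:
                A[d, k] = falling            # at t = 0, only the t^d term survives
            A[4 + d, k] = falling * T ** (k - d)        # at t = T
    b[0], b[4] = x0, xf
    return np.linalg.solve(A, b)

# Hypothetical start/goal feature vectors and maneuver duration:
s0 = np.array([0.10, 0.30, 1.20])
sf = np.array([0.00, 0.00, 2.00])
T = 2.0
coeffs = [rest_to_rest_poly(a, b, T) for a, b in zip(s0, sf)]
# Evaluate the planned feature trajectory at t = 1.0 s:
s_t = np.array([np.polyval(c[::-1], 1.0) for c in coeffs])
```

Because the features are flat outputs, any sufficiently smooth curve planned this way corresponds to a dynamically feasible quadrotor motion, so the planning problem reduces to fitting polynomials.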


Figure 1. A sample perch sequence. The quadrotor carries a camera, an onboard inertial measurement unit, a computer (ODROID-XU3), and a gripper to perch on cylinders at various orientations. All sensing and computation occurs onboard the 722 g robot.

One of the main challenges for visual servoing with aerial robots stems from underactuation. In our work,15–17 we simplify the system by assuming that the visual frame has a fixed orientation, which is achieved by rotating the observed features using the onboard attitude estimate. This requires an accurate attitude estimate, images synchronized with that estimate, and the ability to estimate the yaw from image features. Our approach decouples the attitude dynamics from the translational dynamics in the virtual image, and it allows trajectories to be planned in terms of the flat outputs in the image space. A related challenge is to guarantee that the target does not leave the field of view, or else to ensure that the robot still reaches the desired relative pose when the target is temporarily occluded or out of view.
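
A minimal sketch of this virtual-camera construction follows, assuming a pinhole model with intrinsics K and a synchronized camera-to-world attitude estimate R: each detected feature is back-projected to a ray, rotated into a gravity-aligned frame, and re-projected onto a level image plane. The function name and conventions are ours for illustration, not code from the cited work.

```python
import numpy as np

def to_virtual_image(pixels, K, R):
    """Map pixel features into a virtual camera of fixed (gravity-aligned)
    orientation. K: camera intrinsics; R: camera-to-world rotation from the
    onboard attitude estimate, synchronized with the image."""
    K_inv = np.linalg.inv(K)
    virtual = []
    for u, v in pixels:
        ray = K_inv @ np.array([u, v, 1.0])       # back-project pixel to a ray
        ray_lvl = R @ ray                          # rotate into the level frame
        virtual.append(ray_lvl[:2] / ray_lvl[2])   # re-project (assumes ray_lvl[2] > 0)
    return np.array(virtual)
```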

Difficulty also arises from the fact that quadrotors are high-order systems. As a result, some control approaches assume knowledge of the velocity in the inertial frame,22, 23 which could be a crippling assumption for a lightweight system. To the best of our knowledge, there is little research on grasping moving targets. Landing on moving targets has been demonstrated,24, 25 but only under limiting assumptions. One of the key difficulties with moving targets is handling the increased complexity of the relative dynamics. Finally, we need to consider a wider variety of object geometries. Our previous work is restricted to cylindrical objects, but could potentially be generalized to any surface of revolution.26

Despite challenges for visual servoing with aerial robotics such as underactuation, high-order dynamics, and computational limitations of onboard computers, we have been able to demonstrate successful results.17 Our next steps will include modeling coupled dynamics with moving targets, consideration of occlusions and limited fields of view, and interaction with arbitrary geometries.

We gratefully acknowledge support from Army Research Laboratory grant W911NF-08-2-0004, Office of Naval Research grants N00014-07-1-0829, N00014-14-1-0510, N00014-09-1-1051, and N00014-09-1-103, and National Science Foundation grants IIP-1113830, IIS-1426840, and IIS-1138847.


Justin Thomas, Giuseppe Loianno, Kostas Daniilidis, Vijay Kumar
University of Pennsylvania
Philadelphia, PA

Justin Thomas joined the University of Pennsylvania in 2011 as a PhD candidate in the Department of Mechanical Engineering and Applied Mechanics and as a member of the GRASP Lab under Vijay Kumar. His research interests include dynamic grasping, aerial manipulation, perching, and vision-based control using MAVs.


References:
1. J. Thomas, G. Loianno, K. Daniilidis, V. Kumar, The role of vision in perching and grasping for MAVs, Proc. SPIE 9836, p. 98361S, 2016. doi:10.1117/12.2224056
2. Q. Lindsey, D. Mellinger, V. Kumar, Construction with quadrotor teams, Autonom. Robots 33(3), p. 323-336, 2012.
3. F. Augugliaro, S. Lupashin, M. Hamer, C. Male, M. Hehn, M. W. Mueller, J. S. Willmann, F. Gramazio, M. Kohler, R. D'Andrea, The flight assembled architecture installation: cooperative construction with flying machines, IEEE Control Syst. 34(4), p. 46-64, 2014.
4. D. Mellinger, Q. Lindsey, M. Shomin, V. Kumar, Design, modeling, estimation, and control for aerial grasping and manipulation, IEEE/RSJ Int'l Conf. Intell. Robots Syst., p. 2668-2673, 2011.
5. J. Thomas, J. Polin, K. Sreenath, V. Kumar, Avian-inspired grasping for quadrotor micro UAVs, Proc. ASME 37th Mech. Robot. Conf. 6A, p. V06AT07A014, 2013.
6. C. C. Kessens, J. Thomas, J. P. Desai, V. Kumar, Versatile aerial grasping using self-sealing suction, IEEE Int'l Conf. Robot. Autom., p. 3249-3254, 2016.
7. D. Mellinger, N. Michael, V. Kumar, Trajectory generation and control for precise aggressive maneuvers with quadrotors, Experimental Robotics 79, p. 361-373, Springer, 2014.
8. A. Kalantari, K. Mahajan, D. Ruffatto III, M. Spenko, Autonomous perching and take-off on vertical walls for a quadrotor micro air vehicle, IEEE Int'l Conf. Robot. Autom., p. 4669-4674, 2015.
9. J. Thomas, M. Pope, G. Loianno, E. W. Hawkes, M. A. Estrada, H. Jiang, M. R. Cutkosky, V. Kumar, Aggressive flight for perching on inclined surfaces, J. Mech. Robot. 8(5), p. 051007, 2016.
10. S. Weiss, D. Scaramuzza, R. Siegwart, Monocular-SLAM-based navigation for autonomous micro helicopters in GPS-denied environments, J. Field Robot. 28(6), p. 854-874, 2011.
11. S. Shen, N. Michael, V. Kumar, Tightly-coupled monocular visual-inertial fusion for autonomous flight of rotorcraft MAVs, IEEE Int'l Conf. Robot. Autom., p. 5303-5310, 2015.
12. S. Hutchinson, G. D. Hager, P. I. Corke, A tutorial on visual servo control, IEEE Trans. Robot. Autom. 12(5), p. 651-670, 1996.
13. F. Chaumette, S. Hutchinson, Visual servo control. I. Basic approaches, IEEE Robot. Autom. Mag. 13(4), p. 82-90, 2006.
14. F. Chaumette, S. Hutchinson, Visual servo control. II. Advanced approaches, IEEE Robot. Autom. Mag. 14(1), p. 109-118, 2007.
15. J. Thomas, G. Loianno, J. Polin, K. Sreenath, V. Kumar, Toward autonomous avian-inspired grasping for micro aerial vehicles, Bioinspir. Biomim. 9(2), p. 025010, 2014.
16. J. Thomas, G. Loianno, K. Sreenath, V. Kumar, Toward image based visual servoing for aerial grasping and perching, IEEE Int'l Conf. Robot. Autom., p. 2113-2118, 2014.
17. J. Thomas, G. Loianno, K. Daniilidis, V. Kumar, Visual servoing of quadrotors for perching by hanging from cylindrical objects, IEEE Robot. Autom. Lett. 1(1), p. 57-64, 2016.
18. R. M. Murray, M. Rathinam, W. Sluis, Differential flatness of mechanical control systems: a catalog of prototype systems, ASME Int'l Congr. Expo., 1995.
19. M. Fliess, J. Lévine, P. Martin, P. Rouchon, Flatness and defect of non-linear systems: introductory theory and examples, Int'l J. Control 61(6), p. 1327-1361, 1995.
20. I. D. Cowling, O. A. Yakimenko, J. F. Whidborne, A. K. Cooke, A prototype of an autonomous controller for a quadrotor UAV, Eur. Control Conf., 2007.
21. D. Mellinger, V. Kumar, Minimum snap trajectory generation and control for quadrotors, IEEE Int'l Conf. Robot. Autom., p. 2520-2525, 2011.
22. T. Hamel, R. Mahony, Visual servoing of an under-actuated dynamic rigid-body system: an image-based approach, IEEE Trans. Robot. Autom. 18(2), p. 187-198, 2002.
23. T. Hamel, R. Mahony, Image based visual servo control for a class of aerial robotic systems, Automatica 43(11), p. 1975-1983, 2007.
24. B. Hérisse, T. Hamel, R. Mahony, F.-X. Russotto, Landing a VTOL unmanned aerial vehicle on a moving platform using optical flow, IEEE Trans. Robot. 28(1), p. 77-89, 2012.
25. D. Lee, T. Ryan, H. Jin Kim, Autonomous landing of a VTOL UAV on a moving platform using image-based visual servoing, IEEE Int'l Conf. Robot. Autom., p. 971-976, 2012.
26. C. J. Phillips, M. Lecce, C. Davis, K. Daniilidis, Grasping surfaces of revolution: simultaneous pose and shape recovery from two views, IEEE Int'l Conf. Robot. Autom., p. 1352-1359, 2015.