
Kapellos08a

K. Kapellos, F. Chaumette, M. Vergauwen, A. Rusconi, L. Joudrier. Vision-based control for space applications. In Int. Symp. on Artificial Intelligence, Robotics and Automation in Space, iSAIRAS'2008, Los Angeles, California, February 2008.

Download

Download paper from HAL (Hyper Archive en ligne)

Download paper (PDF)

Copyright notice:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

This paper presents the work performed in the context of the VIMANCO ESA project, whose objective is to improve the autonomy, safety and robustness of robotic systems using vision. The approach we propose is based on an up-to-date recognition and 3D tracking method that makes it possible to determine whether a known object is visible in a single image, to compute its pose, and to track it in real time along the image sequence acquired by the camera, even in the presence of varying lighting conditions, partial occlusions, and aspect changes. The robustness of the proposed method is achieved by combining an efficient low-level image processing step, statistical techniques that account for potential outliers, and a formulation of the registration step as a closed-loop minimization scheme. This approach is valid when only one camera observes the object, but it can also be applied to a multi-camera system. Finally, it provides all the data necessary for the manipulation of non-cooperative objects using the general formalism of visual servoing, a closed-loop control scheme on visual data expressed either in the image, in 3D, or in both spaces simultaneously. This formalism applies regardless of the configuration of the vision sensors (one or several cameras) with respect to the robot arms (eye-in-hand or eye-to-hand systems). The global approach has been integrated and validated on the Eurobot testbed located at ESTEC.
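For readers unfamiliar with the visual servoing formalism mentioned in the abstract, the sketch below illustrates the classical image-based control law v = -λ L⁺ (s - s*), where s gathers the current visual features, s* their desired values, and L⁺ the pseudo-inverse of the interaction matrix. This is not the VIMANCO implementation; the point-feature choice, function names and gain are assumptions made for illustration only.

```python
# Minimal, illustrative sketch of an image-based visual servoing (IBVS) control
# law of the form v = -lambda * pinv(L) * (s - s*). NOT the VIMANCO code:
# feature choice, interaction-matrix model and all names are assumptions.
import numpy as np

def interaction_matrix_point(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def ibvs_velocity(points, desired_points, depths, gain=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) driving the features
    'points' toward 'desired_points' (both Nx2 arrays, normalized coords)."""
    L = np.vstack([interaction_matrix_point(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    error = (np.asarray(points) - np.asarray(desired_points)).ravel()
    # Closed-loop control: exponential decrease of the error when L is well conditioned.
    return -gain * np.linalg.pinv(L) @ error

# Usage example: four coplanar points slightly offset from their desired positions.
s_star = np.array([[-0.1, -0.1], [0.1, -0.1], [0.1, 0.1], [-0.1, 0.1]])
s = s_star + 0.02
v = ibvs_velocity(s, s_star, depths=[1.0] * 4)
print(v)  # 6-vector camera velocity command
```

The same closed-loop structure applies whether the features are expressed in the image (as here), in 3D from the estimated pose, or in both spaces; only the features and their interaction matrix change.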

Contact

François Chaumette

BibTeX Reference

@InProceedings{Kapellos08a,
   Author = {Kapellos, K. and Chaumette, F. and Vergauwen, M. and Rusconi, A. and Joudrier, L.},
   Title = {Vision-based control for space applications},
   BookTitle = {Int. Symp. on Artificial Intelligence, Robotics and Automation in Space, iSAIRAS'2008},
   Address = {Los Angeles, California},
   Month = {February},
   Year = {2008}
}

EndNote Reference

Get EndNote Reference (.ref)