
Kapellos06a

K. Kapellos, F. Chaumette, M. Vergauwen, A. Rusconi, L. Joudrier. Vision manipulation of non-cooperative objects. In 9th ESA Workshop on Advanced Space Technologies for Robotics and Automation, ASTRA 2006, Pages 279-286, Noordwijk, The Netherlands, November 2006.

Download

Download paper from HAL (Hyper Archive en ligne)

Download paper: Adobe Portable Document Format (pdf)

Copyright notice:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

This paper presents the work performed in the context of the on-going VIMANCO project, whose objective is to improve the autonomy, safety and robustness of robotic systems using vision. Vision is certainly the most adequate exteroceptive sensor for dealing with complex and varying environments and with manipulation tasks on non-cooperative objects. The approach we propose is based on an up-to-date recognition and 3D tracking method that features many advantages with respect to other approaches. First of all, it makes it possible to determine whether a known object is visible in a single image. It also allows the object's pose to be computed and the object to be tracked in real time along the image sequence acquired by the camera, even in the presence of varying lighting conditions, partial occlusions, and aspect changes. The robustness of the proposed method is achieved by combining an efficient low-level image processing step, statistical techniques that take potential outliers into account, and a formulation of the registration step as a closed-loop minimization scheme. This approach is valid when a single camera observes the object, but it can also be applied to a multi-camera system. Finally, it provides all the data necessary for the manipulation of non-cooperative objects using the general formalism of visual servoing, a closed-loop control scheme on visual data expressed either in the image, in 3D, or in both spaces simultaneously. This formalism applies whatever the configuration of the vision sensors (one or several cameras) with respect to the robot arms (eye-in-hand or eye-to-hand systems).
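As an illustration of the visual servoing formalism mentioned in the abstract (not code from the paper itself), the sketch below implements the classical image-based control law v = -λ L⁺ (s - s*), where L is the interaction matrix relating feature motion to camera velocity. The gain value, feature vectors and interaction matrix used in the example are purely hypothetical placeholders, assumed here for the sake of a runnable snippet.

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classical image-based visual servoing control law:
    v = -lambda * L^+ * (s - s*), with L^+ the Moore-Penrose
    pseudo-inverse of the interaction matrix L.

    s, s_star : current and desired visual features, shape (k,)
    L         : interaction (image Jacobian) matrix, shape (k, 6)
    Returns the 6-DOF camera velocity screw [vx, vy, vz, wx, wy, wz].
    """
    e = s - s_star                       # visual error driven to zero in closed loop
    return -lam * np.linalg.pinv(L) @ e  # velocity command sent to the robot

if __name__ == "__main__":
    # Illustrative values only: two image-point features (k = 4).
    s      = np.array([0.10, 0.05, -0.08, 0.12])  # current features
    s_star = np.zeros(4)                           # desired features
    L      = np.random.randn(4, 6)                 # placeholder interaction matrix
    print(ibvs_velocity(s, s_star, L))
```

The same structure holds whether the error is expressed in the image, in 3D, or in both spaces: only the feature vector s and the corresponding interaction matrix L change.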

Contact

François Chaumette

BibTex Reference

@InProceedings{Kapellos06a,
   Author = {Kapellos, K. and Chaumette, F. and Vergauwen, M. and Rusconi, A. and Joudrier, L.},
   Title = {Vision manipulation of non-cooperative objects},
   BookTitle = {9th ESA Workshop on Advanced Space Technologies for Robotics and Automation, ASTRA 2006},
   Pages = {279--286},
   Address = {Noordwijk, The Netherlands},
   Month = {November},
   Year = {2006}
}

EndNote Reference

Get EndNote Reference (.ref)