The purpose of this paper is to present a novel multimodal approach to the problem of planning and performing a reliable grasping action on unmodeled objects.
The robotic system is composed of three main components. The first is a conceptual manipulation framework based on grasping primitives. The second is a visual processing module that uses stereo images and biologically inspired algorithms to accurately estimate the pose, size, and shape of an unmodeled target object. A grasp action is planned and executed by the third component, a reactive controller that uses tactile feedback to compensate for possible inaccuracies and thus complete the grasp even in difficult or unexpected conditions.
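The three-component architecture described above can be sketched as a simple vision-to-primitive-to-controller pipeline. This is an illustrative assumption only: all class, function, and parameter names below are hypothetical and do not reflect the authors' actual implementation.

```python
# Hypothetical sketch of the three-stage grasping pipeline:
# stereo vision estimate -> grasping primitive selection -> reactive correction.
# All names here are illustrative assumptions, not the authors' API.

from dataclasses import dataclass

@dataclass
class ObjectEstimate:
    pose: tuple   # (x, y, z) position from stereo vision, in metres
    size: float   # characteristic object dimension, in metres
    shape: str    # coarse shape category, e.g. "cylinder"

def visual_estimate(stereo_pair):
    """Stand-in for the stereo vision module (pose, size, shape)."""
    # The real system uses biologically inspired algorithms; a fixed
    # estimate is returned here purely for illustration.
    return ObjectEstimate(pose=(0.3, 0.0, 0.1), size=0.05, shape="cylinder")

def select_grasp_primitive(est):
    """Map the coarse shape estimate to a grasping primitive."""
    primitives = {"cylinder": "wrap", "box": "pinch", "sphere": "tripod"}
    return primitives.get(est.shape, "pinch")

def reactive_grasp(primitive, est, tactile_correction=0.01):
    """Close the loop with tactile feedback to correct the planned grasp."""
    # The correction term stands in for adjustments driven by contact sensing.
    return {"primitive": primitive, "aperture": est.size + tactile_correction}

estimate = visual_estimate(None)
grasp = reactive_grasp(select_grasp_primitive(estimate), estimate)
```

The point of the sketch is the flow of information: the visual estimate drives primitive selection, and tactile feedback corrects the executed grasp rather than the plan.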
Theoretical analysis and experimental results have shown that the proposed approach to grasping, based on the concurrent use of complementary sensory modalities, is very promising and suitable even for changing, dynamic environments.
Additional setups with more complicated shapes are being investigated, and each module is being improved in both hardware and software.
This paper introduces a novel, robust, and flexible grasping system based on multimodal integration.
Grzyb, B.J., Chinellato, E., Morales, A. and del Pobil, A.P. (2009), "A 3D grasping system based on multimodal visual and tactile processing", Industrial Robot, Vol. 36 No. 4, pp. 365-369. https://doi.org/10.1108/01439910910957138
Copyright © 2009, Emerald Group Publishing Limited