
Tridimensional Visual Servoing

Gustavo SCHLEYER, Gastón LEFRANC
Escuela de Ingeniería Eléctrica, Pontificia Universidad Católica de Valparaíso, Chile
P.O. Box 4059, Valparaíso, Chile

Abstract: This paper presents a tridimensional visual servoing system that provides the position, height and orientation of several objects present in the working area of a robot manipulator. With this information, the robot manipulator's end-effector can pick and place objects at a specific position. A pair of stereo cameras provides the feedback used to obtain a particular position of the robot's end-effector. The servoing implemented is based on the lateral stereo vision model. The system is tested experimentally in real time on a SCARA manipulator, with the stereo cameras and image processing implemented in Matlab.

Keywords: Visual servoing systems, robotic manipulators, stereo vision, kinematics control.

Gustavo Schleyer obtained the Civil Electronic Engineering degree and the Master of Science at Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile, in 2008. He is currently a PhD candidate. His research interests are in computer vision, image processing, image segmentation, robotics and artificial intelligence.

Gastón Lefranc graduated as a Civil Electrical Engineer from Universidad de Chile and received the Master of Science from Northwestern University, Illinois, USA, in 1979. He has been Full Professor at the Escuela de Ingeniería Eléctrica, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile, since 1974. He has published more than 110 papers in conferences and journals and has worked on more than 20 research projects, including a UNIDO project. He has been Chairman of the IEEE Chile Section; Chairman of the IEEE Chilean Chapters on Control, Robotics and SMC; President of ACCA, Asociación Chilena de Control Automático (related to IFAC); and a member of an IFAC Technical Committee. His research interests are in computer vision, computer networks, Flexible Manufacturing Systems (FMS), Petri nets, robotics including robot colonies, artificial intelligence, multi-agent systems, automatic control and modelling.

CITE THIS PAPER AS:
Gustavo SCHLEYER, Gastón LEFRANC, Tridimensional Visual Servoing, Studies in Informatics and Control, ISSN 1220-1766, vol. 18 (3), pp. 271-278, 2009.

1. Introduction

Flexible Manufacturing Systems (FMS) use robotic manipulators in their flexible manufacturing cells to perform different tasks, such as picking and placing materials, parts or products, assembling products, and quality control. These tasks require speed and precision in order to obtain economic advantages and sound engineering. Conventional robot manipulators have limited positioning accuracy and need time to perform a task.

A visual servoing system meets these requirements: it uses a vision system to control the position of a robotic manipulator. Visual servoing does not need to know the coordinates of the workpiece a priori and may not require robot teaching, allowing non-repetitive tasks, especially in assembly. Vision feedback control loops have been introduced in order to increase the flexibility, speed and accuracy of robot systems [1], [2] – [5].

Vision-based robot control is classified into two groups [6]: position-based and image-based control systems. In a position-based control system, the input is computed in the three-dimensional Cartesian space (3-D visual servoing) [7]. The position of the target with respect to the camera is estimated from image features corresponding to the perspective projection of the target in the image. There exist several methods to recover the pose of an object, all of them requiring a perfect geometric model of the object and the calibration of the camera to obtain unbiased results. In an image-based control system, the input is computed in the 2-D image space (2-D visual servoing) [8].

Image-based visual servoing is robust with respect to camera and robot calibration errors. However, its convergence is theoretically ensured only in a region around the desired position. Except in very simple cases, the analysis of stability with respect to calibration errors seems to be impossible, since the system is coupled and nonlinear.
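As background, the classic image-based control law (a textbook formulation along the lines of the tutorial in [2], not code from this paper) drives the feature error to zero with a proportional law on the pseudo-inverse of the interaction matrix. A minimal sketch, with hypothetical inputs:

```python
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classic image-based visual servoing law: v = -gain * pinv(L) @ (s - s_star).

    s, s_star : current and desired image-feature vectors
    L         : interaction matrix (image Jacobian) mapping camera velocity
                to feature velocity, s_dot = L @ v
    Returns the commanded 6-D camera velocity screw.
    """
    error = s - s_star
    return -gain * np.linalg.pinv(L) @ error
```

With a well-conditioned interaction matrix this makes the feature error decrease exponentially; in practice L is estimated at the current or the desired pose.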

A newer approach is called 2-1/2-D visual servoing, since its input is expressed partly in the 3-D Cartesian space and partly in the 2-D image space [9].

There exist several techniques to extract 3-D information. Some of them, called direct sensing, estimate the distance to an object from the time between transmission and reception of a wave in a known propagation medium; this can be done with laser, ultrasound or radar. The disadvantage is that they measure one point at a time. Another technique uses shadows to compute the depth of an object [8]. A method for determining depth from focus [10] relates the distance from the camera to objects out of focus, and needs two images. In techniques using an encoded light pattern, the objects are illuminated at one point, in a plane, or with a mesh of points by a projector whose position and orientation with respect to the camera are known [11].

Yet another technique uses two perpendicular cameras at specific positions, obtaining two images from which the spatial information of the object is computed.

Stereo vision uses two cameras focused on the same object from different viewpoints and determines the distance to objects by triangulation from the differences between the two images. There are several models for stereo vision, such as the lateral model, the axial model and generalized arrays of stereo cameras [12].
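The triangulation behind the lateral (parallel-axis) stereo model can be sketched as follows. This is an illustrative Python example, not the paper's Matlab implementation; it assumes two identical, rectified cameras with focal length f (in pixels) and baseline b, so that a point imaged at horizontal positions x_left and x_right on the same image row has disparity d = x_left - x_right and depth z = f * b / d:

```python
def stereo_depth(x_left, x_right, baseline, focal_length):
    """Depth by triangulation in the lateral (parallel-axis) stereo model.

    x_left, x_right : horizontal image coordinates of the same point (pixels)
    baseline        : distance between the two camera centers (metres)
    focal_length    : camera focal length (pixels)
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_length * baseline / disparity

# Example: f = 800 px, b = 0.12 m, disparity = 32 px -> z = 3.0 m
z = stereo_depth(416.0, 384.0, baseline=0.12, focal_length=800.0)
```

Depth resolution degrades with distance, since a fixed one-pixel disparity error corresponds to a larger depth error for small disparities.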

Previous work by the authors is related to a servoing system using one camera, applied to a pick and place task [13], [14].

This paper presents a visual system that gives the position, height and orientation of several objects present in the working area of the robot manipulator. With this information, the robot manipulator's end-effector can pick and place objects at a specific position. A pair of stereo cameras provides the feedback used to obtain a particular position of the robot's end-effector. The servoing implemented is based on the lateral stereo vision model. This servoing system is tested experimentally in real time on a 5-degrees-of-freedom SCARA manipulator, including the stereo cameras and image processing using Matlab.

5. Conclusions

We have presented a visual system that gives the position, height and orientation of several objects present in the working area of the robot manipulator. With this information provided by the 3-D servoing system, the robot manipulator's end-effector can pick and place objects at a specific position. A pair of stereo cameras provides the feedback used to obtain a particular position of the end-effector of the SCARA 7475 robotic manipulator. The servoing implemented is based on the lateral stereo vision model.

The system has the capacity to identify the spatial position and the orientation of several static objects present in the common workspace of the stereo vision cameras; that information is then sent to the manipulator to pick and place the objects. The (x, y) manipulator coordinates are obtained by applying equations based on the centroids of the objects. The z coordinate is computed using the equations of the lateral stereo vision model, with a zero-adjust term and a scale factor that make the original coordinate system coincide with the manipulator coordinate system. The orientation angle is computed by an algorithm.
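The centroid and orientation computation described above can be illustrated with image moments. This is a hedged Python sketch, not the authors' Matlab code; the moment-based angle formula used here is a standard choice and not necessarily the paper's algorithm:

```python
import numpy as np

def centroid_and_orientation(mask):
    """Centroid (x, y) and principal-axis angle of a binary object mask.

    Uses the pixel mean for the centroid and central second-order moments
    for the orientation: theta = 0.5 * atan2(2*mu11, mu20 - mu02).
    """
    ys, xs = np.nonzero(mask)          # pixel coordinates of the object
    cx, cy = xs.mean(), ys.mean()      # centroid in image coordinates
    mu11 = ((xs - cx) * (ys - cy)).sum()
    mu20 = ((xs - cx) ** 2).sum()
    mu02 = ((ys - cy) ** 2).sum()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return (cx, cy), theta
```

The (cx, cy) pair would then be mapped to manipulator coordinates through the calibrated scale factor and offset, and the depth through the lateral-model equations.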

The stereo vision system is evaluated in a flexible assembly cell.

REFERENCES

  1. PEREZ, M. A., P. A. COOK, Open Platform for Real-time Robotic Visual Servoing, The 10th IASTED International Conference on Robotics and Applications, Honolulu, Hawaii, USA, August 2004.
  2. HUTCHINSON, S., G. D. HAGER, P. I. CORKE, A Tutorial on Visual Servo Control. IEEE Trans. On Robotics and Automation 12, 1996, pp. 651-670.
  3. COLLEWET, C., F. CHAUMETTE, Positioning a Camera with Respect to Planar Objects of Unknown Shape by Coupling 2-d Visual Servoing and 3-d Estimations. IEEE Transactions on Robotics and Automation, 18(3) June 2002, pp. 322-333.
  4. KRUPA, A., J. GANGLOFF, C. DOIGNON, M. F. DE MATHELIN, G. MOREL, J. LEROY, L. SOLER, J. MARESCAUX, Autonomous 3-D Positioning of Surgical Instruments in Robotized Laparoscopic Surgery Using Visual Servoing, IEEE Transactions on Robotics and Automation, 19(5), October 2003, pp. 842-853.
  5. ASTOLFI, A., L. HSU, M. S. NETTO, R. ORTEGA, Two Solutions to the Adaptive Visual Servoing Problem, IEEE Transactions on Robotics and Automation, 18(3), June 2002, pp. 387-392.
  6. WEISS L. E., A. C. SANDERSON, C. P. NEUMAN, Dynamic Sensor Based Control of Robots with Visual Feedback, IEEE J. Robot. Automat, vol. 3, Oct. 1987, pp. 404-417.
  7. WILSON, W.J., C. C. WILLIAMS HULLS, G. S. BELL, Relative End-effector Control Using Cartesian Position Based Visual Servoing, Robotics and Automation, IEEE Transactions on, Volume: 12 Issue 5, Oct. 1996, pp. 684 -696.
  8. ESPIAU, B., F. CHAUMETTE, P. RIVES, A New Approach to Visual Servoing in Robotics, Robotics and Automation, IEEE Transactions on, Volume: 8 Issue 3, June 1992, pp. 313 -326.
  9. MALIS, E. F., CHAUMETTE, S. BOUDET, 2-1/2-D Visual Servoing, Robotics and Automation, IEEE Transactions on, vol. 15, no. 2, April 1999, pp. 238-250.
  10. ENS, J., P. LAWRENCE, An Investigation of Methods for Determining Depth from Focus, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 2, February 1993, pp. 97-107.
  11. IRVING, R. B., D. M. McKEOWN, Methods for Exploiting the Relationship Between Buildings and Their Shadows in Aerial Imagery, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 19, No. 6, December 1989, pp. 1564-1575.
  12. VUYLSTEKE, P., A. OOSTERLINCK, Range Image Acquisition with a Single Binary – Encoded Light Pattern, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 2, February 1990, pp. 148-164.
  13. ALVERTOS, N., D. BRZAKOVIC, Camera Geometries for Image Matching in 3-D Machine Vision, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 11, No. 9, September 1989, pp. 897-914.
  14. LEFRANC, G., Servoing Systems: A Tutorial, IEEE International Symposium on Robotics and Automation, 2002, Toluca, México.
  15. LEFRANC, G., F. CANO, Sistema Servoing Pick and Place, Congreso Latinoamericano de Control Automático, 2002, Guadalajara, México.
  16. SCHLEYER, G., G. LEFRANC, Experimental 3-D Visual Servoing for FMS Applications, IFAC IEEE MCPL2004, Santiago, Chile, 2004.
  17. UMBAUGH, S., Computer Vision and Image Processing, Prentice Hall PTR, NJ, 1999.
  18. LEIGHTON, F., G. LEFRANC, Flexible Assembly Cell Using Scara Manipulator, MCPL2004, Chile.
  19. XIE Shao-Rong, LUO Jun, RAO Jin-Jun, GONG Zhen-Bang. Computer Vision-based Navigation and Predefined Track Following Control of a Small Robotic Airship. Acta Automatica Sinica, Volume 33, Issue 3, March 2007, pp. 286-291.
  20. OPASO, M., R. HERNÁNDEZ, G. LEFRANC, Visual Servoing Using Camera in Effector, IFAC MCPL'07, Sibiu, Romania, 2007.