
Movement and Color Detection of a Dynamic Object: An application to a Mobile Robot

Roman Osorio
Departamento de Ingeniería de Sistemas Computacionales y Automatización, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México

Sinuhé García
Departamento de Ingeniería de Sistemas Computacionales y Automatización, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México

Mario Peña
Departamento de Ingeniería de Sistemas Computacionales y Automatización, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México

Ismael Lopez-Juarez
CINVESTAV, Grupo de Robótica y Manufactura Avanzada

Gaston Lefranc
Pontificia Universidad Católica de Valparaíso
Avda. Brasil 4950, Valparaíso Chile


This paper describes the integration of several image processing algorithms needed to recognize the color and the movement of an object. The main objective is to detect an object by its color and to track it with a mobile robot. A mean filter is first applied to smooth and sharpen the input image. An RGB filter is then applied to detect the target color, after which the center of mass and the area of the object are computed to locate its position in the real environment and drive the robot's motion. These algorithms are applied to a mobile robot tracking an object in a test scenario.


Keywords: image processing, color recognition, mobile robot.

Roman OSORIO, Sinuhe GARCIA, Mario PENA, Ismael LOPEZ-JUAREZ, Gaston LEFRANC, Movement and Color Detection of a Dynamic Object: An application to a Mobile Robot, Studies in Informatics and Control, ISSN 1220-1766, vol. 21 (1), pp. 33-40, 2012.

1. Introduction

This paper presents an imaging methodology for detecting the movement of dynamic objects in mobile robot applications, using an optical color camera and computer vision algorithms, with all hardware and software elements integrated as a single component.

The main objective of the developed methodology is to integrate different computer vision algorithms, combining filtering, feature extraction, and center-of-mass computation in real time to find and track moving objects.

One current application of image processing is the implementation of vision systems in mobile robots, which typically include one or two cameras. With a stereo vision system, the cameras can acquire images of the environment and process them [10]. Image processing yields important information, and the robot's actions are driven by these data. One such application is the tracking of a colored object by a robot [1], [2].

There exist different methods to detect the movement of a dynamic object [9], [15] – [19]. Video-based moving-object detection is a research area related to image processing, pattern recognition, and artificial intelligence. Real-time moving-object detection is the premise of video tracking and analysis, and it has great theoretical and practical value. A typical approach takes a video sequence and performs motion detection and motion segmentation; this can be done by background modeling and subtraction, followed by processing of the images to obtain a mask of the object to follow [9].

The shape-based approach is often insufficient, especially for large data sets [12]. An alternative is to use color (reflectance) information: it is well known that color provides powerful cues for object recognition, even in the total absence of shape information. A common recognition scheme represents and matches images on the basis of color-invariant histograms [7], [13], [14]. A tracking system for moving objects with specified color and motion information, based on color transformation and AWUPC computation, is proposed in [16]. Another work presents a framework for object detection using kinematic manifold embedding and decomposable generative models via kernel maps and multilinear analysis [17].
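The background-subtraction step described in [9] can be illustrated with a minimal frame-differencing sketch. This is a hedged illustration, not the cited method: it compares each frame against a single reference frame, whereas a real system would maintain a statistical background model. Frames are represented here as nested lists of grayscale values, and the threshold is an illustrative choice.

```python
def motion_mask(background, frame, threshold=30):
    """Binary mask marking pixels whose absolute difference
    from the background frame exceeds the threshold."""
    return [
        [1 if abs(f - b) > threshold else 0
         for f, b in zip(frame_row, bg_row)]
        for frame_row, bg_row in zip(frame, background)
    ]

# Toy example: a 3x4 background and a frame where a bright "object" appears.
background = [[10, 10, 10, 10],
              [10, 10, 10, 10],
              [10, 10, 10, 10]]
frame = [[10, 10, 10, 10],
         [10, 200, 210, 10],
         [10, 10, 10, 10]]

mask = motion_mask(background, frame)
# mask == [[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
```

The resulting mask is exactly the kind of object mask that the subsequent segmentation and tracking stages consume.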
The color-based matching approach is widely used in areas such as object recognition, content-based image retrieval, and video analysis. When a robot follows a colored object, it can perform several moves: forward, backward, and right/left turns. The SRV-1 Blackfin robot camera used in this research captures frames at resolutions of 160×120, 320×240, and 640×480. By applying image filters it is possible to interpret the captured images and obtain information relevant to the mobile robot [3]-[6].

One of the most successful recognition algorithms is based on the Scale Invariant Feature Transform (SIFT) [21]. SIFT has been integrated into a number of commercial products, including Sony's AIBO, Bandai's NetTansor robots, and the visual Simultaneous Localization and Mapping (vSLAM) system by Evolution Robotics [20]. An unsupervised algorithm has also been published that learns object color and locality cues from sparse motion information: it first detects key frames with reliable motion cues, then estimates moving sub-objects from these cues using a Markov Random Field framework, and from these sub-objects learns an appearance model as a color Gaussian Mixture Model [19].
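The color-histogram matching scheme cited above ([13]) can be sketched in a few lines. This is a simplified illustration under stated assumptions: each RGB channel is quantized into two coarse bins (the bin count and pixel format are illustrative choices), and images are compared by normalized histogram intersection, where 1.0 means identical color content.

```python
def rgb_histogram(pixels, bins=2):
    """Coarse RGB histogram: each channel quantized into `bins` levels."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    return hist

def intersection(h1, h2):
    """Normalized histogram intersection (Swain & Ballard style)."""
    return sum(min(a, b) for a, b in zip(h1, h2)) / max(1, sum(h1))

red_object  = [(250, 10, 10)] * 8   # 8 reddish pixels
same_red    = [(240, 20, 15)] * 8   # slightly different reds, same bins
blue_object = [(10, 10, 250)] * 8   # clearly different color

h = rgb_histogram(red_object)
score_same = intersection(h, rgb_histogram(same_red))    # → 1.0
score_diff = intersection(h, rgb_histogram(blue_object)) # → 0.0
```

The coarse binning is what gives the scheme its tolerance to small illumination and shade variations: nearby colors fall into the same bin.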

SIFT is a method that recognizes multiple objects in query images based on minimal training input. It is simple to use: no specialized expertise, dataset, or equipment is required to train new models. Discrimination between multiple learned objects is handled efficiently. It is robust to changes in scale and to in-plane rotations, and it accommodates the mild perspective distortions that arise from out-of-plane rotations [21]. Acquired images contain many defects and noise, not all of them common; real-time systems have nonetheless been developed for the inspection, detection, and tracking of moving objects in production settings, such as potato inspection, where potatoes are graded by size and color on the fly while passing on a belt conveyor [25].

A machine vision system trained to distinguish between objects of the same class with different characteristics uses thresholding techniques for image segmentation [26]. A neural-network and vector-description methodology for recognizing manufacturing objects and computing their pose uses a color classification method based on image processing techniques [27].

A new moving-object detection algorithm has also been proposed, in which a pixel and its neighbors form an image vector representing that pixel, and each chrominance component is modeled as a mixture of Gaussians. To make full use of spatial information, color segmentation and the background model are combined. Simulation results show that the algorithm can detect intact moving objects even when the foreground has low contrast with the background [18].
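The per-pixel Gaussian modeling idea from [18] can be sketched for a single pixel as a running mean and variance updated online. This is a hedged, single-Gaussian simplification of the mixture model in the cited work; the learning rate `alpha`, the initial variance, and the `k`-sigma decision rule are illustrative assumptions.

```python
class PixelGaussian:
    """Single-pixel background model: one running Gaussian, updated online.

    A simplification of a mixture-of-Gaussians background model
    to a single component per pixel, for illustration only.
    """
    def __init__(self, mean, var=25.0, alpha=0.05):
        self.mean = float(mean)   # estimated background intensity
        self.var = var            # estimated variance
        self.alpha = alpha        # learning rate (assumed value)

    def is_foreground(self, value, k=2.5):
        """Foreground if the value lies more than k std devs from the mean."""
        return abs(value - self.mean) > k * self.var ** 0.5

    def update(self, value):
        """Blend the new observation into the background estimate."""
        d = value - self.mean
        self.mean += self.alpha * d
        self.var = (1 - self.alpha) * self.var + self.alpha * d * d

model = PixelGaussian(mean=100.0)
assert not model.is_foreground(103)   # small fluctuation: background
assert model.is_foreground(200)       # large jump: moving object
```

Running one such model per pixel (or one per chrominance component, as in [18]) yields the foreground mask that the color-segmentation stage then refines.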

In this paper, a moving object is detected by its color and tracked with a mobile robot. A mean filter is applied to smooth and sharpen the input image, and an RGB filter is applied for color detection. The center of mass and the area of the object are then calculated. These algorithms are applied to a mobile robot tracking an object in a test scenario.
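A minimal sketch of this pipeline (mean filter, RGB color threshold, then centroid and area of the resulting mask) might look as follows. The 3×3 box kernel, the red-detection thresholds, and the nested-list image representation are illustrative assumptions, not the authors' implementation.

```python
def mean_filter(img):
    """3x3 mean (box) filter on a 2D grayscale image; border pixels kept."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) // 9
    return out

def red_mask(img_rgb, r_min=150, gb_max=100):
    """Binary mask of 'red enough' pixels: high R, low G and B (assumed thresholds)."""
    return [[1 if r >= r_min and g <= gb_max and b <= gb_max else 0
             for r, g, b in row] for row in img_rgb]

def centroid_and_area(mask):
    """Center of mass (x, y) and area of a binary mask."""
    xs = ys = area = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                xs += x; ys += y; area += 1
    return (xs / area, ys / area), area

# Toy 3x4 RGB image with a two-pixel red "object" in the middle row.
img = [[(200, 20, 20) if 1 <= x <= 2 and y == 1 else (0, 0, 0)
        for x in range(4)] for y in range(3)]
(cx, cy), area = centroid_and_area(red_mask(img))
# area == 2, centroid at (1.5, 1.0)
```

The centroid gives the object's position in image coordinates, which the robot controller maps to a steering command, and the area serves as a rough proxy for distance to the object.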


REFERENCES

  1. OSORIO, R., M. PEÑA, G. ACOSTA, C. TORRES, Clasificador de Color utilizando Procesamiento de Imágenes, X Congreso de la Asociación Chilena de Control Automático, 1992, pp. 51-56.
  2. OSORIO, R., G. AGUILAR, Reconocimiento de Trayectoria para un Robot Móvil Aplicando Teoría del Color, XII Congreso Chileno de Ingeniería Eléctrica, Temuco, Chile, 1997.
  3. OSORIO, R., F. PESSANA, Cuantificación y Reconocimiento de Formas Utilizando procesamiento de Imágenes, International Conference on Automatic Control PADI2, Piura-Perú, 1998. pp. 76-82.
  4. OSORIO, R., F. PESSANA, M. PEÑA, J. SAVAGE, Reconocimiento de Objetos Aplicados a un Sistema de Visión para Robots World Multiconference on Systemics Cybernetics and Informatics. Orlando, Florida, USA, 1998. pp. 113-117.
  5. OSORIO, R., M. PEÑA, C. SAN MARTIN, High Dynamic Range Analysis Method for Color Image Enhancement, Jornadas chilenas de computación, 2009. pp. 224-228.
  6. LEE, K. E., W. CHOE, J.-H. KWON, S. LEE. Locally Adaptive High Dynamic Range Image Reproduction Inspired by Human Visual System, Proc. SPIE 7241, 72410T (2009); doi:10.1117/12.806057.
  7. GONZALEZ, WOODS, Digital Image Processing. ISBN 9780131687288, Prentice Hall, 2008.
  8. FISHER, R., S. PERKINS, A. WALKER, E. WOLFART, Hypermedia Image Processing Reference, Published by J. Wiley & Sons, Ltd. 2003.
  9. BUGEAU, A., P. PEREZ, Detection and Segmentation of Moving Object in Highly Dynamic Scenes. IEEE Conference on Computer Vision and Pattern Recognition CVPR ’07, 2007. pp. 1-8.
  10. SCHLEYER, G., G. LEFRANC, Tridimensional Visual Servoing, Studies in Informatics and Control, Vol. 18, No. 3, 2009, pp. 271-278.
  11. VERNON, D., Machine Vision, Chapter 4, Prentice-Hall, 1991.
  12. GEVERS, TH., A. W. M. SMEULDERS, Image Indexing using Composite Color and Shape Invariant Features, Int. Conf. on Computer Vision, Bombay, India, 1998, pp. 234-238.
  13. SWAIN, M. J., D. H. BALLARD, Color Indexing, International Journal of Computer Vision, Vol. 7, No. 1, 1991. pp. 11-32.
  14. FUNT, B. V., G. D. FINLAYSON, Color Constant Color Indexing, IEEE PAMI, 17(5), 1995, pp. 522-529.
  15. LIPTON, A. J., H. FUJIYOSHI, R. S. PATIL, Moving Target Classification and Tracking. Proceedings of Fourth IEEE Workshop on Applications of Computer Vision WACV ’98, 1998. pp 8-14.
  16. KIM, S., S. LEE, S. KIM, J. LEE, Object Tracking of Mobile Robot using Moving Color and Shape Information for the aged walking. International Journal of Advanced Science and Technology, Vol. 3, 2009, pp 293-297.
  17. LIU, F., M. GLEICHER, Learning Color and Locality Cues for Moving Object Detection and Segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 320-327.
  18. FANG, X. H., W. XIONG, B. J. HU, L. T. WANG, A Moving Object Detection Algorithm Based on Color Information, Journal of Physics: Conference Series Vol. 48, 2006, pp 384-387.
  19. DONG L., L. XI, Monocular-vision-based Study on Moving Object Detection and Tracking, 4th International Conference on New Trends in Information Science and Service Science (NISS), 2010, pp. 24-29.
  20. KARLSSON, N., E. DI BERNARDO, J. OSTROWSKI, L. GONCALVES, P. PIRJANIAN, M. E. MUNICH, The vSLAM Algorithm for Robust Localization and Mapping, IEEE International Conference on Robotics and Automation, Proceedings, 2005, pp. 24-29.
  21. LOWE, D. G., Object Recognition from Local Scale-Invariant Features. Proceedings of the Seventh IEEE International Conference on Computer Vision, 2, 1999, pp. 1150-1157.
  22. BROSNAN, T., D.-W. SUN, Improving Quality Inspection of Food Products by Computer Vision – A Review, Journal of Food Engineering, v 61, n 1, 2004, pp. 3-16.
  23. TOPOLESKI, L. D., Hort 220 Vegetable Identification, Yard & Garden Line News, Volume 2, Number 6, May 1, 2000.
  24. ZHOU, L., V. CHALANA, Y. KIM, PC-based Machine Vision System for Real-Time Computer-Aided Potato Inspection, International Journal of Imaging Systems and Technology, v 9, n 6, 1998, pp. 423-433.
  25. NOORDAM, J. C., G. W. OTTEN, A. J. M. TIMMERMANS, B. H. VAN ZWOL, High Speed Potato Grading and Quality Inspection Based on a Color Vision System, Proc. SPIE Vol. 3966, Machine Vision Applications in Industrial Inspection VIII, Kenneth W. Tobin; (Editors). 2000, pp. 206-217.
  26. PEÑA-CABRERA M., I. LOPEZ-JUAREZ, R. RIOS-CABRERA, J. CORONA-CASTUERA, Machine Vision Approach for Robotic Assembly, Journal of Assembly Automation, ISSN 0144-5154, Vol. 25, No 3, 2005. pp 204-216.
  27. LEIGHTON F., R. OSORIO, G. LEFRANC, Modeling, Implementation and Application of a Flexible Manufacturing Cell, International Journal of Computers Communications & Control, ISSN 1841-9836, 6(2). 2011. pp. 278-285.