
Fast Edge Detection Algorithm for Embedded Systems

Syed Usama KHALID BUKHARI1, Remus BRAD2

1 Faculty of Engineering, “Lucian Blaga” University of Sibiu
10, Victoriei Blvd, Sibiu 550024, Romania
2 Computer Science Department, Lucian Blaga University of Sibiu
10, Victoriei Blvd, Sibiu 550024, Romania

Abstract: Image processing is currently moving from desktop implementations to mobile or embedded ones. In the case of automotive image processing, limited memory and CPU frequency reduce the applicability of current algorithms and the possibility of real-time processing. In this respect, we propose two methods for the fast computation of edge detection. Their accuracy and speedup were compared with those of baseline methods such as the Canny and Sobel detectors. For a solid reference, the Berkeley Computer Vision Group datasets were employed as benchmarks, and good results were obtained over one hundred images from the set. In view of an embedded implementation, two platforms were used to evaluate the proposed methods and the references. Performing two times faster and with similar accuracy, our algorithms could find evident applications in the growing field of embedded devices.

Keywords: edge detection; algorithm complexity; real time computation; embedded systems.

Syed Usama KHALID BUKHARI, Remus BRAD, Constantin BĂLĂ-ZAMFIRESCU, Fast Edge Detection Algorithm for Embedded Systems, Studies in Informatics and Control, ISSN 1220-1766, vol. 23 (2), pp. 163-170, 2014.

  1. Introduction

In future industrial applications, contextual data and knowledge will be captured and provided by embedded devices situated in “smart environments” [1] that tightly integrate computational, physical and social elements. This complex integration is studied under a broad range of emerging concepts, such as “ubiquitous intelligent systems” [2], the “Internet of Things” [3] and “cyber-physical systems” [4].

Among the many enabling technologies for non-intrusive context-aware factory automation (e.g. global or local positioning systems, barcodes, RFID, Bluetooth, NFC), computer vision techniques nowadays play an increasing role in maintenance and quality management (e.g. by providing, on special displays, real-time repair information in an augmented reality created from the identification of objects), or even in manufacturing (e.g. by giving the human operator contextualized, step-by-step instructions on how to execute the operations). Despite their real potential for performance enhancement in industry (see [5] for an exhaustive survey), very few applications have broken out of the lab settings and are regularly used [6]. Most of these augmented reality applications employ complex hardware (i.e. special camera systems, sensors, displays and eye-tracking devices) and powerful computing systems (i.e. smartphones, tablet computers) that are cost-effective only in very limited areas. Therefore, it is well acknowledged that augmented reality will not be widely adopted by industry as long as the information is not directly sensed and processed by embedded devices seamlessly immersed in the environment [5], [6], without being noticed by their potential users.

Moving the image processing applications from mobile to embedded (or even wearable) devices poses many challenging constraints in terms of limited computational capabilities and real-time processing requirements.

Edge detection is one of the basic processes in computer vision and image processing, as edges carry a wealth of information about image content. An edge appears between two neighboring areas of different color or light intensity, and delineates the boundaries between objects, or between objects and the background.

As one of the most widely employed methods of feature detection, edge detection has a long history. Since the early algorithms of Roberts [7] and Sobel [8], a number of new and improved methods have been developed. These techniques, including the Prewitt [9], Canny [10] and Laplacian of Gaussian (LoG) [11] detectors, mathematical morphology [12] and the wavelet transform [13], [14], have been used to achieve better results.
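As a point of reference for the kernel-based detectors listed above, a minimal sketch of the classical Sobel operator follows (an illustration only, not the paper's method): the image is convolved with two 3×3 kernels and the gradient magnitude is approximated by |Gx| + |Gy|, with border pixels skipped for simplicity.

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| using the 3x3 Sobel
    kernels. `img` is a list of rows of grayscale intensities; border
    pixels are left at 0 for simplicity."""
    KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge between intensity 0 and 255:
img = [[0, 0, 255, 255]] * 4
mag = sobel_magnitude(img)
```

Note that each output pixel costs 18 multiplications and 17 additions per kernel pair; it is precisely this per-pixel kernel computation that the proposed methods avoid.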

The efficiency of an algorithm is normally judged by its correctness and speed. Classical algorithms based on differential operators are easy to implement, but they are sensitive to noise [9]. Several developments have been proposed for this category in order to improve accuracy [15]–[17], while many other adaptive and hybrid techniques concentrate on noise reduction and the correct detection of edges [18]–[20]. A number of speed-up techniques have also been proposed in the last few years, as real-time computation is an important factor in embedded systems applications. For instance, in [21] the LoG detector has been optimized to require less computation at run-time, and a CUDA (Compute Unified Device Architecture) implementation of the Canny algorithm offers up to 61% improvement in speed [22].

A better edge detection algorithm is required in many applications where results are crucial, for example, in the field of embedded systems, especially for automotive [27] or medical image processing purposes [28].

The paper is a step forward in this direction, investigating two methods for one of the most frequently used techniques in feature detection, requiring no kernel computation. One algorithm involves only one subtraction operation to calculate the edges, while the second uses conditional statements and does not require any mathematical operations.

As will be shown, both methods require less computation than other well-known edge detection algorithms.
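The actual algorithms are specified in Section 2; purely as a hypothetical illustration of the two ideas (the neighbour direction, the threshold value and the lookup-table trick below are assumptions of this sketch, not the paper's formulation), a single-subtraction detector and a comparison-only variant might look like:

```python
def edge_by_subtraction(img, threshold=30):
    """Illustrative sketch: mark a pixel as an edge when the intensity
    difference to its right neighbour exceeds a threshold -- a single
    subtraction per pixel, no kernel."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w - 1):
            if abs(img[y][x] - img[y][x + 1]) > threshold:
                out[y][x] = 255
    return out

def edge_by_comparison(img, threshold=30):
    """Comparison-only variant: per-intensity bounds are precomputed once,
    so the inner loop performs only table lookups and conditional tests,
    with no per-pixel arithmetic."""
    hi = [min(v + threshold, 255) for v in range(256)]
    lo = [max(v - threshold, 0) for v in range(256)]
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w - 1):
            n = img[y][x + 1]
            if n > hi[img[y][x]] or n < lo[img[y][x]]:
                out[y][x] = 255
    return out

# Both variants agree on a simple step edge:
img = [[10, 10, 200, 200]]
e1 = edge_by_subtraction(img)
e2 = edge_by_comparison(img)
```

On an embedded target, the appeal of such schemes is that the inner loop reduces to a handful of integer operations (or none at all in the lookup variant), in contrast to the multiply-accumulate loop of a kernel-based detector.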

The paper is organized as follows: Section 2 presents the proposed algorithms, while the experimental results and performance comparisons are given in Section 3. The last section concludes with potential applications of the proposed methods for edge detection.


REFERENCES

  1. POSLAD, S., Ubiquitous Computing: Smart Devices, Smart Environments and Smart Interaction, Wiley, 2009.
  2. REICHLE, R., M. WAGNER, M. U. KHAN, K. GEIHS, J. LORENZO, M. VALLA, C. FRA, N. PASPALLIS, G. A. PAPADOPOULOS, A Comprehensive Context Modeling Framework for Pervasive Computing Systems, Proceedings of the 8th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, 2008, pp. 281-295.
  3. ATZORI, L., A. IERA, G. MORABITO, The Internet of Things: A survey, Computer Networks, vol. 54, no. 15, 2010, pp. 2787-2805.
  4. NIST, Foundations for Innovation in Cyber-Physical Systems, Workshop Report, Available at:, 2013.
  5. MA, D., X. FAN, J. GAUSEMEIER, M. GRAFE, Virtual Reality & Augmented Reality in Industry, Springer, 2011.
  6. FITTE-GEORGEL, P., Is There a Reality in Industrial Augmented Reality?, Proceedings of the 2011 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Basel, 2011.
  7. ROBERTS, L. G., Machine Perception of Three-dimensional Solids, No. TR315, degree of Doctor of Philosophy, Massachusetts Inst. of Technology, 1963.
  8. SOBEL, I., Camera Models and Machine Perception, No. AIM-121, Stanford University, California, Dept. of Computer Science, 1970.
  9. PREWITT, J. M. S., Object Enhancement and Extraction, Picture Processing and Psychopictorics, Academic Press, 1970, pp. 75-149.
  10. CANNY, J., A Computational Approach to Edge Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, 1986, pp. 679-698.
  11. BASU, M., Gaussian-based Edge-detection Methods-A Survey, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 32, no. 3, 2002, pp. 252-260.
  12. LEE, J., R. HARALICK, L. SHAPIRO, Morphologic Edge Detection, IEEE Journal of Robotics and Automation, vol. 3, no. 2, 1987, pp. 142-156.
  13. MALLAT, S., S. ZHONG, Characterization of Signals from Multiscale Edges, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, 1992, pp. 710-732.
  14. XU, P., Q. MIAO, C. SHI, J. ZHANG, W. LI, An edge Detection Algorithm based on the Multi-direction Shear Transform, Journal of Visual Communication and Image Representation, vol. 23, no. 5, 2012, pp. 827-833.
  15. GAO, W., X. ZHANG, L. YANG, H. LIU, An Improved Sobel Edge Detection, Proceedings of the 3rd IEEE International Conference on Computer Science and Information Technology, Chengdu, China, IEEE, vol. 5, 2010.
  16. MA, C., L. YANG, W. GAO, Z. LIU, An Improved Sobel Algorithm based on Median Filter, Proceedings of the 2nd International Conference on Mechanical and Electronics Engineering, Kyoto, Japan, IEEE, vol. 1, 2010.
  17. FARAHANIRAD, H., J. SHANBEHZADEH, M. PEDRAM, A. SARRAFZADEH, A Hybrid Edge Detection Algorithm for Salt-and-pepper Noise, Proceedings of the International Multi-Conference of Engineers and Computer Scientists, Hong Kong, March, 2011, pp. 16-18.
  18. WANG, B., S. FAN, An Improved CANNY Edge Detection Algorithm, Proceedings of the Second International Workshop on Computer Science and Engineering, Xiamen, China, vol. 1, 2009, pp. 497-500.
  19. SAHU, T. L., D. A. DUBEY, Survey on Edge Detections and Denoise Techniques, International Journal of Science, Engineering and Technology Research, vol. 2, no. 1, 2013, pp. 160-166.
  20. DAFU, P., B. WANG, An improved Canny algorithm, Proceedings of the 27th Chinese Control Conference, IEEE, Kunming, Yunnan Province, China, July 2013, pp. 16-18.
  21. TORRE, V., T. A. POGGIO, On Edge Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 2, 1986, pp. 147-163.
  22. OGAWA, K., Y. ITO, K. NAKANO, Efficient Canny Edge Detection using a GPU, Proceedings of First International Conference on Networking and Computing, IEEE, Hiroshima, Japan, November 2010, pp. 17-19.
  23. VORA, V. S., A. C. SUTHAR, Y. N. MAKWANA, S. J. DAVDA, Analysis of Compressed Image Quality Assessments, International Journal of Advanced Engineering & Application, vol. I&II, 2010, pp. 230-234.
  24. ARBELAEZ, P., M. MAIRE, C. FOWLKES, J. MALIK, Contour Detection and Hierarchical Image Segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, 2011, pp. 898-916.
  25. BADDELEY, A. J., An Error Metric for Binary Images, in Robust Computer Vision: Quality of Vision Algorithms, W. Forstner, S. Ruwiedel Eds., Wichmann Verlag, Karlsruhe, 1992, pp. 59-78.
  26. MARTIN, D., C. FOWLKES, J. MALIK, Learning to Detect Natural Image Boundaries using Local Brightness, Color, and Texture Cues, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, 2004, pp. 530-549.
  27. FAGADAR-COSMA, M., M. NOURI, V. I. CRETU, M. V. MICEA, A Combined Optical Flow and Graph Cut Approach for Foreground Extraction in Videoconference Applications, Studies in Informatics and Control, vol. 21, no. 4, 2012, pp. 413-422.
  28. KREJCAR, O., J. JIRKA, D. JANCKULIK, Use of Mobile Phones as Intelligent Sensors for Sound Input Analysis and Sleep State Detection, Sensors, vol. 11, no. 6, 2011, pp. 6037-6055.