Citation
(2012), "Embedded Visual System and its Applications on Robots", Industrial Robot, Vol. 39 No. 2. https://doi.org/10.1108/ir.2012.04939baa.012
Publisher
Emerald Group Publishing Limited
Copyright © 2012, Emerald Group Publishing Limited
Embedded Visual System and its Applications on Robots
Article Type: Book review. From: Industrial Robot: An International Journal, Volume 39, Issue 2
Edited by De Xu, Bentham Science, Pennington, NJ, 2010, $59, ISBN: 978-1-60805-166-3. Web site: www.benthamscience.com/ebooks/9781608051663/index.htm
Embedded Visual System and its Applications on Robots, edited by Professor De Xu, provides a broad overview of embedded vision systems applied to robotics and automation. This e-book brings together, in a comprehensive manner, a wealth of experience from leading Chinese research centres and universities, addressing topics that range from hardware configuration to algorithm development and implementation. It comprises eight chapters, each featuring thorough descriptions of hardware and software frameworks and cost-effective algorithmic solutions corroborated by experimental demonstrations and results. An extensive list of references for further reading and insight is included at the end of every chapter.
The e-book opens with an introductory chapter that addresses some fundamental issues of robot vision, from system configuration and calibration to measurement and control methods. Two vision system configurations are reviewed: traditional and embedded vision. The main advantages of embedded vision systems are pointed out, such as smaller and lighter integrated components and more efficient image processing. The camera calibration problem is then introduced, with a focus on the eye-in-hand and eye-to-hand configurations of robotic manipulators. The chapter also deals with visual measurement and visual control methods. Visual measurement methods for objects on a plane, the Perspective-n-Point (PnP) problem, and stereovision and structured-light techniques are reviewed. Subsequently, different types of visual control methods are described, including image-based, position-based, and hybrid approaches. Insights into emerging trends in visual measurement and control, such as self-learning and human-imitating methods, conclude the chapter.

Chapter 2 describes in detail the hardware and software design of an embedded vision system for robotic applications based on an ARM processor and a CMOS image sensor. The compact structure, low power consumption, and high processing efficiency of the developed device are demonstrated.

An embedded vision positioning system for image capture and visual measurement is presented in Chapter 3. The chapter starts with an introductory section reviewing the main architectures of embedded vision systems, such as single-processor and multi-processor, serial and parallel schemes, and different technologies including DSP, FPGA, and ASIC. Then, the hardware and software design of a system using an ARM processor and a CMOS camera is analyzed. The hardware architecture consists of three main modules: the image capturing module, the image processing module, and the memory module.
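To give a flavour of the planar visual-measurement problem reviewed in the opening chapter, the sketch below illustrates the underlying pinhole-camera relations: projecting a camera-frame 3D point to pixel coordinates, and recovering the 3D position of a pixel known to lie on a plane at a fixed depth. This example is not taken from the book; the intrinsic parameters and coordinates are purely illustrative.

```python
# Minimal pinhole-camera sketch: forward projection and planar
# back-projection. Intrinsics (fx, fy, cx, cy) are assumed values,
# not parameters from the book.

def project(point_3d, fx, fy, cx, cy):
    """Project a camera-frame 3D point (X, Y, Z) to pixel coordinates."""
    X, Y, Z = point_3d
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

def backproject_on_plane(u, v, Z, fx, fy, cx, cy):
    """Recover (X, Y, Z) for a pixel known to lie on the plane at depth Z."""
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return X, Y, Z

# Illustrative intrinsics: 800 px focal length, 640 x 480 image centre.
fx = fy = 800.0
cx, cy = 320.0, 240.0
p = (0.1, -0.05, 1.0)                       # metres, camera frame
u, v = project(p, fx, fy, cx, cy)           # -> (400.0, 200.0)
recovered = backproject_on_plane(u, v, 1.0, fx, fy, cx, cy)
```

With the depth of the plane known, the measurement is exact; this is why the planar case is so much cheaper than general PnP or stereovision, and why it suits resource-constrained embedded processors.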
The software design comprises two sections, namely the boot loader and the application process. Camera calibration, object detection, and object positioning algorithms for mobile robot applications are presented and experimentally validated.

An embedded vision system for the collaborative self-localization of humanoid soccer robots is proposed in Chapter 4. First, hardware design and synchronization are discussed; then, a multi-robot self-localization method based on a probabilistic approach is described.

Chapter 5 presents an application of embedded vision to robotic seam tracking. A DSP-based smart camera is selected as the vision sensor, whereas a PLC is adopted as the controller. An efficient combination of image processing algorithms, including adaptive thresholding, morphological operations, and the Hough transform, is developed to extract the seam line from the images. Details of the controller design are provided in the last part of the chapter, followed by experimental tests conducted in a welding workshop.

In Chapter 6, a high-speed vision system for table tennis robots is proposed. It features a binocular stereovision device consisting of two smart cameras, a distributed parallel processing architecture based on a local area network, and effective algorithms to recognize and track the ball in the images.

In Chapter 7, the problem of object recognition using local context information is addressed. Two visual object recognition approaches, using neighbour-based context and geometric context respectively, are proposed and experimentally validated.

The last chapter of the e-book is concerned with structured-light vision systems for applications in reverse engineering (RE) and rapid prototyping (RP). RE techniques translate 3D data of real objects into CAD models; these models are then converted into STL files and used to generate prototypes and physical models by means of RP technology.
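The final stage of the seam-tracking pipeline described for Chapter 5 can be sketched in a few lines: a Hough transform applied to an already-thresholded binary image to find the dominant seam line. This is a plain-Python illustration of the technique, not the book's DSP implementation; the synthetic image and parameters are assumptions for the example.

```python
import math

# Toy Hough transform for line detection on a small binary image.
# Lines are parameterised as rho = x*cos(theta) + y*sin(theta).

def hough_line(binary, n_theta=180):
    """Return (rho, theta) of the strongest line in a binary image."""
    h, w = len(binary), len(binary[0])
    diag = int(math.hypot(h, w)) + 1
    # Accumulator indexed by [theta][rho + diag] so rho can be negative.
    acc = [[0] * (2 * diag) for _ in range(n_theta)]
    for y in range(h):
        for x in range(w):
            if binary[y][x]:
                for t in range(n_theta):
                    theta = math.pi * t / n_theta
                    rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
                    acc[t][rho + diag] += 1
    # Pick the accumulator cell with the most votes.
    _, t, rho = max((acc[t][r], t, r - diag)
                    for t in range(n_theta) for r in range(2 * diag))
    return rho, math.pi * t / n_theta

# Synthetic 20x20 thresholded image with a vertical "seam" at x = 10.
img = [[1 if x == 10 else 0 for x in range(20)] for y in range(20)]
rho, theta = hough_line(img)
```

Because the accumulator is small and each edge pixel votes independently, this voting scheme parallelises well, which is one reason it is a common choice on the kind of DSP hardware the chapter describes.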
Overall, this e-book is an interesting collection of contributions and references that, taken together, provide a good starting point for engineers, researchers, and scholars interested in entering the emerging and exciting field of embedded vision.
Dr Annalisa Milella, Institute of Intelligent Systems for Automation, National Research Council, Bari, Italy