Search results

1 – 10 of 44
Article
Publication date: 4 December 2017

Konstantinos Vassilios Kotsanopoulos and Ioannis S. Arvanitoyannis

Abstract

Purpose

The purpose of this paper is to analyze the results of several food safety audits carried out by the Food Standards Agency in meat and poultry-processing companies and slaughterhouses in the UK, together with aquaculture audits carried out by the Aquaculture Stewardship Council and Marine Scotland.

Design/methodology/approach

Specifically, both a quantitative and a qualitative review were carried out.

Findings

It was revealed that among meat and poultry companies, the highest average proportion of major non-conformities (MNCs) relative to the total number of companies was recorded in slaughterhouses, while by product type the highest percentage of MNCs was found in poultry companies. In both meat/poultry companies and aquacultures, small-sized companies presented the highest average percentage of MNCs. A very high percentage of MNCs and minor non-conformities was also recorded in relation to “record keeping.”

Research limitations/implications

The main limitation of the present study is that, although a high number of audit reports were taken into account, and record keeping and past actions were reviewed as part of the audits, an audit represents only a snapshot in time and can depend heavily on the skills of the auditors.

Originality/value

To the best of the authors’ knowledge, there is very limited literature available that analyzes the results of audits and looks for trends in the food industry in the UK. The conclusions of this study can be of significant value to both auditors and the industry by enabling a more targeted approach to the conduct of audits.

Details

British Food Journal, vol. 119 no. 12
Type: Research Article
ISSN: 0007-070X

Article
Publication date: 15 June 2012

Grazia Cicirelli, Annalisa Milella and Donato Di Paola

Abstract

Purpose

The purpose of this paper is to address the use of passive RFID technology for the development of an autonomous surveillance robot. Passive RFID tags can be used for labelling both valued objects and goal‐positions that the robot has to reach in order to inspect the surroundings. In addition, the robot can use RFID tags for navigational purposes, such as to keep track of its pose in the environment. Automatic tag position estimation is, therefore, a fundamental task in this context.

Design/methodology/approach

The paper proposes a supervised fuzzy inference system to learn the RFID sensor model; the obtained model is then used by the tag localization algorithm. Each tag position is estimated as the most likely among a set of candidate locations.
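
The candidate-selection step described above can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the learned fuzzy sensor model is stood in for by a simple Gaussian falloff (`sensor_likelihood`), and a tag position is picked as the candidate whose accumulated likelihood over all detecting robot poses is highest.

```python
import math

def sensor_likelihood(distance, bearing):
    """Stand-in for the learned fuzzy sensor model: likelihood that the
    reader detects a tag at a given range/bearing relative to the antenna.
    (Illustrative Gaussian falloff, not the paper's trained model.)"""
    return math.exp(-(distance / 2.0) ** 2) * math.exp(-(bearing / 1.0) ** 2)

def localize_tag(candidates, detections):
    """Pick the candidate location with the highest accumulated likelihood
    over all robot poses (x, y, heading) at which the tag was detected."""
    best, best_score = None, -1.0
    for cx, cy in candidates:
        score = 0.0
        for rx, ry, heading in detections:
            d = math.hypot(cx - rx, cy - ry)
            bearing = math.atan2(cy - ry, cx - rx) - heading
            bearing = (bearing + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
            score += sensor_likelihood(d, bearing)
        if score > best_score:
            best, best_score = (cx, cy), score
    return best
```

For example, detections made while facing a point near the origin should select an origin candidate over a distant one.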

Findings

The paper proves the feasibility of RFID technology in a mobile robotics context. The development of an RFID sensor model is first required in order to provide a functional relationship between the spatial attitude of the device and its responses. The RFID device provided with this model can then be successfully integrated in mobile robotics applications such as navigation, mapping and surveillance, to mention a few.

Originality/value

The paper presents a novel approach to RFID sensor modelling using adaptive neuro‐fuzzy inference. The model uses both Received Signal Strength Indication (RSSI) and tag detection event in order to achieve better accuracy. In addition, a method for global tag localization is proposed. Experimental results prove the robustness and reliability of the proposed approach.

Details

Industrial Robot: An International Journal, vol. 39 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 October 2006

A. Milella, G. Reina and M. Foglia

Abstract

Purpose

Aims at developing vision‐based algorithms to improve efficiency and quality in agricultural applications. Two case studies are analyzed dealing with the harvest of radicchio and the post‐harvest of fennel, respectively.

Design/methodology/approach

Presents two visual algorithms, the radicchio visual localization (RVL) and fennel visual identification (FVI). The RVL serves as a detection system of radicchio plants in the field for a robotic harvester. The FVI provides information to an automated cutting device to remove the parts of fennel unfit for the market, i.e. root and leaves. Laboratory and field experiments are described to validate our approach and assess the performance of our visual modules.

Findings

Both visual systems proved effective in experimental trials, being computationally efficient, accurate, and robust to noise and lighting variations. Computer vision could be successfully adopted in the intelligent and automated production of fresh-market vegetables to improve quality and efficiency.

Practical implications

Provides guidance in the development of vision‐based algorithms for agricultural applications.

Originality/value

Describes visual algorithms based on intelligent morphological and color filters which lend themselves very well to agricultural applications and allow for robustness and real-time performance.

Details

Sensor Review, vol. 26 no. 4
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 17 March 2014

Giulio Reina, Mauro Bellone, Luigi Spedicato and Nicola Ivan Giannoccaro

Abstract

Purpose

This research aims to address the issue of safe navigation for autonomous vehicles in highly challenging outdoor environments. Indeed, robust navigation of autonomous mobile robots over long distances requires advanced perception means for terrain traversability assessment.

Design/methodology/approach

The use of visual systems may represent an efficient solution. This paper discusses recent findings in terrain traversability analysis from RGB-D images. In this context, the concept of a point described only by its Cartesian coordinates is reinterpreted in terms of a local description. As a result, a novel descriptor for inferring the traversability of a terrain through its 3D representation, referred to as the unevenness point descriptor (UPD), is conceived. This descriptor features robustness and simplicity.
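
As a rough illustration of the underlying idea (local surface normals that agree indicate flat, traversable ground, while scattered normals indicate uneven terrain), the following sketch estimates a per-point normal from a point cloud and scores unevenness as the disagreement of neighbouring normals. The names and the exact score are illustrative assumptions, not the authors' formulation of the UPD.

```python
import numpy as np

def local_normals(points, k=6):
    """Estimate a unit normal at each point as the smallest-eigenvalue
    eigenvector of the covariance of its k nearest neighbours."""
    normals = []
    for p in points:
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
        n = v[:, 0]                          # normal = smallest-eigenvalue vector
        normals.append(n if n[2] >= 0 else -n)   # orient normals upwards
    return np.array(normals)

def unevenness(points, k=6):
    """UPD-style unevenness score: 1 - |mean of neighbouring unit normals|.
    Agreeing normals (a flat patch) give a value near 0; scattered normals
    (rough ground or vertical structure) give a larger value."""
    normals = local_normals(points, k)
    vals = []
    for p in points:
        d = np.linalg.norm(points - p, axis=1)
        idx = np.argsort(d)[:k]
        vals.append(1.0 - np.linalg.norm(normals[idx].mean(axis=0)))
    return np.array(vals)
```

On a perfectly flat patch every normal is (0, 0, 1), so the score is essentially zero everywhere.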

Findings

The UPD-based algorithm shows robust terrain perception capabilities, detecting obstacles and terrain irregularities. The system performance is validated in field experiments in both indoor and outdoor environments.

Research limitations/implications

The UPD enhances the interpretation of the 3D scene to improve the ambient awareness of unmanned vehicles. The larger implications of this method reside in its applicability for path planning purposes.

Originality/value

This paper describes a visual algorithm for traversability assessment based on normal vector analysis. The algorithm is simple and efficient, allowing fast real-time implementation, since the UPD does not require any data processing or a previously generated digital elevation map to classify the scene. Moreover, it defines a local descriptor of general value for segmentation of 3D point clouds: it fully captures the underlying geometric pattern associated with each 3D point and allows difficult scenarios to be handled correctly.

Details

Sensor Review, vol. 34 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 10 June 2014

Du-Ming Tsai, Hao Hsu and Wei-Yao Chiu

Abstract

Purpose

This study aims to propose a door detection method based on the door properties in both depth and gray-level images. It can further help blind people (or mobile robots) find the doorway to their destination.

Design/methodology/approach

The proposed method uses a hierarchical point–line–region principle with majority vote to encode surface features pixel by pixel, then dominant scene entities line by line, and finally the prioritized scene entities in the center, left and right of the observed scene.
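
The hierarchical point, line, region idea with majority vote can be sketched as follows. The entity names and priority ordering are illustrative assumptions, not the paper's actual label set: per-pixel labels are collapsed to a dominant label per line by majority vote, and each region (left/centre/right strip) reports its highest-priority entity.

```python
from collections import Counter

# Higher-priority entities (hazards) win over lower ones when summarising a
# region -- an assumed ordering for illustration only.
PRIORITY = {"obstacle": 3, "stairs": 2, "door": 1, "ground": 0}

def line_labels(pixel_rows):
    """Majority vote: collapse each row of per-pixel labels to one dominant label."""
    return [Counter(row).most_common(1)[0][0] for row in pixel_rows]

def region_label(lines):
    """Report the highest-priority entity seen among a region's line labels."""
    return max(lines, key=lambda lab: PRIORITY.get(lab, 0))

def summarise(pixel_rows, thirds=3):
    """Split the label image into left/centre/right vertical strips and
    summarise each with the point -> line -> region hierarchy."""
    w = len(pixel_rows[0])
    step = w // thirds
    out = []
    for i in range(thirds):
        strip = [row[i * step:(i + 1) * step] for row in pixel_rows]
        out.append(region_label(line_labels(strip)))
    return out
```

The two voting stages are what make the scheme tolerant of scattered per-pixel misclassifications.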

Findings

This approach is very robust to noise and random misclassification at the pixel, line and region levels and provides sufficient information about the pathway ahead and to the left and right of a scene. The proposed robot vision-assist system can be worn by visually impaired people or mounted on mobile robots. It provides more complete information about the surrounding environment to guide the user safely and effectively to the destination.

Originality/value

In this study, the proposed robot vision scheme provides detailed configurations of the environment encountered in daily life, including stairs (up and down), curbs/steps (up and down), obstacles, overheads, potholes/gutters, hazards and accessible ground. All the scene entities detected in the environment provide blind people (or mobile robots) with more complete information for better decision-making on their own. This paper also proposes, in particular, a door detection method based on the door’s features in both depth and gray-level images. It can further help blind people find the doorway to their destination in an unfamiliar environment.

Details

Industrial Robot: An International Journal, vol. 41 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 28 November 2019

Marc Morenza-Cinos, Victor Casamayor-Pujol and Rafael Pous

Abstract

Purpose

The combination of the latest advancements in information and communication technologies with the latest developments in AutoID technologies, especially radio frequency identification (RFID), brings the possibility of high-resolution, item-level visibility of the entire supply chain. In the particular case of retail, visibility of both the stock count and item location in the shop floor is crucial not only for an effective management of the retail supply chain but also for physical retail stores to compete with online retailers. The purpose of this paper is to propose an autonomous robot that can perform stock-taking using RFID for item-level identification much more accurately and efficiently than the traditional method of using human operators with RFID handheld readers.

Design/methodology/approach

This work follows the design science research methodology. The paper highlights a required improvement for an RFID inventory robot. The design hypothesis leads to a novel algorithm. Then the cycle of development and evaluation is iterated several times. Finally, conclusions are derived and a new basis for further development is provided.

Findings

An autonomous robot for stock-taking is proven feasible. By applying a proper navigation strategy, coupled to the stream of identifications, the accuracy, precision, consistency and time to complete stock-taking are significantly better than doing the same task manually.

Research limitations/implications

The main limitation of this work is the unavailability of data to analyze the actual impact on the correction of inventory record inaccuracy and its subsequent implications for supply chain management. Nonetheless, it is shown that the figures of actual stock-taking procedures can be significantly improved.

Originality/value

This paper discloses the potential of deploying an inventory robot in the supply chain. The robot is poised to become a key source of inventory data underpinning supply chain management 4.0 and omnichannel retail.

Details

International Journal of Physical Distribution & Logistics Management, vol. 49 no. 10
Type: Research Article
ISSN: 0960-0035

Article
Publication date: 31 July 2009

David Sanders

Abstract

Purpose

The purpose of this paper is to investigate the effect on time to complete a task depending on how a human operator interacts with a mobile‐robot. Interaction is investigated using two tele‐operated mobile‐robot systems, three different ways of interacting with robots and several different environments. The speed of a tele‐operator in completing progressively more complicated driving tasks is also investigated.

Design/methodology/approach

Tele‐operators are timed completing a series of tasks using a joystick to control a mobile‐robot. They either watch the robot while operating it, or sit at a computer and view scenes remotely on a screen. Cameras are either mounted on the robot or positioned so that they view both the environment and the robot. Tele‐operators complete tests both with and without sensors. One robot system uses an umbilical cable and one uses a radio link.

Findings

In simple environments, a tele‐operator may perform better without a sensor system to assist them, but in more complicated environments a tele‐operator may perform better with one. Tele‐operators may also tend to perform better with a radio link than with an umbilical connection. Tele‐operators sometimes perform better with a camera mounted on the robot than with pre‐mounted cameras observing the environment (depending on the tasks being performed).

Research limitations/implications

Tele‐operated systems rely heavily on visual feedback and experienced operators. This paper investigates how to make tasks easier.

Practical implications

The paper suggests that the amount of sensor support should be varied depending on circumstances.

Originality/value

Results show that human tele‐operators perform better without the assistance of a sensor system in simple environments.

Details

Assembly Automation, vol. 29 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 16 January 2017

Shervan Fekriershad and Farshad Tajeripour

Abstract

Purpose

The purpose of this paper is to propose a color-texture classification approach which uses color sensor information and texture features jointly. High accuracy, low noise sensitivity and low computational complexity are specified aims for this proposed approach.

Design/methodology/approach

One of the efficient texture analysis operators is local binary patterns (LBP). The proposed approach includes two steps. First, a noise-resistant version of color LBP is proposed to decrease its sensitivity to noise; this step combines color sensor information using an AND operation. Second, a significant-points selection algorithm is proposed to select significant LBPs. This phase decreases the final computational complexity while increasing the accuracy rate.
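
The AND-combination step can be sketched in a few lines. This is a minimal sketch under stated assumptions, not the authors' exact formulation: each colour channel (a 2D uint8 array) yields standard 8-neighbour LBP bit-planes, and a bit survives only when all three channels agree, which suppresses single-channel noise flips.

```python
import numpy as np

def lbp_codes(channel):
    """Return the 8 LBP bit-planes (neighbour >= centre) for the interior
    pixels of one colour channel."""
    c = channel[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    H, W = channel.shape
    return [(channel[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx] >= c).astype(np.uint8)
            for dy, dx in offsets]

def hybrid_color_lbp(r, g, b):
    """AND the LBP bit-planes of the three colour channels, so a bit is set
    only when all channels agree -- a simple noise-suppressing combination
    in the spirit of the paper's colour-LBP step (illustrative only)."""
    code = np.zeros((r.shape[0] - 2, r.shape[1] - 2), dtype=np.uint8)
    for i, (br, bg, bb) in enumerate(zip(lbp_codes(r), lbp_codes(g), lbp_codes(b))):
        code |= (br & bg & bb) << i
    return code
```

On a uniform image every comparison holds in every channel, so every output code is 255; a disagreement in any one channel clears the corresponding bit.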

Findings

The proposed approach is evaluated using the Vistex, Outex and KTH-TIPS-2a data sets and compared with some state-of-the-art methods. It is experimentally demonstrated that the proposed approach achieves the highest accuracy. Two further experiments show the low noise sensitivity and low computational complexity of the proposed approach in comparison with previous versions of LBP. Rotation invariance, multi-resolution support and general usability are other advantages of the proposed approach.

Originality/value

In the present paper, a new version of LBP, called hybrid color local binary patterns (HCLBP), is proposed. HCLBP can be used in many image processing applications to extract color/texture features jointly. A significant-point selection algorithm is also proposed, for the first time, to select key points of images.

Article
Publication date: 28 June 2011

David Sanders, Giles Tewkesbury, Ian J. Stott and David Robinson

Abstract

Purpose

The purpose of this paper is to investigate how to make tele‐operated tasks easier using an expert system to interpret joystick and sensor data.

Design/methodology/approach

Current tele‐operated systems tend to rely heavily on visual feedback and experienced operators. Simple expert systems improve the interaction between an operator and a tele‐operated mobile‐robot using ultrasonic sensors. The systems identify potentially hazardous situations and recommend safe courses of action. Because tests and results were collected in pairs, a paired‐samples statistical test could be used.

Findings

Results are presented from a series of timed tasks completed by tele‐operators using a joystick to control a mobile‐robot via an umbilical cable. Tele‐operators completed tests both with and without sensors and with and without the new expert system and using a recently published system to compare results. The t‐test was used to compare the means of the samples in the results.

Research limitations/implications

Time taken to complete a tele‐operated task with a mobile‐robot partly depends on how a human operator interacts with the mobile‐robot. Information about the environment was restricted and more effective control of the mobile‐robot could have been achieved if more information about the environment had been available, especially in tight spaces. With more information available for analysis, the central processor could have had tighter control of robot movements. Simple joysticks were used for the test and they could be replaced by more complicated haptic devices. Finally, each individual set of tests was not necessarily statistically significant so that caution was required before generalising the results.

Practical implications

The new systems described here consistently performed tasks more quickly than simple tele‐operated systems with or without sensors to assist. The paper also suggests that the amount of sensor support should be varied depending on circumstances. The paired samples test was used because people (tele‐operators) were inherently variable. Pairing removed much of that random variability. When results were analysed using a paired‐samples statistical test then results were statistically significant. The new systems described in this paper were significantly better at p<0.05 (95 per cent probability that this result would not occur by chance alone).
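
The paired-samples analysis described above can be sketched as follows: the t statistic is computed on the per-operator differences in completion time, which is exactly what removes between-operator variability. The function name and example data are illustrative, not the paper's; |t| is then compared against a critical value from a t table with n-1 degrees of freedom (e.g. about 2.262 for n = 10, two-tailed, p < 0.05).

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic on matched completion times.
    Working on the per-pair differences removes the random variability
    between individual tele-operators."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)                      # t with n-1 d.o.f.
```

A large positive t indicates the second condition (e.g. with the expert system) was consistently faster across operators.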

Originality/value

The paper shows that the new system performed every test faster on average than a recently published system used to compare the results.

Details

Sensor Review, vol. 31 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 14 October 2013

Du-Ming Tsai and Tzu-Hsun Tseng

Abstract

Purpose

Mobile robots are becoming increasingly important for many potential applications, such as navigation and surveillance. The paper proposes an image processing scheme for moving-object detection from a mobile robot with a single camera. It especially aims at intruder detection for a security robot on either smooth paved surfaces or uneven ground surfaces.

Design/methodology/approach

The core of the proposed scheme is the template matching with basis image reconstruction for the alignment between two consecutive images in the video sequence. The most representative template patches in one image are first automatically selected based on the gradient energies in the patches. The chosen templates then form a basis matrix, and the instances of the templates in the subsequent image are matched by evaluating their reconstruction error from the basis matrix. For the two well-aligned images, a simple and fast temporal difference can thus be applied to identify moving objects from the background.
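
The two steps described above, gradient-based template selection and matching by reconstruction error, can be sketched as follows. This is a simplified illustration, not the authors' implementation: patch size, the non-overlapping patch grid, and the least-squares solver are all assumptions.

```python
import numpy as np

def select_templates(image, size=8, n=4):
    """Pick the n patches with the largest gradient energy as templates and
    stack them as columns of a basis matrix."""
    H, W = image.shape
    patches, energies = [], []
    for y in range(0, H - size + 1, size):
        for x in range(0, W - size + 1, size):
            p = image[y:y + size, x:x + size].astype(float)
            gy, gx = np.gradient(p)
            patches.append(p.ravel())
            energies.append((gx ** 2 + gy ** 2).sum())
    order = np.argsort(energies)[::-1][:n]         # most textured patches first
    return np.stack([patches[i] for i in order], axis=1)

def reconstruction_error(basis, patch):
    """Distance between a patch and its best linear reconstruction from the
    template basis; a small error means a good match, and the linear
    combination tolerates mild geometric change better than a pixelwise
    difference would."""
    x = patch.ravel().astype(float)
    coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return float(np.linalg.norm(x - basis @ coeffs))
```

A patch that already lies in the span of the templates reconstructs with essentially zero error, which is the alignment criterion between consecutive frames.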

Findings

The proposed template matching can tolerate ±10° in rotation and ±10 per cent in scaling. By adding templates with larger rotational angles to the basis matrices, the proposed method can be further extended to match images under severe camera vibrations. Experimental results on video sequences from a non-stationary camera have shown that the proposed scheme can reliably detect moving objects in scenes with either minor or severe geometric transformation changes. The proposed scheme can achieve a fast processing rate of 32 frames per second for an image of size 160×120.

Originality/value

The basic approaches to moving object detection with a mobile robot are feature-point matching and optical flow. They are relatively computationally intensive and complicated to implement for real-time applications. The proposed template selection and template matching are very fast and easy to implement. Traditional template matching methods are based on the sum of squared differences or normalized cross-correlation and are very sensitive to minor displacement between two images. The proposed new similarity measure is based on the reconstruction error between the test image and its reconstruction from a linear combination of the templates. It is thus robust under rotation and scale changes and well suited for mobile robot surveillance.

Details

Industrial Robot: An International Journal, vol. 40 no. 6
Type: Research Article
ISSN: 0143-991X
