Search results

1 – 3 of 3
Article
Publication date: 22 June 2012

Cihan Altuntas and Ferruh Yildiz


Abstract

Purpose

Laser scanning is increasingly used in many three-dimensional (3-D) measurement and modeling applications. It is the latest technique used in 3-D measurement and is becoming important in a growing number of applications. However, many applications require photogrammetric data in addition to laser scanning data. The purpose of this paper is to present a range and image sensor combination for three-dimensional reconstruction of objects or scenes.

Design/methodology/approach

In this study, a Nikon D80 camera was mounted on an Ilris 3D laser scanner and the camera position parameters (CPP) were estimated with respect to the laser scanner coordinate system. The estimated CPP were checked using three different methods developed in this study, and a sample application, coloring the point cloud with an image taken by the camera mounted on the scanner, was performed.
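As a rough illustration of the sample application, the sketch below shows how estimated camera position parameters might be used to color a point cloud by projecting the scanner points into the camera image. It is not the authors' implementation: it assumes a simple pinhole model with zero lens distortion, a rotation R and translation t from the scanner frame to the camera frame, and an intrinsic matrix K; all names are illustrative.

    import numpy as np

    def color_point_cloud(points_xyz, image, K, R, t):
        # points_xyz : (N, 3) points in the laser scanner coordinate system
        # image      : (H, W, 3) RGB image from the camera mounted on the scanner
        # K          : (3, 3) camera intrinsic matrix
        # R, t       : rotation (3, 3) and translation (3,) from the scanner frame
        #              to the camera frame (the estimated camera position parameters)
        cam = points_xyz @ R.T + t          # scanner frame -> camera frame
        proj = cam @ K.T                    # homogeneous pixel coordinates
        z = proj[:, 2:3]
        uv = np.divide(proj[:, :2], z,
                       out=np.full_like(proj[:, :2], -1.0), where=z > 0)
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = image.shape[:2]
        visible = (z[:, 0] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        colors = np.zeros_like(points_xyz, dtype=float)
        colors[visible] = image[v[visible], u[visible]] / 255.0
        return colors, visible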

Findings

It was found that when a high-resolution camera is mounted on a laser scanner, the CPP must be estimated very accurately with respect to the laser scanner coordinate system.

Originality/value

The paper shows that a combination of a high-resolution camera and a laser scanner should be used to obtain more accurate and efficient results in 3-D modeling applications.

Article
Publication date: 22 March 2013

Cihan Altuntas


Abstract

Purpose

Relative orientation (RO) is an important step in the photogrammetric processing of stereoscopic images. The relationship between the stereoscopic images is established by tie (conjugate) points. Many automatic tie point selection methods have been introduced by the photogrammetry community so far. The scale invariant feature transform (SIFT) and speeded-up robust features (SURF) are frequently used for automatic tie point selection from stereoscopic images. However, no research has yet examined the RO errors (y-parallaxes) of SIFT- and SURF-extracted tie points. The purpose of this paper is to compute the errors on tie points and investigate how their magnitudes are distributed over the model area.
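For readers unfamiliar with automatic tie point selection, the sketch below shows a typical SIFT-based extraction of candidate tie points from a stereoscopic pair using OpenCV. It is only an illustration of the general technique, not the workflow used in the paper; the file names and the ratio-test threshold are assumptions.

    import cv2

    # Load a stereoscopic image pair (file names are illustrative).
    left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect SIFT keypoints and compute descriptors in both images.
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(left, None)
    kp_r, des_r = sift.detectAndCompute(right, None)

    # Match descriptors and keep pairs that pass Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_l, des_r, k=2)
    tie_points = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            m = pair[0]
            tie_points.append((kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt))

    print(f"{len(tie_points)} candidate tie points for relative orientation")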

Design/methodology/approach

The experimental studies were performed on a historical building, since it provides many tie points for investigation. One of the stereoscopic image pairs contains rich detail, while the other contains little detail. The image orientation and tie point selection accuracy were evaluated by root mean square error and y-parallaxes, respectively. The relationship between the y-parallaxes of the tie points and their distances from the centre of the images was investigated.
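The following sketch illustrates, under simplifying assumptions, how residual y-parallaxes and tie point distances from the image centre might be computed once the tie points have been transformed into a relatively oriented (epipolar-aligned) model. It is not the paper's evaluation procedure, and all names are illustrative.

    import numpy as np

    def y_parallax_statistics(tie_l, tie_r, image_center):
        # tie_l, tie_r : (N, 2) image coordinates of conjugate points in the left
        #                and right images after relative orientation
        # image_center : (2,) coordinates of the image centre
        tie_l = np.asarray(tie_l, float)
        tie_r = np.asarray(tie_r, float)

        # In an ideally oriented model, conjugate points lie on the same image row;
        # the residual y-parallax is the remaining difference in y-coordinates.
        py = tie_l[:, 1] - tie_r[:, 1]

        # Distance of each tie point from the centre of the (left) image, used to
        # examine how y-parallax magnitude varies across the model area.
        dist = np.linalg.norm(tie_l - np.asarray(image_center, float), axis=1)

        rms = float(np.sqrt(np.mean(py ** 2)))
        return py, dist, rms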

Findings

SIFT and SURF yield a large number of tie points compared with the manual method. The y-parallaxes of the tie points are uniformly distributed for both methods. There is a relation between the precision of the SIFT and SURF keypoints and their distances from the centre of the image. Moreover, the accuracy of the RO and the size of the y-parallaxes of the tie points depend on the matching accuracy of the keypoints. Furthermore, although a few tie points have large y-parallaxes, especially with SURF, RO could be performed with high accuracy thanks to the numerous tie points.

Originality/value

Stereoscopic images in close-range photogrammetry have different scales and rotations, unlike those in aerial photogrammetry. Manual selection of tie points is time consuming and tedious. Furthermore, if the measured surface has no distinct features, enough tie points cannot be selected from the images manually. However, tie point selection can be performed automatically by SIFT and SURF, even if there are differences in scale, rotation and noise between the images.

Article
Publication date: 19 April 2013

Magnus Ramage, David Chapman and Chris Bissell


Details

Kybernetes, vol. 42 no. 4
Type: Research Article
ISSN: 0368-492X
