Search results
Zhe Wang, Xisheng Li, Xiaojuan Zhang, Yanru Bai and Chengcai Zheng
Abstract
Purpose
The purpose of this study is to use visual and inertial sensors to achieve real-time localization. Providing an accurate location has become a popular research topic in the field of indoor navigation. Although the complementarity of vision and inertia has been widely applied in indoor navigation, many problems remain, such as calibrating inertial sensor bias, synchronizing visual and inertial data acquisition and handling the large amount of stored data.
Design/methodology/approach
First, this study demonstrates that a vanishing point (VP) evaluation function improves the precision of extraction, and the nearest ground corner point (NGCP) in the adjacent frame is estimated by pre-integrating the inertial sensor. The Sequential Similarity Detection Algorithm (SSDA) and Random Sample Consensus (RANSAC) algorithms are adopted to accurately match the adjacent NGCPs within the estimated region of interest. Second, the visual pose model is established using the camera's intrinsic parameters, the VP and the NGCP, and the inertial pose model is established by pre-integration. Third, the location is calculated by fusing the visual and inertial models.
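As an illustrative sketch of the prediction-and-matching step described above, the Python fragment below pre-integrates gyroscope and accelerometer samples to predict where the NGCP should reappear, then runs an SSDA-style search inside that region of interest. The function names, the omission of gravity compensation and bias correction, and the early-termination threshold are assumptions for illustration, and the RANSAC outlier-rejection stage is not shown.

    import numpy as np

    def preintegrate(gyro, accel, dt):
        """Accumulate a rough delta-rotation and delta-position over one frame
        interval (bias-corrected samples assumed; gravity handling omitted)."""
        R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
        for w, a in zip(gyro, accel):
            wx = np.array([[0.0, -w[2], w[1]],
                           [w[2], 0.0, -w[0]],
                           [-w[1], w[0], 0.0]])   # skew-symmetric angular rate
            R = R @ (np.eye(3) + wx * dt)          # first-order rotation update
            v = v + (R @ a) * dt
            p = p + v * dt + 0.5 * (R @ a) * dt ** 2
        return R, p

    def ssda_match(patch, roi, threshold=1e4):
        """Sequential Similarity Detection: scan the ROI and abandon a candidate
        as soon as its accumulated absolute difference exceeds the threshold."""
        ph, pw = patch.shape
        best_err, best_xy = np.inf, (0, 0)
        for y in range(roi.shape[0] - ph + 1):
            for x in range(roi.shape[1] - pw + 1):
                err = 0.0
                for r in range(ph):
                    err += np.abs(roi[y + r, x:x + pw] - patch[r]).sum()
                    if err > threshold:            # early termination is the core of SSDA
                        break
                if err < best_err:
                    best_err, best_xy = err, (x, y)
        return best_xy, best_err

The predicted delta-position would be projected into the image to center the ROI, so the SSDA search only covers a small window rather than the full frame.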
Findings
In this paper, a novel method is proposed that fuses visual and inertial sensors to localize in an indoor environment. The authors describe the embedded hardware platform they built and compare the results with a mature method and with the POSAV310.
Originality/value
This paper proposes a VP evaluation function that is used to select the most advantageous point among the intersections of a set of parallel lines. To improve the extraction speed across adjacent frames, the authors propose fusing the NGCP of the current frame with the calibrated pre-integration to estimate the NGCP of the next frame. The visual pose model is established from the VP and the NGCP and is used to calibrate the inertial sensor. This theory yields linear processing equations for the gyroscope and the accelerometer from the visual and inertial pose models.
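A minimal sketch of what such a VP evaluation function could look like is given below: candidate vanishing points are formed from pairwise intersections of detected line segments and scored by how many lines pass close to them. The Gaussian weighting, the sigma value and all function names are illustrative assumptions, not the paper's actual formulation.

    import numpy as np

    def line_from_segment(p1, p2):
        """Homogeneous line through two image points."""
        return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

    def vp_score(candidate, lines, sigma=2.0):
        """Gaussian-weighted support: lines passing near the candidate
        contribute close to 1, distant lines contribute close to 0."""
        x = np.array([candidate[0], candidate[1], 1.0])
        d = np.abs(lines @ x) / np.linalg.norm(lines[:, :2], axis=1)
        return np.exp(-(d / sigma) ** 2).sum()

    def best_vanishing_point(segments):
        """Evaluate every pairwise intersection and keep the best-supported one."""
        lines = np.array([line_from_segment(p1, p2) for p1, p2 in segments])
        best_vp, best_s = None, -np.inf
        for i in range(len(lines)):
            for j in range(i + 1, len(lines)):
                p = np.cross(lines[i], lines[j])
                if abs(p[2]) < 1e-9:               # parallel in the image: no finite VP
                    continue
                vp = p[:2] / p[2]
                s = vp_score(vp, lines)
                if s > best_s:
                    best_vp, best_s = vp, s
        return best_vp, best_s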
Zhe Wang, Xisheng Li, Xiaojuan Zhang, Yanru Bai and Chengcai Zheng
Abstract
Purpose
This paper addresses how to model the blind image deblurring that arises when a camera undergoes ego-motion while observing a static, close scene. In particular, it details how the blurry image can be restored using a sequence of linear point spread function (PSF) models derived from the camera's accurate 6-degree-of-freedom (DOF) path during the long exposure time.
Design/methodology/approach
The approach builds on two existing techniques, namely, PSF estimation and blind image deconvolution. Based on online, short-period inertial measurement unit (IMU) self-calibration, the motion path is discretized into a sequence of uniform-speed 3-DOF rectilinear motions, which are combined with 3-DOF rotational motion to form a discrete 6-DOF camera path. The PSFs are evaluated along this discrete path and then combined with the blurry image, which is restored through deconvolution.
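The following sketch illustrates the general idea of turning a discretized camera path into a PSF and then deconvolving: each pose sample projects a scene point at an assumed constant depth, the projected displacements are accumulated into a blur kernel, and the image is restored with Richardson-Lucy deconvolution. The intrinsics, depth, kernel size and the choice of Richardson-Lucy are illustrative assumptions rather than the paper's exact pipeline.

    import numpy as np
    from scipy.signal import fftconvolve

    def project(K, X):
        """Pinhole projection of a 3-D point with intrinsics K."""
        x = K @ X
        return x[:2] / x[2]

    def psf_from_path(rotations, translations, K, depth, size=31):
        """Accumulate the projected displacement of a point at `depth` for each
        discrete pose sample into a normalized blur kernel."""
        psf = np.zeros((size, size))
        c = size // 2
        X = np.array([0.0, 0.0, depth])            # reference scene point
        u0 = project(K, X)                          # projection at the start pose
        for R, t in zip(rotations, translations):
            du = np.round(project(K, R @ X + t) - u0).astype(int)
            if abs(du[0]) < c and abs(du[1]) < c:
                psf[c + du[1], c + du[0]] += 1.0
        return psf / max(psf.sum(), 1e-12)

    def richardson_lucy(blurred, psf, iters=30):
        """Plain Richardson-Lucy deconvolution; `blurred` is a float image in [0, 1]."""
        est = np.full(blurred.shape, 0.5)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iters):
            ratio = blurred / (fftconvolve(est, psf, mode="same") + 1e-12)
            est *= fftconvolve(ratio, psf_mirror, mode="same")
        return est

In practice a spatially varying blur would call for one such kernel per image region, but the single-kernel version above conveys the path-to-PSF-to-deconvolution chain.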
Findings
This paper describes the construction of a hardware attachment composed of a consumer camera, an inexpensive IMU and a 3-DOF motion mechanism, together with experimental results demonstrating its overall effectiveness.
Originality/value
First, the paper proposes that a high-precision 6-DOF motion platform periodically adjusts the speed of a three-axis rotational motion and a three-axis rectilinear motion over a short interval to compensate for the bias of the gyroscope and the accelerometer. Second, this paper establishes a 6-DOF motion model that accounts for rotational motion, translational motion and scene depth. Third, this paper introduces a novel discrete-path model in which the motion during the long exposure time is discretized at uniform speed and used to estimate a sequence of PSFs.
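As a rough illustration of the bias-compensation idea, the sketch below estimates gyroscope and accelerometer biases during a stationary hold of the motion platform, where the true angular rate is zero and the true specific force is gravity expressed in the body frame. The gravity vector, frame conventions and function names are assumptions for illustration; the paper's platform uses controlled reference motions rather than a simple hold.

    import numpy as np

    GRAVITY = np.array([0.0, 0.0, -9.81])          # assumed navigation-frame gravity

    def estimate_biases(gyro_samples, accel_samples, R_body_to_nav):
        """During a stationary hold the true angular rate is zero, so the mean
        gyro reading is its bias; the mean accelerometer reading minus the
        expected specific force (-gravity in the body frame) is its bias."""
        gyro_bias = gyro_samples.mean(axis=0)
        expected_accel = R_body_to_nav.T @ (-GRAVITY)
        accel_bias = accel_samples.mean(axis=0) - expected_accel
        return gyro_bias, accel_bias

    def correct(sample, bias):
        """Subtract the estimated bias from a raw IMU sample."""
        return sample - bias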