Daniel Watzenig, Gerald Steiner, Anton Fuchs, Hubert Zangl and Bernhard Brandstätter
Abstract
Purpose
The investigation of the influence of the modeling error on the solution of the inverse problem in electrical capacitance tomography (ECT), given uncertain measured data.
Design/methodology/approach
The solution of the nonlinear inverse problem in ECT, and hence the obtainable accuracy of the reconstruction result, depends strongly on the numerical modeling of the forward map and on the required regularization. The inherent discretization error propagates through the forward map, the solution of the inverse problem and the subsequent calculation of process parameters and properties, and may lead to a substantial estimation error. In this work, different finite element meshes are compared in terms of the obtainable reconstruction accuracy. To characterize the reconstruction results, two error measures are introduced: a relative integral error and the relative error in material fraction. In addition, the influence of measurement noise for different meshes is investigated statistically using repeated measurements.
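A minimal sketch of the two error measures named above, assuming the reconstructed and reference permittivity distributions are given per finite element together with the element areas (all names and the thresholding of the material fraction are illustrative assumptions, not the paper's definitions):

```python
import numpy as np

def relative_integral_error(eps_rec, eps_ref, areas):
    """Area-weighted relative integral error between a reconstructed and a
    reference permittivity distribution (illustrative definition)."""
    num = np.sum(areas * np.abs(eps_rec - eps_ref))
    den = np.sum(areas * np.abs(eps_ref))
    return num / den

def relative_material_fraction_error(eps_rec, eps_ref, areas, threshold):
    """Relative error in material fraction: the area fraction occupied by the
    high-permittivity phase, compared between reconstruction and reference
    (thresholding the permittivity is an assumption of this sketch)."""
    frac_rec = np.sum(areas[eps_rec > threshold]) / np.sum(areas)
    frac_ref = np.sum(areas[eps_ref > threshold]) / np.sum(areas)
    return abs(frac_rec - frac_ref) / frac_ref
```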
Findings
The modeling error, the degree of regularization and measurement uncertainties are the determining and limiting factors for the obtainable reconstruction accuracy of electrical tomography systems. The impact of these key influence factors on the calculation of process properties is quantified for both synthetic and measured data.
Practical implications
The obtained results show that, especially for measured data, the variability of the calculated parameters depends strongly on the effort put into the forward modeling, i.e. on an appropriate finite element mesh size. Hence, an investigation of the modeling error is highly recommended when real-world tomography problems have to be solved.
Originality/value
The results presented in this work clearly show how the modeling error as well as inherent measurement uncertainties influence the solution of the inverse problem and the subsequent calculation of parameters such as the void fraction in process tomography.
Guanhua Li, Wei Dong Zhu, Huiyue Dong and Yinglin Ke
Abstract
Purpose
This paper aims to present an error compensation method based on surface reconstruction to improve the positioning accuracy of industrial robots.
Design/methodology/approach
In previous research, it has been proved that the positioning error of industrial robots is continuous on the two-dimensional manifold of the six-joint space. The point cloud generated from the positioning error data can be used to fit continuous surfaces, which makes it possible to apply surface reconstruction to error compensation. Moving least-squares interpolation and the B-spline method are used for the error surface reconstruction.
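As a rough illustration of the first of these two techniques, the sketch below estimates the positioning error at a query point by moving least-squares interpolation over sampled error data; the Gaussian weight, the linear basis, the 2-D parameterisation of the sample locations and all variable names are assumptions of this sketch, not the authors' implementation:

```python
import numpy as np

def mls_error_estimate(q, points, errors, h=50.0):
    """Moving least-squares estimate of the positioning error at query point q.

    points : (n, 2) sample locations on a 2-D parameterisation of the workspace
             (a simplification of the paper's two-dimensional manifold)
    errors : (n, 3) measured positioning-error vectors at those locations
    h      : Gaussian weight bandwidth (illustrative value)
    """
    d2 = np.sum((points - q) ** 2, axis=1)
    w = np.exp(-d2 / h ** 2)                       # Gaussian weights
    # Linear basis [1, x, y] at the samples and at the query point
    P = np.hstack([np.ones((len(points), 1)), points])
    pq = np.array([1.0, q[0], q[1]])
    W = np.diag(w)
    # Weighted normal equations, solved jointly for the three error components
    A = P.T @ W @ P
    coeffs = np.linalg.solve(A, P.T @ W @ errors)  # (3, 3) coefficient matrix
    return pq @ coeffs                             # interpolated error vector

# Usage sketch: subtract the estimated error from the commanded position
# compensated_target = nominal_target - mls_error_estimate(nominal_xy, pts, errs)
```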
Findings
The results of experiments and simulations validate the effectiveness of error compensation by the moving least-squares interpolation and the B-spline method.
Practical implications
The proposed methods can keep the average compensated positioning error within 0.2 mm, which meets the tolerance requirement (±0.5 mm) for fastener hole drilling in aircraft assembly.
Originality/value
The error surface reconstruction based on the B-spline method is particularly advantageous because it needs fewer sample points than other methods while keeping the compensation accuracy at the same level. The control points of the B-spline error surface can be adjusted with measured data, which allows the error to be predicted in any temperature field.
Ashok Naganath Shinde, Sanjay L. Nalbalwar and Anil B. Nandgaonkar
Abstract
Purpose
In today’s digital world, real-time health monitoring is becoming one of the most important challenges in medical research. Body signals such as the electrocardiogram (ECG), electromyogram and electroencephalogram (EEG) are produced in the human body, and their continuous monitoring generates a huge amount of data, so an efficient method is required to shrink the size of the acquired data. Compressed sensing (CS) is one technique used to compress the data. It is mostly used in applications where the data volume is huge or the data acquisition process is too expensive to gather samples at the Nyquist rate. This paper aims to propose the Lion Mutated Crow search Algorithm (LM-CSA) to improve the performance of the CS model.
Design/methodology/approach
A new CS algorithm is presented in which the compression process undergoes three stages: design of a stable measurement matrix, signal compression and signal reconstruction. The compression itself follows a fixed working principle: signal transformation, computation of Θ and normalization. As the main contribution, the Θ value is evaluated with a new “enhanced bi-orthogonal wavelet filter,” whose scaling coefficients are optimally tuned for the compression. Tuning these coefficients is the central difficulty, and hence this work adopts a meta-heuristic strategy. A new hybrid algorithm, the “Lion Mutated Crow search Algorithm (LM-CSA),” is introduced to solve this optimization problem; it hybridizes the crow search algorithm (CSA) and the lion algorithm (LA).
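A minimal sketch of the three CS stages named above for a generic sparse signal. Orthogonal matching pursuit stands in for the paper's wavelet-domain, LM-CSA-tuned reconstruction, and the Gaussian measurement matrix, the sparsity level and all names are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: stable measurement matrix (i.i.d. Gaussian, scaled by 1/sqrt(m))
n, m, k = 256, 64, 8                  # signal length, measurements, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Stage 2: compression of a k-sparse test signal
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x                           # compressed measurements

# Stage 3: reconstruction by orthogonal matching pursuit (stand-in solver)
def omp(Phi, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coeffs
    return x_hat

x_hat = omp(Phi, y, k)
print("relative L2 error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```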
Findings
Finally, the proposed LM-CSA model is compared against the traditional models in terms of certain error measures, namely mean error percentage (MEP), symmetric mean absolute percentage error (SMAPE), mean absolute scaled error, mean absolute error (MAE), root mean square error, L1-norm, L2-norm and infinity-norm. For ECG analysis under bior 3.1, LM-CSA is 56.6, 62.5 and 81.5% better than the bi-orthogonal wavelet in terms of MEP, SMAPE and MAE, respectively. Under bior 3.7 for ECG analysis, in terms of L1-norm, LM-CSA is 0.15% better than the genetic algorithm (GA), 0.10% better than particle swarm optimization (PSO), 0.22% better than firefly (FF), 0.22% better than CSA and 0.14% better than LA. Further, for EEG analysis, LM-CSA is 86.9 and 91.2% better than the traditional bi-orthogonal wavelet under bior 3.1. Under bior 3.3, LM-CSA is 91.7 and 73.12% better than the bi-orthogonal wavelet in terms of MAE and MEP, respectively. Under bior 3.5 for EEG, the L1-norm of LM-CSA is 0.64% better than GA, 0.43% better than PSO, 0.62% better than FF, 0.84% better than CSA and 0.60% better than LA.
Originality/value
This paper presents a novel CS framework using the LM-CSA algorithm for EEG and ECG signal compression. To the best of the authors’ knowledge, this is the first work to use LM-CSA with an enhanced bi-orthogonal wavelet filter to enhance the CS capability as well as to reduce the errors.
Everton Boos, Fermín S.V. Bazán and Vanda M. Luchesi
Abstract
Purpose
This paper aims to reconstruct a spatially varying orthotropic conductivity based on a two-dimensional inverse heat conduction problem described by a partial differential equation (PDE) model with mixed boundary conditions. The proposed discretization uses a highly accurate technique and allows simple implementation. The authors also solve the related inverse problem in such a way that smoothness is enforced on the iterations, showing promising results in synthetic examples and in real problems with a moving heat source.
Design/methodology/approach
The discretization procedure applied to the model for the direct problem uses a pseudospectral collocation strategy in the spatial variables and the Crank–Nicolson method for the time-dependent variable. The related inverse problem of recovering the conductivity from temperature measurements is then solved by a modified version of the Levenberg–Marquardt method (LMM), which uses singular scaling matrices. Problems where data availability is limited are also considered, motivated by a face milling operation problem. Numerical examples are presented to indicate the accuracy and efficiency of the proposed method.
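As a rough illustration of such a modified step, the sketch below computes one Levenberg–Marquardt update in which the usual identity damping is replaced by a scaling matrix L that may be singular (a first-order difference operator is used as an example); the residual and Jacobian callables and all names are placeholders, not the authors' formulation:

```python
import numpy as np

def lm_step(residual, jac, p, lam, L):
    """One Levenberg-Marquardt step with a scaling matrix L.

    Solves (J^T J + lam * L^T L) delta = -J^T r, where L may be singular
    (e.g. a discrete smoothing operator). residual and jac are user-supplied
    callables returning the residual vector and its Jacobian at p.
    """
    r = residual(p)
    J = jac(p)
    A = J.T @ J + lam * (L.T @ L)
    delta = np.linalg.solve(A, -J.T @ r)
    return p + delta

# Example of a singular scaling matrix: first-order finite differences,
# which penalise roughness of the recovered conductivity profile.
def first_difference_matrix(n):
    return np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
```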
Findings
The paper presents a discretization of the PDE model aimed at simple implementation and good numerical performance. The modified version of the LMM, introduced with singular scaling matrices, shows its capability to recover the sought quantities accurately within a low number of iterations. Numerical results showed a good fit between exact and approximate solutions for synthetic noisy data, and quite acceptable inverse solutions when experimental data are inverted.
Originality/value
The paper is significant because of the pseudospectral approach, known for its high precision and easy implementation, and because of the use of singular regularization matrices in the LMM iterations, unlike classic implementations of the method, which has a positive impact on the reconstruction process.
Abstract
Purpose
The purpose of this paper is to propose a numerical procedure for discrete identification of the missing part of the domain boundary in a heat conduction problem. A new approach to sensitivity analysis is intended to give a better understanding of the influence of measurement error on boundary reconstruction.
Design/methodology/approach
The solution of Laplace’s equation is obtained using the Trefftz method, and then each of the sought boundary points can be derived numerically from a nonlinear equation. The sensitivity analysis comes down to the analytical evaluation of a sensitivity factor.
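A minimal sketch of that two-step idea under simplifying assumptions: a harmonic field is fitted with polynomial Trefftz functions by least squares, and a single missing boundary point is then recovered as the root of a scalar nonlinear equation. The synthetic field, the interior measurement points and the isotherm condition are all assumptions of this sketch, not the paper's setup:

```python
import numpy as np
from scipy.optimize import brentq

def trefftz_basis(x, y, order):
    """Harmonic Trefftz functions Re(z^k), Im(z^k) for z = x + iy (k = 0..order)."""
    z = np.asarray(x, dtype=float) + 1j * np.asarray(y, dtype=float)
    cols = [np.ones_like(z.real)]
    for k in range(1, order + 1):
        cols += [np.real(z ** k), np.imag(z ** k)]
    return np.column_stack(cols)

# Synthetic harmonic temperature field T = x^2 - y^2 + x, sampled at interior points
xs, ys = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 6))
xm, ym = xs.ravel(), ys.ravel()
Tm = xm ** 2 - ym ** 2 + xm

# Fit the Trefftz coefficients to the measurements by least squares
A = trefftz_basis(xm, ym, order=2)
c, *_ = np.linalg.lstsq(A, Tm, rcond=None)

def T(x, y):
    return (trefftz_basis(np.array([x]), np.array([y]), order=2) @ c)[0]

# A missing boundary point is then the root of a scalar nonlinear equation,
# e.g. the point on the line x = 0.3 where the boundary isotherm T = 0 is met.
y_boundary = brentq(lambda y: T(0.3, y), 0.0, 2.0)   # ~0.62 for this field
```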
Findings
The proposed method recovers the unknown boundary very accurately, including irregular shapes. Even a very large number of boundary points can be determined without causing computational problems. The sensitivity factor provides a quantitative assessment of the relationship between temperature measurement errors and boundary identification errors. The numerical examples show that some boundary reconstruction problems are error-sensitive by nature, but such problems can be recognized with the use of the sensitivity factor.
Originality/value
The present approach based on the Trefftz method separates, in computational terms, the determination of the coefficients appearing in the Trefftz method from the determination of the missing coordinates of the sought boundary points. Owing to the introduced sensitivity factor, a more profound sensitivity analysis was successfully conducted.
Shufeng Tang, Jingfang Ji, Yun Zhi, Wei Yuan, Hong Chang, Xin Wang and Xiaodong Guo
Abstract
Purpose
Continuum robots offer unique advantages in various specialized environments, particularly in confined or hard-to-reach spaces. Inverse kinematics and real-time shape estimation constitute crucial aspects of closed-loop control for continuum robots, presenting challenging problems. This paper aims to present an inverse kinematics and shape reconstruction method, which relies solely on the knowledge of base and end positions and orientations.
Design/methodology/approach
Based on the constant curvature assumption, continuum robots are regarded as spatial curves composed of circular arcs. Using geometric relationships, the mathematical relationships between the arc chords, points on the bisecting plane and the coordinate axes are established. On this basis, the analytical solution of the inverse kinematics of continuum robots is derived. Using the positions and orientations of the base and end of the continuum robot, the Levenberg–Marquardt algorithm is used to solve for the positions of the cubic Bezier control points, and a new method for spatial shape reconstruction of continuum robots is proposed.
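The sketch below illustrates the second step in a strongly simplified form: the inner control points of a cubic Bezier curve are placed along the base and end tangents and their distances are fitted with a Levenberg–Marquardt solver. The residual choices (backbone-length match plus a symmetry condition), the example poses and all names are assumptions of this sketch, not the authors' formulation:

```python
import numpy as np
from scipy.optimize import least_squares

def bezier(P, t):
    """Cubic Bezier curve with control points P (4 x 3), evaluated at parameters t."""
    t = t[:, None]
    return ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
            + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])

def reconstruct_shape(p0, t0, p3, t3, backbone_length):
    """Place the inner control points along the base/end tangents and use a
    Levenberg-Marquardt fit for their distances (simplified stand-in)."""
    def residuals(ab):
        a, b = ab
        P = np.array([p0, p0 + a * t0, p3 - b * t3, p3])
        pts = bezier(P, np.linspace(0.0, 1.0, 200))
        length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
        return [length - backbone_length, a - b]   # length match + symmetry
    sol = least_squares(residuals, x0=[backbone_length / 3] * 2, method="lm")
    a, b = sol.x
    return np.array([p0, p0 + a * t0, p3 - b * t3, p3])

# Usage sketch with illustrative base/end poses (unit tangents, lengths in mm)
P = reconstruct_shape(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                      np.array([60.0, 0.0, 80.0]), np.array([1.0, 0.0, 0.0]),
                      backbone_length=150.0)
```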
Findings
The inverse kinematics and spatial shape reconstruction simulation of the continuum robot are carried out, and the spatial shape measurement experimental platform for the continuum robot is constructed to compare the measured and reconstructed spatial shapes. The results show that the maximum relative error between the actual shape and the reconstructed shape of the continuum robot is 2.08%, which verifies the inverse kinematics and shape reconstruction model. Additionally, when the bending angle of a single bending section of the continuum robot is less than 135°, the shape reconstruction accuracy is higher.
Originality/value
The proposed inverse kinematics solution method avoids iterative solving, and the shape reconstruction model does not rely on mechanical models. It has the advantages of being simple to solve, highly accurate and fast in computation, making it suitable for real-time control of continuum robots.
J. Irša and A.N. Galybin
Abstract
Purpose
The purpose of this paper is to consider reconstructions of potential 2D fields from discrete measurements. Two potential processes are addressed, steady flow and heat conduction. In the first case, the flow speed and streamlines are determined from discrete data on flow directions; in the second case, the temperature and flux are recovered from temperature measurements at discrete points.
Design/methodology/approach
The method employs the Trefftz element principle and collocation. The domain is seen as a combination of elements, on each of which the solution is sought as a linear holomorphic function satisfying the governing equations a priori. Continuity of the piecewise holomorphic function is imposed at collocation points located on the element interfaces; these form the first group of equations. The second group of equations is formed by addressing the measured data, so the matrix coefficients may reflect experimental errors. In the case of fluid flow, all equations are homogeneous, so one normalising equation is added to ensure the existence of a non-trivial solution. The system is over-determined; it is solved by the least squares method.
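A toy version of this assembly for the heat conduction case, with just two square elements sharing the interface x = 1; the element geometry, collocation points, measurement points and the synthetic field are all assumptions of this sketch (the normalising equation is not needed here because the temperature data make the system non-homogeneous):

```python
import numpy as np

# On each element the temperature is the real part of a linear holomorphic
# function f_j(z) = a_j + b_j z. Unknowns per element: Re a, Im a, Re b, Im b.
def re_row(x, y, elem):           # row giving Re f_elem(x + iy)
    r = np.zeros(8)
    o = 4 * elem
    r[o:o + 4] = [1.0, 0.0, x, -y]
    return r

def im_row(x, y, elem):           # row giving Im f_elem(x + iy)
    r = np.zeros(8)
    o = 4 * elem
    r[o:o + 4] = [0.0, 1.0, y, x]
    return r

rows, rhs = [], []

# Group 1: continuity of the piecewise holomorphic function at interface points x = 1
for yc in np.linspace(0.1, 0.9, 5):
    rows.append(re_row(1.0, yc, 0) - re_row(1.0, yc, 1)); rhs.append(0.0)
    rows.append(im_row(1.0, yc, 0) - im_row(1.0, yc, 1)); rhs.append(0.0)

# Group 2: temperature measurements (synthetic exact field T = 1 + 2x - y)
measurements = [(0.3, 0.4, 0), (0.7, 0.2, 0), (1.3, 0.8, 1), (1.8, 0.5, 1)]
for x, y, elem in measurements:
    rows.append(re_row(x, y, elem)); rhs.append(1.0 + 2 * x - y)

# Over-determined system solved in the least-squares sense; the imaginary part
# (the harmonic conjugate) is only fixed up to an additive constant, for which
# lstsq returns the minimum-norm choice.
c, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```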
Findings
For the heat flow problem, the determination of the heat flux is unique, while for the fluid flow, the determined streamlines are unique and the speed is determined up to one free positive multiplicative constant. Several examples are presented to illustrate the methods and investigate their efficiency and sensitivity to noisy data.
Research limitations/implications
The approach can be applied to other 2D potential problems.
Originality/value
The paper studies two novel formulations of the reconstruction problem for 2D potential fields. It is shown that the suggested numerical method is able to deal directly with discrete experimental data.
Qingxian Jia, Huayi Li, Xueqin Chen and Yingchun Zhang
Abstract
Purpose
The purpose of this paper is to achieve fault reconstruction for reaction wheels in spacecraft attitude control systems (ACSs) subject to space disturbance torques.
Design/methodology/approach
Considering the influence of rotating reaction wheels on the spacecraft attitude dynamics, a novel non-linear learning observer is proposed to robustly reconstruct loss-of-effectiveness faults of the reaction wheels, and its stability is proven using Lyapunov’s indirect method. Further, the proposed approach is extended to the reconstruction of bias faults of reaction wheels in spacecraft ACSs.
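As a schematic illustration of the learning-observer idea (not the observer designed in the paper), the discrete-time sketch below updates a fault estimate from its previous value and the output estimation error; all matrices, gains and names are assumptions of this sketch:

```python
import numpy as np

def learning_observer_step(x_hat, f_hat, y, u, A, B, E, C, Lx, K1, K2, dt):
    """One discrete-time update of a simple learning observer (schematic sketch).

    x_hat : state estimate          f_hat : fault estimate (e.g. effectiveness loss)
    y     : measured output         u     : commanded control torque
    The fault estimate is 'learned' from its previous value and the output error.
    """
    y_err = y - C @ x_hat
    # State observer with the fault injected through the fault distribution matrix E
    x_hat = x_hat + dt * (A @ x_hat + B @ u + E @ f_hat + Lx @ y_err)
    # Learning law: reuse the previous estimate and correct it with the output error
    f_hat = K1 @ f_hat + K2 @ y_err
    return x_hat, f_hat
```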
Findings
The numerical example and simulation demonstrate the effectiveness of the proposed fault-reconstructing methods.
Practical implications
This paper has implications for improving the reliability and survivability of on-orbit spacecraft.
Originality/value
This paper proposes a novel non-linear learning observer-based fault reconstruction method for reaction wheels in spacecraft ACSs.
Abstract
Purpose
As manufacturing technology has developed, digital models from advanced measuring devices have been widely used in manufacturing sectors. To speed up the production cycle and reduce the extra errors introduced by surface reconstruction processes, directly machining digital models in the polygonal stereolithography (STL) format has been considered an effective approach in rapid digital manufacturing. In machining processes, Cutter Location (CL) data for numerical control (NC) machining are usually generated from an offset model, which is created by offsetting each vertex of the original model along its vertex vector. However, this method has the drawback of overcutting the offset model. The purpose of this paper is to solve the overcut problem through an error compensation algorithm applied to the vertex offset model.
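The sketch below illustrates the vertex-offset idea and one simple way the overcut could be compensated, by enlarging the offset according to the largest angle between the vertex normal and the adjacent facet normals; this angle-based correction and all names are assumptions of this sketch, not the compensation algorithm developed in the paper:

```python
import numpy as np

def facet_normals(vertices, facets):
    """Unit normals of triangular facets (vertices: (n,3), facets: (m,3) indices)."""
    v0, v1, v2 = vertices[facets[:, 0]], vertices[facets[:, 1]], vertices[facets[:, 2]]
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def offset_vertices(vertices, facets, radius):
    """Offset each vertex along its averaged vertex normal by the cutter radius,
    then enlarge the offset so the moved vertex keeps at least `radius` distance
    to every adjacent facet plane (a simple illustration of overcut compensation)."""
    fn = facet_normals(vertices, facets)
    offset = vertices.copy()
    for i in range(len(vertices)):
        adjacent = np.where(np.any(facets == i, axis=1))[0]
        vn = fn[adjacent].sum(axis=0)
        vn /= np.linalg.norm(vn)
        # Naive offset: the distance to an adjacent facet is radius*cos(theta) < radius,
        # which is the source of the overcut. Compensate using the largest angle.
        cos_min = np.min(fn[adjacent] @ vn)
        offset[i] = vertices[i] + vn * (radius / max(cos_min, 1e-6))
    return offset
```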
Design/methodology/approach
Based on the analysis of the vertex offset method and the offset model it generates, the authors developed and implemented an error compensation method to correct the offset models and generate accurate CL data for the subsequent machining process. This error compensation method was verified on three polygonal models, and the generated tool paths were used to machine a real part.
Findings
The developed error compensation algorithm corrects the offset models, yields accurate CL data for the subsequent machining process and effectively eliminates the overcut drawback of the vertex offset method.
Research limitations/implications
The error compensation method for the vertex offset model is used to generate CL data for a ball-end cutter.
Practical implications
Most current studies of CL data generation for an STL model focus on determining the offset vectors of the vertices, with the offset distance usually fixed to the radius of the cutter used. Thus, the overcut problem in the offset model is inevitable and has received little attention. The authors propose an effective approach to compensate for the insufficient offset distance of the vertices and solve the overcut problem.
Social implications
Generating tool paths directly from an STL model reduces the error of surface reconstruction and speeds up the machining process.
Originality/value
The authors investigate the overcut problem that occurs when vertex offsetting is used for CL data generation and present a new error compensation algorithm for generating CL data that effectively solves the overcut problem.
Junfu Chen, Xiaodong Zhao and Dechang Pi
Abstract
Purpose
The purpose of this paper is to ensure the stable operation of satellites in orbit and to assist ground personnel in continuously monitoring the satellite telemetry data and finding anomalies in advance, which can improve the reliability of satellite operation and prevent catastrophic losses.
Design/methodology/approach
This paper proposes a deep auto-encoder (DAE) anomaly advance-warning framework for satellite telemetry data. First, grey correlation analysis is performed and important feature attributes are extracted to construct feature vectors, and a variational auto-encoder with a bidirectional long short-term memory generative adversarial network discriminator (VAE/BLGAN) is built. Then, the Mahalanobis distance is used to measure the reconstruction score between the input and its reconstruction. According to the periodic characteristics of satellite operation, a dynamic threshold method based on a periodic time window is proposed. Satellite health monitoring and advance warning are achieved using the reconstruction scores and the dynamic thresholds.
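A minimal sketch of the scoring and thresholding steps, assuming the auto-encoder reconstructions are already available; the residual covariance, the per-phase mean-plus-k-sigma threshold and all names are assumptions of this sketch, not the paper's exact definitions:

```python
import numpy as np

def mahalanobis_scores(x, x_rec, cov):
    """Reconstruction score: Mahalanobis distance between each telemetry vector
    and its auto-encoder reconstruction (cov estimated from nominal residuals)."""
    diff = x - x_rec
    cov_inv = np.linalg.inv(cov)
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

def periodic_threshold(scores, period, k=3.0):
    """Dynamic threshold per phase of the operating period: mean + k*std of the
    scores that fall into the same phase bin (illustrative definition)."""
    phases = np.arange(len(scores)) % period
    thr = np.empty(period)
    for p in range(period):
        s = scores[phases == p]
        thr[p] = s.mean() + k * s.std()
    return thr[phases]                 # threshold aligned with each sample

# Usage sketch: raise a warning wherever the score exceeds its phase-dependent threshold
# scores = mahalanobis_scores(x, x_rec, cov)
# alarms = scores > periodic_threshold(scores, period=1440)   # e.g. samples per orbit
```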
Findings
Experimental results indicate that the DAE method can detect that satellite telemetry data appear abnormal and trigger a warning before the anomaly occurs, thus allowing enough time for troubleshooting. This paper further verifies that the proposed VAE/BLGAN model has a stronger data learning ability than the other two auto-encoder models and is sensitive to satellite monitoring data.
Originality/value
This paper provides a DAE framework to apply in the field of satellite health monitoring and anomaly advance warning. To the best of the authors’ knowledge, this is the first paper to combine DAE methods with satellite anomaly detection, which can promote the application of artificial intelligence in spacecraft health monitoring.