Hubert Zangl and Stephan Mühlbacher-Karrer
Abstract
Purpose
The purpose of this paper is to reduce the artifacts in fast Bayesian reconstruction images in electrical tomography. This is particularly important with respect to object detection in electrical tomography applications.
Design/methodology/approach
The authors propose applying the Box-Cox transformation in Bayesian linear minimum mean square error (BMMSE) reconstruction to better accommodate the non-linear relation between the capacitance matrix and the permittivity distribution. The authors compare the results of the original algorithm with the modified algorithm and with the ground truth in both simulation and experiments.
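A minimal sketch of how a Box-Cox transform could precede the linear BMMSE update is given below; the function and variable names (boxcox_bmmse, K, lam) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Sketch only (assumed names, not the authors' code): the Box-Cox transform is
# applied to the capacitance readings before a precomputed linear MMSE (BMMSE)
# update, which better linearizes the capacitance/permittivity relation.
def boxcox_bmmse(c_meas, lam, mu_c, mu_x, K):
    """c_meas: measured capacitances; lam: Box-Cox parameter;
    mu_c, mu_x: prior means in the transformed and permittivity domains;
    K: precomputed BMMSE gain matrix (cross-covariance times inverse covariance)."""
    # Box-Cox transform of the measurements (log for lam == 0)
    c_t = np.log(c_meas) if lam == 0 else (c_meas**lam - 1.0) / lam
    # Linear MMSE estimate of the permittivity distribution
    return mu_x + K @ (c_t - mu_c)
```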
Findings
The results show a 50 percent reduction of the mean square error caused by artifacts in low permittivity regions. Furthermore, the algorithm does not increase the computational complexity significantly, so the hard real-time constraints can still be met. The authors demonstrate that the algorithm also works with limited observation angles. This allows for object detection in real time, e.g., in robot collision avoidance.
Originality/value
This paper shows that extending BMMSE with the Box-Cox transformation leads to a significant improvement in the quality of the reconstruction image while hard real-time constraints are still met.
Sami Barmada, Alessandro Formisano, Dimitri Thomopulos and Mauro Tucci
Abstract
Purpose
This study aims to investigate the possible use of a deep neural network (DNN) as an inverse solver.
Design/methodology/approach
Different DNN-based models are designed and proposed for solving inverse electromagnetic problems, either as fast solvers for the direct problem or as straightforward inverse problem solvers, with the TEAM 25 benchmark problem used for the sake of exemplification.
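As a rough illustration of the straightforward inverse-solver use of a DNN, a small fully connected regression network mapping sampled field values to design parameters might look as follows; the layer sizes and input/output dimensions are assumptions, not the architecture used in the paper.

```python
import torch.nn as nn

# Sketch only: a fully connected DNN used as a straightforward inverse solver,
# mapping sampled flux-density values to design parameters (e.g. the TEAM 25
# die-press geometry). Sizes are illustrative assumptions.
class InverseSolver(nn.Module):
    def __init__(self, n_field_samples=10, n_design_params=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_field_samples, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_design_params),  # regressed geometry parameters
        )

    def forward(self, b_field):
        return self.net(b_field)

# Training pairs would come from a forward (e.g. FEM) solver evaluated on many
# designs; the network is then fit in the inverse direction, field -> design.
```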
Findings
Using DNNs as straightforward inverse problem solvers has relevant advantages in terms of promptness but requires a careful treatment of the underlying problem ill-posedness.
Originality/value
This work is one of the first attempts to exploit DNNs for inverse problem resolution in low-frequency electromagnetism. Results on the TEAM 25 test problem show the potential effectiveness of the approach but also highlight the need for a careful choice of the training data set.
Olli Väänänen and Timo Hämäläinen
Abstract
Purpose
Minimizing the energy consumption of a wireless sensor node is important for lengthening battery lifetime. Radio transmission is the most energy-consuming task in a wireless sensor node, and by compressing the sensor data in online mode, it is possible to reduce the number of transmission periods. This study aims to demonstrate that temporal compression is an effective method for lengthening the lifetime of a battery-powered wireless sensor node.
Design/methodology/approach
In this study, the energy consumption of a LoRa-based sensor node was evaluated and measured. The experiments were conducted with different LoRaWAN data rate parameters, with and without compression algorithms implemented to compress sensor data in online mode. The effect of temporal compression algorithms on the overall energy consumption was measured.
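For illustration, one simple temporal compression scheme of this kind is send-on-delta (dead-band) compression, sketched below; the threshold value and function name are assumptions, and the algorithms evaluated in the study may differ.

```python
# Sketch of a send-on-delta (dead-band) temporal compression scheme: a sample is
# transmitted only when it deviates from the last transmitted value by more than
# a tolerance, reducing the number of LoRa transmission periods.
def send_on_delta(samples, tolerance):
    transmitted = []
    last_sent = None
    for t, value in enumerate(samples):
        if last_sent is None or abs(value - last_sent) > tolerance:
            transmitted.append((t, value))   # schedule a LoRa uplink
            last_sent = value
    return transmitted

# Example: temperature readings with a 0.5 degree dead-band
print(send_on_delta([20.0, 20.1, 20.2, 20.9, 21.0, 21.8], tolerance=0.5))
```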
Findings
Energy consumption was measured with different LoRaWAN spreading factors. The LoRaWAN transmission energy consumption significantly depends on the spreading factor used. The other significant factors affecting the LoRa-based sensor node energy consumption are the measurement interval and sleep mode current consumption. The results show that temporal compression algorithms are an effective method for reducing the energy consumption of a LoRa sensor node by reducing the number of LoRa transmission periods.
Originality/value
This paper demonstrates, through a practical case, that it is possible to reduce the overall energy consumption of a wireless sensor node by compressing sensor data in online mode with simple temporal compression algorithms.
Wanru Xie, Yixin Zhao, Gang Zhao, Fei Yang, Zilong Wei and Jinzhao Liu
Abstract
Purpose
High-speed turnouts are structurally more complex and thus may cause abnormal vibration of the car body of high-speed trains, affecting driving safety and passenger riding experience. Therefore, it is necessary to analyze the data characteristics of continuous hunting of high-speed trains passing through turnouts and to propose a diagnostic method for engineering applications.
Design/methodology/approach
First, Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) is performed to determine the first characteristic component of the car body’s lateral acceleration. Then, the Short-Time Fourier Transform (STFT) is performed to calculate the marginal spectra. Finally, the presence of a continuous hunting problem is determined based on the results of the comparison calculations and diagnostic thresholds. To improve computational efficiency, permutation entropy (PE) is used as a fast indicator to identify turnouts with potential problems.
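A minimal sketch of the permutation entropy indicator used for fast screening is given below; the embedding order and delay are hypothetical defaults, not the study's settings.

```python
import numpy as np
from math import factorial

# Sketch of permutation entropy (PE), the fast screening indicator; the order and
# delay values are illustrative assumptions.
def permutation_entropy(x, order=3, delay=1):
    counts = {}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window))        # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    # Normalize by log(order!) so that PE lies in [0, 1]
    return float(-(p * np.log(p)).sum() / np.log(factorial(order)))

# A PE below roughly 0.90 would flag a turnout for the full CEEMDAN/STFT analysis.
```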
Findings
Under continuous hunting conditions, the PE is less than 0.90, the ratio of the maximum peak value of the signal component to the original signal peak value exceeds 0.7, and there is an energy band in the STFT time-frequency map corresponding to a frequency distribution range of 1–2 Hz.
Originality/value
The research results have revealed the lateral vibration characteristics of the high-speed train’s car body during continuous hunting when passing through turnouts. On this basis, an effective diagnostic method has been proposed. With a focus on practical engineering applications, a rapid screening index for identifying potential issues has been proposed, significantly enhancing the efficiency of diagnostic processes.
Kai Zheng, Xianjun Yang, Yilei Wang, Yingjie Wu and Xianghan Zheng
Abstract
Purpose
The purpose of this paper is to alleviate the problem of poor robustness and over-fitting caused by large-scale data in collaborative filtering recommendation algorithms.
Design/methodology/approach
Interpreting user behavior from the probabilistic perspective of hidden variables helps address robustness and over-fitting problems. Constructing a recommendation network by variational inference can effectively solve the complex distribution calculation in the probabilistic recommendation model. Based on this analysis, this paper uses a variational autoencoder to construct a generative network that can restore user-rating data, addressing the poor robustness and over-fitting caused by large-scale data. To deal with the KL-vanishing problem in variational-inference deep learning models, this paper further optimizes the model with the KL annealing and Free Bits methods.
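As a hedged sketch of the KL-annealing idea, a VAE-style recommendation loss might ramp the KL weight during training as follows; the names, the multinomial likelihood and the linear schedule are assumptions, not the paper's exact model.

```python
import torch

# Sketch of a VAE-style recommender loss with KL annealing (assumed formulation).
def vae_loss(recon_logits, ratings, mu, logvar, step, anneal_steps=10000, beta_max=1.0):
    # Reconstruction term over the user's rating vector
    recon = -(torch.log_softmax(recon_logits, dim=-1) * ratings).sum(dim=-1).mean()
    # KL divergence between q(z|x) = N(mu, diag(exp(logvar))) and N(0, I)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    # KL annealing: ramp the KL weight from 0 to beta_max to avoid KL vanishing;
    # Free Bits would instead clamp each latent dimension's KL at a fixed floor.
    beta = min(beta_max, step / anneal_steps)
    return recon + beta * kl
```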
Findings
The effect of the basic model is considerably improved after using the KL annealing or Free Bits method to mitigate KL vanishing. The proposed models perform worse than competitors on small data sets, such as MovieLens 1M, but perform better on large data sets such as MovieLens 10M and MovieLens 20M.
Originality/value
This paper presents the use of the variational inference model for collaborative filtering recommendation and introduces the KL annealing and Free Bits methods to improve the basic model. Because variational inference training models the probability distribution of the hidden vector, the problem of poor robustness and overfitting is alleviated. When the amount of data is relatively large in a practical application scenario, the fitted probability distribution of the actual data can better represent the users and the items. Therefore, using variational inference for collaborative filtering recommendation is of practical value.
Abstract
Purpose
Transaction cost becomes significant when one holds many securities in a large portfolio whose capital allocations are frequently rebalanced due to variations in the non-stationary statistical characteristics of the asset returns. The purpose of this paper is to employ a sparsing method to sparse the eigenportfolios, so that the transaction cost can be reduced without any loss of performance.
Design/methodology/approach
The authors design pdf-optimized mid-tread Lloyd-Max quantizers based on the distribution of each eigenportfolio and then employ them to sparse the eigenportfolios, so that small-size orders can usually be ignored (sparsed); as a result, the trading costs are reduced.
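To make the sparsing idea concrete, the sketch below quantizes eigenportfolio weights with a mid-tread quantizer so that small weights map to zero; a uniform step stands in for the pdf-optimized Lloyd-Max levels, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch only: a mid-tread quantizer applied to eigenportfolio weights so that
# weights falling into the zero bin are dropped (sparsed). A uniform step replaces
# the pdf-optimized Lloyd-Max design; the bin count is an assumption.
def sparse_weights(weights, n_levels=7):
    step = (weights.max() - weights.min()) / n_levels
    # Mid-tread: inputs within +/- step/2 of zero quantize to exactly zero
    return step * np.round(weights / step)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=500)            # toy eigenportfolio weights
wq = sparse_weights(w)
print(f"sparsity: {np.mean(wq == 0):.0%}")      # fraction of orders never placed
```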
Findings
The authors find that the sparsing technique addressed in this paper is methodical, easy to implement for large portfolios and offers a significant reduction in transaction cost without any loss of performance.
Originality/value
In this paper, the authors investigated the performance of the sparsed eigenportfolios of stock returns in the S&P 500 Index. It is shown that the sparsing method is simple to implement and provides high levels of sparsity without causing PNL loss. Therefore, the transaction cost of managing a large portfolio is reduced by employing such an efficient sparsing method.
Shuangxi Huang, Zhixuan Jia, Yushun Fan, Taiwen Feng, Ting He, Shizhen Bai and Zhiyong Wu
Abstract
Purpose
The purpose of this paper is to better understand and study the architecture and system characteristics of the underlying support platform for crowd systems, by recognizing that the characteristics of the service internet are similar to the coordination characteristics among the massive units in the underlying platform of a crowd system, and by studying the form, nature and guidelines of the service internet.
Design/methodology/approach
This paper points out the connection between the underlying support platform of the crowd system and the service internet, describes the framework and ideas for researching the service internet and then proposes key technologies and solutions for the service internet architecture and system characteristics.
Findings
The research unit in the underlying support platform of the crowd system can be regarded as a service unit. Therefore, the platform can also be regarded, to some extent, as a service internet. Ideas and technical approaches for studying the form, criteria and characteristics of the service internet are also provided.
Originality/value
This paper can guide relevant staff in better building the underlying support platform of crowd systems, and it can provide a highly robust and sustainable platform for research on crowd science and engineering in the future.
Sheryl Brahnam, Loris Nanni, Shannon McMurtrey, Alessandra Lumini, Rick Brattin, Melinda Slack and Tonya Barrier
Abstract
Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex, multifactorial, and geared toward research. The goals of this work are twofold: 1) to develop a new video dataset for automatic neonatal pain detection called iCOPEvid (infant Classification Of Pain Expressions videos), and 2) to present a classification system that sets a challenging comparison performance on this dataset. The iCOPEvid dataset contains 234 videos of 49 neonates experiencing a set of noxious stimuli, a period of rest, and an acute pain stimulus. From these videos, 20-second segments are extracted and grouped into two classes: pain (49) and nopain (185), with the nopain video segments handpicked to produce a highly challenging dataset. An ensemble of twelve global and local descriptors with a Bag-of-Features approach is utilized to improve the performance of some new descriptors based on Gaussian of Local Descriptors (GOLD). The basic classifier used in the ensembles is the Support Vector Machine, and decisions are combined by sum rule. The results are compared with standard methods, some deep learning approaches, and 185 human assessments. Our best machine learning methods are shown to outperform the human judges.
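A rough sketch of the sum-rule fusion of per-descriptor SVMs described above is shown below; generic precomputed feature matrices stand in for the actual GOLD and Bag-of-Features descriptors, and all names are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of the sum-rule combination of per-descriptor SVMs; each element of
# train_features/test_features is the feature matrix produced by one descriptor.
def sum_rule_ensemble(train_features, y_train, test_features):
    scores = None
    for X_tr, X_te in zip(train_features, test_features):
        clf = SVC(probability=True).fit(X_tr, y_train)
        p = clf.predict_proba(X_te)                   # per-descriptor probabilities
        scores = p if scores is None else scores + p  # sum rule
    return scores.argmax(axis=1)                      # pain / no-pain decision
```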
En-Ze Rui, Guang-Zhi Zeng, Yi-Qing Ni, Zheng-Wei Chen and Shuo Hao
Abstract
Purpose
Current methods for flow field reconstruction mainly rely on data-driven algorithms which require an immense amount of experimental or field-measured data. Physics-informed neural network (PINN), which was proposed to encode physical laws into neural networks, is a less data-demanding approach for flow field reconstruction. However, when the fluid physics is complex, it is tricky to obtain accurate solutions under the PINN framework. This study aims to propose a physics-based data-driven approach for time-averaged flow field reconstruction which can overcome the hurdles of the above methods.
Design/methodology/approach
A multifidelity strategy leveraging PINN and a nonlinear information fusion (NIF) algorithm is proposed. Plentiful low-fidelity data are generated from the predictions of a PINN constructed purely from the Reynolds-averaged Navier–Stokes (RANS) equations, while sparse high-fidelity data are obtained by field or experimental measurements. The NIF algorithm is performed to elicit a multifidelity model, which blends the nonlinear cross-correlation information between the low- and high-fidelity data.
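To illustrate the low-fidelity PINN ingredient of this strategy, a minimal sketch with only a continuity residual is given below; the full RANS momentum residuals used in the study are omitted, and the network sizes and names are assumptions.

```python
import torch
import torch.nn as nn

# Sketch of a low-fidelity PINN component: a network predicting mean velocities
# (u, v) and pressure p, penalized by a PDE residual obtained via automatic
# differentiation. Only the continuity residual is shown here.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 3))               # outputs: u, v, p

def continuity_residual(xy):
    xy = xy.clone().requires_grad_(True)
    u, v, _ = net(xy).unbind(dim=-1)
    du = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    dv = torch.autograd.grad(v.sum(), xy, create_graph=True)[0]
    return du[:, 0] + dv[:, 1]                      # du/dx + dv/dy = 0

xy = torch.rand(1024, 2)                            # collocation points
physics_loss = continuity_residual(xy).pow(2).mean()
```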
Findings
Two experimental cases are used to verify the capability and efficacy of the proposed strategy through comparison with other widely used strategies. It is revealed that the missing flow information within the whole computational domain can be favorably recovered by the proposed multifidelity strategy using sparse measurement/experimental data. The elicited multifidelity model inherits the underlying physics of the low-fidelity PINN predictions and rectifies the low-fidelity predictions over the whole computational domain. The proposed strategy is markedly superior to the other comparison strategies in terms of reconstruction accuracy.
Originality/value
In this study, a physics-informed data-driven strategy for time-averaged flow field reconstruction is proposed which extends the applicability of the PINN framework. In addition, embedding physical laws when training the multifidelity model leads to less data demand for model development compared to purely data-driven methods for flow field reconstruction.
Srinimalan Balakrishnan Selvakumaran and Daniel Mark Hall
Abstract
Purpose
The purpose of this paper is to investigate the feasibility of an end-to-end simplified and automated reconstruction pipeline for digital building assets using the design science research approach. Current methods to create digital assets by capturing the state of existing buildings can provide high accuracy but are time-consuming, expensive and difficult.
Design/methodology/approach
Using design science research, this research identifies the need for a crowdsourced and cloud-based approach to reconstruct digital building assets. The research then develops and tests a fully functional smartphone application prototype. The proposed end-to-end smartphone workflow begins with data capture and ends with user applications.
Findings
The resulting implementation can achieve a realistic three-dimensional (3D) model characterized by different typologies, minimal trade-off in accuracy and low processing costs. By crowdsourcing the images, the proposed approach can reduce costs for asset reconstruction by an estimated 93% compared to manual modeling and 80% compared to locally processed reconstruction algorithms.
Practical implications
The resulting implementation achieves “good enough” reconstruction of as-is 3D models with minimal trade-offs in accuracy compared to automated approaches and 15× cost savings compared to a manual approach. Potential facility management use cases include issue and information tracking, 3D mark-up and multi-model configurators.
Originality/value
Through user engagement, development, testing and validation, this work demonstrates the feasibility and impact of a novel crowdsourced and cloud-based approach for the reconstruction of digital building assets.