Search results
1 – 10 of 64
Jinghan Du, Haiyan Chen and Weining Zhang
Abstract
Purpose
In large-scale monitoring systems, sensors deployed at different locations collect massive amounts of useful time-series data, which can support real-time data analytics and related applications. However, owing to hardware failures, sensor nodes often stop working, so the collected data are commonly incomplete. The purpose of this study is to predict and recover the missing data in sensor networks.
Design/methodology/approach
Considering the spatio-temporal correlation of large-scale sensor data, this paper proposes a data recovery model for sensor networks based on a deep learning method, the deep belief network (DBN). Specifically, when a sensor fails, its own historical time-series data and the real-time data from surrounding sensor nodes that are highly similar to the failed node, as identified by the proposed similarity filter, are collected first. Then, a DBN extracts a high-level feature representation of these spatio-temporally correlated data. Moreover, a reconstruction error-based algorithm is proposed to determine the structure of the DBN model. Finally, the missing data are predicted from these features by a single-layer neural network.
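As a minimal illustration of the neighbor-selection step, the similarity filter can be sketched as ranking surrounding sensors by how closely their historical series track the failed sensor's. The paper's actual similarity measure is not specified in this abstract, so Pearson correlation is used below as a hypothetical stand-in:

```python
import numpy as np

def select_similar_sensors(failed_history, neighbor_histories, k=3):
    """Rank neighboring sensors by similarity of their historical
    time series to the failed sensor's, and keep the top k.

    failed_history: 1-D array of past readings from the failed sensor.
    neighbor_histories: dict mapping sensor id -> 1-D array (same length).
    """
    scores = {}
    for sid, series in neighbor_histories.items():
        # Pearson correlation as a stand-in similarity measure.
        scores[sid] = np.corrcoef(failed_history, series)[0, 1]
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]
```

The top-ranked neighbors' real-time readings, together with the failed sensor's own history, would then feed the DBN feature extractor.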
Findings
This paper collects a noise data set from an airport monitoring system for its experiments. Various comparative experiments show that the proposed algorithms are effective. The proposed data recovery model is compared with several classical models, and the experimental results show that the deep learning-based model achieves not only better prediction accuracy but also better training time and model robustness.
Originality/value
A deep learning method is investigated for the data recovery task and proves effective compared with previous methods. This may provide practical experience for applying deep learning methods.
Kam C. Chan, Feida Zhang and Weining Zhang
Abstract
Purpose
The purpose of this paper is to study the relationship between institutional holdings and analyst coverage in the context of the heterogeneous nature of institutional investors.
Design/methodology/approach
Similar to prior studies (e.g. Ke and Ramalingegowda; Ramalingegowda and Yu), this paper obtains institutional investors' trading classifications (transient, dedicated, and quasi‐indexing) from Brian Bushee directly. To examine the hypotheses, the paper uses a two‐step instrumental variable approach demonstrated in O'Brien and Bhushan to mitigate the simultaneity relationship between the change in analyst coverage and the change in the number of heterogeneous institutional investors.
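The two-step instrumental-variable design can be illustrated with a generic two-stage least squares (2SLS) estimator. This is a textbook sketch, not the paper's exact specification, and the variable names are hypothetical:

```python
import numpy as np

def two_stage_ls(y, x_endog, z, exog=None):
    """Generic two-step instrumental-variable (2SLS) estimate.

    y        : outcome (e.g. change in analyst coverage), shape (n,)
    x_endog  : endogenous regressor (e.g. change in the number of
               institutional investors), shape (n,)
    z        : instrument(s), shape (n,) or (n, m)
    exog     : optional exogenous controls, shape (n, p)
    Returns the second-stage coefficient vector (intercept first).
    """
    n = len(y)
    Z = np.column_stack([np.ones(n), np.atleast_2d(z.T).T])
    if exog is not None:
        Z = np.column_stack([Z, exog])
    # Stage 1: project the endogenous regressor onto the instruments.
    b1, *_ = np.linalg.lstsq(Z, x_endog, rcond=None)
    x_hat = Z @ b1
    # Stage 2: regress the outcome on the fitted values (plus controls).
    X = np.column_stack([np.ones(n), x_hat])
    if exog is not None:
        X = np.column_stack([X, exog])
    b2, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b2
```

The first stage purges the simultaneity between the two changes; the second-stage coefficient is then interpreted as the causal effect of interest.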
Findings
The findings suggest that such relations differ among transient, dedicated, and quasi‐indexing institutional investors. Specifically, there are three major results. First, a change in analyst coverage has the lowest impact on the change in the number of dedicated institutional investors. Second, a change in the number of transient institutional investors has a greater impact on the change in analyst coverage than changes in the number of dedicated and quasi‐indexing institutional investors. Third, changes in analysts' buy or sell recommendations have the least impact on the change in the number of dedicated institutions, relative to transient and quasi‐indexing institutions.
Research limitations/implications
The findings suggest that institutional investors are not homogeneous. Research studies on institutional investors need to disentangle the differences among different types of institutions.
Originality/value
The paper provides a comprehensive study on different institutional investors and analyst coverage. The findings show the complex nature of the interaction between institutional investors and analyst coverage.
Abstract
The existing literature documents mixed evidence toward the association between corporate social responsibility (CSR) and corporate tax planning (e.g., Davis, Guenther, Krull, & Williams, 2016; Hoi, Wu, & Zhang, 2013). In this study, I aim to identify a causal relationship between CSR and tax planning, leveraging the staggered adoptions of constituency statutes in US states, which is a plausibly exogenous shock to firms' emphasis on their social responsibility. In general, the statutes permit firm directors to consider the interests of all constituents when making business decisions, including those who benefit from firms paying their fair share of income taxes. Thus, the adoption of the statutes raises the importance of firms' social responsibility in paying income taxes. Employing a staggered difference-in-differences (DiD) method, I find that firms incorporated in states that have adopted constituency statutes exhibit significantly higher effective tax rates (ETRs) based on current tax expense. This causal relationship suggests that managers, with the legitimacy to consider the social impact of tax avoidance, become less aggressive in tax planning. I further find that the effect of adoption is stronger for financially unconstrained firms and firms in retail businesses, where the demand (cost) for tax avoidance is lower (higher). Finally, I show that my main results are driven by firms located in states with a high sense of social responsibility and firms with high levels of tax avoidance prior to the adoption. Overall, the findings in this chapter contribute to the literature by delineating a negative causal relationship between CSR and tax avoidance and identifying a positive social impact brought by the passage of constituency legislation.
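A staggered DiD design of this kind is typically estimated with a two-way fixed-effects regression. The sketch below is a generic textbook version, not the chapter's exact specification, and the variable names are hypothetical:

```python
import numpy as np

def twfe_did(y, unit, time, treated):
    """Two-way fixed-effects difference-in-differences estimate of a
    homogeneous treatment effect.

    y, treated : (n,) outcome (e.g. current ETR) and 0/1 indicator for
                 firm-years after the state's constituency statute.
    unit, time : (n,) integer firm and year identifiers.
    """
    n = len(y)
    u_dum = (unit[:, None] == np.unique(unit)[None, :]).astype(float)
    t_dum = (time[:, None] == np.unique(time)[None, :]).astype(float)
    # Drop one dummy per set to avoid collinearity with the intercept.
    X = np.column_stack([np.ones(n), treated,
                         u_dum[:, 1:], t_dum[:, 1:]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]   # coefficient on the treatment indicator
```

A positive coefficient on the treatment indicator corresponds to the chapter's finding of higher effective tax rates after adoption.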
Wei Zhang, Xianghong Hua, Kegen Yu, Weining Qiu, Shoujian Zhang and Xiaoxing He
Abstract
Purpose
This paper aims to introduce the weighted squared Euclidean distance between points in signal space to improve the performance of Wi-Fi indoor positioning. Nowadays, received signal strength-based Wi-Fi indoor positioning, a low-cost indoor positioning approach, has attracted significant attention from both academia and industry.
Design/methodology/approach
The local principal gradient direction is introduced and used to define the weighting function, and an averaging algorithm based on the k-means algorithm is used to estimate the local principal gradient direction of each access point. Then, the correlation distance is used in the new method to find the k nearest calibration points. The weighted squared Euclidean distance between each of the nearest calibration points and the target point is calculated and used to estimate the position of the target point.
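The estimation step can be sketched roughly as follows. This is an illustrative reconstruction from the abstract, not the paper's implementation: the per-access-point weights are treated as given (the paper derives them from the local principal gradient direction), and inverse-distance weighting is assumed for the final position estimate:

```python
import numpy as np

def wknn_position(target_rss, fingerprints, positions, ap_weights, k=3):
    """Estimate a position from Wi-Fi RSS fingerprints.

    target_rss  : (m,) RSS vector observed at the unknown point.
    fingerprints: (n, m) RSS vectors at the n calibration points.
    positions   : (n, 2) coordinates of the calibration points.
    ap_weights  : (m,) non-negative per-access-point weights.
    """
    # Correlation distance picks the k nearest calibration points.
    cd = np.array([1.0 - np.corrcoef(target_rss, f)[0, 1]
                   for f in fingerprints])
    nearest = np.argsort(cd)[:k]
    # Weighted squared Euclidean distance in signal space between the
    # target and each selected calibration point.
    diff = fingerprints[nearest] - target_rss
    d2 = (ap_weights * diff ** 2).sum(axis=1)
    # Inverse-distance weighting of the selected calibration positions.
    w = 1.0 / (d2 + 1e-9)
    return (w[:, None] * positions[nearest]).sum(axis=0) / w.sum()
```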
Findings
Experiments are conducted and the results indicate that the proposed Wi-Fi indoor positioning approach considerably outperforms the weighted k nearest neighbor method. The new method also outperforms support vector regression and extreme learning machine algorithms in the absence of sufficient fingerprints.
Research limitations/implications
The weighted k nearest neighbor approach, the support vector regression algorithm and the extreme learning machine algorithm are three classic strategies for location determination using Wi-Fi fingerprinting. However, weighted k nearest neighbor suffers from dramatic performance degradation in the presence of multipath signal attenuation and environmental changes. The support vector regression algorithm requires more fingerprints to ensure desirable performance, and labeling Wi-Fi fingerprints is labor-intensive. The performance of the extreme learning machine algorithm may not be stable.
Practical implications
The new weighted squared Euclidean distance-based Wi-Fi indoor positioning strategy can improve the performance of Wi-Fi indoor positioning system.
Social implications
An effective received signal strength-based Wi-Fi indoor positioning system can substitute for the global positioning system, which does not work indoors. This effective and low-cost positioning approach is promising for many indoor location-based services.
Originality/value
A novel Wi-Fi indoor positioning strategy based on the weighted squared Euclidean distance is proposed in this paper to improve the performance of the Wi-Fi indoor positioning, and the local principal gradient direction is introduced and used to define the weighting function.
Wei Zhang, Xianghong Hua, Kegen Yu, Weining Qiu, Xin Chang, Bang Wu and Xijiang Chen
Abstract
Purpose
Nowadays, WiFi indoor positioning based on received signal strength (RSS) has become a research hotspot due to its low cost and ease of deployment. To further improve the performance of RSS-based WiFi indoor positioning, this paper proposes a novel position estimation strategy called radius-based domain clustering (RDC). This domain clustering technique avoids the issue of access point (AP) selection.
Design/methodology/approach
The proposed positioning approach uses each individual AP among all available APs to estimate the position of the target point. Then, following the circular error probable, the authors search for the decision domain that contains 50 per cent of the intermediate position estimates while minimizing the radius of its circle via the RDC algorithm. The final position estimate of the target point is obtained by averaging the intermediate position estimates in the decision domain.
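A rough sketch of the RDC idea, reconstructed from the abstract; for simplicity the candidate circle centres are restricted to the estimates themselves, which the paper may not do:

```python
import numpy as np

def rdc_estimate(estimates, frac=0.5):
    """Radius-based domain clustering over per-AP position estimates.

    estimates: (n, 2) intermediate positions, one per access point.
    Finds an (approximately) minimal-radius circle covering `frac` of
    the estimates and returns the mean of the covered ones.
    """
    n = len(estimates)
    m = int(np.ceil(frac * n))          # points the domain must cover
    best_idx, best_r = None, np.inf
    for c in estimates:
        d = np.linalg.norm(estimates - c, axis=1)
        order = np.argsort(d)
        r = d[order[m - 1]]             # radius needed to cover m points
        if r < best_r:
            best_r, best_idx = r, order[:m]
    return estimates[best_idx].mean(axis=0)
```

Because the final average uses only the tight half of the estimates, outlier estimates from poorly placed APs are discarded without any explicit AP selection step.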
Findings
Experiments are conducted, and comparison between the different position estimation strategies demonstrates that the new method has a better location estimation accuracy and reliability.
Research limitations/implications
The weighted k nearest neighbor approach and the Naive Bayes Classifier method are two classic position estimation strategies for location determination using WiFi fingerprinting. Both strategies are affected by AP selection, and an inappropriate selection of APs may degrade positioning performance considerably.
Practical implications
The RDC positioning approach can improve the performance of WiFi indoor positioning while avoiding the issue of AP selection and its related drawbacks.
Social implications
An effective RSS-based WiFi indoor positioning system can make up for the indoor positioning weaknesses of global navigation satellite systems. This effective and low-cost positioning technology can encourage many indoor location-based services.
Originality/value
A novel position estimation strategy is introduced to avoid the AP selection problem in RSS-based WiFi indoor positioning technology, and the domain clustering technology is proposed to obtain a better accuracy and reliability.
Weining Qi, Hongyi Yu, Jinya Yang and Xia Zhang
Abstract
The CEDAR protocol is a distributed routing protocol oriented toward Quality of Service (QoS) support in MANETs, with bandwidth as the QoS parameter of interest. However, without energy-efficiency considerations, overloaded nodes in CEDAR fail earlier, which in turn may lead to network partitioning and reduced network lifetime. The storage and processing overhead of CEDAR is fairly high because too many kinds of control packets are exchanged between nodes and too much state information must be maintained by core nodes. The routing algorithm depends fully on the link state information known by core nodes, but this information may be imprecise, resulting in route failures. In this paper, we present an improved energy-efficient CEDAR protocol and propose a new efficient method of bandwidth calculation. Simulation results show that the improved CEDAR is efficient in terms of packet delivery ratio, throughput and mean-square error of energy.
He Huang, Weining Wang and Yujie Yin
Abstract
Purpose
This study focuses on the clothing recycling supply chain and aims to provide optimal decisions and managerial insights into supply chain strategies, thereby facilitating the sustainable development of the clothing industry.
Design/methodology/approach
Based on previous single- and dual-channel studies, game theory was employed to analyze multiple recycling channels. Concurrently, clothing consumer types were integrated into the analytical models to observe their impact on supply chain strategies. Three market scenarios were modeled for comparative analysis, and numerical experiments were conducted.
Findings
The intervention of fashion retailers in the clothing recycling market has intensified competition across the entire market. The proportions of the various consumer types, their preferences for online platforms and their preference for the retailer's channel influence the optimal decisions and profits of supply chain members. The diversity of recycling channels may enhance the recycling volume of clothes; however, certain conditions must be met.
Originality/value
This study extends the existing theory from a channel dimension by exploring multiple channels. Furthermore, by investigating the classifications of clothing consumers and their influence on supply chain strategies, the theory is enhanced from the consumer perspective.
Sridevi P, Saikiran Niduthavolu and Lakshmi Narasimhan Vedanthachari
Abstract
Purpose
The purpose of this paper is to design organization message content strategies and analyse their information diffusion on the microblogging website, Twitter.
Design/methodology/approach
Using data from 29 brands and 9,392 tweets, message strategies on Twitter are classified into four types. Using content analysis, all tweets are classified into informational, transformational, interactional and promotional strategies. The information diffusion of the developed message strategies is then explored. Furthermore, the effects of message content features, such as text readability features, language features, Twitter-specific features and vividness features, on information diffusion are analysed across message strategies, and the interaction between message strategies and message features is examined.
Findings
The findings reveal that informational strategies were the dominant message strategy on Twitter. The text readability features, language features, Twitter-specific features and vividness features that influenced information diffusion varied across the four message strategies.
Originality/value
This study offers a completely novel way for effectively analysing information diffusion for branded tweets on Twitter and can show a path to both researchers and practitioners for the development of successful social media marketing strategies.
Abstract
Purpose
This paper aims to examine the time it would take to provide medical prophylaxis for a large urban population in the wake of an airborne anthrax attack and the effect that various parameters have on the total logistical time.
Design/methodology/approach
A mathematical model that evaluates key parameters and suggests alternatives for improvement is formulated. The objective of the model is to minimize the total logistical time required for prophylaxis by balancing three cycles as follows: the loading cycle, the shipping cycle and the service cycle.
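The abstract does not reproduce the model itself. As a toy illustration only, balancing the three cycles can be framed as a bottleneck calculation in which the completion time is governed by the slowest cycle; all parameter names below are hypothetical and not the paper's:

```python
def total_logistical_time(population, depots, load_rate,
                          trucks, trip_time, truck_capacity,
                          centers, servers_per_center, service_rate):
    """Toy bottleneck model (not the paper's formulation): the time to
    treat everyone is governed by the slowest of three parallel cycles.

    load_rate    : doses loaded per hour per depot
    trip_time    : hours per round trip per truck
    service_rate : people served per hour per server
    """
    loading_hours = population / (depots * load_rate)
    shipping_hours = (population / truck_capacity) * trip_time / trucks
    service_hours = population / (centers * servers_per_center
                                  * service_rate)
    # The overall completion time is dominated by the bottleneck cycle.
    return max(loading_hours, shipping_hours, service_hours)
```

Even in this toy framing, adding capacity to a cycle only helps while that cycle is the bottleneck, which mirrors the paper's finding that some parameters (distribution centers, servers) matter far more than others (central depots, local shipping method).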
Findings
Applying the model to two representative cases reveals the effect of various parameters on the process. For example, the number of distribution centers and the number of servers in each center are key parameters, whereas the number of central depots and the local shipping method are less important.
Research limitations/implications
Various psychological factors such as mass panic are not included in the model.
Originality/value
There are few papers analyzing the logistical response to an anthrax attack, and most focus mainly on the strategic level. The study deals with the tactical logistical level. The authors focus on the distribution process of prophylaxis and other medical supplies during the crisis, analyze it and identify the parameters that influence the time between the detection of the attack and the provision of effective medical treatment to the exposed population.
Baixi Chen, Weining Mao, Yangsheng Lin, Wenqian Ma and Nan Hu
Abstract
Purpose
Fused deposition modeling (FDM) is an extensively used additive manufacturing method with the capacity to build complex functional components. Due to machinery and environmental factors during manufacturing, FDM parts inevitably exhibit uncertainty in their properties and performance. This study aims to identify the stochastic constitutive behaviors of FDM-fabricated polylactic acid (PLA) tensile specimens induced by the manufacturing process.
Design/methodology/approach
By conducting the tensile test, the effects of the printing machine selection and three major manufacturing parameters (i.e., printing speed S, nozzle temperature T and layer thickness t) on the stochastic constitutive behaviors were investigated. The influence of the loading rate was also explained. In addition, the data-driven models were established to quantify and optimize the uncertain mechanical behaviors of FDM-based tensile specimens under various printing parameters.
Findings
As indicated by the results, the uncertain behaviors of the stiffness and strength of the PLA tensile specimens were dominated by the printing speed and nozzle temperature, respectively. The manufacturing-induced stochastic constitutive behaviors could be accurately captured by the developed data-driven model, with an R² over 0.98 on the testing dataset. The optimal parameters obtained from the data-driven framework were T = 231.3595 °C, S = 40.3179 mm/min and t = 0.2343 mm, in good agreement with the experiments.
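The paper's data-driven model is not specified in this abstract. As a hedged stand-in, a quadratic response surface fitted by least squares illustrates how a property such as stiffness could be mapped to the printing parameters (S, T, t) and then optimized:

```python
import numpy as np

def fit_quadratic_surrogate(X, y):
    """Least-squares quadratic response surface y ≈ f(S, T, t).

    X: (n, 3) printing parameters, y: (n,) measured property.
    Returns a predict(P) function for new parameter settings.
    This is an illustrative surrogate, not the paper's model.
    """
    def features(P):
        P = np.atleast_2d(P)
        cols = [np.ones(len(P))]
        cols += [P[:, i] for i in range(3)]          # linear terms
        cols += [P[:, i] * P[:, j]                   # quadratic terms
                 for i in range(3) for j in range(i, 3)]
        return np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)
    return lambda P: features(P) @ beta
```

Once fitted, the surrogate can be evaluated cheaply over a grid of (S, T, t) settings to locate an optimum, in the spirit of the framework's reported optimal parameters.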
Practical implications
The developed data-driven models can also be integrated into the design and characterization of parts fabricated by extrusion and other additive manufacturing technologies.
Originality/value
Stochastic behaviors of additively manufactured products were revealed by considering extensive manufacturing factors. The data-driven models were proposed to facilitate the description and optimization of the FDM products and control their quality.