Jianhua Ge, Xuemei Su and Yan Zhou
Abstract
Purpose
This paper aims to: provide theoretical analysis and an empirical study of the relationship between organizational socialization and organizational citizenship behavior (OCB); analyze the mediating role of organizational identification in that relationship; and, drawing on both, suggest practical implications for organizations aiming to socialize employees effectively, and for employees themselves.
Design/methodology/approach
First, the paper reviews the literature on organizational socialization, OCB and organizational identification. Second, it develops a theoretical model linking organizational socialization, organizational identification and OCB, and proposes a series of research hypotheses. Third, drawing on samples from seven high‐tech manufacturing enterprises in China, it tests the hypotheses through a series of measurement and statistical analyses.
Findings
Organizational history, language, values and goals socialization are positively related to OCB and organizational identification. Further, organizational identification fully mediates the relationship between language, values and goals socialization and OCB, and partially mediates the relationship between history socialization and OCB.
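The abstract does not specify which mediation procedure was used; the following is a minimal illustrative sketch in Python of the classic Baron and Kenny regression steps that distinguish full from partial mediation, with hypothetical column names (ocb, org_ident, history_soc) standing in for the survey measures.

```python
# Minimal Baron-and-Kenny-style mediation sketch (illustrative only; the paper
# does not state its exact procedure). Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical employee survey data

# Step 1: socialization (X) must predict OCB (Y).
m1 = smf.ols("ocb ~ history_soc", data=df).fit()

# Step 2: X must predict the mediator, organizational identification (M).
m2 = smf.ols("org_ident ~ history_soc", data=df).fit()

# Step 3: regress Y on both X and M. Full mediation: X becomes non-significant
# while M is significant; partial mediation: X shrinks but stays significant.
m3 = smf.ols("ocb ~ history_soc + org_ident", data=df).fit()

for m in (m1, m2, m3):
    print(m.params, m.pvalues, sep="\n")
```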
Research limitations/implications
The cross‐sectional design precludes causal inference. The data are employee self‐reports, raising concern about possible common‐source bias.
Originality/value
The paper explores the relationships between organizational socialization and OCB, and proposes and tests the mediating role of organizational identification.
Abstract
Purpose
This study develops a theoretical argument that social networks are embedded in the macro-level institutional environment. From the perspective of institutional embeddedness, I investigate the changing patterns and implications of social networks in job search and job earnings after China's overhaul of its employment system in the mid-1990s.
Methodology/approach
The empirical evidence is drawn from the 2003 Chinese General Social Survey. I conduct statistical analyses of the roles of networks in job search and earnings disparity by comparing two groups who obtained their jobs before and after the emergence of the labor market in urban China.
Findings
Social networks have become much more common in job search in the emerging labor market, and their use has become more differentiated across job positions and employment organizations. While the managerial status of the key helper and direct ties yield greater returns to earnings, strong indirect ties contribute less to job earnings in the emerging labor market than they did under the state-dominated employment system.
Research implications
The findings suggest that we should analyze the concrete institutional environment to appreciate the roles of social networks in job search and social inequality.
Originality/value
This study highlights that institutions are the key factor shaping the patterns and significance of social networks. As institutions evolve, network patterns and their significance change accordingly.
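As a rough illustration of the comparison described above, one could estimate an earnings equation with period interactions; the sketch below assumes hypothetical variable names for the CGSS 2003 items and is not the author's actual code.

```python
# Illustrative sketch: do returns to network characteristics differ between
# pre- and post-reform job cohorts? Estimated via interaction terms.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cgss2003.csv")  # hypothetical extract of the survey

# post_reform = 1 if the job was obtained after the mid-1990s overhaul.
model = smf.ols(
    "log_earnings ~ helper_managerial * post_reform"
    " + direct_tie * post_reform + strong_indirect_tie * post_reform"
    " + education + experience",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(model.summary())
```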
Abstract
Purpose
Although it has been shown at the macro level that institutional quality (IQ) significantly influences a country's economic growth, international trade, resource allocation, development strategy and more, its direct influence at the micro, or firm, level remains ambiguous. In this article, the authors focus on the influence of the IQ of a company's region of origin on its financial performance. The authors choose H-share companies as the sample and ask whether the region of origin matters as a company develops in an overseas stock market.
Design/methodology/approach
This article uses panel data on 120 H-share firms covering 2005 to 2009. First, the authors run cross-sectional analyses in SPSS 19.0 to test the correlations and primary relationships among the variables. They then estimate ordinary least squares (OLS) regressions on the cross-sectional data to obtain the preliminary results. Finally, they analyze the panel data in Stata 11.0 to establish the final results.
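The authors worked in SPSS and Stata; as a hedged Python equivalent of the two-stage design, one might proceed as below. File and variable names (hshare_panel.csv, roa, iq_index) are hypothetical stand-ins.

```python
# Minimal sketch of the two-stage analysis: pooled OLS, then a panel
# specification. Illustrative only; not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("hshare_panel.csv")  # hypothetical file: 120 firms, 2005-2009

# Stage 1: pooled cross-sectional OLS for the primary relationships.
ols = smf.ols("roa ~ iq_index + firm_size + leverage", data=panel).fit()

# Stage 2: panel estimation with year fixed effects; standard errors are
# clustered by firm because regional IQ varies little within a firm.
fe = smf.ols("roa ~ iq_index + firm_size + leverage + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["firm"]})
print(ols.params["iq_index"], fe.params["iq_index"])
```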
Findings
The authors conclude that private sector development and product market development have positive effects on corporate financial performance, while development of laws and regulations has a negative effect. The type of the first shareholder partially moderates the relationship between regional IQ and corporate financial performance: in the governance-CFP relationship, non-state shareholders perform better than state ones; in the product market-CFP relationship, state shareholders perform better than non-state ones.
Practical implications
From a practical perspective, the conclusions are also instructive. The study has implications for executives and should help them better manage their ownership structures; the results suggest that managers should choose the first shareholder with critical thinking. The study likewise has implications for government-company interactions: it suggests that governments should build high-quality institutions from which every company will benefit.
Originality/value
This article is the first study of the region-level relationship between IQ and corporate financial performance, consistent with the multi-level structure of the institution concept. By employing H-share companies as the sample, it also reveals more about the conflict between governance and market embedded in regional institutions.
Ge Zhu, Shan Ao and Jianhua Dai
Abstract
Purpose
Switching cost is an important concept in the study of consumer loyalty, with implications for business strategy and regulatory policy. Much research has examined the formation of switching costs and their influence on consumers' repeat-purchase intentions, but little has focused on quantitative measurement of the switching cost itself. This paper aims to address that gap.
Design/methodology/approach
Using game theory, a complete Nash‐Bertrand model is proposed to estimate consumer switching costs, accounting for price compensation and transport costs in a duopoly. The relationship between switching costs and market structure is then analyzed using the example of Hong Kong's wireless telecommunication market. From observed data on China's wireless telecommunication industry, the model calculates the annual switching costs of China Mobile and China Unicom users, respectively, as well as other variables.
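The full Nash-Bertrand model is not reproduced in the abstract; the toy sketch below only illustrates the general idea of backing out a switching cost from prices and retention in a Hotelling-style duopoly with transport costs. All numbers and the functional form are assumptions, not the authors' specification.

```python
# Toy illustration only: inferring a switching cost in a Hotelling-style duopoly
# from observed prices and the incumbent's retention share. NOT the paper's model.

def implied_switching_cost(p_a: float, p_b: float, t: float, retained: float) -> float:
    """Consumers are uniform on [0, 1] with firm A at 0 and firm B at 1.
    A subscriber of A at location x stays iff p_a + t*x <= p_b + t*(1 - x) + s,
    so the indifferent location is x* = (p_b - p_a + s + t) / (2t).
    Solving for s given the observed retained share x*:"""
    return 2 * t * retained - t - (p_b - p_a)

# Example: A charges 105, B charges 100, transport cost 20, A retains 80% of its base.
s = implied_switching_cost(p_a=105.0, p_b=100.0, t=20.0, retained=0.80)
print(f"implied switching cost: {s:.1f}")  # 2*20*0.8 - 20 - (-5) = 17.0
```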
Findings
The results demonstrate that reducing consumer switching costs will benefit small operators and increase competition in a winner‐take‐all market.
Originality/value
The model is valuable in calculating unseen switching costs and studying the impact of switching costs on market structure, especially for a duopoly in telecommunication.
Jianhua Zhang, Liangchen Li, Fredrick Ahenkora Boamah, Dandan Wen, Jiake Li and Dandan Guo
Abstract
Purpose
Traditional case-adaptation methods have poor accuracy, low efficiency and limited applicability, and so cannot meet the needs of knowledge users. To address these shortcomings in the existing research, this paper proposes a case-adaptation optimization algorithm to support the effective application of tacit knowledge resources.
Design/methodology/approach
An attribute-reduction algorithm based on a forward search strategy in the neighborhood decision information system reduces the case base vertically, and a fuzzy C-means (FCM) clustering algorithm based on a simulated annealing genetic algorithm (SAGA) compresses the case base horizontally across multiple decision classes. A subspace K-nearest neighbors (KNN) algorithm then induces decision rules for the set of adapted cases, completing the optimization of the adaptation model.
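To make the clustering step concrete, here is a minimal FCM implementation in NumPy. The paper wraps FCM in a SAGA outer loop to escape poor initializations; that outer loop is omitted here, so this is a sketch of the core algorithm only.

```python
# Minimal fuzzy C-means (FCM) sketch; the SAGA initialization search is omitted.
import numpy as np

def fcm(X, c, m=2.0, iters=100, tol=1e-5, rng=None):
    """X: (n, d) data; c: number of clusters; m: fuzzifier (> 1)."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                      # fuzzy memberships, columns sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U_new = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=0))  # standard FCM update
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

centers, U = fcm(np.random.rand(200, 5), c=3, rng=0)  # toy usage
```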
Findings
The findings suggest that the rapid growth of data, information and tacit knowledge in practice has led to inefficient dissemination and low utilization of knowledge, and that this algorithm can effectively alleviate users' "knowledge disorientation" in the era of the knowledge economy.
Practical implications
This study provides a model with case knowledge that meets users' needs, thereby effectively improving the application of tacit knowledge in the explicit case base and the problem-solving efficiency of knowledge users.
Social implications
The adaptation model can serve as a stable and efficient prediction model for the effects of logistics and e-commerce enterprises' plans.
Originality/value
This study designs a multi-decision-class case-adaptation optimization method based on a forward attribute selection strategy with neighborhood rough sets (FASS-NRS) and simulated annealing genetic algorithm-fuzzy C-means (SAGA-FCM) for exogenous cases carrying tacit knowledge. By effectively organizing and adjusting tacit knowledge resources, knowledge service organizations can maintain their competitive advantages. The algorithmic models established in this study open theoretical directions for multi-decision-class case-adaptation optimization of tacit knowledge.
Feng Cui, Dong Gao and Jianhua Zheng
Abstract
Purpose
The main reason for the low accuracy of magnetometer-based autonomous orbit determination is the coarse accuracy of the geomagnetic field model. Moreover, the geomagnetic field model error increases markedly during geomagnetic storms, which further reduces navigation accuracy. The purpose of this paper is to improve the accuracy of magnetometer-based autonomous orbit determination during geomagnetic storms.
Design/methodology/approach
In this paper, magnetometer-based autonomous orbit determination via a measurement differencing extended Kalman filter (MDEKF) is studied. The MDEKF algorithm can effectively remove the time-correlated portion of the measurement error and thus can evidently improve the accuracy of magnetometer-based autonomous orbit determination during geomagnetic storms. Real flight data from Swarm A are used to evaluate the performance of the MDEKF algorithm presented in this study. A performance comparison between the MDEKF algorithm and an extended Kalman filter (EKF) algorithm is investigated for different geomagnetic storms and sampling intervals.
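The core idea behind measurement differencing can be sketched briefly. If the measurement noise is first-order Gauss-Markov, differencing consecutive measurements yields a pseudo-measurement with white noise that a standard Kalman update can consume. The snippet below is an illustrative fragment of that construction, not the authors' MDEKF implementation; a full filter would also account for the process-noise correlation the differencing introduces.

```python
# Sketch of the measurement-differencing step (illustrative only).
# Noise model: v_{k+1} = phi * v_k + w_{k+1}, with w white.
# Then z* = z_{k+1} - phi * z_k = (H_{k+1} F_k - phi H_k) x_k + white-ish noise.
import numpy as np

def differenced_measurement(z_next, z_k, H_next, H_k, F_k, phi):
    """Pseudo-measurement and Jacobian for state x_k, given the linearized
    state transition F_k and measurement Jacobians H_k, H_{k+1}."""
    z_star = z_next - phi * z_k
    H_star = H_next @ F_k - phi * H_k
    return z_star, H_star
```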
Findings
The simulation results show that the MDEKF algorithm is superior to the EKF algorithm in estimation accuracy and stability with a short sampling interval during geomagnetic storms. In addition, as the intensity of the geomagnetic storm increases, the advantages of the MDEKF algorithm over the EKF algorithm become more pronounced.
Originality/value
The algorithm in this paper can improve the real-time accuracy of magnetometer-based autonomous orbit determination during geomagnetic storms with a low computational burden and is very suitable for low-orbit micro- and nano-satellites.
Jianhua Zhu, Luxin Wan, Huijuan Zhao, Longzhen Yu and Siyu Xiao
Abstract
Purpose
The purpose of this paper is to provide scientific guidance for the integration of industrialization and informatization (TIOII). In recent years, TIOII has promoted the development of intelligent manufacturing in China. However, many enterprises invest in TIOII blindly, which disrupts their normal production and operation.
Design/methodology/approach
This study establishes an efficiency evaluation model for TIOII. An entropy analytic hierarchy process (AHP) constraint cone and cross-efficiency are added to the traditional data envelopment analysis (DEA) model, yielding an entropy AHP-cross-efficiency DEA model. Statistical analysis is then carried out on the integration efficiency of enterprises in Guangzhou using cross-sectional data, and both the traditional DEA model and the entropy AHP-cross-efficiency DEA model are used to analyze integration efficiency.
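For concreteness, here is a minimal cross-efficiency DEA sketch built on the standard CCR multiplier model, solved with scipy. The entropy-AHP constraint cone the paper adds on the weights is omitted, so this is the plain cross-efficiency baseline, and the random data at the end are purely illustrative.

```python
# Minimal cross-efficiency DEA (CCR, input-oriented multiplier form).
# X: (n, p) inputs, Y: (n, q) outputs for n decision-making units (DMUs).
import numpy as np
from scipy.optimize import linprog

def cross_efficiency(X, Y):
    n, p = X.shape
    q = Y.shape[1]
    E = np.zeros((n, n))
    for k in range(n):
        # For DMU k: max u.y_k  s.t.  v.x_k = 1,  u.y_j - v.x_j <= 0 for all j,
        # u >= 0, v >= 0. Decision variables stacked as [u, v].
        c = np.concatenate([-Y[k], np.zeros(p)])          # linprog minimizes
        A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(q), X[k]])[None]  # v.x_k = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (q + p), method="highs")
        u, v = res.x[:q], res.x[q:]
        E[k] = (Y @ u) / (X @ v)   # efficiency of every DMU under k's weights
    return E.mean(axis=0)          # cross-efficiency score per DMU

scores = cross_efficiency(np.random.rand(8, 2) + 0.1, np.random.rand(8, 3) + 0.1)
```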
Findings
The data show that enterprise integration efficiency in Guangzhou is at a medium level. Integration efficiency has no significant relationship with enterprise size or production type, but is weakly negatively correlated with the enterprises' level of integration development. In addition, the improved DEA model better reflects the real integration efficiency of enterprises and yields complete ranking results.
Originality/value
By adding the entropy AHP constraint cone and cross-efficiency, the paper improves the traditional DEA model. The improved model better reflects the real efficiency of TIOII and yields complete ranking results.
Jianfei Li, Mengxia Sun, Li Ren and Bei Li
Abstract
Purpose
The advent of the new retail era has seen consumer demand shift from traditional product quality to full supply-chain service quality, and product service and service manufacturing are gradually taking shape. The purpose of this paper is to ask whether a "quality bridge" exists in the dynamic evolution of the retail service supply chain (RSSC) and to discuss the system role, steady-state characteristics and dynamic evolution mechanism of service quality in that process.
Design/methodology/approach
This paper proposes the dissipative system structure of the RSSC under a steady-state quality constraint, constructs a Markov chain model (MCM) of the evolution of RSSC service quality, and tests the objective existence of a steady-state distribution of service quality using Chinese HJ retail enterprises as the sample.
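The steady-state claim rests on the standard stationary distribution of a Markov chain, pi = pi P. A minimal sketch follows; the three-state transition matrix is made up for illustration, whereas the paper estimates its chain from HJ enterprise data.

```python
# Stationary distribution of a quality-state Markov chain (illustrative matrix).
import numpy as np

P = np.array([[0.6, 0.3, 0.1],    # rows: current quality state (low/mid/high)
              [0.2, 0.5, 0.3],    # columns: next state; each row sums to 1
              [0.1, 0.2, 0.7]])

# pi P = pi, so pi is the left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print(pi)  # steady-state distribution of quality states
```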
Findings
The study finds that the evolution of RSSC service quality is a dynamic, non-linear growth process with marked complex adaptability and steady-state convergence. The quality evolution process converges to a steady state, and a steady-state quality distribution exists in the chain's co-evolution, in which different process input levels have a significant positive impact on the stable quality level. Finally, the quality steady state plays a crucial role in the collaborative evolution of the RSSC: once service quality reaches a certain steady-state distribution, the operating efficiency and profit level of the whole chain show an "explosive" growth trend.
Originality/value
"Quality bridge", an original concept in this paper, denotes the role of the quality steady state in the operation of the RSSC. Based on a Markov chain and system simulation tools, the paper verifies the existence of steady-state service quality and its positive effect on the co-evolution and sustainable development of the RSSC. When service quality reaches a certain steady-state distribution, the operating efficiency and income level of the whole chain show a trend of explosive growth.
Abstract
Purpose
This study aims to understand the epistemic foundation of the classification applied in the first Chinese library catalogue, the Seven Epitomes (Qilue).
Design/methodology/approach
Originating from a theoretical stance that situates knowledge organization in its social context, the study applies a multifaceted framework pertaining to five categories of textual data: the Seven Epitomes; biographical information about the classificationist Liu Xin; and the relevant intellectual, political, and technological history.
Findings
The study discovers seven principles contributing to the epistemic foundation of the catalogue's classification: the Han imperial library collection imposed as the literary warrant; government functions considered for structuring texts; classicist morality determining the main classificatory structure; knowledge perceived and organized as a unity; objects, rather than subjects, of concern affecting categories at the main class level; correlative thinking connecting all text categories to a supreme knowledge embodied by the Six Classics; and classicist moral values resulting in both vertical and horizontal hierarchies among categories as well as texts.
Research limitations/implications
A major limitation of the study is its focus on the main classes, with limited attention to subclasses. Future research can extend the analysis to examine subclasses of the same scheme. Findings from these studies may lead to a comparison between the epistemic approach in the target classification and the analytic one common in today's bibliographic classification.
Originality/value
The study is the first to examine in depth the epistemic foundation of traditional Chinese bibliographic classification, anchoring the classification in its appropriate social and historical context.
Liqiong Chen, Lei Yunjie and Sun Huaiying
Abstract
Purpose
This study aims to solve several problems in current visual software defect prediction: large training sample sizes, low data sample quality, the inefficiency of the classical models in use, and the high computational complexity and graphics processing unit (GPU) occupancy of existing attention mechanisms. It proposes a method termed recurrent criss-cross attention for weighted activation functions of recurrent SE-ResNet (RCCA-WRSR). First, following code visualization, the activation functions of the SE-ResNet model are replaced with a weighted combination of ReLU and ELU to enhance model convergence, and an SE module is added before them to filter feature information, eliminating low-weight features and yielding an improved residual network model, WRSR. To focus more on contextual information and establish connections between a pixel and pixels outside its cross-path, the visualized red-green-blue integer images are fed into a model incorporating a fused RCCA module for defect prediction.
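Two of the named ingredients are easy to sketch in PyTorch: the weighted ReLU/ELU activation and a squeeze-and-excitation (SE) block that suppresses low-weight channels. The mixing-weight parameterization and reduction ratio below are assumptions, as the abstract does not publish these details.

```python
# Illustrative sketches of a weighted ReLU/ELU activation and an SE block
# (assumed hyperparameters; not the authors' released code).
import torch
import torch.nn as nn

class WeightedReluElu(nn.Module):
    """act(x) = a * relu(x) + (1 - a) * elu(x), with a learnable mixing weight."""
    def __init__(self, alpha: float = 0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha))

    def forward(self, x):
        a = torch.sigmoid(self.alpha)  # keep the mixing weight in (0, 1)
        return a * torch.relu(x) + (1 - a) * nn.functional.elu(x)

class SEBlock(nn.Module):
    """Channel attention: global pooling, bottleneck MLP, channel rescaling."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                 # squeeze to (N, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                           # low-weight channels are damped
```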
Design/methodology/approach
Software defect prediction based on code visualization is a new technique that visualizes code as images and then applies attention mechanisms to extract image features. However, current visualization-based defect prediction faces several challenges: large training sample sizes and low sample quality, inefficient classical models, and existing attention mechanisms with high computational complexity and high GPU occupancy.
Findings
Experimental evaluation on ten open-source Java data sets from PROMISE, against five existing methods, demonstrates that the proposed approach achieves an F-measure of 0.637 in predicting 16 cross-version projects, a 6.1% improvement.
Originality/value
RCCA-WRSR is a new visual software defect prediction method based on recurrent criss-cross attention and an improved residual network. It effectively enhances the performance of software defect prediction.