Search results

1–50 of over 10,000 results
Book part
Publication date: 14 July 2006

Duangkamon Chotikapanich and William E. Griffiths

Abstract

Hypothesis tests for dominance in income distributions have received considerable attention in recent literature. See, for example, Barrett and Donald (2003a, b), Davidson and Duclos (2000) and references therein. Such tests are useful for assessing progress towards eliminating poverty and for evaluating the effectiveness of various policy initiatives directed towards welfare improvement. To date, the focus in the literature has been on sampling theory tests. Such tests can be set up in various ways, with dominance as the null or alternative hypothesis, and with dominance in either direction (X dominates Y or Y dominates X). The result of a test is expressed as rejection of, or failure to reject, a null hypothesis. In this paper, we develop and apply Bayesian methods of inference to problems of Lorenz and stochastic dominance. The result from a comparison of two income distributions is reported in terms of the posterior probabilities for each of the three possible outcomes: (a) X dominates Y, (b) Y dominates X, and (c) neither X nor Y is dominant. Reporting results about uncertain outcomes in terms of probabilities has the advantage of being more informative than a simple reject/do-not-reject outcome. Whether a probability is sufficiently high or low for a policy maker to take a particular action is then a decision for that policy maker.

The methodology is applied to data for Canada from the Family Expenditure Survey for the years 1978 and 1986. We assess the likelihood of dominance from one time period to the next. Two alternative assumptions are made about the income distributions – Dagum and Singh-Maddala – and in each case the posterior probability of dominance is given by the proportion of times a relevant parameter inequality is satisfied by the posterior observations generated by Markov chain Monte Carlo.
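
As a rough illustration of how such posterior dominance probabilities can be read off MCMC output, the sketch below checks Lorenz dominance on a grid of population shares and reports the proportion of draws falling into each of the three outcomes. It uses lognormal income distributions (whose Lorenz curve has a closed form) purely for illustration; the paper works with Dagum and Singh-Maddala forms, and the "posterior draws" here are simulated stand-ins rather than output of a real sampler.

```python
import numpy as np
from scipy.stats import norm

def lorenz_lognormal(p, sigma):
    # Lorenz curve of a lognormal distribution with log-scale sigma:
    # L(p) = Phi(Phi^{-1}(p) - sigma)
    return norm.cdf(norm.ppf(p) - sigma)

def posterior_dominance(sigma_x_draws, sigma_y_draws, grid=None):
    """Proportion of posterior draws in which X Lorenz-dominates Y,
    Y dominates X, or neither (checked on a grid of population shares)."""
    if grid is None:
        grid = np.linspace(0.01, 0.99, 99)
    counts = {"X dominates Y": 0, "Y dominates X": 0, "neither": 0}
    for sx, sy in zip(sigma_x_draws, sigma_y_draws):
        lx, ly = lorenz_lognormal(grid, sx), lorenz_lognormal(grid, sy)
        if np.all(lx >= ly):
            counts["X dominates Y"] += 1
        elif np.all(ly >= lx):
            counts["Y dominates X"] += 1
        else:
            counts["neither"] += 1
    n = len(sigma_x_draws)
    return {k: v / n for k, v in counts.items()}

# Illustrative "posterior draws" (in practice these come from an MCMC sampler)
rng = np.random.default_rng(0)
sigma_1978 = rng.normal(0.60, 0.02, size=5000)
sigma_1986 = rng.normal(0.55, 0.02, size=5000)
print(posterior_dominance(sigma_1986, sigma_1978))
```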

Details

Dynamics of Inequality and Poverty
Type: Book
ISBN: 978-0-76231-350-1

Article
Publication date: 23 August 2022

Kamlesh Kumar Pandey and Diwakar Shukla

Abstract

Purpose

The K-means (KM) clustering algorithm is highly sensitive to the selection of initial centroids, since the initial centroids determine computational effectiveness, efficiency and local optima issues. Numerous initialization strategies have been proposed to overcome these problems through random or deterministic selection of initial centroids. The random initialization strategy suffers from local optima and the worst clustering performance, while the deterministic initialization strategy incurs a high computational cost. Big data clustering aims to reduce computation costs and improve cluster efficiency. The objective of this study is to obtain better initial centroids for big data clustering on business management data, without using random or deterministic initialization, in a way that avoids local optima and improves clustering efficiency and effectiveness in terms of cluster quality, computation cost, data comparisons and iterations on a single machine.

Design/methodology/approach

This study presents the Normal Distribution Probability Density (NDPD) K-means (NDPDKM) algorithm for big data clustering on a single machine to solve business management-related clustering issues. The NDPDKM algorithm addresses the KM initialization problem through the probability density of each data point. It first identifies the most probable data points by using the mean and standard deviation of the dataset under a normal probability density, and thereafter determines the K initial centroids by using sorting and linear systematic sampling heuristics.
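
A minimal sketch of this style of initialization, assuming only the general recipe described above (density scores from a normal model of the data, ranking, then linear systematic sampling); it is not the authors' exact NDPDKM procedure, and the helper name ndpd_init is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def ndpd_init(X, k):
    """Sketch: choose k initial centroids by ranking points on a
    normal-density score and then systematic sampling over the ranking."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    # density score of each point under a per-feature normal model
    dens = norm.pdf(X, loc=mu, scale=sigma).prod(axis=1)
    order = np.argsort(dens)[::-1]          # most probable points first
    step = len(X) // k                      # linear systematic sampling
    return X[order[::step][:k]]

X = np.random.default_rng(1).normal(size=(1000, 2))
centroids = ndpd_init(X, k=5)   # e.g. feed to KMeans(init=centroids, n_init=1)
```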

Findings

The performance of the proposed algorithm is compared with the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms through the Davies-Bouldin score, Silhouette coefficient, SD validity, S_Dbw validity, number of iterations and CPU time validation indices on eight real business datasets. The experimental evaluation demonstrates that the NDPDKM algorithm reduces iterations, local optima and computing costs, and improves cluster effectiveness and efficiency with stable convergence compared with the other algorithms. The NDPDKM algorithm reduces the average computing time by up to 34.83%, 90.28%, 71.83%, 92.67%, 69.53% and 76.03%, and the average number of iterations by up to 40.32%, 44.06%, 32.02%, 62.78%, 19.07% and 36.74%, relative to the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms, respectively.

Originality/value

The KM algorithm is the most widely used partitional clustering approach in data mining techniques that extract hidden knowledge, patterns and trends for decision-making strategies in business data. Business analytics is one of the applications of big data clustering where KM clustering is useful for the various subcategories of business analytics such as customer segmentation analysis, employee salary and performance analysis, document searching, delivery optimization, discount and offer analysis, chaplain management, manufacturing analysis, productivity analysis, specialized employee and investor searching and other decision-making strategies in business.

Article
Publication date: 23 January 2019

Rakesh Ranjan, Subrata Kumar Ghosh and Manoj Kumar

Abstract

Purpose

The probability distributions of the major length and aspect ratio (major length/minor length) of wear debris collected from gear oil used in a planetary gear drive were analysed and modelled. The paper aims to find an appropriate probability distribution model to forecast the kind of wear particles present at different running hours of the machine.

Design/methodology/approach

Used gear oil from the planetary gear box of a slab caster was drained out and replaced with fresh oil of grade EP-460. Six chronological oil samples were collected at different time intervals between 480 and 1,992 h of machine running. The oil samples were filtered to separate wear particles, and a microscopic study of the wear debris was carried out at 100X magnification. Statistical modelling of the wear debris distribution was done using Weibull and exponential probability distribution models. A comparison was made among the actual, Weibull and exponential probability distributions of the major length and aspect ratio of the wear particles.
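
A hedged sketch of this kind of model comparison with SciPy: fit both candidate distributions to the measured major lengths and compare Kolmogorov-Smirnov statistics. The data here are synthetic stand-ins, not the paper's debris measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
major_length = rng.exponential(scale=25.0, size=300)   # stand-in for measured debris lengths (µm)

# Fit both candidate models (location fixed at 0) and compare goodness of fit
weib = stats.weibull_min.fit(major_length, floc=0)
expo = stats.expon.fit(major_length, floc=0)

ks_weib = stats.kstest(major_length, "weibull_min", args=weib)
ks_expo = stats.kstest(major_length, "expon", args=expo)
print(f"Weibull     k={weib[0]:.2f}, scale={weib[2]:.1f}, KS={ks_weib.statistic:.3f}")
print(f"Exponential scale={expo[1]:.1f}, KS={ks_expo.statistic:.3f}")
```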

Findings

The distribution of the major length of wear particles was found to be closer to the exponential probability density function, whereas the Weibull probability density function fitted the distribution of the aspect ratio of wear particles better.

Originality/value

The developed model can be used to analyse the distribution of the major length and aspect ratio of wear debris present in the planetary gear box of a slab caster machine.

Details

Industrial Lubrication and Tribology, vol. 71 no. 2
Type: Research Article
ISSN: 0036-8792


Article
Publication date: 5 February 2018

Damaris Serigatto Vicentin, Brena Bezerra Silva, Isabela Piccirillo, Fernanda Campos Bueno and Pedro Carlos Oprime

Abstract

Purpose

The purpose of this paper is to develop a control chart for monitoring multiple-stream processes with a finite mixture of probability distributions in the manufacturing industry.

Design/methodology/approach

Data were collected during production of a wheat-based dough in the food industry and the control charts were developed with these steps: collect the master sample from different production batches; verify, by graphical methods, the number and characterization of the mixed probability distributions in the production batch; fit the theoretical probability distribution of each subpopulation in the production batch; build a statistical model based on the mixture of probability distributions, assuming that the statistical parameters are unknown; determine control limits; and compare the mixture chart with the traditional control chart.
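
One way to realize the "fit a mixture, then set limits" steps is sketched below with a two-component Gaussian mixture; the quantile-based limits and the simulated master sample are illustrative assumptions, not the authors' exact chart construction.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# stand-in for a master sample pooled over several production streams/batches
x = np.concatenate([rng.normal(10.0, 0.3, 400), rng.normal(11.2, 0.4, 200)])

gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))

# control limits as the 0.135% / 99.865% quantiles of the fitted mixture,
# found numerically from the mixture CDF evaluated on a fine grid
grid = np.linspace(x.min() - 2, x.max() + 2, 20000)
pdf = np.exp(gm.score_samples(grid.reshape(-1, 1)))
cdf = np.cumsum(pdf); cdf /= cdf[-1]
lcl = grid[np.searchsorted(cdf, 0.00135)]
ucl = grid[np.searchsorted(cdf, 0.99865)]
print(f"LCL={lcl:.2f}, UCL={ucl:.2f}")
```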

Findings

A chart was developed for monitoring a multiple-stream process; the parameters considered in its calculation give it efficiency similar to that of the traditional control chart.

Originality/value

The control chart can be an efficient tool for customers that receive product batches continuously from a supplier and need to monitor statistically the critical quality parameters.

Details

International Journal of Quality & Reliability Management, vol. 35 no. 2
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 1 August 2005

Degan Zhang, Guanping Zeng, Enyi Chen and Baopeng Zhang

Abstract

Active service is one of the key problems of the ubiquitous computing paradigm, and context‐aware computing is helpful for carrying out this service. Because the context changes with the movement of the user, it is often uncertain. Context‐aware computing with uncertainty includes obtaining context information, forming a model, fusing aware context and managing context information. In this paper, we focus on modeling and computing aware context information with uncertainty for making dynamic decisions during seamless mobility. Our insight is to combine dynamic context‐aware computing with improved Random Set Theory (RST) and extended D‐S Evidence Theory (EDS). We re‐examine the formalism of random sets, argue the limitations of direct numerical approaches, give a new RST-based modeling mode for aware context and propose our computing approach for the modeled aware context. In addition, we extend classic D‐S Evidence Theory after considering context reliability, time‐efficiency and relativity, and compare the relevant computing methods. After enumerating experimental examples from our active space, we provide an evaluation. By these comparisons, the validity of the new context‐aware computing approach based on RST or EDS for ubiquitous active service with uncertain information has been successfully tested.
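
For reference, classic (unextended) D-S combination of two pieces of context evidence works as in the sketch below; the paper's EDS additionally weights evidence by reliability, time-efficiency and relativity, which is not shown here. The frame of discernment and mass values are made up for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic mass assignments (dicts of frozenset -> mass)
    with Dempster's rule: intersect focal elements, renormalise by 1 - K."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset({"in_office"}), frozenset({"in_meeting"})
theta = A | B
m_location = {A: 0.6, theta: 0.4}        # evidence from a location sensor
m_calendar = {B: 0.5, theta: 0.5}        # evidence from the user's calendar
print(dempster_combine(m_location, m_calendar))
```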

Details

International Journal of Pervasive Computing and Communications, vol. 1 no. 3
Type: Research Article
ISSN: 1742-7371


Book part
Publication date: 24 January 2022

Eleonora Pantano and Kim Willems

Abstract

Determining the right number of customers inside a store (i.e. human or customer density) plays a crucial role in retail management strategies. On the one hand, retailers want to maximize the number of visitors they attract in order to optimize returns and profits. On the other hand, ensuring a pleasurable, efficient and COVID-19-proof shopping experience would go against an excessive concentration of shoppers. Fulfilling both retailer and consumer perspectives requires a delicate balance to be struck. This chapter aims at supporting retailers in making informed decisions, by clarifying the extent to which store layouts influence (perceived) consumer density. Specifically, the chapter illustrates how new technologies and methodologies (i.e. agent-based simulation) can help in predicting a store layout's ability to reduce consumers' perceived in-store spatial density and related perceptions of human crowding, while also ensuring a certain level of retailer profitability.

Article
Publication date: 2 March 2012

G. Mora and J.C. Navarro

Abstract

Purpose

In this article, the aim is to propose a new way to densify parallelepipeds of R^N by sequences of α‐dense curves with accumulated densities.

Design/methodology/approach

This will be done by using a basic α‐densification technique and adding the new concept of sequence of α‐dense curves with accumulated density to improve the resolution of some global optimization problems.

Findings

It is found that the new technique based on sequences of α‐dense curves with accumulated densities considerably simplifies the exploration of the set of optimizer points of an objective function whose feasible set is a parallelepiped K of R^N. Indeed, since the sequence of images of the curves in a sequence of α‐dense curves with accumulated density is expansive, at each new step of the algorithm it is only necessary to explore a residual subset. On the other hand, since the sequence of their densities is decreasing and tends to zero, the convergence of the algorithm is assured.

Practical implications

This new densification technique by sequences of α‐dense curves with accumulated densities is applied to densify the feasible set of an objective function which minimizes the quadratic error produced by fitting a model based on a beta probability density function, which is widely used in studies on the transition‐time of forest vegetation.

Originality/value

A sequence of α‐dense curves with accumulated density is an original concept that adds to the set of techniques for optimizing a multivariable function by reduction to a single variable, and it constitutes a new application of α‐dense curve theory to global optimization.

Article
Publication date: 18 July 2019

Zahid Hussain Hulio and Wei Jiang

Abstract

Purpose

The purpose of this paper is to investigate the wind power potential of a site using wind speed, wind direction and other meteorological data, including temperature and air density, collected over a period of one year.

Design/methodology/approach

The site-specific air density, wind shear, wind power density, annual energy yield and capacity factors have been calculated at 30 and 10 m above the ground level (AGL). The Weibull parameters have been calculated using empirical, maximum likelihood, modified maximum likelihood, energy pattern and graphical methods to determine the other dependent parameters. The accuracies of these methods are determined using correlation coefficient (R²) and root mean square error (RMSE) values.
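
As a concrete illustration of one of these estimators, the sketch below applies the empirical (Justus) method, k = (σ/v̄)^(-1.086) and c = v̄/Γ(1 + 1/k), and then the Weibull-based mean wind power density ½ρc³Γ(1 + 3/k). The wind speed sample and air density value are illustrative stand-ins, not the site data analysed in the paper.

```python
import numpy as np
from scipy.special import gamma

def weibull_empirical_justus(v):
    """Empirical (Justus) estimators of the Weibull shape k and scale c
    from a sample of wind speeds v."""
    mean, std = v.mean(), v.std(ddof=1)
    k = (std / mean) ** -1.086
    c = mean / gamma(1.0 + 1.0 / k)
    return k, c

rng = np.random.default_rng(4)
v = rng.weibull(2.0, size=8760) * 6.0      # stand-in for one year of hourly wind speeds (m/s)
k, c = weibull_empirical_justus(v)

# mean wind power density implied by the fitted Weibull, with air density rho
rho = 1.225
wpd = 0.5 * rho * c**3 * gamma(1.0 + 3.0 / k)
print(f"k={k:.2f}, c={c:.2f} m/s, wind power density={wpd:.1f} W/m^2")
```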

Findings

The site-specific wind shear coefficient was found to be 0.18. The annual mean wind speeds were found to be 5.174 and 4.670 m/s at 30 and 10 m heights, respectively, with corresponding standard deviations of 2.085 and 2.059. The mean wind power densities were found to be 59.50 and 46.75 W/m² at 30 and 10 m heights, respectively. According to the economic assessment, wind turbine A is capable of producing wind energy at the lowest cost of US$0.034/kWh.

Practical implications

This assessment points to a sustainable energy solution which reduces dependence on the continuous supply of oil and gas needed to run conventional power plants, a major cause of increasing load shedding in Pakistan's major industrial and densely populated city. It will also reduce disputes between local power producers and oil and gas suppliers during the peak season.

Social implications

This wind resource assessment has some important social implications, including reducing environmental problems, improving the continuity of electricity supply and lowering the cost of energy per kWh for the people of Karachi.

Originality/value

The results show that the location can be used for installing a wind energy power plant at a lower cost per kWh compared with other energy sources. Wind energy is thus a sustainable solution at the lowest cost.

Details

International Journal of Energy Sector Management, vol. 14 no. 1
Type: Research Article
ISSN: 1750-6220


Article
Publication date: 1 April 1993

Guy Jumarie

Abstract

The complexity of a general system is identified with its temperature and, analogously with Boltzmann's probability density in thermodynamics, this temperature is related to the informational entropy of the system. The concept of informational entropy of deterministic functions provides a straightforward modelling of Brillouin's negentropy (negative entropy), therefore a system can be characterized by its complexity and its dual complexity. States composition laws for complexities expressed in terms of Shannonian entropy with or without probability, and then the approach is extended to quantum entropy of non‐probabilistic data. Outlines some suggestions for future investigation.

Details

Kybernetes, vol. 22 no. 4
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 17 October 2023

Zhixun Wen, Fei Li and Ming Li

Abstract

Purpose

The purpose of this paper is to apply the concept of equivalent initial flaw size (EIFS) to an anisotropic nickel-based single crystal (SX) material, and to predict fatigue life on this basis. The crack propagation law of the SX material at different temperatures is also investigated, and the weak dependence of EIFS values on loading conditions is verified.

Design/methodology/approach

A three-parameter time to crack initiation (TTCI) method with multiple reference crack lengths under different loading conditions is established, which includes the TTCI backstepping method and the EIFS fitting method. Subsequently, the optimized EIFS distribution is obtained based on the random crack propagation rate and maximum likelihood estimation of the median fatigue life. Then, an effective driving force based on the anisotropic and mixed crack propagation mode is proposed to describe the crack propagation rate in the small crack stage. Finally, the fatigue life of ESE(T) standard specimens at three different temperatures is predicted based on the EIFS values under different survival rates.

Findings

The optimized EIFS distribution based on the EIFS fitting and maximum likelihood estimation (MLE) method has the highest accuracy in predicting the total fatigue life, with the EIFS values ranging over about [0.0028, 0.0875] mm and a mean EIFS of 0.0506 mm. The error between the fatigue life predicted from the crack propagation rate and EIFS distribution, for survival rates from 5% to 95%, and the experimental life lies within a factor-of-two dispersion band.

Originality/value

This paper systematically proposes a new anisotropic material EIFS prediction method, establishing a framework for predicting the fatigue life of SX material at different temperatures using fracture mechanics to avoid inaccurate anisotropic constitutive models and fatigue damage accumulation theory.

Details

Multidiscipline Modeling in Materials and Structures, vol. 19 no. 6
Type: Research Article
ISSN: 1573-6105


Article
Publication date: 7 September 2015

M. Navabi and R. Hamrah

Abstract

Purpose

The purpose of this paper is to perform a comparative study of two propagation models and a prediction of proximity distances among space objects based on two-line element set (TLE) data, which identifies potentially risky approaches and is used to compute the probability of collision among spacecraft.

Design/methodology/approach

At first, the proximities are estimated for the mentioned satellites using a precise propagation model and based on a one-month simulation. Then, a study is performed to determine the probability of collision between two satellites using a formulation which takes into account the object sizes, covariance data and the relative distance at the point of closest approach. Simplifying assumptions such as a linear relative motion and normally distributed position uncertainties at the predicted closest approach time are applied in estimation.

Findings

For the case of the Iridium-Cosmos collision and the prediction of a closest approach using available TLE orbital data and a propagation model which takes into account the effects of the earth's zonal harmonics and atmospheric drag, a maximum probability of about 2 × 10⁻⁶ was obtained, which can indicate the necessity of enacting avoidance maneuvers given the probability threshold defined by the satellite's owner.
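
A generic way to sanity-check such a collision probability is a Monte Carlo integration in the encounter plane, as sketched below: sample the relative position at closest approach from a bivariate normal and count the fraction falling inside the combined hard-body radius. The miss distance, covariance and radius used here are made-up illustrative values, not the Iridium-Cosmos data or the paper's covariance inputs.

```python
import numpy as np

def collision_probability_mc(miss_vec, cov, hard_body_radius, n=1_000_000, seed=0):
    """Monte Carlo estimate of collision probability in the 2-D encounter plane:
    relative position ~ N(miss_vec, cov); count samples inside the combined radius."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(miss_vec, cov, size=n)
    inside = np.linalg.norm(samples, axis=1) < hard_body_radius
    return inside.mean()

# illustrative numbers only (metres): 500 m miss distance, anisotropic uncertainty
p = collision_probability_mc(miss_vec=[500.0, 120.0],
                             cov=[[250.0**2, 0.0], [0.0, 80.0**2]],
                             hard_body_radius=15.0)
print(f"estimated collision probability ~ {p:.2e}")
```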

Originality/value

The contribution of this paper is to analyze and simulate the prominent 2009 collision between the Cosmos 2251 and Iridium 33 satellites by modeling their orbit propagation, predicting their closest approaches and, finally, assessing the risk of the possible collision. Moreover, enhanced orbit determination can be effective in achieving an accurate assessment of the ongoing collision threat to active spacecraft from orbital debris and in preventing, if necessary, the hazards thereof.

Details

Aircraft Engineering and Aerospace Technology: An International Journal, vol. 87 no. 5
Type: Research Article
ISSN: 0002-2667


Article
Publication date: 12 September 2008

N.N. Puscas

Abstract

Purpose

The purpose of this paper is to model the noise of an improved method for the measurement of small displacements and vibrations. It is based on a novel method for overcoming DC drift in an RF subcarrier phase detection scheme.

Design/methodology/approach

The method works in open loop and is characterized by low distortions, good signal‐to‐noise ratio and rather low cost.

Findings

Considering a stationary Gaussian stochastic process, the paper evaluated and modelled the power spectral density and the probability density against the phase error and the phase noise parameter.

Practical implications

This offers an improvement of vibration, displacement and seismic sensors.

Originality/value

Based on a novel method for overcoming DC drift in an RF sub‐carrier phase detection scheme for fibre optic sensors, an improved method for displacement and vibration measurement is proposed. The presented method is characterized by rather low distortion in the modulation process, measurement of small distances and low‐cost electronic systems.

Details

Sensor Review, vol. 28 no. 4
Type: Research Article
ISSN: 0260-2288


Article
Publication date: 8 July 2021

Zahid Hussain Hulio, Gm Yousufzai and Wei Jiang

Abstract

Purpose

Pakistan is an energy-starved country that needs a continuous supply of energy to sustain its economic growth. The aim of this paper is to assess the wind resource and energy potential of the Quaidabad site for minimizing the dependence on fuels and improving the environment.

Design/methodology/approach

The Quaidabad site wind shear coefficient and turbulence intensity factor are investigated. The two-parameter (k and c) Weibull distribution function is used to analyze the wind speed of the site. The standard deviation of the site is also assessed for a period of a year, as are the wind power density and energy density. The economic assessment of energy cost per kWh is carried out for the selection of an appropriate wind turbine.

Findings

The mean wind shear coefficient was observed to be 0.2719, 0.2191 and 0.1698 at 20, 40 and 60 m, respectively, over a period of a year. The mean wind speed was found to be 2.961, 3.563, 3.907 and 4.099 m/s at 20, 40, 60 and 80 m, respectively. The mean values of the k parameter were 1.563, 2.092, 2.434 and 2.576 at 20, 40, 60 and 80 m, respectively, and the mean values of the c parameter were 3.341, 4.020, 4.408 and 4.625 m/s at the same heights. Most of the standard deviation values were between 0.1 and 2.00 at 20, 40, 60 and 80 m. The total wind power density values were 351, 597, 792 and 923 W/m² at 20, 40, 60 and 80 m, respectively, and the mean coefficient of variation was 0.161, 0.130, 0.115 and 0.105 at the same heights. The total energy density was 1,157, 2,156, 2,970 and 3,778 kWh/m² at 20, 40, 60 and 80 m, respectively. The economic assessment shows that wind turbine E has the minimum cost of US$0.049/kWh.

Originality/value

The Quaidabad site is suitable for installing utility wind turbines for energy generation at the lowest cost.

Article
Publication date: 26 August 2014

Bruce J. Sherrick, Christopher A. Lanoue, Joshua Woodard, Gary D. Schnitkey and Nicholas D. Paulson

Abstract

Purpose

The purpose of this paper is to contribute to the empirical evidence about crop yield distributions that are often used in practical models evaluating crop yield risk and insurance. Additionally, a simulation approach is used to compare the performance of alternative specifications when the underlying form is not known, to identify implications for the choice of parameterization of yield distributions in modeling contexts.

Design/methodology/approach

Using a unique high-quality farm-level corn yield data set, commonly used parametric, semi-parametric, and non-parametric distributions are examined against widely used in-sample goodness-of-fit (GOF) measures. Then, a simulation framework is used to assess the out-of-sample characteristics by using known distributions to generate samples that are assessed in an insurance valuation context under alternative specifications of the yield distribution.

Findings

Bias and efficiency trade-offs are identified for both in- and out-of-sample contexts, including a simple insurance rating application. Use of GOF measures in small samples can lead to inappropriate selection of candidate distributions that perform poorly in straightforward economic applications. The β distribution consistently overstates rates even when fitted to data generated from a β distribution, while the Weibull consistently understates rates; though small sample features slightly favor Weibull. The TCMN and kernel density estimators are least biased in-sample, but can perform very badly out-of-sample due to overfitting issues. The TCMN performs reasonably well across sample sizes and initial conditions.

Practical implications

Economic applications should consider the consequence of bias vs efficiency in the selection of characterizations of yield risk. Parsimonious specifications often outperform more complex characterizations of yield distributions in small sample settings, and in cases where more demanding uses of extreme-event probabilities are required.

Originality/value

The study helps provide guidance on the selection of distributions used to characterize yield risk and provides an extensive empirical demonstration of yield risk measures across a high-quality set of actual farm experiences. The out-of-sample examination provides evidence of the impact of sample size, underlying variability, and region of the probability measure used on the performance of candidate distributions.

Details

Agricultural Finance Review, vol. 74 no. 3
Type: Research Article
ISSN: 0002-1466


Book part
Publication date: 15 January 2010

Isobel Claire Gormley and Thomas Brendan Murphy

Abstract

Ranked preference data arise when a set of judges rank, in order of their preference, a set of objects. Such data arise in preferential voting systems and market research surveys. Covariate data associated with the judges are also often recorded. Such covariate data should be used in conjunction with preference data when drawing inferences about judges.

To cluster a population of judges, the population is modeled as a collection of homogeneous groups. The Plackett-Luce model for ranked data is employed to model a judge's ranked preferences within a group. A mixture of Plackett-Luce models is employed to model the population of judges, where each component in the mixture represents a group of judges.

Mixture of experts models provide a framework in which covariates are included in mixture models. Covariates are included through the mixing proportions and the component density parameters. A mixture of experts model for ranked preference data is developed by combining a mixture of experts model and a mixture of Plackett-Luce models. Particular attention is given to the manner in which covariates enter the model. The mixing proportions and group specific parameters are potentially dependent on covariates. Model selection procedures are employed to choose optimal models.

Model parameters are estimated via the ‘EMM algorithm’, a hybrid of the expectation–maximization and the minorization–maximization algorithms. Examples are provided through a menu survey and through Irish election data. Results indicate mixture modeling using covariates is insightful when examining a population of judges who express preferences.
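
For reference, the Plackett-Luce probability of a single complete ranking, which underlies each mixture component, can be computed as below. The "worth" values are illustrative; in the mixture-of-experts model both these parameters and the mixing proportions may depend on covariates and are estimated with the EMM algorithm described above.

```python
import numpy as np

def plackett_luce_logprob(ranking, worth):
    """Log-probability of a complete ranking (best to worst) under a
    Plackett-Luce model with positive 'worth' parameters per object."""
    w = np.asarray([worth[obj] for obj in ranking], dtype=float)
    # at each stage the next choice is made among the objects not yet ranked
    denom = np.cumsum(w[::-1])[::-1]
    return float(np.sum(np.log(w) - np.log(denom)))

worth = {"candidate_A": 2.0, "candidate_B": 1.0, "candidate_C": 0.5}
print(np.exp(plackett_luce_logprob(["candidate_A", "candidate_B", "candidate_C"], worth)))
```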

Details

Choice Modelling: The State-of-the-art and The State-of-practice
Type: Book
ISBN: 978-1-84950-773-8

Article
Publication date: 1 August 1997

Rick L. Edgeman and Dennis K.J. Lin

Abstract

Acceptance sampling can be both time‐consuming and destructive so that it is desirable to arrive at a sound lot disposition decision in a timely manner. Sequential sampling plans are attractive since they offer a lower average sample number than do matched single, double, or multiple sampling plans. Analogously, cumulative sum control charts offer the ability to detect moderate process shifts more rapidly than do Shewhart control charts applied to the same process. The inverse Gaussian distribution is flexible and is often the model of choice in accelerated life testing applications where early failure times predominate. Based on sequential probability ratio tests (SPRT), sequential sampling/cumulative sum (CUSUM) plans provide timely, statistically based decisions. Presents SPRT and CUSUM results for the inverse Gaussian process mean. Also presents a simple goodness‐of‐fit test for the inverse Gaussian distribution which allows for model adequacy checking.
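
As a point of reference for the CUSUM side, a standard one-sided tabular CUSUM for a process mean is sketched below; it is the normal-theory textbook version, not the inverse Gaussian SPRT-based scheme derived in the paper, and the reference value k and decision interval h are illustrative choices.

```python
import numpy as np

def cusum_upper(x, target, k, h):
    """One-sided tabular CUSUM: C_i = max(0, C_{i-1} + x_i - (target + k));
    signal when C_i exceeds the decision interval h."""
    c, signals = 0.0, []
    for i, xi in enumerate(x):
        c = max(0.0, c + xi - (target + k))
        if c > h:
            signals.append(i)
            c = 0.0                      # restart after a signal
    return signals

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(10.0, 1.0, 50), rng.normal(11.0, 1.0, 50)])  # shift at i = 50
print(cusum_upper(x, target=10.0, k=0.5, h=5.0))
```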

Details

International Journal of Quality & Reliability Management, vol. 14 no. 6
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 8 December 2020

Zahid Hussain Hulio

Abstract

Purpose

The objective of this paper is to assess the wind energy potential of the Sujawal site for minimizing the dependence on fossil fuels.

Design/methodology/approach

The site-specific wind shear coefficient and the turbulence model were investigated. The two-parameter, k and c, Weibull distribution function was used to analyze the wind speed of the Sujawal site. The standard deviation of the site was also assessed for a period of a year. Also, the coefficient of variation was carried out to determine the difference at each height. The wind power and energy densities were assessed for a period of a year. The economic assessment of energy/kWh was investigated for selection of appropriate wind turbine.

Findings

The mean wind shear of the Sujawal site was found to be 0.274. The mean wind speed was found to be 7.458, 6.911, 6.438 and 5.347 m/s at 80, 60, 40 and 20 m above ground level (AGL), respectively. The mean values of the k parameter were 2.302, 2.767, 3.026 and 3.105 at 20, 40, 60 and 80 m, respectively, for a period of a year, and the Weibull c parameter values were 8.415, 7.797, 7.265 and 6.084 m/s at 80, 60, 40 and 20 m, respectively. The mean values of the standard deviation were 0.765, 0.737, 0.681 and 0.650 at 20, 40, 60 and 80 m, respectively. The mean wind power density was 287.33, 357.16, 405.16 and 659.58 W/m² at 20, 40, 60 and 80 m, respectively. The economic assessment showed that wind turbine 7 had the minimum cost of US$ 0.0298/kWh.

Originality/value

The Sujawal site is suitable for installing the utility wind turbines for energy generation at the lowest cost; hence, a sustainable solution.

Details

World Journal of Science, Technology and Sustainable Development, vol. 18 no. 1
Type: Research Article
ISSN: 2042-5945


Open Access
Article
Publication date: 20 July 2020

Lijuan Shi, Zuoning Jia, Huize Sun, Mingshu Tian and Liquan Chen

Abstract

Purpose

This paper aims to study the factors affecting bird nesting on electrified railway catenary lines and the impact of bird nesting events on railway operation.

Design/methodology/approach

First, one year's bird nest events, collected from the Shanghai Railway Bureau in the form of unstructured natural language, were structured with the help of a Python software tool. Second, root cause analysis (RCA) was used to identify all the factors likely to affect the probability of bird nesting. Third, the possible factors were then classified into two categories for separate analysis: outside factors (i.e. geography-related factors) and inside factors (i.e. railway-related factors).

Findings

It was observed that city population and geographic position noticeably affect nesting. It was then demonstrated that neither the location nor the equipment part on which the nest was built correlates with delay, while railway type has a significant but weak correlation with delay.

Originality/value

This paper discloses how nesting events affect railway operation.

Details

Smart and Resilient Transportation, vol. 2 no. 1
Type: Research Article
ISSN: 2632-0487


Open Access
Article
Publication date: 6 January 2023

RS. Koteeshwari and B. Malarkodi

Abstract

Purpose

Among the proposed radio access strategies for improving system performance in 5G networks, the non-orthogonal multiple access (NOMA) scheme is a prominent one.

Design/methodology/approach

Among the most fundamental NOMA methods, power-domain NOMA is the one where at the transmitter, superposition coding is used, and at the receiver, successive interference cancellation (SIC) is used. The importance of power allocation (PA) in achieving appreciable SIC and high system throughput cannot be overstated.

Findings

This research focuses on an outage probability analysis of NOMA downlink system under various channel conditions like Rayleigh, Rician and Nakagami-m fading channel. The system design's objectives, techniques and constraints for NOMA-based 5G networks' PA strategies are comprehensively studied.

Practical implications

From the results of this study, it is found that the outage probability performance of downlink ordered NOMA under the Rayleigh, Rician and Nakagami-m fading channels was good.

Originality/value

Outage probability analysis of downlink ordered NOMA under various channel conditions, such as Rayleigh, Rician and Nakagami-m fading channels, was carried out. Although performance under the Nakagami-m fading channel is lower than under the Rayleigh channel, the performance for user 1 and user 2 is good.

Details

Arab Gulf Journal of Scientific Research, vol. 41 no. 4
Type: Research Article
ISSN: 1985-9899


Article
Publication date: 11 January 2022

Wei Yang, Afshin Firouzi and Chun-Qing Li

Abstract

Purpose

The purpose of this paper is to demonstrate the applicability of Credit Default Swaps (CDS), as a financial instrument, for transferring risk in project finance loans. An equation is also derived for pricing CDS spreads.

Design/methodology/approach

The debt service cover ratio (DSCR) is modeled as a Brownian Motion (BM) with a power-law model fitted to the mean and half-variance of the existing data set of DSCRs. The survival probability of DSCR is calculated during the operational phase of the project finance deal, using a closed-form analytical method, and the results are verified by Monte Carlo simulation (MCS).
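
A sketch of the survival-probability step for the constant-coefficient case (the paper additionally fits a power-law model for the drift and half-variance, which is not reproduced here): the reflection-principle formula for an arithmetic Brownian motion staying above a barrier, checked against a crude Monte Carlo simulation. The DSCR starting value, barrier, drift and volatility are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def survival_closed_form(x0, barrier, mu, sigma, t):
    """P(DSCR stays above `barrier` up to time t) for an arithmetic Brownian
    motion X_s = x0 + mu*s + sigma*W_s (reflection-principle formula)."""
    a = x0 - barrier
    return (norm.cdf((a + mu * t) / (sigma * np.sqrt(t)))
            - np.exp(-2.0 * mu * a / sigma**2)
            * norm.cdf((-a + mu * t) / (sigma * np.sqrt(t))))

def survival_mc(x0, barrier, mu, sigma, t, n_paths=50_000, n_steps=1_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        alive &= x > barrier
    return alive.mean()

# illustrative DSCR dynamics: start at 1.5, default barrier 1.0, 10-year operations phase
print(survival_closed_form(1.5, 1.0, mu=0.02, sigma=0.15, t=10.0))
print(survival_mc(1.5, 1.0, mu=0.02, sigma=0.15, t=10.0))
```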

Findings

It is found that using the power-law model yields higher CDS premiums. This in turn confirms the necessity of conducting rigorous statistical analysis in fitting the best performing model, as uninformed reliance on a constant, time-invariant drift and diffusion model can erroneously result in smaller CDS spreads. A sensitivity analysis also shows that the results are very sensitive to the recovery rate and cost of debt values.

Originality/value

Insufficiency of free cash flow is a major risk in toll road project finance, and hence there is a need to develop innovative financial instruments for risk management. In this paper, a novel CDS valuation method is proposed assuming that the DSCR follows a BM stochastic process.

Details

Journal of Financial Management of Property and Construction, vol. 28 no. 1
Type: Research Article
ISSN: 1366-4387


Article
Publication date: 28 October 2014

Yir-Hueih Luh, Wun-Ji Jiang and Yu-Ning Chien

Abstract

Purpose

The purpose of this paper is to present an integrated analysis of determining factors of farmers’ genetically modified (GM) technology adoption behavior, with a special emphasis on information acquisition, knowledge accumulation, product attributes and technology traits.

Design/methodology/approach

Extending the expected profit maximization framework into a random utility model which accommodates joint decisions of information acquisition and technology adoption, the authors use the full information maximum likelihood method to yield both consistent and efficient estimates. The model is applied to a field survey collecting a sample of 141 randomly selected banana farmers.

Findings

The empirical results indicate that information acquired through social networks increases the probability of adoption. Knowledge accumulation, as captured by education and farming experience, is found to play a role in farmers' technology adoption, and the disease-resistance technology trait and flavor-enriching product attribute of GM bananas also appear to be important determinants of GM seed adoption in Taiwan.

Practical implications

Empirical evidence supports the significance of technology traits and product attributes in farmers' GM technology adoption, suggesting that close collaboration between industry, government and academia is the key to successful commercialization of GM crops.

Social implications

Understanding the determinants of farmers’ GM technology adoption can serve as the basis for promoting new biotechnology, and thus can facilitate the establishment of tenable solutions to food security issues.

Originality/value

This paper is the first attempt to incorporate information acquisition into the behavioral analysis of GM technology adoption. The present study also extends previous literature by considering influential factors related to both consumers’ and producers’ preferences in modeling technology adoption.

Details

China Agricultural Economic Review, vol. 6 no. 4
Type: Research Article
ISSN: 1756-137X


Article
Publication date: 1 March 1992

John Walker

Abstract

Presents a simple method for determining an optimum ordering policy for the single‐period inventory problem with a set‐up cost of ordering and the uncertain total demand over the period represented by a uniform probability density function. The distribution reflects the decision maker's degree of belief that all values of total demand outside two (possibly "soft") limits are barely credible and that all values within the limits are equally likely. The method is simple to implement and allows easy sensitivity analysis of the results to perturbations in the estimates of the problem parameters. Presents a numerical example to illustrate the essential features of the method.
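
For the zero set-up-cost case, the familiar critical-fractile solution under uniform demand takes one line, as sketched below; the unit costs and demand limits are illustrative, and the paper's full method additionally accounts for the set-up cost of ordering (ordering only when it is worthwhile), which is not shown here.

```python
def newsvendor_uniform(a, b, cu, co):
    """Critical-fractile order quantity when total demand is Uniform(a, b):
    choose Q with F(Q) = cu / (cu + co)."""
    critical_ratio = cu / (cu + co)
    return a + (b - a) * critical_ratio

# demand believed to lie between 400 and 1,000 units, all values equally likely;
# underage (lost margin) 5 per unit, overage (salvage loss) 2 per unit
q = newsvendor_uniform(400, 1000, cu=5.0, co=2.0)
print(q)   # 400 + 600 * 5/7, i.e. about 828.6 units
```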

Details

International Journal of Operations & Production Management, vol. 12 no. 3
Type: Research Article
ISSN: 0144-3577


Article
Publication date: 26 July 2021

Shaun Shuxun Wang, Jing Rong Goh, Didier Sornette, He Wang and Esther Ying Yang

Abstract

Purpose

Many governments are taking measures in support of small and medium-sized enterprises (SMEs) to mitigate the economic impact of the COVID-19 outbreak. This paper presents a theoretical model for evaluating various government measures, including insurance for bank loans, interest rate subsidy, bridge loans and relief of tax burdens.

Design/methodology/approach

This paper distinguishes a firm's intrinsic value and book value, where a firm can lose its intrinsic value when it encounters cash-flow crunch. Wang transform is applied to (1) calculating the appropriate level of interest rate subsidy payable to incentivize banks to issue more loans to SMEs and to extend the loan maturity of current debt to the SMEs, (2) describing the frailty distribution for SMEs and (3) defining banks' underwriting capability and overlap index in risk selection.
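
For reference, the Wang transform itself is the distortion g(u) = Φ(Φ⁻¹(u) + λ); a minimal sketch is below, with an illustrative probability and market price of risk rather than values from the paper.

```python
import numpy as np
from scipy.stats import norm

def wang_transform(u, lam):
    """Wang transform of a probability u: g(u) = Phi(Phi^{-1}(u) + lambda)."""
    return norm.cdf(norm.ppf(u) + lam)

# illustrative: distort a 5% loss probability with market price of risk 0.4
print(wang_transform(0.05, 0.4))
```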

Findings

Government support for SMEs can be in the form of an appropriate level of interest rate subsidy payable to incentivize banks to issue more loans to SMEs and to extend the loan maturity of current debt to the SMEs.

Research limitations/implications

More available data on bank loans would have helped strengthen the empirical studies.

Practical implications

This paper makes policy recommendations of establishing policy-oriented banks or investment funds dedicated to supporting SMEs, developing risk indices for SMEs to facilitate refined risk underwriting, providing SMEs with long-term tax relief and early-stage equity-type investments.

Social implications

The model highlights the importance of providing bridge loans to SMEs during the COVID-19 disruption to prevent massive business closures.

Originality/value

This paper provides an analytical framework using Wang transform for analyzing the most effective form of government support for SMEs.

Details

China Finance Review International, vol. 11 no. 3
Type: Research Article
ISSN: 2044-1398


Article
Publication date: 23 August 2013

P.F.G. Banfill, D.P. Jenkins, S. Patidar, M. Gul, G.F. Menzies and G.J. Gibson

Abstract

Purpose

The work set out to design and develop an overheating risk tool using the UKCP09 climate projections that is compatible with building performance simulation software. The aim of the tool is to exploit the Weather Generator and give a reasonably accurate assessment of a building's performance in future climates, without adding significant time, cost or complexity to the design team's work.

Methodology/approach

Because simulating every possible future climate is impracticable, the approach adopted was to use principal component analysis to give a statistically rigorous simplification of the climate projections. The perceptions and requirements of potential users were assessed through surveys, interviews and focus groups.

Findings

It is possible to convert a single dynamic simulation output into many hundreds of simulation results at hourly resolution for equally probable climates, giving a population of outcomes for the performance of a specific building in a future climate, thus helping the user choose adaptations that might reduce the risk of overheating. The tool outputs can be delivered as a probabilistic overheating curve and feed into a risk management matrix. Professionals recognized the need to quantify overheating risk, particularly for non‐domestic buildings, and were concerned about the ease of incorporating the UKCP09 projections into this process. The new tool has the potential to meet these concerns.

Originality/value

The paper is the first attempt to link UKCP09 climate projections and building performance simulation software in this way and the work offers the potential for design practitioners to use the tool to quickly assess the risk of overheating in their designs and adapt them accordingly.

Details

Structural Survey, vol. 31 no. 4
Type: Research Article
ISSN: 0263-080X


Article
Publication date: 9 May 2008

D. Voyer, F. Musy, L. Nicolas and R. Perrussel

Abstract

Purpose

The aim is to apply probabilistic approaches to electromagnetic numerical dosimetry problems in order to take into account the variability of the input parameters.

Design/methodology/approach

A classic finite element method is coupled with probabilistic methods. These probabilistic methods are based on the expansion of the random parameters in two different ways: a spectral expansion and a nodal expansion.

Findings

The computation of the mean and the variance on a simple scattering problem shows that only a few hundred calculations are required when applying these methods, while the Monte Carlo method uses several thousand samples to obtain comparable accuracy.
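
The efficiency gap can be illustrated outside the finite element setting with a toy one-dimensional random input: a ten-point Gauss-Hermite quadrature (a stand-in for the spectral and nodal expansions used in the paper) recovers the mean and variance with ten model evaluations, whereas plain Monte Carlo needs far more samples. The function f below is a made-up surrogate for the quantity of interest.

```python
import numpy as np

# Quantity of interest as a smooth function of one standard-normal random input
# (a stand-in for, e.g., a field value depending on an uncertain material parameter).
f = lambda x: np.exp(0.3 * x) + 0.1 * x**2

# Quadrature over the Gaussian input: ~10 deterministic model evaluations
nodes, weights = np.polynomial.hermite_e.hermegauss(10)
w = weights / np.sqrt(2.0 * np.pi)           # normalise to the standard normal density
mean_q = np.sum(w * f(nodes))
var_q = np.sum(w * f(nodes) ** 2) - mean_q**2

# Plain Monte Carlo needs orders of magnitude more model evaluations for similar accuracy
x = np.random.default_rng(6).standard_normal(100_000)
print(mean_q, var_q, f(x).mean(), f(x).var())
```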

Originality/value

The number of calculations is reduced using several techniques: a regression technique, sparse grids computed from the Smolyak algorithm or a suitable coordinate system.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 27 no. 3
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 18 November 2013

Mica Grujicic, Subrahmanian Ramaswami, Jennifer Snipes, Ramin Yavari, Gary Lickfield, Chian-Fong Yen and Bryan Cheeseman

Abstract

Purpose

A series of all-atom molecular-level computational analyses is carried out in order to investigate mechanical transverse (and longitudinal) elastic stiffness and strength of p-phenylene terephthalamide (PPTA) fibrils/fibers and the effect various microstructural/topological defects have on this behavior. The paper aims to discuss these issues.

Design/methodology/approach

To construct various defects within the molecular-level model, the relevant open-literature experimental and computational results were utilized, while the concentration of defects was set to the values generally encountered under “prototypical” polymer synthesis and fiber fabrication conditions.

Findings

The results obtained revealed: a stochastic character of the PPTA fibril/fiber strength properties; a high level of sensitivity of the PPTA fibril/fiber mechanical properties to the presence, number density, clustering and potency of defects; and a reasonably good agreement between the predicted and the measured mechanical properties.

Originality/value

When quantifying the effect of crystallographic/morphological defects on the mechanical transverse behavior of PPTA fibrils, the stochastic nature of the size/potency of these defects was taken into account.

Details

Multidiscipline Modeling in Materials and Structures, vol. 9 no. 4
Type: Research Article
ISSN: 1573-6105


Article
Publication date: 1 October 2018

Aitin Saadatmeli, Mohamad Bameni Moghadam, Asghar Seif and Alireza Faraz

Abstract

Purpose

The purpose of this paper is to develop a cost model with a variable sampling interval and optimization of the average cost per unit of time. The paper considers an economic–statistical design of the X̅ control chart under the Burr shock model with multiple assignable causes, and compares three types of prior distribution for the mean shift parameter.

Design/methodology/approach

The design of the modified X̅ chart is based on the two new concepts of adjusted average time to signal and average number of false alarms for X̅ control chart under Burr XII shock model with multiple assignable causes.

Findings

The cost model was examined through a numerical example with the same cost and time parameters, and the optimal design parameters were obtained under uniform and non-uniform sampling schemes. Furthermore, a sensitivity analysis was conducted to evaluate how the loss cost and design parameters vary with changes in the cost, time and Burr XII distribution parameters.

Research limitations/implications

The economic–statistical design scheme of the X̅ chart was developed for the Burr XII distribution with multiple assignable causes. Correlated data are among the assumptions still to be examined. Moreover, the optimal schemes for the economic–statistical chart can be extended to correlated observations and continuous processes.

Practical implications

The economic–statistical design of control charts depends on the process shock model distribution, which raises difficulties from both theoretical and practical viewpoints; one suitable alternative is the Burr XII distribution, which is quite flexible. Yet, in the Burr distribution context, only single assignable cause models have been considered, whereas a more realistic approach is to consider multiple assignable causes.

Originality/value

This study presents an advanced theoretical cost model that improves on the shock model presented in the literature. The study provides important evidence to justify the implementation of cost models in real-life industry.

Details

International Journal of Quality & Reliability Management, vol. 35 no. 9
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 11 September 2009

Rumena Stancheva, Ilona Iatcheva and Angel Angelov

Abstract

Purpose

The purpose of this paper is to introduce a method for evaluating the production tolerances influence on the practically realized optimal solution of electrotechnical devices. The influence is estimated by the optimal solution range defined with a given probability.

Design/methodology/approach

Because of the nature of tolerances, the paper works in probabilistic terms. The emphasis is on cases where the mathematical description of the cost function is analytical, for example a polynomial found on the basis of design of experiments and response surface methodology. The optimal solution range is defined with a given probability. The governing equation is Chebychev's inequality. In some cases, Chebychev's inequality may be rather weak, but its advantage is that it is valid for all kinds of probability distributions.
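
For reference, Chebychev's inequality is the distribution-free bound

\[
\Pr\bigl(\lvert X - \mu \rvert \ge k\sigma\bigr) \;\le\; \frac{1}{k^{2}}, \qquad k > 0,
\]

which holds for any random variable X with finite mean μ and variance σ²; this is what makes the optimal-solution range valid regardless of the actual tolerance distribution.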

Findings

A numerical example – an electrical machine – is considered with respect to variances in the magnetic characteristics of the stator and rotor core electrotechnical steel and tolerances in the geometrical dimensions of the machine. An analytical expression for the variance of the optimal solution is obtained in the case of a second order polynomial cost function. It is found that the energetic characteristic of the realized optimal design is expected to be negligibly different from its value in the proposed optimal project.

Originality/value

Although the example concerns the field of electrical machines, the methodology can be of interest for other domains and for different electrotechnical devices.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 28 no. 5
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 1 March 1996

M.L. Menéndez, L. Pardo, D. Morales and M. Salicrú

Abstract

Presents (h, φ)‐entropies as a generalization of φ‐entropies. Studies some applications of this function in Bayesian inference, especially in the comparison of experiments. Also studies the relationship of the (h, φ)‐entropy criterion to the classical approaches of Blackwell (1951) and Lehmann (1959).

Details

Kybernetes, vol. 25 no. 2
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 1 January 1982

COLETTE PADET

Abstract

In this paper, we propose to analyse the effect of the limitations inherent in a process of observation τ on the entropy of a continuous statistical set of events observed by means of τ. We present a study of the particular case in which τ is subject only to fluctuations (the sensitivity and accuracy of the apparatus are assumed to be perfect). In this case, τ is characterized by its “reliability limit”.

Details

Kybernetes, vol. 11 no. 1
Type: Research Article
ISSN: 0368-492X

Access Restricted. View access options
Article
Publication date: 5 September 2017

Ijjou Tizgui, Fatima El Guezar, Hassane Bouzahir and Brahim Benaid

The purpose of this study is to select the most accurate and the most efficient method in estimating Weibull parameters at Agadir region in Morocco.

385

Abstract

Purpose

The purpose of this study is to select the most accurate and the most efficient method in estimating Weibull parameters at Agadir region in Morocco.

Design/methodology/approach

In this paper, the Weibull distribution is used to model wind speed in hourly time-series format. Since several methods can be used to fit the Weibull distribution to measured data, seven methods were investigated to identify the simplest and most effective one: the graphical method (GM), the maximum likelihood method (MLM), the empirical method of Justus (EMJ), the empirical method of Lysen (EML), the energy pattern factor method (EPFM), Mabchour’s method (MMab) and the method of moments (MM).
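As a rough sketch of two of the listed methods on synthetic data (the wind speeds and parameters below are hypothetical), the maximum likelihood method via scipy and the empirical method of Justus can be written as:

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

def weibull_emj(v):
    """Empirical method of Justus: shape k from the coefficient of
    variation, scale c from the mean via the gamma function."""
    k = (v.std(ddof=1) / v.mean()) ** -1.086
    c = v.mean() / gamma(1.0 + 1.0 / k)
    return k, c

def weibull_mlm(v):
    """Maximum likelihood fit with the location parameter fixed at zero."""
    k, _, c = weibull_min.fit(v, floc=0)
    return k, c

# Hypothetical hourly wind speeds (m/s) for one year.
rng = np.random.default_rng(0)
v = weibull_min.rvs(2.0, scale=7.0, size=8760, random_state=rng)
print("EMJ:", weibull_emj(v))
print("MLM:", weibull_mlm(v))
```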

Findings

According to the statistical analysis tools (the coefficient of determination, root mean square error and the chi-square test), the MLM was the most efficient method for five months, while EMJ ranked first for four months and second for February. To select a single method, the two candidates (MLM and EMJ) were compared by calculating the error in estimating power density from the Weibull distribution fitted by each method. The average error was −0.51 per cent for MLM and −4.56 per cent for EMJ.

Originality/value

This investigation is the first of its kind for the studied region. To estimate the available wind power at Agadir in Morocco, investors can directly use MLM to determine the Weibull parameters at this site.

Details

International Journal of Energy Sector Management, vol. 11 no. 4
Type: Research Article
ISSN: 1750-6220

Keywords

Access Restricted. View access options
Article
Publication date: 14 August 2009

Alex Yi‐Hou Huang and Tsung‐Wei Tseng

The purpose of this paper is to compare the performance of commonly used value at risk (VaR) estimation methods for equity indices from both developed countries and emerging…

1233

Abstract

Purpose

The purpose of this paper is to compare the performance of commonly used value at risk (VaR) estimation methods for equity indices from both developed countries and emerging markets.

Design/methodology/approach

In addition to traditional time‐series models, this paper examines the recently developed nonparametric kernel estimator (KE) approach to predicting VaR. KE methods model tail behaviors directly and independently of the overall return distribution, so are better able to take into account recent extreme shocks.
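A generic kernel-based VaR estimator, sketched here under common assumptions (Gaussian kernel, Silverman bandwidth); this is an illustration, not necessarily the estimator used in the paper:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def kernel_var(returns, alpha=0.01):
    """VaR as the alpha-quantile of a Gaussian-kernel-smoothed CDF of returns."""
    r = np.asarray(returns)
    h = 1.06 * r.std(ddof=1) * len(r) ** (-1 / 5)      # Silverman bandwidth
    cdf = lambda x: norm.cdf((x - r) / h).mean()       # smoothed empirical CDF
    q = brentq(lambda x: cdf(x) - alpha, r.min() - 5 * h, r.max() + 5 * h)
    return -q                                          # VaR reported as a positive loss

# Hypothetical heavy-tailed daily returns.
rng = np.random.default_rng(1)
print(kernel_var(rng.standard_t(df=4, size=2500) * 0.01, alpha=0.01))
```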

Findings

The paper compares the performance and reliability of five major VaR methodologies, using more than 26 years of return data on 37 equity indices. Through back‐testing of the resulting models on a moving window and likelihood ratio tests, it shows that KE models produce remarkably good VaR estimates and outperform the other common methods.
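The likelihood ratio back-test referred to here is commonly Kupiec's unconditional-coverage test; a minimal sketch under that assumption (figures hypothetical):

```python
import numpy as np
from scipy.stats import chi2

def kupiec_lr(exceedances, n, p):
    """Unconditional-coverage LR test: do VaR exceedances occur at rate p?
    Assumes 0 < exceedances < n."""
    x, pi = exceedances, exceedances / n
    loglik = lambda q: (n - x) * np.log(1 - q) + x * np.log(q)
    lr = -2 * (loglik(p) - loglik(pi))
    return lr, chi2.sf(lr, df=1)   # test statistic and p-value

# Hypothetical back-test: 30 exceedances over 2,500 days at the 1% level.
print(kupiec_lr(30, 2500, 0.01))
```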

Practical implications

Financial assets are known to have irregular return patterns; not only the volatility but also the distributions themselves vary over time. This analysis demonstrates that a nonparametric approach (the KE method) can generate reliable VaR estimates and accurately capture the downside risk.

Originality/value

The paper evaluates the performance of several common VaR estimation approaches using a comprehensive sample of empirical data. The paper also reveals that kernel estimation methods can achieve remarkably reliable VaR forecasts. A detailed and complete investigation of nonparametric estimation methods will therefore significantly contribute to the understanding of the VaR estimation processes.

Details

The Journal of Risk Finance, vol. 10 no. 4
Type: Research Article
ISSN: 1526-5943

Keywords

Access Restricted. View access options
Article
Publication date: 15 June 2015

Qing Wang, Peng Huang, Jiangxiong Li and Yinglin Ke

The purpose of this paper is to propose an innovative method to extend the operating range of the laser tracking system and improve the accuracy and automation of boresighting by…

362

Abstract

Purpose

The purpose of this paper is to propose an innovative method to extend the operating range of the laser tracking system and improve the accuracy and automation of boresighting by designing a measurement instrument. Boresighting is a process that aligns the direction of special equipment with the aircraft reference axis. Sometimes the accurate measurement and adjustment of the equipment and the aircraft are hard to achieve.

Design/methodology/approach

The aircraft is moved by an automatic adjustment system consisting of three numerical control positioners. To obtain the position of the bore axis, an instrument with two measurement points is designed. Based on the multivariate normal distribution hypothesis, an uncertainty evaluation method for the aiming points is introduced. The accuracy of each measurement point is described by an uncertainty ellipsoid. A compensation and calibration method based on finite element analysis is proposed to reduce the effects of manufacturing error and deflection error.

Findings

The experimental results of the boresighting measurement prove that the proposed method is effective and reliable in digital assembly. The measurement accuracy of the angle between the bore axis and the reference axis is about ±0.004°. In addition, the measurement result is mainly influenced by the position error of the instrument.

Originality/value

The results of this study provide a new way to obtain and control the installation deviation of parts in aircraft digital assembly and will help improve precision and efficiency. The measurement method can also be applied to obtain the axis of a deep blind hole.

Details

Sensor Review, vol. 35 no. 3
Type: Research Article
ISSN: 0260-2288

Keywords

Access Restricted. View access options
Article
Publication date: 12 October 2012

Hong‐lin Yang, Shou Chen and Yan Yang

The purpose of this paper is to reveal the multi‐scale relation between power law distribution and correlation of stock returns and to figure out the determinants underlying…

594

Abstract

Purpose

The purpose of this paper is to reveal the multi‐scale relation between power law distribution and correlation of stock returns and to figure out the determinants underlying capital markets.

Design/methodology/approach

The multi‐scale relation between power-law distribution and correlation is investigated by comparing the original series with specially constructed series. The intraday-trend-removal approach developed by Liu et al. is used to analyse the effects of changes in power-law decay on correlation properties, and the series-shuffling approach originated by Viswanathan et al. is used to assess the impact of particular types of correlation on the power-law distribution.
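A minimal sketch of the shuffling idea on toy data with volatility clustering (data and parameters hypothetical): shuffling preserves the marginal return distribution while destroying temporal correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical returns with volatility clustering (a toy GARCH(1,1) process),
# standing in for the 5-minute series used in the paper.
n, omega, alpha, beta = 10_000, 1e-6, 0.1, 0.85
r = np.empty(n)
sigma2 = omega / (1 - alpha - beta)
for t in range(n):
    r[t] = rng.standard_normal() * np.sqrt(sigma2)
    sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2

shuffled = rng.permutation(r)   # same marginal distribution, ordering destroyed

acf1 = lambda x: np.corrcoef(np.abs(x[:-1]), np.abs(x[1:]))[0, 1]
print("lag-1 autocorrelation of |r|:", acf1(r), "after shuffling:", acf1(shuffled))
```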

Findings

It is found that the accelerating decay of the power law has an insignificant effect on the correlation properties of returns, and the empirical results indicate that, besides correlation, time scale may also be an important factor in maintaining the power-law property of returns. When the time scale is below a critical point, the effects of correlation are crucial, with nonlinear long-range correlation exerting the strongest influence. For time scales beyond the critical point, however, the impact of correlation diminishes and eventually disappears, and the power-law property then depends entirely on time scale.

Research limitations/implications

The 5‐minute high-frequency data from the Shanghai market used as the empirical benchmark are insufficient to depict the relation over the entire range of time scales in the Chinese stock market.

Practical implications

Through analysis of multi‐scale relations, the paper identifies determinants of market dynamics that can be applied to risk management, and it supports efforts to introduce a time parameter into further risk measures and controls.

Originality/value

The paper provides the empirical evidence that time scale is one of the key determinants of market dynamics by analyzing the multi‐scale relation between power law distribution and correlation.

Details

Kybernetes, vol. 41 no. 9
Type: Research Article
ISSN: 0368-492X

Keywords

Access Restricted. View access options
Book part
Publication date: 1 June 2022

Luca Tiozzo Pezzoli and Elisa Tosetti

Seismometers continuously record a wide range of ground movements not caused by earthquake activity, but rather generated by human activities such as traffic, industrial machinery…

Abstract

Seismometers continuously record a wide range of ground movements not caused by earthquake activity, but rather generated by human activities such as traffic, industrial machinery functioning and industrial processes. In this Chapter we exploit seismic data to predict variations in Gross Domestic Product (GDP) for a set of States in the USA over the period from 2016 to 2021. We measure the noise generated at specific frequencies that are linked to human activity and use it as an indicator of economic activity. We show a remarkable reduction in seismic noise due to a slowdown in traffic and economic activities during the Corona economic crisis. Our results point at seismic data as a valuable source of information that can be used for monitoring regional and national economies.

Details

The Economics of COVID-19
Type: Book
ISBN: 978-1-80071-694-0

Keywords

Access Restricted. View access options
Article
Publication date: 1 February 1991

Rallis C. Papademetriou

The possibility of improving the quality of the minimum relative entropy spectral estimates by properly selecting the set of autocorrelation values is demonstrated. The study…

40

Abstract

The possibility of improving the quality of the minimum relative entropy spectral estimates by properly selecting the set of autocorrelation values is demonstrated. The study concentrates on two aspects: resolvability and accuracy of peak location. Several numerical examples are given.

Details

Kybernetes, vol. 20 no. 2
Type: Research Article
ISSN: 0368-492X

Keywords

Access Restricted. View access options
Article
Publication date: 1 March 1997

A.Z. Keller, M. Meniconi, I. Al‐Shammari and K. Cassidy

Data sets were compiled from the MHIDAS data bank for incidents where there had been five or more fatalities, ten or more injuries, 50 evacuations, or US$1 million damage. The…

786

Abstract

Data sets were compiled from the MHIDAS data bank for incidents where there had been five or more fatalities, ten or more injuries, 50 evacuations, or US$1 million damage. The data were converted to magnitudes on the Bradford Disaster Scale and analysed using maximum likelihood. Parameters determined from the estimation procedures were compared for compatibility between themselves and the results of analyses using other data.

Details

Disaster Prevention and Management: An International Journal, vol. 6 no. 1
Type: Research Article
ISSN: 0965-3562

Keywords

Access Restricted. View access options
Article
Publication date: 21 September 2022

Sifeng Liu and Wei Tang

The purpose of this paper is to explore new ways and lay a solid foundation to solve the problem of reliability growth analysis of major aerospace equipment with various…

232

Abstract

Purpose

The purpose of this paper is to explore new ways, and to lay a solid foundation, for solving the problem of reliability growth analysis of major aerospace equipment with various types of uncertainty data, by proposing the new concepts of general uncertainty data (GUD) and the general uncertainty variable (GUV) and by building an operation system for GUVs.

Design/methodology/approach

The characteristics of reliability growth data of major aerospace equipment and the limitations of current reliability growth models are analysed first. The most commonly used uncertainty-analysis methods – probability and statistics, fuzzy mathematics, grey system theory and rough set theory – are introduced. The concepts of GUD and GUV for reliability growth data analysis of major aerospace equipment are proposed. A simplified form of the GUV, based on its “kernel” and degree of uncertainty, is defined. An operation system for GUVs is then built.

Findings

(1) the concept of GUD; (2) the concept of GUV; and (3) novel operation rules for GUVs in simplified form.

Practical implications

The method presented in this paper can be used to integrate complex reliability growth data of major aerospace equipment. Reliability growth models based on GUVs can be built for reliability growth evaluation and forecasting of major aerospace equipment in practice. The reliability evaluation example of a solid rocket motor shows that the concept and idea proposed in this paper are feasible. The research opens up a new way to analyse the complex uncertainty data of reliability growth of major aerospace equipment. Moreover, the operations on GUVs could be extended to algebraic equations, differential equations and matrices that include GUVs.

Originality/value

The new concepts of GUD and GUV are given for the first time, and novel operation rules for GUVs in simplified form are constructed.

Access Restricted. View access options
Article
Publication date: 1 July 2004

Alexandros M. Goulielmos

This article deals first in a theoretical fashion – a kind of a literature review – with the concept of randomness, as this appears in various disciplines. Second, an empirical…

579

Abstract

This article first deals, in a theoretical fashion – a kind of literature review – with the concept of randomness as it appears in various disciplines. Second, an empirical approach is taken with actual data on marine accidents, in the form of ships totally lost, on two counts: ships lost per area and ships lost per month. The first appears non-random and the latter random. This finding is crucial for the countries with the most dangerous areas, as well as for the IMO. The test used for non-randomness is the BDS statistic, which tests for nonlinear dependence. The test showed randomness for the monthly time series at both the 95 per cent and 99 per cent confidence levels and non-randomness for the area data at the same confidence levels.

Details

Disaster Prevention and Management: An International Journal, vol. 13 no. 3
Type: Research Article
ISSN: 0965-3562

Keywords

Access Restricted. View access options
Article
Publication date: 14 March 2023

Qian Zhang and Huiyong Yi

With the evolution of the turbulent environment constantly triggering the emergence of a trust crisis between organizations, how can university–industry (U–I) alliances respond to…

178

Abstract

Purpose

With the evolution of the turbulent environment constantly triggering the emergence of a trust crisis between organizations, how can university–industry (U–I) alliances respond to the trust crisis when conducting green technology innovation (GTI) activities? This paper aims to address this issue.

Design/methodology/approach

The authors examine the process of trust-crisis damage: trust first suffers an instantaneous impairment, which subsequently and indirectly lowers the GTI level and ultimately hurts the profitability of green innovations. In this paper, a piecewise deterministic dynamic model is deployed to portray the trust and GTI levels in the GTI activities of U–I alliances.

Findings

The authors analyze the equilibrium results under decentralized and centralized decision-making modes to obtain the following conclusions: Trust levels are affected by a combination of hazard and damage (short and long term) rates, shifting from steady growth to decline in the presence of low hazard and damage rates. However, the GTI level has been growing steadily. It is essential to consider factors such as the hazard rate, the damage rate in the short and long terms, and the change in marginal profit in determining whether to pursue an efficiency- or recovery-friendly strategy in the face of a trust crisis. The authors found that two approaches can mitigate trust crisis losses: implementing a centralized decision-making mode (i.e. shared governance) and reducing pre-crisis trust-building investments. This study offers several insights for businesses and academics to respond to a trust crisis.

Research limitations/implications

The present research can be extended in several directions. Instead of distinguishing attribution of trust crisis, the authors use hazard rate, short- and long-term damage rates and change in marginal profitability to distinguish the scale of trust crises. Future scholars can further add an attribution approach to enrich the classification of trust crises. Moreover, the authors only consider trust crises because of unexpected events in a turbulent environment; in fact, a trust crisis may also be a plateauing process, yet the authors do not study this situation.

Practical implications

First, the authors explore what factors affect the level of trust and the level of GTI when a trust crisis occurs. Second, the authors provide guidelines on how businesses and academics can coordinate their trust-building and GTI efforts when faced with a trust crisis in a turbulent environment.

Originality/value

First, this paper explores the interaction between psychology and innovation management. Although empirical studies have shown that trust in U–I alliances is related to innovation performance, and scholars have developed differential game models to portray the GTI process, differential game models exploring this interaction remain scarce. Second, the authors incorporate the inter-organizational trust level into the GTI level in university–industry collaboration, applying differential equations to portray the trust-building and GTI processes, respectively, to reveal the importance of trust in GTI activities. Third, the authors establish a piecewise deterministic dynamic game model in which the impact of crisis shocks is not equal to zero, in contrast to most previous studies based on Brownian motion.

Details

Nankai Business Review International, vol. 15 no. 2
Type: Research Article
ISSN: 2040-8749

Keywords

Access Restricted. View access options
Article
Publication date: 13 March 2017

Prem Chhetri, Booi Kam, Kwok Hung Lau, Brian Corbitt and France Cheong

The purpose of this paper is to explore how a retail distribution network can be rationalised from a spatial perspective to improve service responsiveness and delivery efficiency.

3300

Abstract

Purpose

The purpose of this paper is to explore how a retail distribution network can be rationalised from a spatial perspective to improve service responsiveness and delivery efficiency.

Design/methodology/approach

This paper applies spatial analytics to examine variability of demand, both spatially and from a service delivery perspective, for an auto-parts retail network. Spatial analytics are applied to map the location of stores and customers to represent demand and service delivery patterns and to delineate market areas.

Findings

Results show significant spatial clustering in customer demand, whilst the delivery of products to customers, in contrast, is spatially dispersed. There is a substantial gap between revenue generated and costs. Market area analysis shows significant overlap, whereby stores compete with each other for business. In total, 80 per cent of customers can be reached within a 15-minute radius, whilst only 20 per cent lie outside the market areas. Segmentation analysis of customers, based on service delivery, also shows the prevalence of the Pareto principle or 80:20 rule, whereby 80 per cent of the revenue is generated by 20 per cent of customers.
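As a simple illustration of the 80:20 check (hypothetical revenue data, not the firm's):

```python
import numpy as np

rng = np.random.default_rng(0)
revenue = rng.pareto(a=1.2, size=5_000) * 100.0   # hypothetical revenue per customer

r = np.sort(revenue)[::-1]                # customers ranked by revenue
share = np.cumsum(r) / r.sum()            # cumulative revenue share
top20 = int(0.2 * len(r))
print(f"top 20% of customers generate {share[top20 - 1]:.0%} of revenue")
```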

Practical implications

Spatially integrated strategies are suggested to improve the efficiency of the retail network. It is recommended that less accessible and unprofitable customers either be charged an extra delivery cost or be served through outsourcing, without the risk of a substantial reduction in revenue or quality of service delivery.

Originality/value

Innovative application of spatial analytics is used to analyse and visualise unit-record sales data to generate practical solutions to improve retail network responsiveness and operational efficiency.

Details

International Journal of Retail & Distribution Management, vol. 45 no. 3
Type: Research Article
ISSN: 0959-0552

Keywords

Access Restricted. View access options
Article
Publication date: 3 April 2017

Philipp Schäfer and Jens Hirsch

This study aims to analyze whether urban tourism affects Berlin housing rents. Urban tourism is of considerable economic importance for many urban destinations and has developed…

1725

Abstract

Purpose

This study aims to analyze whether urban tourism affects Berlin housing rents. Urban tourism is of considerable economic importance for many urban destinations and has developed very strongly over the past few years. The prevailing view is that urban tourism triggers side-effects, which affect the urban housing markets through a lack of supply and increasing rents. Berlin represents Germany’s largest rental market and is particularly affected by growing urban tourism and increasing rents.

Design/methodology/approach

The paper considers whether urban tourism hotspots affect Berlin’s housing rents, using two hedonic regression approaches, namely, conventional ordinary least squares (OLS) and generalized additive models (GAM). The regression models incorporate housing characteristics as well as several distance-based measures. The research considers tourist attractions, restaurants, hotels and holiday flats as constituents of tourism hotspots and is based on a spatial analysis using geographic information systems (GIS).
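A minimal sketch of a distance-based hedonic OLS specification using statsmodels (variable names and data are hypothetical; the GAM counterpart is not shown):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical listing data: rent per sqm, flat size and distance (km)
# to the nearest tourism hotspot.
rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "size_sqm": rng.uniform(30, 120, n),
    "dist_hotspot_km": rng.exponential(2.0, n),
})
df["rent_sqm"] = (12 - 0.02 * df["size_sqm"] - 0.6 * df["dist_hotspot_km"]
                  + rng.normal(0, 1, n))

model = smf.ols("rent_sqm ~ size_sqm + dist_hotspot_km", data=df).fit()
print(model.params)   # a negative distance coefficient = rents fall away from hotspots
```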

Findings

The results can be regarded as a preliminary indication that rents are, indeed, affected by urban tourism. Rents seem to be positively correlated with the touristic attractiveness of a particular location, even though it is very difficult to measure accurately the true magnitude of the respective effects of urban tourism amenities, as the various models show. GAM outperforms OLS and seems to be more appropriate for spatial analysis of rents across a city.

Originality/value

To the best of the authors’ knowledge, the paper provides the first empirical analysis of the effects of urban tourism hotspots on the Berlin housing market.

Details

International Journal of Housing Markets and Analysis, vol. 10 no. 2
Type: Research Article
ISSN: 1753-8270

Keywords

Access Restricted. View access options
Article
Publication date: 31 January 2022

Simone Massulini Acosta and Angelo Marcio Oliveira Sant'Anna

Process monitoring is a way to manage the quality characteristics of products in manufacturing processes. Several process monitoring based on machine learning algorithms have been…

657

Abstract

Purpose

Process monitoring is a way to manage the quality characteristics of products in manufacturing processes. Several process-monitoring approaches based on machine learning algorithms have been proposed in the literature and have gained the attention of many researchers. In this paper, the authors develop machine learning-based control charts for monitoring the fraction of non-conforming products in smart manufacturing. The study proposes a relevance vector machine using a Bayesian sparse kernel, optimized by a differential evolution algorithm, for efficient monitoring in manufacturing.

Design/methodology/approach

A new approach to data analysis, modelling and monitoring in the manufacturing industry was developed. The study uses a relevance vector machine with a Bayesian sparse kernel, a technique that improves on the support vector machine for both regression and classification problems. The authors compare the performance of the proposed relevance vector machine with other machine learning algorithms, such as the support vector machine, an artificial neural network and a beta regression model. The proposed approach was evaluated over different shift scenarios of the average run length using Monte Carlo simulation.
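As a generic illustration of evaluating average run length (ARL) by Monte Carlo for a fraction-nonconforming chart (a plain 3-sigma p-chart stands in for the paper's relevance vector machine monitor):

```python
import numpy as np

def arl(p, limit, n=100, reps=2_000, rng=None):
    """Average number of samples until the observed fraction
    non-conforming exceeds `limit`, for true defect rate `p`."""
    if rng is None:
        rng = np.random.default_rng(0)
    runs = []
    for _ in range(reps):
        t = 0
        while True:
            t += 1
            if rng.binomial(n, p) / n > limit:
                runs.append(t)
                break
    return np.mean(runs)

p0 = 0.02                                        # in-control defect rate
limit = p0 + 3 * np.sqrt(p0 * (1 - p0) / 100)    # 3-sigma p-chart limit
print("in-control ARL:", arl(0.02, limit))
print("shifted ARL:   ", arl(0.05, limit))
```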

Findings

The authors analyse a real case study in a manufacturing company using the best-performing machine learning algorithms. The results indicate that the proposed relevance vector machine-based process monitoring is an excellent quality tool for monitoring defective products in a manufacturing process. A comparative analysis with four machine learning models is used to evaluate the performance of the proposed approach; the relevance vector machine performs slightly better than the support vector machine, the artificial neural network and the beta models.

Originality/value

This research differs from others by providing approaches for monitoring defective products. Machine learning-based control charts are used to monitor product failures in a smart manufacturing process. A key contribution of this study is the development of different models for fault detection and for identifying change points in the manufacturing process. Moreover, the authors’ research indicates that machine learning models are adequate tools for modelling and monitoring the fraction of non-conforming products in an industrial process.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 3
Type: Research Article
ISSN: 0265-671X

Keywords

Access Restricted. View access options
Article
Publication date: 1 August 2006

Thorsten Blecker and Nizar Abdelkafi

To identify and examine the origins of complexity in a mass customization system and to propose an effective application sequence of variety management strategies in order to cope…

5528

Abstract

Purpose

To identify and examine the origins of complexity in a mass customization system and to propose an effective application sequence of variety management strategies in order to cope with this complexity.

Design/methodology/approach

Through the application of Suh's complexity theory an understanding of the causes of complexity in the specific context of a mass customization environment is developed. This facilitates the identification of the strategies that are adequate to tackle the problems induced by complexity.

Findings

The mass customization system is a coupled system that cannot be mastered simply, and it is impossible to transform it into an uncoupled system with a low complexity level. However, the effective and targeted implementation of variety management strategies at the product and process levels enables this complexity to be managed by making the system more decoupled.

Practical implications

Complexity can be decreased if managers ensure less dependency between the satisfaction of customer requirements and the position of the decoupling point. It is also advantageous to reduce the coupling between the fast-delivery requirement in mass customization and the placement of the decoupling point. Furthermore, effective variety management calls for the implementation of the identified strategies in ascending order of complexity-reduction potential.

Originality/value

The article relates Suh's complexity theory to the mass customization system, provides a framework for classifying variety management strategies and derives managerial recommendations for reducing complexity in a mass customization environment.

Details

Management Decision, vol. 44 no. 7
Type: Research Article
ISSN: 0025-1747

Keywords

Abstract

Details

Transportation and Traffic Theory in the 21st Century
Type: Book
ISBN: 978-0-080-43926-6

Access Restricted. View access options
Article
Publication date: 11 November 2013

Lance Nizami

A key cybernetics concept, information transmitted in a system, was quantified by Shannon. It quickly gained prominence, inspiring a version by Harvard psychologists Garner and…

202

Abstract

Purpose

A key cybernetics concept, information transmitted in a system, was quantified by Shannon. It quickly gained prominence, inspiring a version by Harvard psychologists Garner and Hake for “absolute identification” experiments. There, human subjects “categorize” sensory stimuli, affording “information transmitted” in perception. The Garner-Hake formulation has been in continuous use for 62 years, exerting enormous influence. But some experienced theorists and reviewers have criticized it as uninformative. They could not explain why, and were ignored. Here, the “why” is answered. The paper aims to discuss these issues.

Design/methodology/approach

A key Shannon data-organizing tool is the confusion matrix. Its columns and rows are, respectively, labeled by “symbol sent” (event) and “symbol received” (outcome), such that matrix entries represent how often outcomes actually corresponded to events. Garner and Hake made their own version of the matrix, which deserves scrutiny, and is minutely examined here.

Findings

The Garner-Hake confusion-matrix columns represent “stimulus categories”, ranges of some physical stimulus attribute (usually intensity), and its rows represent “response categories” of the subject's identification of the attribute. The matrix entries thus show how often an identification empirically corresponds to an intensity, such that “outcomes” and “events” differ in kind (unlike Shannon's). Obtaining a true “information transmitted” therefore requires stimulus categorizations to be converted to hypothetical evoking stimuli, achievable (in principle) by relating categorization to sensation to intensity. But those relations are actually unknown, perhaps unknowable.

Originality/value

The author achieves an important understanding: why “absolute identification” experiments do not illuminate sensory processes.

Details

Kybernetes, vol. 42 no. 9/10
Type: Research Article
ISSN: 0368-492X

Keywords

Available. Open Access. Open Access
Article
Publication date: 17 July 2020

Sheryl Brahnam, Loris Nanni, Shannon McMurtrey, Alessandra Lumini, Rick Brattin, Melinda Slack and Tonya Barrier

Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex…

3198

Abstract

Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex, multifactorial, and geared toward research. The goals of this work are twofold: 1) to develop a new video dataset for automatic neonatal pain detection called iCOPEvid (infant Classification Of Pain Expressions videos), and 2) to present a classification system that sets a challenging comparison performance on this dataset. The iCOPEvid dataset contains 234 videos of 49 neonates experiencing a set of noxious stimuli, a period of rest, and an acute pain stimulus. From these videos, 20-second segments are extracted and grouped into two classes: pain (49) and nopain (185), with the nopain video segments handpicked to produce a highly challenging dataset. An ensemble of twelve global and local descriptors with a Bag-of-Features approach is utilized to improve the performance of some new descriptors based on Gaussian of Local Descriptors (GOLD). The basic classifier used in the ensembles is the Support Vector Machine, and decisions are combined by sum rule. These results are compared with standard methods, some deep learning approaches, and 185 human assessments. Our best machine learning methods are shown to outperform the human judges.
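A minimal sketch of sum-rule fusion of SVM probability outputs across descriptor channels (synthetic features stand in for the paper's GOLD/Bag-of-Features descriptors):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Hypothetical data: three feature subsets stand in for separate
# global/local descriptor channels.
X, y = make_classification(n_samples=400, n_features=60, n_informative=20,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

channels = [slice(0, 20), slice(20, 40), slice(40, 60)]
probas = []
for ch in channels:
    clf = SVC(probability=True, random_state=0).fit(Xtr[:, ch], ytr)
    probas.append(clf.predict_proba(Xte[:, ch]))

fused = np.mean(probas, axis=0)    # sum rule (equivalently, the mean of the channels)
pred = fused.argmax(axis=1)
print("fused accuracy:", (pred == yte).mean())
```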

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Keywords

Access Restricted. View access options
Article
Publication date: 1 March 2011

Wayne Journell

One of the fundamental tenets of social studies education is preparing students to become knowledgeable and informed citizens. Especially in this era of increased communication…

12

Abstract

One of the fundamental tenets of social studies education is preparing students to become knowledgeable and informed citizens. Especially in this era of increased communication and technology, one skill necessary for informed citizenship is the ability to critically understand polling data. Social studies educators, however, rarely provide their students with the mathematical framework required to move beyond face-value analysis of public opinion polls. This article outlines the basic statistical processes behind public opinion polls and provides social studies teachers with activities that encourage students to critically question political data presented in the media.
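As a simple worked example of the statistics behind a reported margin of error (95% normal approximation; poll figures hypothetical):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for a simple random sample proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical poll: 52% support among 1,000 respondents.
moe = margin_of_error(0.52, 1000)
print(f"52% +/- {moe:.1%}")   # roughly +/- 3.1 percentage points
```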

Details

Social Studies Research and Practice, vol. 6 no. 1
Type: Research Article
ISSN: 1933-5415

Keywords

Access Restricted. View access options
Article
Publication date: 5 March 2018

Xiwen Cai, Haobo Qiu, Liang Gao, Xiaoke Li and Xinyu Shao

This paper aims to propose hybrid global optimization based on multiple metamodels for improving the efficiency of global optimization.

249

Abstract

Purpose

This paper aims to propose hybrid global optimization based on multiple metamodels for improving the efficiency of global optimization.

Design/methodology/approach

The method fully utilizes the information provided by different metamodels in the optimization process. It not only extends the expected improvement criterion of kriging to other metamodels but also intelligently selects appropriate metamodelling techniques to guide the search direction, making the search process very efficient. Corresponding local search strategies are also put forward to further improve optimization efficiency.
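A minimal sketch of the expected improvement criterion mentioned here, for a kriging (Gaussian process) prediction with mean mu and standard deviation sigma at a candidate point, under the minimization convention (notation generic, not the paper's):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: E[max(f_best - Y, 0)] with Y ~ N(mu, sigma**2)."""
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical candidate points: predicted means, uncertainties, current best value.
mu = np.array([1.2, 0.9, 1.5])
sigma = np.array([0.05, 0.30, 0.60])
print(expected_improvement(mu, sigma, f_best=1.0))
```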

Findings

To validate the method, it is tested on several numerical benchmark problems and applied to two engineering design optimization problems. Moreover, an overall comparison between the proposed method and several other typical global optimization methods is made. The results show that the global optimization efficiency of the proposed method is higher than that of the other methods in most situations.

Originality/value

The proposed method makes full use of multiple metamodels in the optimization process, yielding good results and showing great applicability to engineering design optimization problems that involve costly simulations.

Details

Engineering Computations, vol. 35 no. 1
Type: Research Article
ISSN: 0264-4401

Keywords

Access Restricted. View access options
Book part
Publication date: 25 July 1997

Les Gulko

Abstract

Details

Applying Maximum Entropy to Econometric Problems
Type: Book
ISBN: 978-0-76230-187-4
