Search results

1 – 20 of over 10000
Book part
Publication date: 14 July 2006

Duangkamon Chotikapanich and William E. Griffiths

Hypothesis tests for dominance in income distributions have received considerable attention in recent literature. See, for example, Barrett and Donald (2003a, b), Davidson and…

Abstract

Hypothesis tests for dominance in income distributions have received considerable attention in recent literature. See, for example, Barrett and Donald (2003a, b), Davidson and Duclos (2000) and references therein. Such tests are useful for assessing progress towards eliminating poverty and for evaluating the effectiveness of various policy initiatives directed towards welfare improvement. To date, the focus in the literature has been on sampling theory tests. Such tests can be set up in various ways, with dominance as the null or alternative hypothesis, and with dominance in either direction (X dominates Y or Y dominates X). The result of a test is expressed as rejection of, or failure to reject, a null hypothesis. In this paper, we develop and apply Bayesian methods of inference to problems of Lorenz and stochastic dominance. The result from a comparison of two income distributions is reported in terms of the posterior probabilities for each of the three possible outcomes: (a) X dominates Y, (b) Y dominates X, and (c) neither X nor Y is dominant. Reporting results about uncertain outcomes in terms of probabilities has the advantage of being more informative than a simple reject/do-not-reject outcome. Whether a probability is sufficiently high or low for a policy maker to take a particular action is then a decision for that policy maker.

The methodology is applied to data for Canada from the Family Expenditure Survey for the years 1978 and 1986. We assess the likelihood of dominance from one time period to the next. Two alternative assumptions are made about the income distributions – Dagum and Singh-Maddala – and in each case the posterior probability of dominance is given by the proportion of times a relevant parameter inequality is satisfied by the posterior observations generated by Markov chain Monte Carlo.
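
To make the reporting of posterior dominance probabilities concrete, the sketch below shows one way to classify posterior MCMC draws, assuming Singh-Maddala parameters for each year and checking first-order stochastic dominance on an income grid; the parameter draws here are hypothetical placeholders rather than the paper's estimates, and the paper also treats Lorenz dominance.

```python
import numpy as np

def singh_maddala_cdf(x, a, b, q):
    """CDF of the Singh-Maddala (Burr XII) distribution."""
    return 1.0 - (1.0 + (x / b) ** a) ** (-q)

def dominance_probabilities(draws_x, draws_y, grid):
    """Classify each pair of posterior draws into one of three outcomes:
    X dominates Y, Y dominates X, or neither (first-order dominance)."""
    x_dom = y_dom = neither = 0
    for (ax, bx, qx), (ay, by, qy) in zip(draws_x, draws_y):
        fx = singh_maddala_cdf(grid, ax, bx, qx)
        fy = singh_maddala_cdf(grid, ay, by, qy)
        if np.all(fx <= fy):          # lower CDF everywhere => X dominates
            x_dom += 1
        elif np.all(fy <= fx):
            y_dom += 1
        else:
            neither += 1
    n = len(draws_x)
    return x_dom / n, y_dom / n, neither / n

# Hypothetical posterior draws (a, b, q) for the two years.
rng = np.random.default_rng(0)
draws_1978 = np.column_stack([rng.normal(2.8, 0.05, 1000),
                              rng.normal(30.0, 0.5, 1000),
                              rng.normal(1.2, 0.05, 1000)])
draws_1986 = np.column_stack([rng.normal(2.9, 0.05, 1000),
                              rng.normal(33.0, 0.5, 1000),
                              rng.normal(1.2, 0.05, 1000)])
grid = np.linspace(1.0, 200.0, 400)   # income grid
print(dominance_probabilities(draws_1986, draws_1978, grid))
```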

Details

Dynamics of Inequality and Poverty
Type: Book
ISBN: 978-0-76231-350-1

Article
Publication date: 23 August 2022

Kamlesh Kumar Pandey and Diwakar Shukla

The K-means (KM) clustering algorithm is highly sensitive to the selection of initial centroids, since the initial centroids of the clusters determine computational effectiveness…


Abstract

Purpose

The K-means (KM) clustering algorithm is highly sensitive to the selection of initial centroids, since the initial centroids of the clusters determine computational effectiveness, efficiency and local optima issues. Numerous initialization strategies have been proposed to overcome these problems through the random or deterministic selection of initial centroids. The random initialization strategy suffers from local optimization issues and the worst clustering performance, while the deterministic initialization strategy incurs a high computational cost. Big data clustering aims to reduce computation cost and improve cluster efficiency. The objective of this study is to obtain better initial centroids for big data clustering on business management data without using random or deterministic initialization, thereby avoiding local optima and improving clustering efficiency and effectiveness in terms of cluster quality, computation cost, data comparisons and iterations on a single machine.

Design/methodology/approach

This study presents the Normal Distribution Probability Density (NDPD) algorithm for big data clustering on a single machine to solve business management-related clustering issues. The NDPDKM algorithm resolves the KM initialization problem using the probability density of each data point: it first identifies the most probable data points through the normal probability density computed from the mean and standard deviation of the dataset, and then determines the K initial centroids using sorting and linear systematic sampling heuristics.
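
As a rough illustration of the described initialization idea (density scoring from the dataset mean and standard deviation, followed by sorting and linear systematic sampling), the following sketch builds K initial centroids; it is a plausible reading of the description rather than the authors' published implementation, and the dataset is synthetic.

```python
import numpy as np
from scipy.stats import norm

def ndpd_style_init(X, k):
    """Pick k initial centroids by (1) scoring each point with a normal
    probability density built from the feature-wise mean and std, and
    (2) drawing k points by linear systematic sampling over the
    density-sorted data. A sketch of the idea, not the published code."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    # log-density of every point under an independent normal model
    log_dens = norm.logpdf(X, loc=mu, scale=sigma).sum(axis=1)
    order = np.argsort(log_dens)[::-1]      # most probable points first
    step = len(X) // k                      # systematic sampling interval
    idx = order[::step][:k]
    return X[idx]

rng = np.random.default_rng(1)
X = rng.normal(size=(10_000, 4))
centroids = ndpd_style_init(X, k=5)
print(centroids.shape)   # (5, 4); these seed a standard K-means run afterwards
```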

Findings

The performance of the proposed algorithm is compared with the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms using the Davies-Bouldin score, Silhouette coefficient, SD validity, S_Dbw validity, number of iterations and CPU time validation indices on eight real business datasets. The experimental evaluation demonstrates that the NDPDKM algorithm reduces iterations, local optima and computing cost, and improves cluster performance, effectiveness and efficiency with stable convergence compared with the other algorithms. The NDPDKM algorithm reduces the average computing time by up to 34.83%, 90.28%, 71.83%, 92.67%, 69.53% and 76.03%, and the average number of iterations by up to 40.32%, 44.06%, 32.02%, 62.78%, 19.07% and 36.74%, with reference to the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms, respectively.

Originality/value

The KM algorithm is the most widely used partitional clustering approach in data mining techniques that extract hidden knowledge, patterns and trends for decision-making strategies in business data. Business analytics is one of the applications of big data clustering where KM clustering is useful for the various subcategories of business analytics such as customer segmentation analysis, employee salary and performance analysis, document searching, delivery optimization, discount and offer analysis, chaplain management, manufacturing analysis, productivity analysis, specialized employee and investor searching and other decision-making strategies in business.

Article
Publication date: 23 January 2019

Rakesh Ranjan, Subrata Kumar Ghosh and Manoj Kumar

The probability distributions of major length and aspect ratio (major length/minor length) of wear debris collected from gear oil used in a planetary gear drive were analysed and…


Abstract

Purpose

The probability distributions of major length and aspect ratio (major length/minor length) of wear debris collected from gear oil used in a planetary gear drive were analysed and modelled. The paper aims to find an appropriate probability distribution model to forecast the kind of wear particles at different running hours of the machine.

Design/methodology/approach

Used gear oil of the planetary gear box of a slab caster was drained out and the box was charged with fresh oil of grade EP-460. Six chronological oil samples were collected at different time intervals between 480 and 1,992 h of machine running. The oil samples were filtered to separate wear particles, and a microscopic study of the wear debris was carried out at 100X magnification. Statistical modelling of the wear debris distributions was done using Weibull and exponential probability distribution models. A comparison was made among the actual, Weibull and exponential probability distributions of the major length and aspect ratio of the wear particles.
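
A minimal sketch of this kind of distribution fitting, using scipy and synthetic wear-debris measurements in place of the actual microscope data, might look as follows; the fixed location parameters are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical major-length (um) and aspect-ratio measurements from one oil sample.
rng = np.random.default_rng(2)
major_length = rng.exponential(scale=40.0, size=300)
aspect_ratio = 1.0 + rng.weibull(a=1.8, size=300)

# Exponential fit to major length (location fixed at 0 for a one-parameter fit).
loc_e, scale_e = stats.expon.fit(major_length, floc=0)

# Weibull fit to aspect ratio (location fixed at its physical minimum of 1).
shape_w, loc_w, scale_w = stats.weibull_min.fit(aspect_ratio, floc=1.0)

# Compare the two candidate models with a Kolmogorov-Smirnov statistic.
print(stats.kstest(major_length, "expon", args=(loc_e, scale_e)))
print(stats.kstest(aspect_ratio, "weibull_min", args=(shape_w, loc_w, scale_w)))
```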

Findings

The distribution of the major length of wear particles was found to be closer to the exponential probability density function, whereas the Weibull probability density function fitted the distribution of the aspect ratio of wear particles better.

Originality/value

The developed model can be used to analyse the distribution of major length and aspect ratio of wear debris present in the planetary gear box of a slab caster machine.

Details

Industrial Lubrication and Tribology, vol. 71 no. 2
Type: Research Article
ISSN: 0036-8792

Keywords

Article
Publication date: 5 February 2018

Damaris Serigatto Vicentin, Brena Bezerra Silva, Isabela Piccirillo, Fernanda Campos Bueno and Pedro Carlos Oprime

The purpose of this paper is to develop a control chart for monitoring multiple-stream processes with a finite mixture of probability distributions in the manufacturing industry.


Abstract

Purpose

The purpose of this paper is to develop a control chart for monitoring multiple-stream processes with a finite mixture of probability distributions in the manufacturing industry.

Design/methodology/approach

Data were collected during production of a wheat-based dough in a food industry and the control charts were developed in these steps: collect the master sample from different production batches; verify, by graphical methods, the quantity and characterization of the number of mixed probability distributions in the production batch; fit the theoretical probability distribution model of each subpopulation in the production batch; build a statistical model considering the mixture probability distribution, assuming that the statistical parameters are unknown; determine the control limits; and compare the mixture chart with the traditional control chart.
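
For intuition, the sketch below fits a two-component Gaussian mixture to a synthetic master sample and derives control limits from the fitted mixture's tail quantiles; this only illustrates the general approach, with the number of components, the Gaussian form, the data and the quantile levels all assumed rather than taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical master sample pooled from several production batches (two streams).
rng = np.random.default_rng(3)
master = np.concatenate([rng.normal(50.0, 1.0, 500),
                         rng.normal(53.0, 1.2, 500)]).reshape(-1, 1)

# Fit the finite mixture (two components assumed here).
gm = GaussianMixture(n_components=2, random_state=0).fit(master)

# Take control limits as extreme quantiles of the fitted mixture,
# approximated by sampling from the estimated model.
sim, _ = gm.sample(200_000)
lcl, ucl = np.quantile(sim, [0.00135, 0.99865])   # ~3-sigma equivalent coverage
print(f"LCL={lcl:.2f}, UCL={ucl:.2f}")
```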

Findings

A control chart was developed for monitoring a multiple-stream process, incorporating several parameters in its calculation, with efficiency similar to that of the traditional control chart.

Originality/value

The control chart can be an efficient tool for customers that receive product batches continuously from a supplier and need to monitor statistically the critical quality parameters.

Details

International Journal of Quality & Reliability Management, vol. 35 no. 2
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 1 August 2005

Degan Zhang, Guanping Zeng, Enyi Chen and Baopeng Zhang

Active service is one of the key problems of the ubiquitous computing paradigm. Context-aware computing is helpful in carrying out this service. Because the context changes with the…


Abstract

Active service is one of the key problems of the ubiquitous computing paradigm. Context-aware computing is helpful in carrying out this service. Because the context changes with the movement or shift of the user, it is often uncertain. Context-aware computing with uncertainty includes obtaining context information, forming models, fusing aware context and managing context information. In this paper, we focus on the modeling and computing of aware context information with uncertainty for making dynamic decisions during seamless mobility. Our insight is to combine dynamic context-aware computing with improved Random Set Theory (RST) and extended D-S Evidence Theory (EDS). We re-examine the formalism of random sets, argue the limitations of direct numerical approaches, give a new modeling mode based on RST for aware context and propose our computing approach for the modeled aware context. In addition, we extend classic D-S Evidence Theory by considering context reliability, time-efficiency and relativity, and compare the related computing methods. After enumerating experimental examples from our active space, we provide an evaluation. By these comparisons, the validity of the new context-aware computing approach based on RST or EDS for ubiquitous active service with uncertain information has been successfully tested.
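
The classic Dempster-Shafer combination rule that such context fusion builds on can be sketched as follows; this shows only the standard rule over a toy frame of discernment, not the authors' extended EDS variant with reliability, time-efficiency and relativity weighting.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the same frame of discernment."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two hypothetical context sources reporting on a user's location.
OFFICE, CORRIDOR = frozenset({"office"}), frozenset({"corridor"})
EITHER = OFFICE | CORRIDOR
m_sensor = {OFFICE: 0.6, CORRIDOR: 0.1, EITHER: 0.3}
m_schedule = {OFFICE: 0.5, EITHER: 0.5}
print(dempster_combine(m_sensor, m_schedule))
```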

Details

International Journal of Pervasive Computing and Communications, vol. 1 no. 3
Type: Research Article
ISSN: 1742-7371

Keywords

Book part
Publication date: 24 January 2022

Eleonora Pantano and Kim Willems

Determining the right number of customers inside a store (i.e. human or customer density) plays a crucial role in retail management strategies. On the one hand, retailers want to…

Abstract

Determining the right number of customers inside a store (i.e. human or customer density) plays a crucial role in retail management strategies. On the one hand, retailers want to maximize the number of visitors they attract in order to optimize returns and profits. On the other hand, ensuring a pleasurable, efficient and COVID-19-proof shopping experience, would go against an excessive concentration of shoppers. Fulfilling both retailer and consumer perspectives requires a delicate balance to be struck. This chapter aims at supporting retailers in making informed decisions, by clarifying the extent to which store layouts influence (perceived) consumer density. Specifically, the chapter illustrates how new technologies and methodologies (i.e. agent-based simulation) can help in predicting a store layout's ability to reduce consumers' perceived in-store spatial density and related perceptions of human crowding, while also ensuring a certain retailers' profitability.

Article
Publication date: 2 March 2012

G. Mora and J.C. Navarro

In this article the aim is to propose a new way to densify parallelepipeds of R^N by sequences of α-dense curves with accumulated densities.


Abstract

Purpose

In this article the aim is to propose a new way to densify parallelepipeds of R^N by sequences of α-dense curves with accumulated densities.
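
For reference, the standard notion that the article builds on can be stated as follows; this is the usual textbook definition of an α-dense curve, not the authors' new accumulated-density concept.

```latex
% Recalled textbook definition of an alpha-dense curve (not the paper's new
% accumulated-density concept): gamma fills K up to resolution alpha.
\[
  \gamma\colon[0,1]\to K\subset\mathbb{R}^{N}
  \ \text{ is $\alpha$-dense in } K
  \iff
  \forall x\in K\ \exists t\in[0,1]:\ \lVert x-\gamma(t)\rVert\le\alpha .
\]
```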

Design/methodology/approach

This will be done by using a basic α‐densification technique and adding the new concept of sequence of α‐dense curves with accumulated density to improve the resolution of some global optimization problems.

Findings

It is found that the new technique based on sequences of α-dense curves with accumulated densities considerably simplifies the exploration of the set of optimizer points of an objective function whose feasible set is a parallelepiped K of R^N. Indeed, since the sequence of the images of the curves of a sequence of α-dense curves with accumulated density is expansive, in each new step of the algorithm it is only necessary to explore a residual subset. On the other hand, since the sequence of their densities is decreasing and tends to zero, the convergence of the algorithm is assured.

Practical implications

The results of this new densification technique by sequences of α-dense curves with accumulated densities will be applied to densify the feasible set of an objective function which minimizes the quadratic error produced by fitting a model based on a beta probability density function, which is widely used in studies on the transition time of forest vegetation.

Originality/value

A sequence of α-dense curves with accumulated density represents an original concept to be added to the set of techniques for optimizing a multivariable function by reduction to a single variable, as a new application of α-dense curve theory to global optimization.

Article
Publication date: 18 July 2019

Zahid Hussain Hulio and Wei Jiang

The purpose of this paper is to investigate the wind power potential of a site using wind speed, wind direction and other meteorological data, including temperature and air density


Abstract

Purpose

The purpose of this paper is to investigate the wind power potential of a site using wind speed, wind direction and other meteorological data, including temperature and air density, collected over a period of one year.

Design/methodology/approach

The site-specific air density, wind shear, wind power density, annual energy yield and capacity factors have been calculated at 30 and 10 m above the ground level (AGL). The Weibull parameters have been calculated using empirical, maximum likelihood, modified maximum likelihood, energy pattern and graphical methods to determine the other dependent parameters. The accuracies of these methods are determined using correlation coefficient (R²) and root mean square error (RMSE) values.
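
A common way to carry out the empirical (standard-deviation) Weibull method and the resulting Weibull-implied wind power density is sketched below with a synthetic wind-speed series; the formulas are the standard ones, but the exact procedure and data used in the paper may differ.

```python
import numpy as np
from scipy.special import gamma

def weibull_empirical(v):
    """Empirical (standard-deviation) method for the Weibull k and c parameters."""
    v_mean, v_std = np.mean(v), np.std(v, ddof=1)
    k = (v_std / v_mean) ** -1.086
    c = v_mean / gamma(1.0 + 1.0 / k)
    return k, c

def wind_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2) implied by a Weibull(k, c) model."""
    return 0.5 * rho * c ** 3 * gamma(1.0 + 3.0 / k)

# Hypothetical hourly wind speeds at 30 m AGL (scale 5.2 m/s, shape 2).
rng = np.random.default_rng(4)
v30 = 5.2 * rng.weibull(2.0, size=8760)
k, c = weibull_empirical(v30)
print(f"k={k:.2f}, c={c:.2f} m/s, WPD={wind_power_density(k, c):.1f} W/m^2")
```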

Findings

The site-specific wind shear coefficient was found to be 0.18. The annual mean wind speeds were found to be 5.174 and 4.670 m/s at 30 and 10 m heights, respectively, with corresponding standard deviations of 2.085 and 2.059. The mean wind power densities were found to be 59.50 and 46.75 W/m² at 30 and 10 m heights, respectively. According to the economic assessment, wind turbine A is capable of producing wind energy at the lowest cost of US$0.034/kWh.

Practical implications

This assessment provides a sustainable energy solution that reduces dependence on a continuous supply of oil and gas to run conventional power plants, a major cause of increasing load shedding in this important industrial and densely populated city of Pakistan. It will also reduce disputes between local power producers and oil and gas suppliers during the peak season.

Social implications

This wind resource assessment has some important social implications, including reducing environmental problems, improving the uninterrupted supply of electricity and decreasing the cost of energy per kWh for the population of Karachi.

Originality/value

The results show that the location can be used for installing a wind energy power plant at a lower cost per kWh compared with other energy sources; wind energy is thus a sustainable solution at the lowest cost.

Details

International Journal of Energy Sector Management, vol. 14 no. 1
Type: Research Article
ISSN: 1750-6220

Keywords

Article
Publication date: 1 April 1993

Guy Jumarie

The complexity of a general system is identified with its temperature and, analogously with Boltzmann's probability density in thermodynamics, this temperature is related to the…


Abstract

The complexity of a general system is identified with its temperature and, analogously with Boltzmann's probability density in thermodynamics, this temperature is related to the informational entropy of the system. The concept of informational entropy of deterministic functions provides a straightforward modelling of Brillouin's negentropy (negative entropy), so a system can be characterized by its complexity and its dual complexity. Composition laws for complexities are stated in terms of Shannonian entropy with or without probability, and the approach is then extended to the quantum entropy of non-probabilistic data. Some suggestions for future investigation are outlined.
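
For readers unfamiliar with the underlying quantity, a minimal computation of Shannonian entropy for a discrete probability distribution looks like this; it illustrates only the probabilistic entropy the abstract mentions, not Jumarie's entropy of deterministic (non-probabilistic) data.

```python
import numpy as np

def shannon_entropy(p, base=2.0):
    """Shannon entropy H(p) = -sum p_i log p_i of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return -np.sum(p * np.log(p)) / np.log(base)

print(shannon_entropy([0.5, 0.5]))    # 1 bit: maximal for two outcomes
print(shannon_entropy([0.9, 0.1]))    # lower entropy, lower "complexity"
```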

Details

Kybernetes, vol. 22 no. 4
Type: Research Article
ISSN: 0368-492X

Keywords

Content available
Article
Publication date: 17 October 2023

Zhixun Wen, Fei Li and Ming Li

The purpose of this paper is to apply the concept of equivalent initial flaw size (EIFS) to the anisotropic nickel-based single crystal (SX) material, and to predict the fatigue…


Abstract

Purpose

The purpose of this paper is to apply the concept of equivalent initial flaw size (EIFS) to the anisotropic nickel-based single crystal (SX) material, and to predict the fatigue life on this basis. The crack propagation law of the SX material at different temperatures and the verification of the weak correlation of EIFS values under different loading conditions are also investigated.

Design/methodology/approach

A three-parameter time to crack initiation (TTCI) method with multiple reference crack lengths under different loading conditions is established, which includes a TTCI backstepping method and an EIFS fitting method. Subsequently, the optimized EIFS distribution is obtained based on the random crack propagation rate and maximum likelihood estimation of the median fatigue life. Then, an effective driving force based on the anisotropic and mixed crack propagation mode is proposed to describe the crack propagation rate in the small-crack stage. Finally, the fatigue life of ESE(T) standard specimens at three different temperatures is predicted based on the EIFS values under different survival rates.

Findings

The optimized EIFS distribution based on the EIFS fitting and maximum likelihood estimation (MLE) method has the highest accuracy in predicting the total fatigue life, with EIFS values ranging over approximately [0.0028, 0.0875] mm and a mean EIFS of 0.0506 mm. The error between the fatigue life predicted from the crack propagation rate and the EIFS distribution for survival rates from 5% to 95% and the experimental life is within a factor-of-two dispersion band.

Originality/value

This paper systematically proposes a new anisotropic material EIFS prediction method, establishing a framework for predicting the fatigue life of SX material at different temperatures using fracture mechanics to avoid inaccurate anisotropic constitutive models and fatigue damage accumulation theory.

Details

Multidiscipline Modeling in Materials and Structures, vol. 19 no. 6
Type: Research Article
ISSN: 1573-6105

Keywords

Article
Publication date: 7 September 2015

M. Navabi and R. Hamrah

The purpose of this paper is to perform a comparative study of two propagation models and a prediction of proximity distances among the space objects based on the two-line element…


Abstract

Purpose

The purpose of this paper is to perform a comparative study of two propagation models and a prediction of proximity distances among space objects based on two-line element set (TLE) data, which identifies potentially risky approaches and is used to compute the probability of collision among spacecraft.

Design/methodology/approach

First, the proximities of the satellites considered are estimated using a precise propagation model over a one-month simulation. Then, a study is performed to determine the probability of collision between two satellites using a formulation which takes into account the object sizes, covariance data and the relative distance at the point of closest approach. Simplifying assumptions such as linear relative motion and normally distributed position uncertainties at the predicted closest approach time are applied in the estimation.
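
Under the stated simplifying assumptions (linear relative motion, Gaussian position uncertainty at closest approach), a collision probability can be estimated in the encounter plane as in the sketch below; the miss distance, covariance and combined radius are hypothetical values, not those of the studied conjunction.

```python
import numpy as np

def collision_probability_mc(miss_vector, cov, combined_radius,
                             n=1_000_000, seed=0):
    """Monte Carlo estimate of collision probability in the 2-D encounter
    plane: relative position ~ N(miss_vector, cov); a collision occurs when
    the sampled miss distance is below the combined object radius."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(miss_vector, cov, size=n)
    return np.mean(np.linalg.norm(samples, axis=1) <= combined_radius)

# Hypothetical encounter: ~800 m nominal miss distance, 10 m combined radius,
# position uncertainty of a few hundred metres in the encounter plane.
miss = np.array([500.0, 620.0])             # metres
cov = np.diag([300.0 ** 2, 400.0 ** 2])     # metres^2
print(collision_probability_mc(miss, cov, combined_radius=10.0))
```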

Findings

For the case of the Iridium-Cosmos collision and the prediction of a closest approach using available TLE orbital data and a propagation model which takes into account the effects of the earth's zonal harmonics and atmospheric drag, a maximum probability of about 2 × 10⁻⁶ was obtained, which can indicate the necessity of enacting avoidance maneuvers with respect to the probability threshold defined by the satellite's owner.

Originality/value

The contribution of this paper is to analyze and simulate the prominent 2009 collision between the Cosmos 2251 and Iridium 33 satellites by modeling their orbit propagation, predicting their closest approaches and, finally, assessing the risk of the possible collision. Moreover, enhanced orbit determination can be effective in achieving an accurate assessment of the ongoing collision threat to active spacecraft from orbital debris and in preventing, if necessary, the hazards thereof.

Details

Aircraft Engineering and Aerospace Technology: An International Journal, vol. 87 no. 5
Type: Research Article
ISSN: 0002-2667

Keywords

Article
Publication date: 12 September 2008

N.N. Puscas

The purpose of this paper is to propose modelling of the noise of an improved method for the measurement of small displacement and vibrations. It is based on a novel method for…


Abstract

Purpose

The purpose of this paper is to propose modelling of the noise of an improved method for the measurement of small displacements and vibrations. It is based on a novel method for overcoming DC drift in an RF subcarrier phase detection scheme.

Design/methodology/approach

The method works in open loop and is characterized by low distortions, good signal‐to‐noise ratio and rather low cost.

Findings

Considering a stationary Gaussian stochastic process, the paper evaluated and modelled the power spectral density and the probability density against the phase error and the phase noise parameter.

Practical implications

This offers an improvement of vibration, displacement and seismic sensors.

Originality/value

Based on a novel method for overcoming DC drift in an RF sub-carrier phase detection scheme for fibre optic sensors, an improved method for displacement and vibration measurement is proposed. The presented method is characterized by rather low distortion in the modulation process, suitability for small-distance measurement and low-cost electronic systems.

Details

Sensor Review, vol. 28 no. 4
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 8 July 2021

Zahid Hussain Hulio, Gm Yousufzai and Wei Jiang

Pakistan is an energy-starved country that needs a continuous supply of energy to keep up its economic pace. The aim of this paper is to assess the wind resource and energy…


Abstract

Purpose

Pakistan is an energy-starved country that needs a continuous supply of energy to keep up its economic pace. The aim of this paper is to assess the wind resource and energy potential of the Quaidabad site to minimize the dependence on fuels and improve the environment.

Design/methodology/approach

The wind shear coefficient and turbulence intensity factor of the Quaidabad site are investigated. The two-parameter (k and c) Weibull distribution function is used to analyze the wind speed of the site. The standard deviation of the site is also assessed over a period of one year, as are the wind power density and energy density. The economic assessment of energy per kWh is carried out for the selection of an appropriate wind turbine.

Findings

The mean wind shear coefficient was observed to be 0.2719, 0.2191 and 0.1698 at 20, 40 and 60 m, respectively, over a period of one year. The mean wind speed was found to be 2.961, 3.563, 3.907 and 4.099 m/s at 20, 40, 60 and 80 m, respectively. The mean values of the k parameter were observed to be 1.563, 2.092, 2.434 and 2.576 at 20, 40, 60 and 80 m, respectively, and the mean values of the c parameter were found to be 3.341, 4.020, 4.408 and 4.625 m/s at the same heights. Most of the standard deviation values were found to lie between 0.1 and 2.00 at 20, 40, 60 and 80 m. The total wind power density values were observed to be 351, 597, 792 and 923 W/m² at 20, 40, 60 and 80 m, respectively, over a period of one year. The mean coefficient of variation was found to be 0.161, 0.130, 0.115 and 0.105 at 20, 40, 60 and 80 m, respectively. The total energy density was observed to be 1,157, 2,156, 2,970 and 3,778 kWh/m² at 20, 40, 60 and 80 m, respectively. The economic assessment shows that wind turbine E has the minimum cost of US$0.049/kWh.

Originality/value

The Quaidabad site is suitable for installing utility wind turbines for energy generation at the lowest cost.

Article
Publication date: 26 August 2014

Bruce J. Sherrick, Christopher A. Lanoue, Joshua Woodard, Gary D. Schnitkey and Nicholas D. Paulson

The purpose of this paper is to contribute to the empirical evidence about crop yield distributions that are often used in practical models evaluating crop yield risk and…


Abstract

Purpose

The purpose of this paper is to contribute to the empirical evidence about crop yield distributions that are often used in practical models evaluating crop yield risk and insurance. Additionally, a simulation approach is used to compare the performance of alternative specifications when the underlying form is not known, to identify implications for the choice of parameterization of yield distributions in modeling contexts.

Design/methodology/approach

Using a unique high-quality farm-level corn yield data set, commonly used parametric, semi-parametric, and non-parametric distributions are examined against widely used in-sample goodness-of-fit (GOF) measures. Then, a simulation framework is used to assess the out-of-sample characteristics by using known distributions to generate samples that are assessed in an insurance valuation context under alternative specifications of the yield distribution.
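
The in-sample part of such a comparison can be sketched as follows, fitting a few candidate distributions to a synthetic yield sample and scoring them with a Kolmogorov-Smirnov GOF statistic; the candidate set, the fixed beta support of [0, 300] and the data are illustrative assumptions, and the paper's TCMN and semi-/non-parametric candidates are not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical farm-level corn yields (bu/acre).
rng = np.random.default_rng(5)
yields = rng.normal(180.0, 25.0, size=40)

candidates = {
    "normal":  (stats.norm,        {}),
    "weibull": (stats.weibull_min, {"floc": 0.0}),
    "beta":    (stats.beta,        {"floc": 0.0, "fscale": 300.0}),  # yields scaled to [0, 300]
}
for name, (dist, fit_kw) in candidates.items():
    params = dist.fit(yields, **fit_kw)
    ks = stats.kstest(yields, dist.name, args=params)
    print(f"{name:8s} KS={ks.statistic:.3f} p={ks.pvalue:.2f}")
```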

Findings

Bias and efficiency trade-offs are identified for both in- and out-of-sample contexts, including a simple insurance rating application. Use of GOF measures in small samples can lead to inappropriate selection of candidate distributions that perform poorly in straightforward economic applications. The β distribution consistently overstates rates even when fitted to data generated from a β distribution, while the Weibull consistently understates rates; though small sample features slightly favor Weibull. The TCMN and kernel density estimators are least biased in-sample, but can perform very badly out-of-sample due to overfitting issues. The TCMN performs reasonably well across sample sizes and initial conditions.

Practical implications

Economic applications should consider the consequence of bias vs efficiency in the selection of characterizations of yield risk. Parsimonious specifications often outperform more complex characterizations of yield distributions in small sample settings, and in cases where more demanding uses of extreme-event probabilities are required.

Originality/value

The study helps provide guidance on the selection of distributions used to characterize yield risk and provides an extensive empirical demonstration of yield risk measures across a high-quality set of actual farm experiences. The out-of-sample examination provides evidence of the impact of sample size, underlying variability, and region of the probability measure used on the performance of candidate distributions.

Details

Agricultural Finance Review, vol. 74 no. 3
Type: Research Article
ISSN: 0002-1466

Keywords

Book part
Publication date: 15 January 2010

Isobel Claire Gormley and Thomas Brendan Murphy

Ranked preference data arise when a set of judges rank, in order of their preference, a set of objects. Such data arise in preferential voting systems and market research surveys…

Abstract

Ranked preference data arise when a set of judges rank, in order of their preference, a set of objects. Such data arise in preferential voting systems and market research surveys. Covariate data associated with the judges are also often recorded. Such covariate data should be used in conjunction with preference data when drawing inferences about judges.

To cluster a population of judges, the population is modeled as a collection of homogeneous groups. The Plackett-Luce model for ranked data is employed to model a judge's ranked preferences within a group. A mixture of Plackett-Luce models is employed to model the population of judges, where each component in the mixture represents a group of judges.

Mixture of experts models provide a framework in which covariates are included in mixture models. Covariates are included through the mixing proportions and the component density parameters. A mixture of experts model for ranked preference data is developed by combining a mixture of experts model and a mixture of Plackett-Luce models. Particular attention is given to the manner in which covariates enter the model. The mixing proportions and group specific parameters are potentially dependent on covariates. Model selection procedures are employed to choose optimal models.

Model parameters are estimated via the ‘EMM algorithm’, a hybrid of the expectation–maximization and the minorization–maximization algorithms. Examples are provided through a menu survey and through Irish election data. Results indicate mixture modeling using covariates is insightful when examining a population of judges who express preferences.
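
The within-group building block, the Plackett-Luce probability of a single ranking, can be sketched as below; the worth parameters are hypothetical, and the full mixture-of-experts model with covariates and the EMM algorithm are not shown.

```python
import numpy as np

def plackett_luce_loglik(ranking, worth):
    """Log-probability of one ranked preference under a Plackett-Luce model.
    `ranking` lists item indices from most to least preferred; `worth` holds
    a positive support parameter for every item."""
    ranking = np.asarray(ranking)
    ll = 0.0
    for j in range(len(ranking) - 1):
        remaining = worth[ranking[j:]]          # items not yet chosen
        ll += np.log(worth[ranking[j]]) - np.log(remaining.sum())
    return ll

worth = np.array([0.5, 0.2, 0.2, 0.1])          # hypothetical item supports
print(plackett_luce_loglik([0, 2, 1, 3], worth))
```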

Details

Choice Modelling: The State-of-the-art and The State-of-practice
Type: Book
ISBN: 978-1-84950-773-8

Article
Publication date: 1 August 1997

Rick L. Edgeman and Dennis K.J. Lin

Acceptance sampling can be both time‐consuming and destructive so that it is desirable to arrive at a sound lot disposition decision in a timely manner. Sequential sampling plans…


Abstract

Acceptance sampling can be both time-consuming and destructive, so it is desirable to arrive at a sound lot disposition decision in a timely manner. Sequential sampling plans are attractive since they offer a lower average sample number than do matched single, double or multiple sampling plans. Analogously, cumulative sum control charts offer the ability to detect moderate process shifts more rapidly than do Shewhart control charts applied to the same process. The inverse Gaussian distribution is flexible and is often the model of choice in accelerated life testing applications where early failure times predominate. Based on sequential probability ratio tests (SPRT), sequential sampling/cumulative sum (CUSUM) plans provide timely, statistically based decisions. The paper presents SPRT and CUSUM results for the inverse Gaussian process mean, and also presents a simple goodness-of-fit test for the inverse Gaussian distribution which allows for model adequacy checking.
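
A generic one-sided tabular CUSUM applied to inverse Gaussian observations, as a rough stand-in for the SPRT-derived scheme the paper develops, might be sketched as follows; the reference value k and decision interval h are illustrative choices, not the paper's design.

```python
import numpy as np
from scipy.stats import invgauss

def tabular_cusum(x, target, k, h):
    """One-sided (upper) tabular CUSUM; returns the index of the first signal,
    or None if the chart never crosses the decision interval h."""
    c_plus = 0.0
    for i, xi in enumerate(x):
        c_plus = max(0.0, c_plus + xi - (target + k))
        if c_plus > h:
            return i
    return None

# Hypothetical inverse Gaussian process: in control with mean 1.0,
# then shifted upward to mean 1.5 after observation 100.
rng = np.random.default_rng(6)
in_control = invgauss.rvs(mu=1.0, size=100, random_state=rng)
shifted = invgauss.rvs(mu=1.5, size=100, random_state=rng)
x = np.concatenate([in_control, shifted])
print(tabular_cusum(x, target=1.0, k=0.25, h=2.0))   # first out-of-control signal
```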

Details

International Journal of Quality & Reliability Management, vol. 14 no. 6
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 8 December 2020

Zahid Hussain Hulio

The objective of this paper is to assess the wind energy potential of the Sujawal site for minimizing the dependence on fossil fuels.


Abstract

Purpose

The objective of this paper is to assess the wind energy potential of the Sujawal site for minimizing the dependence on fossil fuels.

Design/methodology/approach

The site-specific wind shear coefficient and the turbulence model were investigated. The two-parameter (k and c) Weibull distribution function was used to analyze the wind speed of the Sujawal site. The standard deviation of the site was also assessed over a period of one year, and the coefficient of variation was calculated to determine the difference at each height. The wind power and energy densities were assessed over a period of one year. The economic assessment of energy per kWh was carried out for the selection of an appropriate wind turbine.

Findings

The mean wind shear of the Sujawal site was found to be 0.274. The mean wind speed was found to be 7.458, 6.911, 6.438 and 5.347 m/s at 80, 60, 40 and 20 m, respectively, above the ground level (AGL). The mean values of the k parameter were observed to be 2.302, 2.767, 3.026 and 3.105 at 20, 40, 60 and 80 m, respectively, over a period of one year. The Weibull c parameter values were found to be 8.415, 7.797, 7.265 and 6.084 m/s at 80, 60, 40 and 20 m, respectively. The mean values of the standard deviation were found to be 0.765, 0.737, 0.681 and 0.650 at 20, 40, 60 and 80 m, respectively. The mean wind power density was found to be 287.33, 357.16, 405.16 and 659.58 W/m² at 20, 40, 60 and 80 m, respectively. The economic assessment showed that wind turbine 7 had the minimum cost of US$0.0298/kWh.

Originality/value

The Sujawal site is suitable for installing utility wind turbines for energy generation at the lowest cost and hence offers a sustainable solution.

Details

World Journal of Science, Technology and Sustainable Development, vol. 18 no. 1
Type: Research Article
ISSN: 2042-5945

Keywords

Open Access
Article
Publication date: 20 July 2020

Lijuan Shi, Zuoning Jia, Huize Sun, Mingshu Tian and Liquan Chen

This paper aims to study the factors affecting bird nesting on electrified railway catenary lines and the impact of bird nesting events on railway operation.


Abstract

Purpose

This paper aims to study the factors affecting bird nesting on electrified railway catenary lines and the impact of bird nesting events on railway operation.

Design/methodology/approach

First, one year's bird nest events, recorded as unstructured natural language and collected from the Shanghai Railway Bureau, were structured with the help of a Python software tool. Second, the method of root cause analysis (RCA) was used to identify all the possible influencing factors that may affect the probability of bird nesting. Third, the possible factors were classified into two categories for separate subsequent analysis: outside factors (i.e. factors related to geographic conditions) and inside factors (i.e. railway-related factors).

Findings

It was observed that city population and geographic position noticeably affect nesting. It was then demonstrated that neither location nor the equipment part on which a nest is built correlates with delay, while railway type had a significant but low correlation with delay.

Originality/value

This paper discloses how nest events impact railway operation.

Details

Smart and Resilient Transportation, vol. 2 no. 1
Type: Research Article
ISSN: 2632-0487

Keywords

Open Access
Article
Publication date: 6 January 2023

RS. Koteeshwari and B. Malarkodi

Among the proposed radio access strategies for improving system performance in 5G networks, the non-orthogonal multiple access (NOMA) scheme is a prominent one.


Abstract

Purpose

Among the proposed radio access strategies for improving system performance in 5G networks, the non-orthogonal multiple access (NOMA) scheme is a prominent one.

Design/methodology/approach

Among the most fundamental NOMA methods, power-domain NOMA is the one in which superposition coding is used at the transmitter and successive interference cancellation (SIC) at the receiver. The importance of power allocation (PA) in achieving appreciable SIC and high system throughput cannot be overstated.

Findings

This research focuses on an outage probability analysis of the NOMA downlink system under various channel conditions, such as the Rayleigh, Rician and Nakagami-m fading channels. The objectives, techniques and constraints of PA strategies for NOMA-based 5G networks are comprehensively studied.
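
As a minimal building block of such an analysis, the single-user outage probability under Rayleigh fading (exponentially distributed SNR) can be computed in closed form and cross-checked by simulation as below; this omits the NOMA power allocation, user ordering and SIC that the paper actually analyses.

```python
import numpy as np

def rayleigh_outage(snr_avg_db, snr_th_db):
    """Outage probability P(SNR < threshold) for Rayleigh fading, where the
    instantaneous SNR is exponentially distributed with the given mean."""
    snr_avg = 10 ** (snr_avg_db / 10)
    snr_th = 10 ** (snr_th_db / 10)
    return 1.0 - np.exp(-snr_th / snr_avg)

# Cross-check against a simple channel simulation.
rng = np.random.default_rng(7)
snr_avg_db, snr_th_db = 10.0, 5.0
h = (rng.normal(size=1_000_000) + 1j * rng.normal(size=1_000_000)) / np.sqrt(2)
snr = 10 ** (snr_avg_db / 10) * np.abs(h) ** 2     # exponential with the right mean
print(rayleigh_outage(snr_avg_db, snr_th_db), np.mean(snr < 10 ** (snr_th_db / 10)))
```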

Practical implications

From the results of this study, it is found that the outage probability performance of downlink ordered NOMA under the Rayleigh, Rician and Nakagami-m fading channels was good.

Originality/value

An outage probability analysis of downlink ordered NOMA under various channel conditions, namely the Rayleigh, Rician and Nakagami-m fading channels, was carried out. Although the performance over the Nakagami-m fading channel is lower than over the Rayleigh channel, the performance for user 1 and user 2 is good.

Details

Arab Gulf Journal of Scientific Research, vol. 41 no. 4
Type: Research Article
ISSN: 1985-9899

Keywords

Article
Publication date: 11 January 2022

Wei Yang, Afshin Firouzi and Chun-Qing Li

The purpose of this paper is to demonstrate the applicability of Credit Default Swaps (CDS), as a financial instrument, for transferring risk in project finance loans…


Abstract

Purpose

The purpose of this paper is to demonstrate the applicability of Credit Default Swaps (CDS), as a financial instrument, for transferring risk in project finance loans. An equation is also derived for pricing CDS spreads.

Design/methodology/approach

The debt service cover ratio (DSCR) is modeled as a Brownian Motion (BM) with a power-law model fitted to the mean and half-variance of the existing data set of DSCRs. The survival probability of DSCR is calculated during the operational phase of the project finance deal, using a closed-form analytical method, and the results are verified by Monte Carlo simulation (MCS).
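
A bare-bones version of the Monte Carlo check, with a constant drift and diffusion in place of the fitted power-law model and a default barrier assumed at DSCR = 1, might look like this:

```python
import numpy as np

def dscr_survival_mcs(d0, mu, sigma, horizon, default_level=1.0,
                      n_paths=100_000, steps_per_year=12, seed=0):
    """Monte Carlo estimate of the probability that a DSCR modelled as an
    arithmetic Brownian motion never falls below the default level over
    the operational horizon (in years)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps_per_year
    n_steps = int(horizon * steps_per_year)
    dscr = np.full(n_paths, d0)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        dscr += mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
        alive &= dscr >= default_level      # path defaults once it dips below
    return alive.mean()

# Hypothetical toll-road deal: initial DSCR 1.3, mild upward drift, 15-year tenor.
print(dscr_survival_mcs(d0=1.3, mu=0.05, sigma=0.15, horizon=15))
```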

Findings

It is found that using the power-law model yields higher CDS premiums. This in turn confirms the necessity of conducting rigorous statistical analysis in fitting the best-performing model, as uninformed reliance on a constant, time-invariant drift and diffusion model can erroneously result in smaller CDS spreads. A sensitivity analysis also shows that the results are very sensitive to the recovery rate and cost of debt values.

Originality/value

Insufficiency of free cash flow is a major risk in toll road project finance, and hence there is a need to develop innovative financial instruments for risk management. In this paper, a novel valuation method for CDS is proposed, assuming that the DSCR follows the BM stochastic process.

Details

Journal of Financial Management of Property and Construction, vol. 28 no. 1
Type: Research Article
ISSN: 1366-4387

Keywords
