P. Mirosavljević, S. Gvozdenović and O. Čokorilo
Abstract
Purpose
The purpose of this paper is to define a minimum-cost climb technique for turbofan transport aircraft in the presence of dynamic changes in aircraft performance. The results are practically applicable in airlines for achieving minimal operating costs.
Design/methodology/approach
The logarithmic differential is applied to define the conditions for the optimal Mach number that minimizes climb cost. This condition is solved numerically using the Newton–Raphson method to obtain the optimal Mach number distribution with altitude. Conclusions about the optimal top of climb (TOC) are drawn from analyses for different aircraft masses and cost indexes.
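As a minimal sketch of the numerical step described above (using a made-up quadratic cost model, not the paper's actual cost function), a one-dimensional Newton–Raphson iteration for the optimality condition might look like:

```python
def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    """Solve f(x) = 0 by Newton-Raphson iteration."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Illustrative only: a hypothetical quadratic climb-cost model with its
# minimum at Mach 0.78, so the optimality condition is cost'(M) = 0.
cost_derivative = lambda m: 2.0 * (m - 0.78)
cost_second = lambda m: 2.0
m_opt = newton_raphson(cost_derivative, cost_second, x0=0.6)
```

In the paper the condition is derived from the cost index and aircraft performance data, so both the function and its derivative would be altitude- and mass-dependent rather than this fixed quadratic.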
Findings
The proposed minimum-cost climb method yields potential savings of up to 5 per cent compared with the Aircraft Flight Manual climb law. The proposed method also corrects the climb law and the optimal TOC in the presence of aircraft performance degradation.
Practical implications
Use of the defined climb law and optimal TOC will minimize the cost of the en route flight profile.
Originality/value
At present, there is no definition of a climb technique for minimum cost of the en route flight profile under dynamic degradation of aircraft performance. The final results are standardized to be applicable and easy to use with both modern and older types of flight management systems.
Abstract
Purpose
Patient experience is a complex multidimensional phenomenon that has been linked to constructs that are also complex to conceptualize, such as patient-centeredness, patient expectations and patient satisfaction. The purpose of this paper is to shed light on the different dimensions of patient experience, including those that receive inadequate attention from policymakers such as the patient’s lived experience of illness and the impact of healthcare politics. The paper proposes a simple classification for these dimensions, which differentiates between two types of dimensions: the determinants and the manifestations of patient experience.
Design/methodology/approach
This paper uses a narrative review of the literature to explore select constructs and initiatives developed for theorizing or operationalizing patient experience. Literature topics reviewed include healthcare quality, medical anthropology, health policy, healthcare systems and public health.
Findings
The paper identifies five determinants for patient experience: the experience of illness, patient’s subjective influences, quality of healthcare services, health system responsiveness and the politics of healthcare. The paper identifies two manifestations of patient experience: patient satisfaction and patient engagement.
Originality/value
The paper proposes a classification scheme of the dimensions of patient experience and a concept map that links together heterogeneous constructs related to patient experience. The proposed classification and the concept map provide a holistic view of patient experience and help healthcare providers, quality managers and policymakers organize and focus their healthcare quality improvement endeavors on specific dimensions of patient experience while taking into consideration the other dimensions.
Abstract
Purpose
The purpose of this paper is to present an objective decision-making framework and conduct a benchmarking study in the air cargo industry.
Design/methodology/approach
The decision-making framework and benchmarking methodology evaluate the aircraft's value for money (VfM) as a benefit-to-cost ratio calculated by adopting a measure of relative efficiency. This efficiency score is measured as a comprehensive efficiency index obtained by combining several efficiency scores calculated by implementing four data envelopment analysis (DEA) models.
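A heavily simplified sketch of the benefit-to-cost idea (a plain ratio normalized against the best performer, not the paper's four DEA models, and with hypothetical figures) could look like:

```python
def vfm_scores(benefits, costs):
    """Relative value-for-money: benefit-to-cost ratio normalized so the
    best aircraft scores 1.0 (a simplification of the DEA-based index)."""
    ratios = [b / c for b, c in zip(benefits, costs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical aggregate benefit vs. operating-cost figures for three aircraft
scores = vfm_scores(benefits=[90, 60, 45], costs=[100, 80, 75])
```

Full DEA would instead solve a linear program per aircraft over multiple inputs and outputs; this one-dimensional ratio only conveys the "efficiency relative to the frontier" intuition.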
Findings
The framework is used to carry out a benchmarking study in the air cargo industry on a sample of 27 airplanes. The average VfM is 67.04 percent, with measurements between 39.96 and 116.03 percent. Only three airplanes achieve full VfM and serve as benchmarks for the remaining airplanes. The Boeing B727-200 is a broad player in the market. Some old cargo models (DC 9-30F) deliver the same amount of VfM as more recent aircraft models (i.e. the MD-11F and A300-600F).
Research limitations/implications
The decision-making framework and benchmarking methodology can usefully support managers in making sound decisions and plans. Even though DEA generates attribute weights for the different alternatives that are independent of buyer preferences, the framework's flexibility allows the introduction of a weighting scheme to take into account managers' preferences for certain aircraft performance/functional features. It can easily include new functional/performance measurements and adapt the VfM measurement to the particular economic context, strategy and business model of the airlines, or be transferred to different industries.
Originality/value
The framework combines technical, functional performance and economic cost measurements into a single efficiency index to evaluate the airplane's VfM.
Christina Dimitrantzou, Evangelos Psomas and Fotios Vouzas
Abstract
Purpose
The purpose of this paper is to identify the future research suggestions which have been made by several authors with regard to cost of quality (CoQ) and to group them into respective themes.
Design/methodology/approach
This study was based on a systematic literature review (SLR) of 97 peer-reviewed journal articles in the field of CoQ published in well-known academic databases, such as Emerald, Elsevier, SpringerLink, Taylor & Francis, Wiley and Scopus. The time horizon for reviewing the literature was nine years, covering the period between 2010 and 2018. The "Affinity diagram" was applied to group the future research suggestions into logical themes and the "Pareto diagram" to further categorize and prioritize these themes.
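The Pareto-style split into "vital" and "useful" themes can be sketched roughly as follows (hypothetical theme names and counts, and a conventional 80 percent cut-off, not the authors' exact data or threshold):

```python
def pareto_split(theme_counts, threshold=0.8):
    """Split themes into 'vital' (themes entering before the cumulative
    share reaches the threshold) and 'useful' (the rest), Pareto-style."""
    total = sum(theme_counts.values())
    vital, useful, cum = [], [], 0.0
    for theme, n in sorted(theme_counts.items(), key=lambda kv: -kv[1]):
        if cum < threshold * total:
            vital.append(theme)
        else:
            useful.append(theme)
        cum += n
    return vital, useful

vital, useful = pareto_split({"measurement": 40, "implementation": 30,
                              "industry studies": 10, "training": 5})
```

The affinity-diagram step (grouping raw suggestions into themes) is a qualitative exercise and is assumed to have already produced the counts fed into this function.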
Findings
A plethora of future research suggestions identified in the literature are analytically presented. Moreover, the analysis showed that the future research suggestions in the field of CoQ can be grouped under eleven meaningful themes, which are further categorized into two broad categories, namely the vital and the useful.
Research limitations/implications
This SLR was based on only fully accessed English articles published in international, peer-reviewed journals of the selected publishers. The restricted number of keywords used and the subjectivity in applying the “affinity diagram” are also limitations of this study.
Practical implications
This paper provides insights into the future research perspectives in the field of CoQ. Thus, this analysis can serve as a resource for both researchers and practitioners to further develop this area according to the future research suggestions and the respective themes revealed.
Originality/value
To the best of the authors' knowledge, this is the first SLR presenting and analyzing the future research suggestions of CoQ.
Ilija Subasic, Nebojsa Gvozdenovic and Kris Jack
Abstract
Purpose
The purpose of this paper is to describe a large-scale algorithm for generating a catalogue of scientific publication records (citations) from crowd-sourced data, demonstrate how to learn an optimal combination of distance metrics for duplicate detection and introduce a parallel duplicate clustering algorithm.
Design/methodology/approach
The authors developed the algorithm and compared it with state-of-the-art systems tackling the same problem. The authors used benchmark data sets (3k data points) to test the effectiveness of the algorithm and a real-life data set (>90 million data points) to test its efficiency and scalability.
Findings
The authors show that duplicate detection can be improved by an additional step they call duplicate clustering. The authors also show how to improve the efficiency of the map/reduce similarity calculation algorithm by introducing a sampling step. Finally, the authors find that the system is comparable to state-of-the-art systems for duplicate detection, and that it can scale to deal with hundreds of millions of data points.
Research limitations/implications
Academic researchers can use this paper to understand some of the issues of transitivity in duplicate detection, and its effects on digital catalogue generations.
Practical implications
Industry practitioners can use this paper as a use case study for generating a large-scale real-life catalogue generation system that deals with millions of records in a scalable and efficient way.
Originality/value
In contrast to other similarity calculation algorithms developed for map/reduce (m/r) frameworks, the authors present a specific variant of similarity calculation that is optimized for duplicate detection of bibliographic records by extending the previously proposed e-algorithm based on inverted index creation. In addition, the authors are concerned with more than duplicate detection, and investigate how to group detected duplicates. The authors develop distinct algorithms for duplicate detection and duplicate clustering and use the canopy clustering idea for multi-pass clustering. The work extends the current state of the art by including the duplicate clustering step and demonstrates new strategies for speeding up m/r similarity calculations.
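The transitivity issue the abstract alludes to (if record a duplicates b and b duplicates c, all three belong together) is the core of duplicate clustering. A serial union-find sketch of that grouping step, not the authors' parallel m/r implementation, might look like:

```python
def cluster_duplicates(n_records, duplicate_pairs):
    """Group pairwise duplicate detections into clusters with union-find,
    so transitivity (a~b and b~c puts a, b, c in one cluster) is explicit."""
    parent = list(range(n_records))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in duplicate_pairs:
        parent[find(a)] = find(b)

    clusters = {}
    for i in range(n_records):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

groups = cluster_duplicates(5, [(0, 1), (1, 2)])  # records 3 and 4 stay single
```

Naive transitive closure can merge non-duplicates through chains of borderline matches, which is presumably why the paper treats clustering as a distinct step with its own algorithm rather than a free by-product of pairwise detection.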
Boris Popov, Suzana Varga, Dragana Jelić and Bojana Dinić
Abstract
Purpose
Entrepreneurial orientation (EO) at the organizational level refers to the process which includes methods, practices and decision-making styles which enhance the company’s approaches to business. At the individual level, EO is assessed using the individual entrepreneurial orientation (IEO: Bolton and Lane, 2012) scale, comprising three dimensions: risk-taking, innovativeness and proactiveness. The purpose of this paper is to evaluate and further validate the Serbian adaptation of the IEO scale among students.
Design/methodology/approach
Two independent studies were conducted on a total of 685 students from Serbia. In Study 1, participants completed the IEO scale, proactive personality scales, a reinforcement sensitivity questionnaire and an academic performance questionnaire. In Study 2, participants completed the IEO scale, proactive personality scales, the HEXACO-60 and risky-choice decision tasks.
Findings
Results supported the three-factor structure and satisfactory reliability of the IEO scale and its subscales. Omitting one item from the innovativeness scale led to a better model fit, resulting in a nine-item solution. Convergent validity was confirmed, with each IEO subscale showing the expected correlations with similar constructs.
Research limitations/implications
A potential problem with divergent validity is discussed from the aspect of the adequacy of the constructs chosen for its testing. Overall findings indicate that the Serbian adaptation of the IEO scale is a brief instrument with adequate psychometric properties, which makes it suitable for both research and practical purposes. The limitations of the study and the instrument are also highlighted and discussed.
Originality/value
The study contributes to a better understanding of the nature of EO and supports its accurate assessment.
Nursuhana Alauddin, Saki Tanaka and Shu Yamada
Abstract
Purpose
This paper proposes a model for detecting unexpected examination scores based on past scores, current daily efforts and trend in the current score of individual students. The detection is performed soon after the current examination is completed, which helps take immediate action to improve the ability of students before the commencement of daily assessments during the next semester.
Design/methodology/approach
The scores of past examinations and current daily assessments are analyzed using a combination of an ANOVA, a principal component analysis and a multiple regression analysis. A case study is conducted using the assessment scores of secondary-level students of an international school in Japan.
Findings
The score for the current examination is predicted based on past scores, current daily efforts and the trend in the current score. A lower control limit for detecting unexpected scores is derived from the predicted score. An actual score below the lower control limit is recognized as an unexpected score. The case study verifies the effectiveness of the combinatorial usage of data in detecting unexpected scores.
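The detection logic described above can be sketched minimally as follows, assuming (as a stand-in for the paper's regression-based limit) that the lower control limit is the predicted score minus a multiple of the regression's residual standard deviation; the student data are hypothetical:

```python
def detect_unexpected(predicted, actual, residual_sd, k=2.0):
    """Flag students whose actual score falls below a lower control limit
    LCL = prediction - k * residual_sd (simplified stand-in for the
    regression-derived limit)."""
    flagged = []
    for student, (pred, act) in enumerate(zip(predicted, actual)):
        lcl = pred - k * residual_sd
        if act < lcl:
            flagged.append(student)
    return flagged

# Hypothetical predictions from a regression on past scores and daily effort
flagged = detect_unexpected(predicted=[75, 80, 65], actual=[74, 60, 66],
                            residual_sd=5.0)
```

Only scores below the limit are flagged; scoring above the prediction is not treated as unexpected, matching the one-sided limit described in the abstract.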
Originality/value
Unlike previous studies that utilize attribute and background data to predict student scores, this study utilizes a combination of past examination scores, current daily efforts for related subjects and trend in the current score.
Yingge Zhou, Xindong Ye and Yujiao Liu
Abstract
Purpose
The purpose of this study is to build a personalized learning intervention system that can support students' personalized learning, improve teachers' teaching efficiency and enhance students' learning outcomes.
Design/methodology/approach
The research proposes a personalized learning intervention method based on a collaborative filtering algorithm and a knowledge map. The knowledge map organizes the learning content, and the collaborative filtering algorithm makes it possible to provide personalized learning recommendations to students. The personalized learning intervention system can monitor students' learning development and combine personalization with efficiency. For the study, 152 seventh graders were assigned to a control group and an experimental group. A traditional learning intervention was used in the control group, and the personalized learning intervention was used in the experimental group.
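A toy sketch of the collaborative-filtering step (user-based cosine similarity over hypothetical mastery scores on knowledge-map nodes; the paper's actual algorithm and data are not specified at this level of detail) might look like:

```python
import math

def cosine(u, v):
    """Cosine similarity between two score vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target, others, seen):
    """Recommend the unseen knowledge node rated highest by the most
    similar peer (a single user-based collaborative-filtering step)."""
    peer = max(others, key=lambda o: cosine(target, o))
    candidates = [i for i in range(len(target)) if i not in seen]
    return max(candidates, key=lambda i: peer[i])

# Rows: hypothetical mastery scores over 4 knowledge-map nodes
rec = recommend(target=[5, 4, 0, 0], others=[[5, 5, 4, 1], [1, 0, 5, 5]],
                seen={0, 1})
```

In a real system the "seen" set, neighborhood size and score aggregation would all come from logged learning data rather than a single nearest peer.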
Findings
SPSS was used for data organization and analysis. The effectiveness of the personalized learning intervention system was verified by quasi-experimental research, and the influence of the system on students' learning is discussed. The results showed that the personalized learning intervention was more effective than the traditional approach in improving students' achievement. However, for students at different learning levels, the personalized learning intervention system had different effects on learning confidence and learning anxiety.
Originality/value
The personalized learning intervention system based on the collaborative filtering algorithm and knowledge map is effective in improving learning outcomes. It also has a certain influence on students' psychology.
A. Can Inci and Rachel Lagasse
Abstract
Purpose
This study investigates the role of cryptocurrencies in enhancing the performance of portfolios constructed from traditional asset classes. Using a long sample period covering not only the large value increases but also the dramatic declines during the beginning of 2018, the purpose of this paper is to provide a more complete analysis of the dynamic nature of cryptocurrencies as individual investment opportunities, and as components of optimal portfolios.
Design/methodology/approach
The mean-variance optimization technique of Merton (1990) is applied to develop the risk and return characteristics of the efficient portfolios, along with the optimal weights of the asset class components in the portfolios.
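As a minimal illustration of the mean-variance machinery (the standard two-asset tangency portfolio w ∝ Σ⁻¹(μ − r_f), with entirely hypothetical return and covariance inputs, not the paper's estimates or Merton's full treatment):

```python
def tangency_weights(mu, cov, rf=0.0):
    """Two-asset tangency portfolio: solve w ∝ Σ⁻¹(μ − r_f) by hand for a
    2x2 covariance matrix, then renormalize the weights to sum to one."""
    e0, e1 = mu[0] - rf, mu[1] - rf
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    w0 = (cov[1][1] * e0 - cov[0][1] * e1) / det
    w1 = (-cov[1][0] * e0 + cov[0][0] * e1) / det
    s = w0 + w1
    return [w0 / s, w1 / s]

# Hypothetical annual excess returns and covariance for a traditional asset
# (low return, low variance) and a cryptocurrency (high return, high variance)
w = tangency_weights(mu=[0.06, 0.20], cov=[[0.04, 0.01], [0.01, 0.25]])
```

Even with a much higher expected return, the cryptocurrency's large variance keeps its optimal weight below that of the traditional asset in this toy example, which mirrors the paper's point that cryptocurrencies enter optimal portfolios as complements rather than replacements.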
Findings
The authors provide evidence that, as a single investment, the best cryptocurrency is Ripple, followed by Bitcoin and Litecoin. Furthermore, cryptocurrencies have a useful role in optimal portfolio construction and in investments, in addition to the original purposes for which they were created. Bitcoin is the cryptocurrency that best enhances the characteristics of the optimal portfolio. Ripple and Litecoin follow in terms of their usefulness in an optimal portfolio as single cryptocurrencies. Including all of these cryptocurrencies in a portfolio generates the best (most optimal) results. The contributions of the cryptocurrencies to the optimal portfolio evolve over time. Therefore, the results and conclusions of this study are not guaranteed to hold exactly in the future. However, the increasing popularity and unique characteristics of cryptocurrencies will support their continued presence in investment portfolios.
Originality/value
This is one of the first studies to examine the role of popular cryptocurrencies in enhancing a portfolio composed of traditional asset classes. The sample period is the longest that has been used in this strand of the literature, and allows a comparison of optimal portfolios in early/recent subsamples and during the pre-/post-cryptocurrency-crisis periods.
Nada Smigic, Ilija Djekic, Igor Tomasevic, Nikola Stanisic, Aleksandar Nedeljkovic, Verica Lukovic and Jelena Miocinovic
Abstract
Purpose
The purpose of this paper is to investigate whether there is a difference in the hygiene parameters of raw milk produced on organic and conventional farms of similar size. In parallel, the aim was to determine whether there are differences between pasteurized organic and conventional milk samples delivered to the market.
Design/methodology/approach
Raw milk samples were analyzed for aerobic colony count (ACC), somatic cell count (SCC), acidity, temperature, and fat and protein content. In addition, final products of organic and conventional pasteurized milk with 2.8 percent declared milk fat were analyzed by Raman spectroscopy, color change and a sensory difference test.
Findings
The results of the raw milk analysis showed statistically significant differences in fat content, SCC, acidity, temperature and ACC (p<0.05). It is of note that the ACC for organic milk was lower by approximately 1 log CFU/ml compared with conventional milk samples. Pasteurized organic milk samples had a significantly higher L* value than samples originating from conventional farms, indicating that organic milk is whiter than conventional milk. According to the results of the triangle test, with 95 percent confidence no more than 10 percent of the population is able to detect a difference.
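The population claim from the triangle test rests on the standard conversion from the correct-answer rate (chance level 1/3) to the proportion of true discriminators, pd = (3·pc − 1)/2, with a confidence bound on pc. A sketch with a normal-approximation bound and hypothetical panel counts (not the study's raw data or its exact statistical procedure):

```python
import math

def discriminators_upper_bound(correct, n, z=1.645):
    """Upper one-sided 95% bound on the proportion of the population able
    to distinguish the samples, from triangle-test counts (chance = 1/3)."""
    pc = correct / n
    se = math.sqrt(pc * (1 - pc) / n)       # normal approximation
    pc_upper = pc + z * se
    # Convert correct-answer rate to discriminator proportion
    return max(0.0, (3 * pc_upper - 1) / 2)

# Hypothetical panel: 22 correct identifications out of 60 assessors
bound = discriminators_upper_bound(correct=22, n=60)
```

Sensory standards typically use exact binomial tables rather than this normal approximation, so real analyses would differ slightly at small panel sizes.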
Research limitations/implications
A limitation of this research is the fact that good veterinary practices at farms, namely, animal health and adequate usage of medicine for treating the animals, animal welfare and animal feeding were not analyzed.
Originality/value
This study analyzed potential differences between organic and conventional milk at two important stages of the milk chain – at receipt at the dairy plant (raw milk) and as perceived by consumers (final product).