Deepak Jadhav and T.V. Ramanathan
Abstract
Purpose
An investor is expected to analyze the market risk while investing in equity stocks. This is because the investor has to choose a portfolio which maximizes the return with a minimum risk. The mean-variance approach by Markowitz (1952) is a dominant method of portfolio optimization, which uses variance as a risk measure. The purpose of this paper is to replace this risk measure with modified expected shortfall, defined by Jadhav et al. (2013).
Design/methodology/approach
Modified expected shortfall introduced by Jadhav et al. (2013) is found to be a coherent risk measure under univariate and multivariate elliptical distributions. This paper presents an approach of portfolio optimization based on mean-modified expected shortfall for the elliptical family of distributions.
Findings
It is proved that the modified expected shortfall of a portfolio can be represented in terms of the expected return and standard deviation of the portfolio return and the modified expected shortfall of the standard elliptical distribution. The authors also establish that the optimum portfolio under the mean-modified expected shortfall approach exists and is located within the efficient frontier of the mean-variance portfolio. The results have been empirically illustrated using returns of stocks listed on the National Stock Exchange of India, the Shanghai Stock Exchange of China, the London Stock Exchange of the UK and the New York Stock Exchange of the USA for the period February 2005 to June 2018. The results are found to be consistent across all four stock markets.
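This representation follows the standard form for translation-invariant, positively homogeneous risk measures under elliptical distributions; a sketch in assumed notation (not the paper's own), with portfolio weights $w$, mean vector $\mu$ and dispersion matrix $\Sigma$:

\[
\mathrm{MES}_\alpha(R_p) = -w^{\top}\mu + \sqrt{w^{\top}\Sigma w}\,\mathrm{MES}_\alpha(Z),
\]

where $Z$ is the standardized member of the elliptical family. For a fixed target return $w^{\top}\mu$, minimizing $\mathrm{MES}_\alpha(R_p)$ then reduces to minimizing $\sqrt{w^{\top}\Sigma w}$, which is consistent with the finding that the mean-modified expected shortfall optimum lies within the mean-variance efficient frontier.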
Originality/value
The mean-modified expected shortfall portfolio approach presented in this paper is new and is a natural extension of Markowitz's mean-variance approach and of the mean-expected shortfall portfolio optimization discussed by Deng et al. (2009).
Houmera Bibi Sabera Nunkoo, Preethee Nunkoo Gonpot, Noor-Ul-Hacq Sookia and T.V. Ramanathan
Abstract
Purpose
The purpose of this study is to identify appropriate autoregressive conditional duration (ACD) models that can capture the dynamics of tick-by-tick mid-cap exchange traded funds (ETFs) for the period July 2017 to December 2017 and accurately predict future trade duration values. The forecasted durations are then used to demonstrate the practical usefulness of the ACD models in quantifying an intraday time-based risk measure.
Design/methodology/approach
Through six functional forms and six error distributions, 36 ACD models are estimated for eight mid-cap ETFs. The Akaike and Bayesian information criteria and the Ljung-Box test are used to evaluate goodness-of-fit, while root mean square error and the superior predictive ability test are applied to assess forecast accuracy.
Findings
The Box-Cox ACD (BACD), augmented Box-Cox ACD (ABACD) and additive and multiplicative ACD (AMACD) extensions are among the best fits. The results obtained show that higher degrees of flexibility do not necessarily enhance goodness-of-fit and that forecast accuracy does not always depend on model adequacy. BACD and AMACD models based on the generalised-F distribution generate the best forecasts, irrespective of the trading frequencies of the ETFs.
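For context, the baseline ACD(1,1) of Engle and Russell (1998), which these extensions generalize, models the i-th trade duration $x_i$ as

\[
x_i = \psi_i \varepsilon_i, \qquad \psi_i = \omega + \alpha x_{i-1} + \beta \psi_{i-1},
\]

where $\psi_i$ is the conditional expected duration and the $\varepsilon_i$ are i.i.d. positive errors drawn from one of the candidate distributions (e.g. the generalised-F). The BACD and ABACD apply Box-Cox transformations to this recursion, and the AMACD allows lagged durations to enter both additively and multiplicatively; their exact parameterisations follow the cited literature rather than this sketch.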
Originality/value
To the best of the authors’ knowledge, this is the first study that analyses the empirical performance of ACD models for high-frequency ETF data. Additionally, in comparison to previous works, a wider range of ACD models is considered on a reasonably longer sample period. The paper will be of interest to researchers in the area of market microstructure and to practitioners engaged in high-frequency trading.
Mohamed E. Bayou and Alan Reinstein
Abstract
New developments in the manufacturing industries require reexamining both performance evaluation techniques and the concept of performance itself. As to evaluation, the return on investment (ROI) model, popular from the 1950s through the 1970s, faced so much criticism that new financial and nonfinancial performance evaluation methods gained popularity in the 1980s and 1990s. Worldwide, the increasing adoption of automation and the trend toward greater centralization have changed the concept of performance. No longer based on motivating and directing the labor force, performance now aims to obtain the best results from robotic assets and flexible manufacturing systems (FMS), which require new managerial attention and attitudes. These factors have made the concept of performance vaguer and more difficult to define and measure. After comparing and contrasting how multinational and domestic companies evaluate corporate performance, the study reconstructs the concept of performance, bringing forth fundamental propositions (axioms) that place the long-run dimension of measurement and utilization at the core of performance. Treating utilization as a dynamic concept with constantly changing components, a long-term discounted-cash-flow return on investment (DCF-ROI) model is developed and presented as a comprehensive measure of utilization. The DCF-ROI model fits harmoniously into the mechanisms of the new cost-reduction techniques of target and kaizen costing. Kaizen, a Japanese term, translates into English as "improvement," i.e., a continuous accumulation of betterment activities over the long run. Target and kaizen costing are conditioned by top management's target profit, which can take the form of a DCF-ROI objective. This kaizen-oriented DCF-ROI is demonstrated as a moving-average ratio that captures dynamic utilization through its progression over ten years. Limitations of the model and recommendations for further research are also presented.
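The abstract does not state the functional form of the DCF-ROI measure. Purely as an illustration of a moving, discounted return ratio over a ten-year horizon, one could write

\[
\text{DCF-ROI}_t = \frac{\sum_{k=1}^{10} CF_{t+k}\,(1+r)^{-k}}{I_t},
\]

where $CF_{t+k}$ denotes the cash flows of period $t+k$, $I_t$ the investment base and $r$ the discount rate, with the ratio tracked as a moving average as $t$ progresses. All symbols here are illustrative assumptions, not the authors' notation.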
Gözde Öztürk and Abdullah Tanrisevdi
Abstract
The purpose of this chapter is to shed light on sentiment analysis in hospitality and tourism for researchers and practitioners. The technical details are described throughout the chapter, with a case study providing clarifying insights. The chapter adds significantly to the body of text-mining knowledge by combining a technical explanation with a relevant case study. The case study used supervised machine learning to predict overall star ratings based on 20,247 comments related to Royal Caribbean International services, in order to determine the impact of cruise travel experiences on how travelers evaluate the company. The results indicate that travelers evaluate their travel experiences according to the most intense negative or positive feelings they have about the company.
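The chapter's exact pipeline is not reproduced here; as a minimal sketch of this kind of supervised star-rating prediction (the file name, column names and model choice are assumptions):

```python
# Minimal sketch of a supervised star-rating classifier in the spirit of the
# case study; the data file, columns and model choice are assumptions, not
# the authors' pipeline.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("cruise_reviews.csv")  # hypothetical file: 'comment' and 'stars' columns
X_train, X_test, y_train, y_test = train_test_split(
    df["comment"], df["stars"], test_size=0.2, random_state=42, stratify=df["stars"]
)

# TF-IDF features over unigrams and bigrams feed a multinomial logistic model.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5, sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```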
Abstract
Purpose
The purpose of this paper is to compare different models’ performance in modelling and forecasting the Finnish house price returns and volatility.
Design/methodology/approach
The competing models are the autoregressive moving average (ARMA) model and the autoregressive fractional integrated moving average (ARFIMA) model for house price returns. For house price volatility, the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) model competes with the fractional integrated GARCH (FIGARCH) and component GARCH (CGARCH) models.
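As a minimal sketch of how two of the competing specifications can be fitted in Python (the input series is a placeholder and the paper's estimation setup may differ):

```python
# Minimal sketch: ARMA(1,1) for house price returns and EGARCH(1,1) for their
# volatility; 'house_returns.csv' is a hypothetical file of return data.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

returns = pd.read_csv("house_returns.csv", index_col=0).squeeze()

arma = ARIMA(returns, order=(1, 0, 1)).fit()  # ARMA(1,1) as ARIMA with d = 0
print(arma.summary())

egarch = arch_model(returns, vol="EGARCH", p=1, o=1, q=1).fit(disp="off")
print(egarch.summary())

# The long memory variants need a fractional parameter: FIGARCH is available
# via arch_model(returns, vol="FIGARCH", p=1, q=1), while ARFIMA and CGARCH
# require specialised tooling.
```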
Findings
Results reveal that, for modelling Finnish house price returns, the data set under study drives the relative performance of the ARMA and ARFIMA models. The EGARCH model stands as the leading model for modelling Finnish house price volatility. The long memory models (ARFIMA, CGARCH and FIGARCH) provide superior out-of-sample forecasts for house price returns and volatility, outperforming their short memory counterparts in most regions. Additionally, the models' in-sample fit performances vary from region to region, while in some areas the models manifest a geographical pattern in their out-of-sample forecasting performances.
Research limitations/implications
The research results have vital implications for portfolio allocation, investment risk assessment and decision-making.
Originality/value
To the best of the author's knowledge, there has been no empirical forecasting of either house price returns or volatility for Finland. Therefore, this study aims to bridge that gap by comparing different models' performance in modelling, as well as forecasting, the house price returns and volatility of the studied market.
Ching-Ho Yen, Heng Ma, Chi-Huang Yeh and Chia-Hao Chang
Abstract
Purpose
The purpose of this paper is to develop an economic model, which could determine the acceptance sampling plan that minimizes the quality cost for batch manufacturing.
Design/methodology/approach
The authors propose a variable sampling plan based on one-sided capability indices for dealing with the quality cost requirement.
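For reference, the one-sided capability indices on which such variables plans are typically built have the standard definitions

\[
C_{PU} = \frac{USL - \mu}{3\sigma}, \qquad C_{PL} = \frac{\mu - LSL}{3\sigma},
\]

where $USL$ and $LSL$ are the upper and lower specification limits and $\mu$ and $\sigma$ are the process mean and standard deviation. A plan of this type accepts a lot when the sample estimate of the relevant index exceeds a critical value, with the critical value and the sample size chosen, in this case, to minimize the total quality cost; the precise cost model follows the paper rather than this outline.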
Findings
The total quality cost is much more sensitive to the process capability indices and the inspection cost than to the internal and external failure costs.
Research limitations/implications
The experimental data were randomly generated rather than drawn from real-world processes.
Practical implications
The proposed model is specifically designed for manufacturing industries with high sampling cost.
Originality/value
One-sided capability indices are utilized for this purpose for the first time.
Josephine Dufitinema and Seppo Pynnönen
Abstract
Purpose
The purpose of this paper is to examine the evidence of long-range dependence behaviour in both house price returns and volatility for fifteen main regions in Finland over the period 1988:Q1 to 2018:Q4. These regions are divided geographically into 45 cities and sub-areas according to their postcode numbers. The studied type of dwelling is the apartment (block of flats), divided into one-room, two-room and more-than-three-room apartment types.
Design/methodology/approach
For each house price return series, both parametric and semiparametric long memory approaches are used to estimate the fractional differencing parameter d in an autoregressive fractional integrated moving average [ARFIMA (p, d, q)] process. Moreover, for cities and sub-areas with significant clustering effects (autoregressive conditional heteroscedasticity [ARCH] effects), the semiparametric long memory method is used to analyse the degree of persistence in the volatility by estimating the fractional differencing parameter d in both squared and absolute price returns.
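One widely used semiparametric estimator of $d$ of the kind referred to here is the Geweke and Porter-Hudak (GPH) log-periodogram regression; a minimal sketch (the bandwidth rule and input series are placeholders, and the paper's exact estimators may differ):

```python
# Minimal sketch of the GPH log-periodogram estimator of the fractional
# differencing parameter d; 'x' is a placeholder array (e.g. squared or
# absolute returns), and m = n**0.5 is one conventional bandwidth choice.
import numpy as np

def gph_estimate(x, power=0.5):
    x = np.asarray(x, dtype=float)
    n = x.size
    m = int(n ** power)                # number of Fourier frequencies used
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    # Periodogram at the first m Fourier frequencies.
    dft = np.fft.fft(x - x.mean())[1 : m + 1]
    periodogram = (np.abs(dft) ** 2) / (2 * np.pi * n)
    # OLS slope of the log-periodogram on -log(4 sin^2(freq / 2)) estimates d.
    regressor = -np.log(4 * np.sin(freqs / 2) ** 2)
    X = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
    return beta[1]

# White noise has no long memory, so the estimate should be near zero.
rng = np.random.default_rng(0)
print(gph_estimate(rng.standard_normal(2048)))
```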
Findings
A higher degree of predictability was found in the price returns of all three apartment types, with the estimates of the long memory parameter lying in the stationary and invertible interval, implying that the returns of the studied types of dwellings are long-term dependent. This high level of persistence in the house price indices differs from that of other assets, such as stocks and commodities. Furthermore, evidence of long-range dependence was discovered in the house price volatility, with more than half of the studied samples exhibiting long memory behaviour.
Research limitations/implications
Investigating the long memory behaviour in both returns and volatility of the house prices is crucial for investment, risk and portfolio management. One reason is that the evidence of long-range dependence in the housing market returns suggests a high degree of predictability of the asset. The other reason is that the presence of long memory in the housing market volatility aids in the development of appropriate time series volatility forecasting models in this market. The study outcomes will be used in modelling and forecasting the volatility dynamics of the studied types of dwellings. The quality of the data limits the analysis and the results of the study.
Originality/value
To the best of the authors' knowledge, this is the first research that assesses the long memory behaviour of the Finnish housing market. It is also the first study that evaluates the volatility of the Finnish housing market using data at both the municipal and geographical levels.
Abstract
The purpose of this paper is to report on the incidence of the choice between full-cost and variable-cost pricing, and to examine the factors that could possibly influence this choice. The findings indicate that whereas 74.5% of the firms use full cost for pricing their products, only 25.5% use variable costs. The research provides evidence that company size, product type, stage in the product lifecycle, materiality of fixed overhead costs and the objectives of the company are significant variables influencing the choice of the cost base for product pricing.
Jo Ann M. Duffy, James A. Fitzsimmons and Nikhil Jain
Abstract
Purpose
One of the fastest growing service industries is long‐term care. Identifying the best performers in the industry in terms of service productivity is difficult because there is no single summary measure of outcomes, particularly quality outcomes. The purpose of the paper is to show the potential of data envelopment analysis (DEA) as a benchmarking method in long‐term care.
Design/methodology/approach
The paper provides background information on the long-term care industry and describes the DEA methodology and its applications to long-term care. Data originated from two sources, with four databases furnishing information on the 69 long-term care facilities studied.
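As a minimal sketch of the input-oriented, constant-returns-to-scale DEA model underlying such benchmarking (toy data; the paper's model variants, e.g. with patient condition as a co-production input, extend this basic form):

```python
# Minimal sketch of input-oriented CCR DEA: for each decision-making unit
# (DMU), minimise theta such that a nonnegative combination of peers uses at
# most theta times its inputs while producing at least its outputs. Toy data.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 2.0], [6.0, 3.0], [8.0, 5.0], [5.0, 6.0]])  # DMUs x inputs
Y = np.array([[2.0], [3.0], [5.0], [3.0]])                      # DMUs x outputs

def ccr_efficiency(o):
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]        # variables: [theta, lambda_1..lambda_n]
    A_in = np.c_[-X[o], X.T]           # sum_j lambda_j x_ij <= theta * x_io
    A_out = np.c_[np.zeros(s), -Y.T]   # sum_j lambda_j y_rj >= y_ro
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(X.shape[0]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```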
Findings
In the hypotheses tested, most of the models showed that for-profit nursing homes were significantly more efficient than nonprofit ones. The exception was the model that included the condition of patients as a co-production input, in which case there was no significant difference in efficient performance between ownership types.
Originality/value
The paper shows the value of DEA as a method of benchmarking in the context of long‐term care.