Search results
1 – 6 of 6
Abstract
Purpose
This paper aims to examine the integration of housing markets in Canada by examining housing price data (1999–2016) of six metropolitan areas in different provinces, namely, Calgary, Vancouver, Winnipeg, Toronto, Montreal and Halifax. The authors test for cointegration, driver cities of long-run relationships, long-run Granger causality and instantaneous causality in light of the global financial crisis (GFC) (2007–2008).
Design/methodology/approach
The authors use Johansen’s system cointegration approach with structural breaks. Moving average representation is used for common stochastic trend(s) analysis. Finally, the authors apply vector error correction model-based Granger causality and instantaneous causality.
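A minimal sketch of this pipeline in Python with statsmodels is given below. The file name, city columns, lag order and cointegration rank are illustrative assumptions, and the plain Johansen test here does not incorporate the structural breaks the authors model.

```python
# Hedged sketch: Johansen cointegration and VECM-based causality tests
# for a panel of city house-price indices. Column names and the data
# source are illustrative, not the authors' actual data.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Hypothetical monthly repeat-sales indices, one column per city.
prices = pd.read_csv("city_hpi.csv", index_col=0, parse_dates=True)

# Johansen trace test: det_order=0 adds a constant, k_ar_diff lags in
# differences. (The paper also allows structural breaks, which plain
# coint_johansen does not model.)
jo = coint_johansen(prices, det_order=0, k_ar_diff=2)
print("trace stats:", jo.lr1)          # compare with jo.cvt critical values

# Fit a VECM with the chosen cointegration rank, then test long-run
# Granger causality and instantaneous causality between cities.
vecm = VECM(prices, k_ar_diff=2, coint_rank=2, deterministic="ci").fit()
print(vecm.test_granger_causality(caused="Toronto", causing="Vancouver"))
print(vecm.test_inst_causality(causing="Vancouver"))
```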
Findings
Cities’ housing prices are in long-run equilibrium. Post-crisis Canadian housing markets became more integrated. The Calgary, Vancouver, Toronto and Montreal markets drive the Canadian housing market, leading all cities toward long-run equilibrium. Strong long-run Granger causality exists, but the authors observe no instantaneous causality. Price information takes time to disseminate, and long-run price adjustments play a significant role in causation.
Practical implications
The findings of increased cointegration after the GFC and strong lead–lag relationships can be used by investors to arbitrage and optimize portfolios. They can also help national and local policymakers mitigate risk. Incorporating these findings can lead to better price forecasting.
Originality/value
This study presents several novelties for the Canadian housing market: it is the first to use repeat-sales regional price indices to test long-run behavior, conduct common stochastic trend analyses and present causality relations.
Joseph J. French and Vijay Kumar Vishwakarma
Abstract
Purpose
The purpose of this paper is to dissect the dynamic linkages between foreign equity flows, exchange rates and equity returns in the Philippines.
Design/methodology/approach
Using a parsimonious SVARX‐GARCH model and unique daily equity flow data, this research models the relationship between net equity flows, conditional variance of stock returns and conditional variance of exchange rates.
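As a rough illustration (not the authors' full SVARX-GARCH), the sketch below fits a univariate ARX mean with net equity flows as an exogenous regressor and GARCH(1,1) errors using the `arch` package; the file and column names are hypothetical.

```python
# Hedged sketch: a simplified stand-in for the paper's SVARX-GARCH.
# Each series gets a univariate ARX mean (net equity flows as an
# exogenous regressor) with GARCH(1,1) errors; the full structural
# VARX with multivariate GARCH is more involved.
import pandas as pd
from arch import arch_model

df = pd.read_csv("philippines_daily.csv", index_col=0, parse_dates=True)
returns = 100 * df["psei"].pct_change().dropna()     # equity returns, %
flows = df.loc[returns.index, ["net_equity_flows"]]  # daily net flows

# ARX(1) mean with flows as an exogenous driver, GARCH(1,1) variance.
model = arch_model(returns, x=flows, mean="ARX", lags=1,
                   vol="GARCH", p=1, q=1)
res = model.fit(disp="off")
print(res.summary())

# Conditional volatility path -- the object of interest in the paper.
cond_vol = res.conditional_volatility
```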
Findings
The authors find several noteworthy results that are unique to this study, as well as several that confirm the existing literature. Much of the existing literature on foreign equity flows into emerging economies finds that foreign equity investors are trend chasers and that equity flows are autocorrelated. The authors confirm these findings in the Philippines and document two new and important results. First, unexpected increases in foreign equity flows to the Philippines significantly increase the conditional volatility of the Filipino stock market over the next two weeks of trading. Second, unexpected shocks to foreign equity flows sharply increase the conditional variance of the USD/PHP exchange rate over the next two to three weeks of trading.
Practical implications
Taken together, the results indicate that foreign equity investment, while providing many benefits for small open economies such as the Philippines, increases the conditional variance of both the equity market and exchange rates in the short run. Policymakers must weigh the benefits of increased risk sharing and the potential for lower costs of capital against the short-run potential for increased swings in asset prices.
Originality/value
This paper is one of the few studies of its kind to test the impact of foreign equity flows on the conditional volatility of returns and exchange rates.
Abstract
Purpose
This paper aims to examine the risk premium for investors in a changing information environment in the Taiwan, New York and London real estate markets from March 2006 to November 2014. This study attempts to quantify behavioral expectations regarding (or motivations for) investment in Taiwanese real estate as the information environment changes.
Design/methodology/approach
This paper uses the rolling generalised autoregressive conditionally heteroskedastic in mean (GARCH-M) methodology, in which rolling estimation windows let the parameters vary over time, addressing a limitation of the conventional fixed-parameter GARCH-M methodology.
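Since common GARCH packages do not ship a GARCH-in-mean estimator, the sketch below fits a GARCH(1,1)-M by maximum likelihood with scipy and re-estimates it over rolling windows. The window length, starting values and normality assumption are illustrative choices, not the paper's exact specification.

```python
# Hedged sketch: GARCH(1,1)-in-mean by maximum likelihood, re-run over
# rolling windows to trace a time-varying risk premium coefficient.
import numpy as np
from scipy.optimize import minimize

def garch_m_nll(params, r):
    """Negative log-likelihood of GARCH(1,1)-in-mean under normality."""
    mu, lam, omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return 1e10                     # crude stationarity penalty
    n = len(r)
    h = np.empty(n)
    h[0] = r.var()                      # initialize with sample variance
    nll = 0.0
    for t in range(n):
        if t > 0:
            eps_prev = r[t-1] - mu - lam * h[t-1]
            h[t] = omega + alpha * eps_prev**2 + beta * h[t-1]
        eps = r[t] - mu - lam * h[t]    # risk-premium term: lam * h[t]
        nll += 0.5 * (np.log(2*np.pi) + np.log(h[t]) + eps**2 / h[t])
    return nll

def rolling_garch_m(r, window=60):
    """Re-estimate the model on each rolling window; return lam path."""
    start = np.array([r.mean(), 0.0, r.var() * 0.05, 0.05, 0.90])
    lams = []
    for i in range(len(r) - window + 1):
        res = minimize(garch_m_nll, start, args=(r[i:i+window],),
                       method="Nelder-Mead")
        lams.append(res.x[1])           # time-varying risk-premium coef
    return np.array(lams)
```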
Findings
Empirical evidence suggests that the time-varying risk premium changed for the Taiwan real estate market with a new information set. The risk premium changed from 1.305 per cent per month to −7.232 per cent per month. The study also found persistent volatility shocks from March 2006 to November 2014. No such evidence was found for the New York and London real estate markets. Overall, this study finds evidence of a time-varying risk premium, partly explainable by governmental policies and partly unexplainable.
Research limitations/implications
The use of the Standard and Poor’s Taiwan Real Estate Investment Trusts index to study the Taiwan real estate industry may introduce aggregation effects into the results.
Practical implications
The present study will provide guidance to investors as well as policymakers regarding the Taiwan real estate market.
Originality/value
This study uses the rolling GARCH-M model, which is a first for the Taiwan real estate market.
Manoj Kumar Mahawar, Kirti Jalgaonkar, Bhushan Bibwe, Tushar Kulkarni, Bharat Bhushan and Vijay Singh Meena
Abstract
Purpose
This paper aims to optimize the quantity of aonla pulp that can be mixed with guava pulp to make a nutritionally rich fruit bar. The developed fruit bar will not only improve the processing value of both guava and the underused but highly nutritious aonla but also improve the nutritional status of consumers.
Design/methodology/approach
Response surface methodology (RSM) with a Box–Behnken design was applied to optimize the process conditions, with the process variables being the aonla-to-guava pulp ratio, PR (30:70, 40:60, 50:50); pectin concentration, PC (0, 0.15, 0.30%); and drying temperature, DT (50, 60, 70°C). The prepared mixed fruit leather was evaluated for physico-chemical, textural and sensory properties, namely, titratable acidity (TA), ascorbic acid content (AA), L value (lightness), cutting force (CF), taste and overall acceptability (OAA).
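For intuition, the sketch below constructs a three-factor Box–Behnken design in coded units and fits a full second-order response-surface model with statsmodels; the response values are placeholders, not the paper's measurements.

```python
# Hedged sketch: three-factor Box-Behnken design and a quadratic
# response-surface fit, as RSM typically does. Placeholder response.
from itertools import combinations
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Box-Behnken for 3 factors: +/-1 on each factor pair, 0 elsewhere,
# plus center points (15 runs total).
rows = []
for i, j in combinations(range(3), 2):
    for a in (-1, 1):
        for b in (-1, 1):
            x = [0, 0, 0]
            x[i], x[j] = a, b
            rows.append(x)
rows += [[0, 0, 0]] * 3                      # center runs
design = pd.DataFrame(rows, columns=["PR", "PC", "DT"])

# Placeholder response, e.g. overall acceptability; not real data.
design["y"] = np.random.default_rng(1).normal(size=len(design))

# Full second-order model: linear, interaction and squared terms.
fit = smf.ols("y ~ PR + PC + DT + PR:PC + PR:DT + PC:DT"
              " + I(PR**2) + I(PC**2) + I(DT**2)", data=design).fit()
print(fit.summary())
```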
Findings
Second-order regression models fitted for TA, AA, L value (lightness), CF, taste and OAA were highly significant (P = 0.01), with coefficients of determination R² = 0.85. The TA and AA of the mixed fruit bar increased, whereas the L value, CF, taste and OAA decreased, with increasing levels of aonla pulp in the blend formulation. The optimum process conditions for a mixed aonla-guava bar with desirable characteristics were 40:60 (PR), 0.02% (PC) and 56°C (DT). The corresponding optimum values of TA, AA, L value, CF, taste and OAA were 1.00%, 164 mg/100 g, 50, 5066 g, 7.83 and 7.92, respectively. The design formulation and data analysis using RSM validated the optimum solution.
Originality/value
This paper demonstrates that optimum blending of aonla and guava pulp improves the overall nutritional characteristics and acceptability of the final product. This will not only help in reducing the associated post-harvest losses but also encourage cultivators and local processing industries by stabilizing prices during the glut season.
Veerraju Gampala, Praful Vijay Nandankar, M. Kathiravan, S. Karunakaran, Arun Reddy Nalla and Ranjith Reddy Gaddam
Abstract
Purpose
The purpose of this paper is to analyze and build a deep learning model that can furnish COVID-19 statistics and forecast the pandemic outbreak using the Kaggle open research COVID-19 data set. As governments maintain up-to-date COVID-19 data collections, deep learning techniques can be used to predict future outbreaks of the coronavirus. The existing long short-term memory (LSTM) model is fine-tuned to forecast the outbreak of COVID-19 with better accuracy, and an empirical data exploration with advanced visualization has been carried out to comprehend the outbreak of the coronavirus.
Design/methodology/approach
This research work presents a fine-tuned LSTM deep learning model with three hidden layers, 200 LSTM unit cells, the ReLU activation function, the Adam optimizer, a mean squared error loss function, 200 training epochs and, finally, one dense layer to predict one value at each step.
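A minimal Keras sketch of the described architecture is given below. The look-back window length is an assumption, and since the abstract does not say whether the 200 units are per layer or in total, 200 per layer is assumed here.

```python
# Hedged sketch of the described architecture: three LSTM layers with
# ReLU activation, Adam optimizer, MSE loss and a one-unit dense output.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 14   # assumed look-back length in days (not stated above)

model = models.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    layers.LSTM(200, activation="relu", return_sequences=True),
    layers.LSTM(200, activation="relu", return_sequences=True),
    layers.LSTM(200, activation="relu"),
    layers.Dense(1),                 # one value predicted each step
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, epochs=200)  # 200 epochs, as described
```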
Findings
LSTM is found to be effective in forecasting future values. Hence, the fine-tuned LSTM model produces accurate predictions when applied to the COVID-19 data set.
Originality/value
To the authors’ knowledge, this is the first time a fine-tuned LSTM model has been developed and tested on a COVID-19 data set to forecast the outbreak of the pandemic.
Rumi Iqbal Doewes, Rajit Nair and Tripti Sharma
Abstract
Purpose
The purpose of this study is to perform an analysis of COVID-19 with the help of blood samples. The blood samples used in the study contain more than 100 features, so to process such high-dimensional data, feature reduction has been performed using a genetic algorithm.
Design/methodology/approach
In this study, the authors implement a genetic algorithm for the prediction of COVID-19 from blood test samples. The sample contains records of around 5,644 patients with 111 attributes. A genetic algorithm combined with Relief and an ant colony optimization algorithm is used as the dimensionality-reduction approach.
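The sketch below shows only a generic genetic-algorithm skeleton for feature selection, scoring binary feature masks by cross-validated accuracy with a hypothetical classifier; the paper's combination with Relief and ant colony optimization is not reproduced.

```python
# Hedged sketch: minimal genetic algorithm for feature selection.
# Each individual is a binary mask over columns; fitness is the
# cross-validated accuracy of a classifier on the kept columns.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, pop_size=20, generations=10, p_mut=0.05):
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(m, X, y) for m in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[:pop_size // 2]]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                  # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < p_mut              # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(m, X, y) for m in pop])
    return pop[scores.argmax()].astype(bool)          # best feature mask
```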
Findings
The model is implemented in the Python programming language, and its performance is evaluated using parameters such as accuracy, sensitivity, specificity and area under the curve (AUC).
Originality/value
The implemented model achieved an accuracy of 98.7%, a sensitivity of 96.76%, a specificity of 98.80% and an AUC of 92%. The results show that the implemented algorithm performs better than other state-of-the-art algorithms.