Abstract
Purpose
To study the effect of Knightian uncertainty – as opposed to statistical estimation error – in the evaluation of value‐at‐risk (VaR) of financial investments. To develop methods for augmenting existing VaR estimates to account for Knightian uncertainty.
Design/methodology/approach
The value at risk of a financial investment is assessed as the quantile of an estimated probability distribution of the returns. Estimating a VaR from historical data entails two distinct sorts of uncertainty: probabilistic uncertainty in the estimation of a probability density function (PDF) from historical data, and non‐probabilistic Knightian info‐gaps in the future size and shape of the lower tail of the PDF. A PDF is estimated from historical data, while a VaR is used to predict future risk. Knightian uncertainty arises from the structural changes, surprises, etc., which occur in the future and therefore are not manifested in historical data. This paper concentrates entirely on Knightian uncertainty and does not consider the statistical problem of estimating a PDF. Info‐gap decision theory is used to study the robustness of a VaR to Knightian uncertainty in the distribution.
Findings
It is shown that VaRs, based on estimated PDFs, have no robustness to Knightian errors in the PDF. An info‐gap safety factor is derived that multiplies the estimated VaR in order to obtain a revised VaR with specified robustness to Knightian error in the PDF. A robustness premium is defined as a supplement to the incremental VaR for comparing portfolios.
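The quantile-plus-safety-factor construction can be sketched numerically. Everything below is hypothetical: the return sample is synthetic, and the factor of 1.25 is purely illustrative, since the paper derives the safety factor from a specified robustness level rather than fixing a constant.

```python
import numpy as np

# Hypothetical sample of daily returns standing in for historical data.
rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0005, scale=0.01, size=1000)

# Estimated VaR at the 95% level: the 5th percentile of the estimated
# return distribution, reported as a positive loss figure.
estimated_var = -np.quantile(returns, 0.05)

# Info-gap safety factor (> 1): in the paper it follows from a specified
# robustness to Knightian error in the PDF; 1.25 is an illustrative value.
safety_factor = 1.25
revised_var = safety_factor * estimated_var

print(f"estimated VaR: {estimated_var:.4f}, revised VaR: {revised_var:.4f}")
```

The revised VaR is always more conservative than the estimated VaR, reflecting the result that the estimated quantile alone has no robustness to Knightian error.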
Practical implications
The revised VaR and incremental VaR augment existing tools for evaluating financial risk.
Originality/value
Info‐gap theory, which underlies this paper, is a non‐probabilistic quantification of uncertainty that is very suitable for representing Knightian uncertainty. This enables one to assess the robustness to future surprises, as distinct from existing statistical techniques for assessing estimation error resulting from randomness of historical data.
Liwei Ju, Zhe Yin, Qingqing Zhou, Li Liu, Yushu Pan and Zhongfu Tan
Abstract
Purpose
This study aims to form a new concept of power-to-gas-based virtual power plant (GVPP) and propose a low-carbon economic scheduling optimization model for GVPP considering carbon emission trading.
Design/methodology/approach
In view of the strong uncertainty of wind power and photovoltaic power generation in GVPP, the information gap decision theory (IGDT) is used to measure the uncertainty tolerance threshold under different expected target deviations of the decision-makers. To verify the feasibility and effectiveness of the proposed model, a nine-node energy hub was selected as the simulation system.
Findings
GVPP can coordinate and optimize the output of electricity-to-gas and gas turbines according to the difference in gas and electricity prices in the electricity market and the natural gas market at different times. The IGDT method can be used to describe the impact of wind and solar uncertainty in GVPP. Carbon emission rights trading can increase the operating space of power to gas (P2G) and reduce the operating cost of GVPP.
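The IGDT tolerance threshold described above can be illustrated with a one-period sketch. All numbers (prices, forecast, cost-deviation factor) are hypothetical, and the single-resource setting is a deliberate simplification of the GVPP model: we ask how large a fractional wind shortfall the schedule tolerates before operating cost exceeds the decision-maker's target deviation.

```python
# Hypothetical one-period GVPP: a wind shortfall is assumed to be covered
# by gas-turbine output at a higher marginal cost.
wind_forecast = 100.0   # MWh scheduled from wind (hypothetical)
wind_price = 20.0       # $/MWh marginal cost of wind supply (hypothetical)
gas_price = 50.0        # $/MWh marginal cost of gas backup (hypothetical)
expected_cost = wind_forecast * wind_price

def worst_case_cost(alpha):
    """Operating cost when wind output drops to (1 - alpha) of the forecast."""
    shortfall = alpha * wind_forecast
    return (wind_forecast - shortfall) * wind_price + shortfall * gas_price

delta = 0.15                            # tolerated cost deviation of the decision-maker
threshold = (1 + delta) * expected_cost

# worst_case_cost is linear in alpha, so the tolerance solves in closed form:
# expected_cost + alpha * wind_forecast * (gas_price - wind_price) = threshold.
alpha_max = delta * expected_cost / (wind_forecast * (gas_price - wind_price))
print(alpha_max)
```

A larger tolerated cost deviation delta buys a larger uncertainty tolerance alpha_max, which is the trade-off IGDT quantifies for the decision-maker.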
Research limitations/implications
This study considers the electrical conversion and spatio-temporal shifting characteristics of P2G, integrates it with the VPP into a GVPP, uses the IGDT method to describe the impact of wind and solar uncertainty, and then proposes an IGDT-based near-zero-carbon stochastic scheduling optimization model for the GVPP.
Originality/value
This study designed a novel GVPP structure integrating P2G and a gas storage device into the VPP and proposed a basic near-zero-carbon scheduling optimization model for the GVPP with the optimization goal of minimizing operating costs. Finally, this study constructed a stochastic scheduling optimization model for the GVPP.
Garrison Stevens, Kendra Van Buren, Elizabeth Wheeler and Sez Atamturktur
Abstract
Purpose
Numerical models are being increasingly relied upon to evaluate wind turbine performance by simulating phenomena that are infeasible to measure experimentally. These numerical models, however, require a large number of input parameters that often need to be calibrated against available experiments. Owing to the unavoidable scarcity of experiments and inherent uncertainties in measurements, this calibration process may yield non-unique solutions, i.e. multiple sets of parameters may reproduce the available experiments with similar fidelity. The purpose of this paper is to study the trade-off between fidelity to measurements and the robustness of this fidelity to uncertainty in calibrated input parameters.
Design/methodology/approach
Here, fidelity is defined as the ability of the model to reproduce measurements and robustness is defined as the allowable variation in the input parameters with which the model maintains a predefined level of threshold fidelity. These two vital attributes of model predictiveness are evaluated in the development of a simplified finite element beam model of the CX-100 wind turbine blade.
Findings
Findings of this study show that calibrating the input parameters of a numerical model with the sole objective of improving fidelity to available measurements degrades the robustness of model predictions at both tested and untested settings. A more optimal model may be obtained by calibration methods considering both fidelity and robustness. Multi-criteria Decision Making further confirms the conclusion that the optimal model performance is achieved by maintaining a balance between fidelity and robustness during calibration.
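The fidelity/robustness trade-off can be sketched on a toy calibration problem. The model, the synthetic "measurements", and the fidelity threshold below are all hypothetical stand-ins for the CX-100 beam model and its test data: fidelity is agreement with measurements at a parameter value, and robustness is the half-width of the parameter interval over which fidelity stays above a required threshold.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20)
measured = 2.0 * x + 0.1 * np.sin(8 * x)   # synthetic measurements (hypothetical)

def fidelity(theta):
    """Negative RMS misfit of the one-parameter model theta * x (higher is better)."""
    return -np.sqrt(np.mean((theta * x - measured) ** 2))

def robustness(theta, threshold, step=1e-3, max_r=5.0):
    """Half-width of the interval around theta where fidelity stays >= threshold."""
    r = 0.0
    while r + step <= max_r and min(fidelity(theta - r - step),
                                    fidelity(theta + r + step)) >= threshold:
        r += step
    return r

# Fidelity-only calibration: pick the best-fitting parameter on a grid.
thetas = np.linspace(1.5, 2.5, 101)
best_fit = thetas[np.argmax([fidelity(t) for t in thetas])]
print(best_fit, robustness(best_fit, threshold=-0.2))
```

A calibration that also scores robustness would trade a small loss of fidelity for a wider tolerable parameter interval, which is the balance the study recommends.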
Originality/value
Current methods for model calibration focus solely on fidelity while the authors focus on the trade-off between fidelity and robustness.
Abstract
Purpose
The purpose of this paper is to clarify a number of important facts about info‐gap decision theory.
Design/methodology/approach
Theorems are put forward to rebut claims made about info‐gap decision theory in papers published in this journal and elsewhere.
Findings
Info‐gap's robustness model is a simple instance of the most famous model in classical decision theory for decision problems subject to severe uncertainty, namely Wald's maximin model. This simple instance is equivalent to the well‐established model known universally as the radius of stability. Info‐gap's robustness model has an inherent local orientation and is therefore, in principle, unable to address the fundamental difficulties presented by the type of severe uncertainty postulated by info‐gap decision theory.
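The radius-of-stability reading can be made concrete in a minimal scalar sketch. The performance function and the critical requirement below are hypothetical; the point is only the definition: the largest perturbation size for which the requirement holds everywhere within that distance of the nominal estimate.

```python
def radius_of_stability(nominal, performance, critical, step=1e-3, max_alpha=10.0):
    """Largest alpha such that performance(u) >= critical for every u within
    alpha of the nominal estimate (scalar case). Because the performance
    function used below is concave and peaked at the nominal, the worst case
    on each interval lies at an endpoint, so only endpoints are checked."""
    alpha = 0.0
    while alpha + step <= max_alpha:
        a = alpha + step
        if min(performance(nominal - a), performance(nominal + a)) < critical:
            break
        alpha = a
    return alpha

# Hypothetical concave performance, peaked at the nominal estimate u = 2.
perf = lambda u: 10.0 - (u - 2.0) ** 2

# Requirement perf >= 6 holds while 10 - alpha**2 >= 6, i.e. alpha <= 2.
r = radius_of_stability(nominal=2.0, performance=perf, critical=6.0)
print(r)
```

The local orientation criticized in the findings is visible here: the radius is measured outward from the nominal estimate, so the analysis says nothing about behavior far from that point.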
Practical implications
These findings caution against accepting the assertions made in the info‐gap literature about: info‐gap decision theory's role and place in decision making under severe uncertainty; and its ability to model, analyze, and manage severe uncertainty.
Originality/value
This paper exposes the serious difficulties with claims made in papers published in this journal and elsewhere regarding the place and role of info‐gap decision theory in decision theory and its ability to handle severe uncertainty.