Search results

Results 41–50 of over 53,000
Book part
Publication date: 11 December 2006

Mo Chaudhury

Abstract

This paper provides a fuller characterization of the analytical upper bounds for American options than has been available to date. We establish properties required of analytical upper bounds without any direct reliance on the exercise boundary. A class of generalized European claims on the same underlying asset is then proposed as upper bounds. This set contains the existing closed-form bounds of Margrabe (1978) and Chen and Yeh (2002) as special cases and allows randomization of the maturity payoff. Owing to the European nature of the bounds, across-strike arbitrage conditions on option prices seem to carry over to the bounds. Among other things, European option spreads may be viewed as ratio positions on the early exercise option. To tighten the upper bound, we propose a quasi-bound that holds as an upper bound for most situations of interest and seems to offer considerable improvement over the currently available closed-form bounds. As an approximation, the discounted value of Chen and Yeh's (2002) bound holds some promise. We also discuss implications for parametric and nonparametric empirical option pricing. Sample option quotes for the European (XEO) and the American (OEX) options on the S&P 100 Index appear well behaved with respect to the upper bound properties, but the bid–ask spreads are too wide to permit a synthetic short position in the early exercise option.
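For context, a minimal numerical sketch of the familiar textbook upper bound for an American put on a non-dividend-paying stock, P_A ≤ p_E + K·(1 − e^(−rT)), where p_E is the Black–Scholes European put. This is the standard bound, not the generalized bounds developed in the paper, and the parameter values below are purely illustrative.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def bs_european_put(S0, K, r, sigma, T):
    """Black-Scholes price of a European put on a non-dividend-paying stock."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return K * exp(-r * T) * N(-d2) - S0 * N(-d1)

# Illustrative inputs (hypothetical, not taken from the paper)
S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.25, 0.5

p_eur = bs_european_put(S0, K, r, sigma, T)
# Textbook bound on the early exercise premium: P_A - p_E <= K(1 - e^{-rT})
upper_bound_american_put = p_eur + K * (1.0 - exp(-r * T))

print(f"European put (lower bound for American put): {p_eur:.4f}")
print(f"Simple upper bound for the American put:     {upper_bound_american_put:.4f}")
```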

Details

Research in Finance
Type: Book
ISBN: 978-1-84950-441-6

Article
Publication date: 1 October 1996

John L. Kent

Abstract

Presents a conceptual framework and a set of research hypotheses that are intended to help explain the interfunctional co‐ordination between the logistics and information technology functions. Much has been written over the past decade regarding the strategic potential of the logistics and information technology functions for creating customer value, process efficiencies, and differential advantage for the firm. Additionally, the interrelationships that exist within business organizations have received considerable discussion within the literature. However, little attention has been paid to the co‐ordination of the logistics and information technology functions. The framework presented is based on a combined review of the logistics, information technology, and interfunctional co‐ordination literature. The constructs of interaction and collaboration are utilized to explain how differing levels of interfunctional co‐ordination affect the firm’s logistics information system. Initial support for the conceptual framework is provided by qualitative research. Finally, research results and concluding comments on implications for practitioners and future research are discussed.

Details

International Journal of Physical Distribution & Logistics Management, vol. 26 no. 8
Type: Research Article
ISSN: 0960-0035

Article
Publication date: 22 November 2010

Yun‐Sheng Chung, D. Frank Hsu, Chun‐Yi Liu and Chun‐Yi Tang

Abstract

Purpose

Multiple classifier systems have been used widely in computing, communications, and informatics. Combining multiple classifier systems (MCS) has been shown to outperform a single classifier system. It has been demonstrated that improvement in ensemble performance depends on either the diversity among or the performance of individual systems. A variety of diversity measures and ensemble methods have been proposed and studied. However, it remains a challenging problem to estimate the ensemble performance in terms of the performance of and the diversity among individual systems. The purpose of this paper is to study the general problem of estimating ensemble performance for various combination methods using the concept of a performance distribution pattern (PDP).

Design/methodology/approach

In particular, the paper establishes upper and lower bounds for majority voting ensemble performance with disagreement diversity measure Dis, weighted majority voting performance in terms of weighted average performance and weighted disagreement diversity, and plurality voting ensemble performance with entropy diversity measure D.

Findings

Bounds for these three cases are shown to be tight using the PDP for the input set.

Originality/value

As a consequence of the authors' previous results on diversity equivalence, the results on majority voting ensemble performance can be extended to several other diversity measures. Moreover, the paper shows, in the case of majority voting ensemble performance, that when the average performance P of the individual systems is sufficiently large, the ensemble performance Pm resulting from a maximum (information‐theoretic) entropy PDP is an increasing function of the disagreement diversity Dis. Eight experiments using data sets from various application domains are conducted to demonstrate the complexity, richness, and diverseness of the problem of estimating ensemble performance.
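To make the quantities in the abstract concrete, here is a small sketch (with synthetic data, not the paper's experiments) that computes the average individual performance P, the majority-voting ensemble performance Pm, and the average pairwise disagreement diversity Dis for a set of binary classifiers. The PDP-based bounds themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary predictions: n_classifiers x n_samples, plus true labels
n_classifiers, n_samples = 5, 200
y_true = rng.integers(0, 2, n_samples)
# Each classifier agrees with the truth ~75% of the time (illustrative)
preds = np.where(rng.random((n_classifiers, n_samples)) < 0.75,
                 y_true, 1 - y_true)

# Individual and average performance
individual_acc = (preds == y_true).mean(axis=1)
P_avg = individual_acc.mean()

# Majority-voting ensemble performance
votes = preds.sum(axis=0)
majority = (votes > n_classifiers / 2).astype(int)
P_m = (majority == y_true).mean()

# Average pairwise disagreement diversity Dis:
# fraction of samples on which a pair of classifiers gives different predictions
pair_dis = [
    (preds[i] != preds[j]).mean()
    for i in range(n_classifiers) for j in range(i + 1, n_classifiers)
]
Dis = float(np.mean(pair_dis))

print(f"average individual accuracy P   = {P_avg:.3f}")
print(f"majority-vote accuracy      Pm  = {P_m:.3f}")
print(f"disagreement diversity      Dis = {Dis:.3f}")
```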

Details

International Journal of Pervasive Computing and Communications, vol. 6 no. 4
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 5 June 2009

Boris Mitavskiy, Jonathan Rowe and Chris Cannings

Abstract

Purpose

A variety of phenomena, such as the world wide web and social or business interactions, are modelled by various kinds of networks (such as scale‐free or preferential attachment networks). However, due to model‐specific requirements, one may want to rewire the network to optimize communication among the various nodes while not overloading the number of channels (i.e. preserving the number of edges). The purpose of this paper is to present a formal framework for this problem and to examine a family of local search strategies to cope with it.

Design/methodology/approach

This is mostly theoretical work. The authors use a rigorous mathematical framework to set up the model and then prove several theorems about it which pertain to various local search algorithms that work by rewiring the network.

Findings

This paper proves that, when every pair of nodes is sampled with non‐zero probability, the algorithm is ergodic in the sense that it samples every possible network on the specified set of nodes having the specified number of edges with non‐zero probability. Incidentally, the ergodicity result led to the construction of a class of algorithms for sampling graphs with a specified number of edges over a specified set of nodes uniformly at random, and opened other challenging and important questions for future consideration.

Originality/value

The measure‐theoretic framework presented in the current paper is original and rather general. It allows one to obtain new points of view on the problem.
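A rough sketch of the kind of edge-count-preserving rewiring move the abstract refers to, written as a simple hill-climbing local search that tries to reduce the average shortest-path length. The objective, the acceptance rule, and the use of networkx are illustrative assumptions of this sketch, not the authors' formal framework or algorithms.

```python
import random
import networkx as nx

def rewire_step(G):
    """Propose a move that preserves the number of edges:
    delete one existing edge and insert one currently absent edge."""
    u, v = random.choice(list(G.edges()))
    a, b = random.choice(list(nx.non_edges(G)))
    H = G.copy()
    H.remove_edge(u, v)
    H.add_edge(a, b)
    return H

def objective(G):
    # Penalize disconnected graphs so the search prefers connected ones
    if not nx.is_connected(G):
        return float("inf")
    return nx.average_shortest_path_length(G)

random.seed(1)
G = nx.gnm_random_graph(30, 60, seed=1)    # 30 nodes, 60 edges (illustrative)
best = objective(G)

for _ in range(500):                       # simple hill climbing
    H = rewire_step(G)
    score = objective(H)
    if score <= best:                      # accept non-worsening moves
        G, best = H, score

print(f"edges: {G.number_of_edges()}, avg shortest path: {best:.3f}")
```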

Details

International Journal of Intelligent Computing and Cybernetics, vol. 2 no. 2
Type: Research Article
ISSN: 1756-378X

Book part
Publication date: 10 November 2014

Maria Bampasidou, Carlos A. Flores, Alfonso Flores-Lagunes and Daniel J. Parisian

Abstract

Job Corps is the United States' largest and most comprehensive training program for disadvantaged youth aged 16–24. A randomized social experiment concluded that, on average, individuals benefited from the program in the form of higher weekly earnings and employment prospects. At the same time, “young adults” (ages 20–24) realized much higher impacts relative to “adolescents” (ages 16–19). Employing recent nonparametric bounds for causal mediation, we investigate whether these two groups’ disparate effects correspond to them benefiting differentially from distinct aspects of Job Corps, with a particular focus on the attainment of a degree (GED, high school, or vocational). We find that, for young adults, the part of the total effect of Job Corps on earnings (employment) that is due to attaining a degree within the program is at most 41% (32%) of the total effect, whereas for adolescents that part can account for up to 87% (100%) of the total effect. We also find evidence that the magnitude of the part of the effect of Job Corps on the outcomes that works through components of Job Corps other than degree attainment (e.g., social skills, job placement, residential services) is likely higher for young adults than for adolescents. That those other components likely play a more important role for young adults has policy implications for serving participants more effectively. More generally, our results illustrate how researchers can learn about particular mechanisms of an intervention.

Details

Factors Affecting Worker Well-being: The Impact of Change in the Labor Market
Type: Book
ISBN: 978-1-78441-150-3

Article
Publication date: 6 August 2020

Mukesh Kumar, Joginder Singh, Sunil Kumar and Aakansha

Abstract

Purpose

The purpose of this paper is to design and analyze a robust numerical method for a coupled system of singularly perturbed parabolic delay partial differential equations (PDEs).

Design/methodology/approach

Some a priori bounds on the regular and layer parts of the solution and their derivatives are derived. Based on these a priori bounds, appropriate layer adapted meshes of Shishkin and generalized Shishkin types are defined in the spatial direction. After that, the problem is discretized using an implicit Euler scheme on a uniform mesh in the time direction and the central difference scheme on layer adapted meshes of Shishkin and generalized Shishkin types in the spatial direction.

Findings

The method is proved to be robustly convergent of almost second order in space and first order in time. Numerical results are presented to support the theoretical error bounds.

Originality/value

A coupled system of singularly perturbed parabolic delay PDEs is considered and some a priori bounds are derived. A numerical method is developed for the problem, where appropriate layer adapted Shishkin and generalized Shishkin meshes are considered. Error analysis of the method is given for both Shishkin and generalized Shishkin meshes.
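As a concrete illustration of a layer-adapted mesh of the kind mentioned above, the following sketch builds a standard piecewise-uniform Shishkin mesh on [0, 1] for a single convection-diffusion problem with a boundary layer at x = 1. The transition parameter and the single-equation setting are generic textbook assumptions, not the coupled delay system or the exact meshes analyzed in the paper.

```python
import numpy as np

def shishkin_mesh(N, eps, beta=1.0, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] with N subintervals (N even),
    refined near a boundary layer at x = 1.

    Transition width: tau = min(1/2, (sigma * eps / beta) * ln N),
    with N/2 equal subintervals on [0, 1 - tau] and N/2 on [1 - tau, 1].
    """
    assert N % 2 == 0, "N must be even"
    tau = min(0.5, (sigma * eps / beta) * np.log(N))
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)
    return np.concatenate([coarse, fine[1:]])   # drop the duplicated point

x = shishkin_mesh(N=32, eps=1e-4)
h = np.diff(x)
print(f"transition point 1 - tau = {x[len(x) // 2]:.6f}")
print(f"coarse spacing ~ {h[0]:.4f}, fine spacing ~ {h[-1]:.2e}")
```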

Details

Engineering Computations, vol. 38 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 22 February 2024

Zoubida Chorfi

Abstract

Purpose

As supply chain excellence matters, designing an appropriate health-care supply chain is a major concern for health-care providers worldwide. Therefore, the purpose of this paper is to benchmark several potential health-care supply chains in order to design an efficient and effective one in the presence of mixed data.

Design/methodology/approach

To achieve this objective, this research illustrates a hybrid algorithm based on data envelopment analysis (DEA) and goal programming (GP) for designing real-world health-care supply chains with mixed data. A DEA model, along with a data aggregation scheme, is suggested to evaluate the performance of several potential configurations of the health-care supply chains. As part of the proposed approach, a GP model is formulated for dimensioning the supply chains under assessment by finding the levels of the original variables (inputs and outputs) that characterize these supply chains.

Findings

This paper presents an algorithm for modeling health-care supply chains that is designed expressly to handle crisp and interval data simultaneously.

Research limitations/implications

The outcome of this study will assist health-care decision-makers in comparing their supply chains against peers and in dimensioning their resources to achieve a given level of production.

Practical implications

A real application to design a real-life pharmaceutical supply chain for the public ministry of health in Morocco is given to support the usefulness of the proposed algorithm.

Originality/value

The novelty of this paper comes from the development of a hybrid approach based on DEA and GP to design an appropriate real-life health-care supply chain in the presence of mixed data. This approach contributes to assisting health-care decision-makers in designing an efficient and effective supply chain in today’s competitive world.
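A minimal sketch of the kind of DEA building block such an approach relies on: an input-oriented CCR efficiency score for one decision-making unit in multiplier form, solved as a linear program with SciPy. The data are made up, and the paper's actual algorithm additionally aggregates interval data and couples DEA with a goal-programming model, which is not shown here.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 DMUs, 2 inputs (columns of X), 1 output (columns of Y)
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])  # DMU x input
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # DMU x output

def ccr_efficiency(o, X, Y):
    """Input-oriented CCR efficiency of DMU o (multiplier form):
       max  u' y_o
       s.t. v' x_o = 1,  u' y_j - v' x_j <= 0 for all j,  u, v >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [u_1..u_s, v_1..v_m]; linprog minimizes, so negate
    c = np.concatenate([-Y[o], np.zeros(m)])
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
    b_eq = [1.0]
    A_ub = np.hstack([Y, -X])          # one constraint row per DMU j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```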

Article
Publication date: 14 June 2019

Pingping Xiong, Zhiqing He, Shiting Chen and Mao Peng

Abstract

Purpose

In recent years, domestic smog has become increasingly frequent, and its adverse effects have increasingly become the focus of public attention. Mathematical methods offer a way to analyze such problems and provide solutions.

Design/methodology/approach

This paper establishes a new gray model (GM) (1,N) prediction model based on new kernel and degree of grayness sequences for the case in which the distribution information of the interval gray numbers is known. First, the new kernel and degree of grayness sequences of the interval gray number sequence are calculated using the reconstruction definitions of the kernel and degree of grayness. Then, the GM(1,N) model is formed from these new sequences to simulate and predict the kernel and degree of grayness of the interval gray number sequence. Finally, the upper and lower bounds of the interval gray numbers are deduced from the calculation formulas of the kernel and degree of grayness.

Findings

To further verify the practical significance of the model proposed in this paper, the authors apply it to the simulation and prediction of smog. Compared with the traditional GM(1,N) model, the new GM(1,N) prediction model established in this paper achieves better prediction performance and accuracy.

Originality/value

This paper improves the traditional GM(1,N) prediction model and establishes a new GM(1,N) prediction model for the case in which the distribution information of the interval gray numbers of the smog pollutant concentration data is known.
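To give a flavor of kernel-based gray modeling, here is a toy sketch that reduces an interval gray number sequence to its kernel (midpoint) sequence and fits the classic single-variable GM(1,1) model to it. The paper's model is the multivariate GM(1,N) built on both kernel and degree-of-grayness sequences, which is not reproduced here, and the data below are invented.

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=2):
    """Classic GM(1,1): fit on sequence x0 and forecast n_ahead further values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                            # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(1, len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))  # inverse AGO
    return np.concatenate([[x0[0]], x0_hat])

# Invented interval gray number sequence: (lower, upper) bounds per period
lower = np.array([10.0, 12.0, 13.5, 15.0, 17.0])
upper = np.array([14.0, 16.0, 18.5, 20.0, 23.0])

kernel = 0.5 * (lower + upper)                    # kernel = interval midpoint
forecast = gm11_fit_predict(kernel, n_ahead=2)
print("kernel sequence :", kernel)
print("GM(1,1) fit + 2 :", np.round(forecast, 3))
```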

Details

Kybernetes, vol. 49 no. 3
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 4 May 2021

Sandang Guo, Yaqian Jing and Bingjun Li

Abstract

Purpose

The purpose of this paper is to make the multivariable gray model directly applicable to interval gray number sequences. To this end, the matrix form of the interval multivariable gray model (the IMGM(1,m,k) model) is constructed in this paper to simulate and forecast original interval gray number sequences.

Design/methodology/approach

Firstly, the interval gray number is regarded as a three-dimensional column vector, and the parameters of the multivariable gray model are expressed in matrix form. Based on the dynamic gray action and an optimized background value, the interval multivariable gray model is constructed. Finally, two examples and comparisons are carried out to verify the effectiveness of the IMGM(1,m,k) model.

Findings

The model is applied to simulate and predict expert value, foreign direct investment, automobile sales and steel output, respectively. The results show that the proposed model has better simulation and prediction performance than the other two models.

Practical implications

Due to the uncertainty of information and the continuous change of reality, interval gray numbers are used to characterize the full information of the original data. The IMGM(1,m,k) model not only considers the characteristics of parameters changing with time but also takes into account information on the lower, middle and upper bounds of the interval gray numbers simultaneously, making it better suited to practical application.

Originality/value

The main contribution of this paper is to propose a new interval multivariable gray model, which considers the interaction between the lower, middle and upper bounds of interval numbers and does not need to transform interval gray number sequences into real sequences. By combining the different characteristics of each bound of the interval gray numbers, the matrix form of the interval multivariable gray model is established to simulate and forecast interval gray numbers. In addition, the model introduces a dynamic gray action to reflect the changes of the parameters over time. Instead of the whitenization equation of the classic MGM(1,m) model, the difference equation is used directly to obtain the simulated and predicted values.

Details

Grey Systems: Theory and Application, vol. 12 no. 2
Type: Research Article
ISSN: 2043-9377

Article
Publication date: 18 October 2011

Minghu Ha, Jiqiang Chen, Witold Pedrycz and Lu Sun

Abstract

Purpose

Bounds on the rate of convergence of learning processes based on random samples and probability are one of the essential components of statistical learning theory (SLT). The constructive distribution‐independent bounds on generalization are the cornerstone of constructing support vector machines. Random sets and set‐valued probability are important extensions of random variables and probability, respectively. The paper aims to address these issues.

Design/methodology/approach

In this study, the bounds on the rate of convergence of learning processes based on random sets and set‐valued probability are discussed. First, the Hoeffding inequality is enhanced based on random sets, and then, making use of the key theorem, the non‐constructive distribution‐dependent bounds of learning machines based on random sets in set‐valued probability space are revisited. Second, some properties of random sets and set‐valued probability are discussed.

Findings

In the sequel, the concepts of the annealed entropy, the growth function, and VC dimension of a set of random sets are presented. Finally, the paper establishes the VC dimension theory of SLT based on random sets and set‐valued probability, and then develops the constructive distribution‐independent bounds on the rate of uniform convergence of learning processes. It shows that such bounds are important to the analysis of the generalization abilities of learning machines.

Originality/value

SLT is at present considered one of the fundamental theories of statistical learning from small samples.
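For reference, a small sketch of the classical (real-valued, probability-based) constructive distribution-independent bound that the random-set results generalize: with probability at least 1 − η, the risk is bounded by the empirical risk plus a VC confidence term. The formula below is one commonly quoted textbook form of the Vapnik bound, not the set-valued extension developed in the paper, and the numbers are illustrative.

```python
from math import log, sqrt

def vc_confidence(l, h, eta):
    """VC confidence term in the classical distribution-independent bound:
    with probability >= 1 - eta,
        R <= R_emp + sqrt((h * (ln(2l/h) + 1) - ln(eta/4)) / l)
    where l = sample size and h = VC dimension."""
    return sqrt((h * (log(2 * l / h) + 1) - log(eta / 4)) / l)

# Illustrative numbers: 10,000 samples, VC dimension 50, confidence 95%
l, h, eta = 10_000, 50, 0.05
R_emp = 0.08                                   # hypothetical empirical risk
print(f"VC confidence term: {vc_confidence(l, h, eta):.4f}")
print(f"Risk bound        : {R_emp + vc_confidence(l, h, eta):.4f}")
```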
