Search results

1 – 10 of 16
Article
Publication date: 1 January 1986

N.A. KATSAKOS‐MAVROMICHALIS, M.A. TZANNES and N.S. TZANNES

Abstract

Four entropy methods (MESA, SMESA, MCESA and SMCESA) are reviewed and then used in the problem of resolving two sinusoids in the presence of additive white and 1/f noise. SMCESA appears to have the overall edge in this study. The frequency resolution problem arises, of course, in communications, radar and related applications.
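
The paper gives no implementation, but classical MESA coincides with autoregressive (AR) spectral estimation, so a minimal numpy sketch of Burg's recursion conveys the flavor of the resolution experiment. The test signal, model order and frequencies below are illustrative assumptions, not values from the paper.

import numpy as np

def burg_ar(x, order):
    # Burg's recursion: fit an AR model by minimizing the combined
    # forward/backward prediction error power. The resulting AR spectrum
    # is the classical maximum entropy (MESA) estimate.
    x = np.asarray(x, dtype=float)
    f = x.copy()               # forward prediction errors
    b = x.copy()               # backward prediction errors
    a = np.zeros(order)        # AR coefficients a_1..a_p
    e = np.dot(x, x) / len(x)  # prediction error power
    for m in range(order):
        fm, bm = f[m + 1:], b[m:-1]
        k = -2.0 * np.dot(fm, bm) / (np.dot(fm, fm) + np.dot(bm, bm))
        f[m + 1:], b[m + 1:] = fm + k * bm, bm + k * fm
        a_prev = a[:m].copy()
        a[m] = k                           # reflection coefficient
        a[:m] = a_prev + k * a_prev[::-1]  # Levinson-style coefficient update
        e *= 1.0 - k * k
    return a, e

def mesa_spectrum(a, e, freqs):
    # P(f) = e / |1 + sum_k a_k exp(-2j*pi*f*k)|^2
    k = np.arange(1, len(a) + 1)
    A = 1.0 + np.exp(-2j * np.pi * np.outer(freqs, k)) @ a
    return e / np.abs(A) ** 2

# Two close sinusoids in additive white noise (illustrative parameters).
rng = np.random.default_rng(0)
t = np.arange(256)
x = (np.sin(2 * np.pi * 0.20 * t) + np.sin(2 * np.pi * 0.22 * t)
     + 0.1 * rng.standard_normal(t.size))
a, e = burg_ar(x, order=24)
freqs = np.linspace(0.0, 0.5, 2048)
P = mesa_spectrum(a, e, freqs)
mask = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]) & (P[1:-1] > 0.1 * P.max())
print("spectral peaks near f =", freqs[1:-1][mask])  # expect ~0.20 and ~0.22

The 1/f-noise case and the smoothed/cross-entropy variants (SMESA, MCESA, SMCESA) modify the estimate, but the AR spectrum above is the common starting point.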

Details

Kybernetes, vol. 15 no. 1
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 February 1991

Rallis C. Papademetriou

Abstract

The possibility of improving the quality of the minimum relative entropy spectral estimates by properly selecting the set of autocorrelation values is demonstrated. The study concentrates on two aspects: resolvability and accuracy of peak location. Several numerical examples are given.
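
For orientation, the optimization behind minimum relative entropy (minimum cross-entropy) estimation has a standard form; the notation below is generic background, not the paper's. Given a prior estimate q and expectation constraints,

\min_{p}\; D(p\,\|\,q)=\int p(x)\ln\frac{p(x)}{q(x)}\,dx
\quad\text{subject to}\quad \int p(x)\,g_k(x)\,dx = r_k,\qquad k=1,\dots,m,

the minimizer is the exponentially tilted prior

p^{*}(x)=q(x)\,\exp\!\Bigl(\lambda_0+\sum_{k=1}^{m}\lambda_k\,g_k(x)\Bigr).

In the spectral setting studied here, the constraints correspond to the selected autocorrelation values, which is why the choice of that set affects resolvability and peak location.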

Details

Kybernetes, vol. 20 no. 2
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 April 1998

Rallis C. Papademetriou

Abstract

This paper presents an overview of three information‐theoretic methods, which have been used extensively in many areas such as signal/image processing, pattern recognition and statistical inference. These are: the maximum entropy (ME), minimum cross‐entropy (MCE) and mutual information (MI) methods. The development history of these techniques is reviewed, their essential philosophy is explained, and typical applications, supported by simulation results, are discussed.
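
The three functionals can be stated compactly; these are the standard textbook definitions rather than anything specific to the paper:

\text{ME:}\quad \max_{p}\;-\!\int p(x)\ln p(x)\,dx,
\qquad
\text{MCE:}\quad \min_{p}\;\int p(x)\ln\frac{p(x)}{q(x)}\,dx\ \ (\text{prior } q),

\text{MI:}\quad I(X;Y)=\iint p(x,y)\ln\frac{p(x,y)}{p(x)\,p(y)}\,dx\,dy,

each optimized subject to whatever constraints encode the available data (moments, autocorrelations, average error, and so on).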

Details

Kybernetes, vol. 27 no. 3
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 20 February 2007

J.P. Noonan and Prabahan Basu

Abstract

Purpose

In many problems involving decision‐making under uncertainty, the underlying probability model is unknown but partial information is available. In some approaches to this problem, the available prior information is used to define an appropriate probability model for the system uncertainty through a probability density function. When the prior information is available as a finite sequence of moments of the unknown probability density function (PDF) defining the appropriate probability model for the uncertain system, the maximum entropy (ME) method derives a PDF from an exponential family to define an approximate model. This paper aims to investigate some optimality properties of the ME estimates.
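
The exponential family referred to here is the standard ME solution form. Assuming power-moment constraints E[X^k] = μ_k for k = 1, …, m (one common choice; the paper's constraints may be more general), the maximizing density is

p(x,\alpha)=\exp\!\Bigl(-\alpha_0-\sum_{k=1}^{m}\alpha_k x^{k}\Bigr),

with the multipliers α_0, …, α_m determined by normalization and the m moment conditions.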

Design/methodology/approach

For n>m, the exact model can be best approximated by one of an infinite number of unknown PDFs from an n parameter exponential family. The upper bound of the divergence distance between any PDF from this family and the m parameter exponential family PDF defined by the ME method is derived. A measure of the adequacy of the model defined by the ME method is thus provided.

Findings

These results may be used to establish confidence intervals on the estimate of a function of the random variable when the ME approach is employed. Additionally, it is shown that when working with large samples of independent observations, a probability density function (PDF) can be defined from an exponential family to model the uncertainty of the underlying system with measurable accuracy. Finally, a relationship with maximum likelihood estimation for this case is established.

Practical implications

The so‐called known moments problem addressed in this paper has a variety of applications in learning, blind equalization and neural networks.

Originality/value

An upper bound is derived for the error in approximating an unknown density function f(x) by its ME estimate, obtained as a PDF p(x, α) from an m parameter exponential family subject to m moment constraints. The error bound helps decide whether the number of moment constraints is adequate for modeling the uncertainty in the system under study. In turn, this allows one to establish confidence intervals on an estimate of some function of the random variable X, given the known moments. It is also shown how, when working with a large sample of independent observations instead of precisely known moment constraints, a density from an exponential family can be defined to model the uncertainty of the underlying system with measurable accuracy. In this case, a relationship to ML estimation is established.

Details

Kybernetes, vol. 36 no. 1
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 February 1981

N.S. TZANNES and T.G. AVGERIS

Abstract

In the first part of this paper a new method of applying the Maximum Entropy Principle (MEP) is presented, which makes use of a “frequency related” entropy, and which is valid for all stationary processes. The method is believed valid only in the case of discrete spectra. In the second part of the paper, a method of estimating continuous spectra in the presence of noise is presented, which makes use of the Mutual Information Principle (MIP). Although the method proceeds smoothly in mathematical terms, there appear to be some difficulties in interpreting the physical meaning of some of the expressions. Examples of the use of both methods are presented, for the usual practical problem of estimating a power spectrum for a process whose autocorrelation function is partially known a priori.
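
For the closing practical problem, the classical MEP answer is worth recalling as background (Burg's result, not this paper's frequency-related variant): if the autocorrelation r(k) is known only for lags |k| ≤ p, the maximum entropy spectrum is the all-pole form

P(f)=\frac{\sigma^{2}}{\left|1+\sum_{k=1}^{p} a_k\,e^{-j2\pi fk}\right|^{2}},

with the a_k and σ² determined from the known lags through the Yule–Walker equations.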

Details

Kybernetes, vol. 10 no. 2
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 April 1987

RALLIS C. PAPADEMETRIOU, THOMAS J. KETSEOGLOU and NICOLAOS S. TZANNES

Abstract

The Mutual Information Principle (MIP) is reviewed as a method of assigning a prior probability mass or density function to a random variable in the presence of some prior information. It is compared to the Maximum Information (MI) method and shown to be more general and inclusive of the prior data available to the investigator. The image restoration problem is outlined as an inverse source problem with insufficient data for yielding a unique solution.
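
The inverse source formulation mentioned in the last sentence is conventionally written as follows; the symbols are our illustrative notation, not the paper's:

g = Hf + n,

where g is the observed image, H the (known or estimated) blurring operator, f the object to be restored and n noise. Because many objects f are consistent with a given g, a prior-assignment rule such as the MIP is needed to single out one solution.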

Details

Kybernetes, vol. 16 no. 4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 April 1976

NICOLAOS S. TZANNES

Abstract

Rate Distortion Theory is a branch of Information Theory applicable to the case when the entropy of the source exceeds the capacity of the channel. A rate distortion function R(D) is defined between the input and output alphabets X, Y of a channel. It can be shown that it is possible to design a communication system which achieves a fidelity D when the capacity C of the channel is greater than R(D). In this paper, the formulation of Rate Distortion Theory is used for the problem of derived probability models. The variables X, Y and the channel are given new interpretations, and the result is an ability to pick a derived probability model for Y when X is of a known probability structure. The fidelity criterion assumes the role of an error function in this reinterpretation. Two specific cases are discussed.
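
The function R(D) has Shannon's standard definition, which the paper then reinterprets:

R(D)=\min_{p(y\mid x)\,:\;\mathbb{E}[d(X,Y)]\le D} I(X;Y),

the least mutual information between source X and reproduction Y compatible with an average distortion of at most D; the source coding theorem then guarantees that fidelity D is achievable whenever the channel capacity C exceeds R(D).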

Details

Kybernetes, vol. 5 no. 4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 April 1978

N.S. TZANNES

Abstract

A simple extension of the concept of the inner product leads to orthonormal expansions of time‐shifted signals with coefficients dependent on the shift variable. It is shown that such expansions have their counterparts of Parseval's identity and the Bessel inequality. The Projection Theorem holds, and versions of Mercer's theorem and the Karhunen–Loève expansion are also shown to hold, in a non‐stochastic regime. The approach leads to new interpretations of time correlation functions and Fourier series expansions.
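
The abstract does not reproduce the definitions, but one plausible reading of the construction, under assumed notation, is: for an orthonormal system {φ_n} and a signal x, the shift-dependent coefficients and the Parseval counterpart take the form

c_n(\tau)=\int x(t+\tau)\,\varphi_n(t)\,dt,
\qquad
\sum_n \lvert c_n(\tau)\rvert^{2}=\lVert x\rVert^{2},

with the Bessel inequality \sum_n \lvert c_n(\tau)\rvert^{2}\le\lVert x\rVert^{2} when the system is not complete. This is an illustration of the idea, not the paper's exact construction.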

Details

Kybernetes, vol. 7 no. 4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 February 1983

T.G. AVGERIS

Abstract

The Mutual Information Principle (MIP) has already been used in various areas, as a generalization of the Maximum Entropy Principle (MEP), in the very common situation where our measurements of a random variable contain errors having some known average value. An axiomatic derivation of the MIP is given below, in order to place it in a rigorous mathematical framework with the least possible intuitive arguments. The procedure followed is similar to the one proposed by Shore and Johnson for the Minimum Cross‐entropy Principle, and some relationships between the two methods of inductive inference are pointed out.
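
In the situation described, where measurements of X yield a related variable Y and the error has a known average value, the MIP assigns the joint distribution by solving (a standard statement of the principle; notation ours):

\min_{p(x,y)}\; I(X;Y)=\iint p(x,y)\ln\frac{p(x,y)}{p(x)\,p(y)}\,dx\,dy
\quad\text{subject to}\quad \mathbb{E}\,[c(X,Y)]=d,

with c the error function and d its known average; as the abstract notes, the MEP is recovered as a special case.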

Details

Kybernetes, vol. 12 no. 2
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 January 1983

THEODORE G. AVGERIS

Abstract

The problem of modeling the performance distributions of queueing systems, on the basis of partial knowledge of the service time distribution, is examined from an information theory point of view. A new method is proposed, based on the Mutual Information Principle (MIP) which generalizes the Maximum Entropy Principle (MEP) approach proposed by Shore. An example is given to illustrate the method and its advantages are discussed.
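
As a concrete instance of the underlying maximum entropy step: if only the mean service time 1/μ is known, the ME density on [0, ∞) is the exponential law, a standard result that illustrates the kind of partial knowledge the paper starts from:

\max_{p}\;-\!\int_{0}^{\infty} p(t)\ln p(t)\,dt
\quad\text{subject to}\quad \int_{0}^{\infty} t\,p(t)\,dt=\frac{1}{\mu}
\;\;\Longrightarrow\;\; p(t)=\mu\,e^{-\mu t}.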

Details

Kybernetes, vol. 12 no. 1
Type: Research Article
ISSN: 0368-492X
