L. Pardo, D. Morales and I.J. Taneja
Abstract
Fisher’s amount of information is the best-known parametric measure in the statistical literature. However, the well-known regularity assumptions do not hold for every family of probability density functions. To avoid this problem, several parametric measures have been proposed on the basis of divergence measures. In this work, parametric measures of information are obtained on the basis of the generalized Jensen difference divergence measures. When the regularity assumptions hold, their relations with Fisher’s amount of information are also studied.
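As a concrete illustration of the kind of construction described above, the following Python sketch evaluates the Shannon-entropy instance of the Jensen difference divergence (the Jensen–Shannon divergence) between two nearby members of a normal location family, whose Fisher information equals 1. The grid discretization and the 1/8 rescaling used to recover Fisher's amount of information are illustrative assumptions, not the paper's general construction.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (natural log) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def jensen_difference(p, q):
    """Jensen difference divergence: H((p+q)/2) - (H(p) + H(q))/2."""
    return shannon_entropy((p + q) / 2) - 0.5 * (
        shannon_entropy(p) + shannon_entropy(q))

# N(theta, 1) discretized on a fine grid; its Fisher information is 1.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def density(theta):
    f = np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2.0 * np.pi)
    return f * dx  # probability mass attached to each grid point

theta, delta = 0.0, 1e-2
jd = jensen_difference(density(theta), density(theta + delta))
# For a smooth family, J(f_theta, f_{theta+delta}) ~ (delta**2 / 8) * I(theta),
# so rescaling the divergence recovers Fisher's amount of information:
print(8.0 * jd / delta ** 2)  # approximately 1.0
```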
Abstract
The decision rule which minimizes the probability of error in the discrimination problem is the Bayes decision rule, which assigns x to the class with the highest a posteriori probability. This rule leads to a partial probability of error given by Pe(x) = 1 − max_i p(C_i|x) for each x ∈ X. Prior to observing X, the probability of error associated with X is defined as Pe = E_X[Pe(x)]. Tanaka, Okuda and Asai formulated the discrimination problem with fuzzy classes and fuzzy information using the probability of fuzzy events and derived a bound for the average error probability when the decision in the classifier is made according to the fuzzified Bayes method. The aim is to obtain bounds for the average error probability in terms of the (α, β)-information energy, when the decision in the classifier is made according to the fuzzified Bayes method.
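A minimal sketch of the crisp (non-fuzzy) quantities just defined, with hypothetical posteriors; the fuzzified Bayes method and the (α, β)-information energy bounds themselves are not reproduced here.

```python
import numpy as np

# Hypothetical two-class problem in which X takes four discrete values.
p_x = np.array([0.1, 0.4, 0.3, 0.2])   # marginal P(X = x)
post = np.array([[0.9, 0.1],           # rows: a posteriori P(C_i | x)
                 [0.6, 0.4],
                 [0.3, 0.7],
                 [0.5, 0.5]])

pe_x = 1.0 - post.max(axis=1)  # partial error Pe(x) = 1 - max_i p(C_i|x)
pe = np.dot(p_x, pe_x)         # Pe = E_X[Pe(x)], the Bayes error
print(pe)                      # 0.36 for these hypothetical numbers
```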
Esteban, J.A. Pardo, M.C. Pardo and M.L. Vicente
Abstract
Several coefficients, called divergences, have been suggested in the statistical literature to reflect the fact that some probability distributions are “closer together” than others, and consequently that it may be easier to distinguish between the distributions of one pair than between those of another. When comparing three biological populations, it is often interesting to measure how two of them “move apart” from the third. Deals with the statistical analysis of this problem by means of bivariate divergence statistics. Provides a unified study, depicting the behaviour and relative merits of traditional divergences, by using the (h, φ)-divergence family of statistics introduced by Menéndez et al.
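As a toy illustration of the “moving apart” idea, the sketch below compares two hypothetical populations against a third using the Kullback–Leibler divergence, one classical member of the (h, φ) family; the estimation step and the asymptotic theory of the bivariate statistics are omitted.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence, one member of the (h, phi) family."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Hypothetical category frequencies for three biological populations.
p1 = np.array([0.5, 0.3, 0.2])
p2 = np.array([0.4, 0.4, 0.2])
p3 = np.array([0.2, 0.3, 0.5])   # the reference population

# Bivariate view: how far p1 and p2 each sit from the reference p3.
print(kl(p1, p3), kl(p2, p3))
```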
Esteban and D. Morales
Abstract
Uses a unified expression, called the (h, v, φ1, φ2)-entropy, to study the asymptotic properties of entropy estimates. Shows that the asymptotic distribution of entropy estimates, in a stratified random sampling set-up, is normal. Based on the asymptotic precision of entropy estimates, optimum sample size allocations are developed under various constraints. Gives the relative precision of stratified and simple random sampling. Also provides applications to test statistical hypotheses and to build confidence intervals.
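For concreteness, here is a sketch of a plug-in entropy estimate under stratified random sampling, using Shannon entropy as one member of the unified family; the stratum weights, sample sizes and distributions are hypothetical, and the allocation optimization itself is not shown.

```python
import numpy as np

def stratum_probs(sample, k):
    """Relative frequencies of the k categories within one stratum."""
    return np.bincount(sample, minlength=k) / len(sample)

# Hypothetical stratified sample: two strata with known weights W_h.
rng = np.random.default_rng(0)
weights = [0.7, 0.3]
strata = [rng.choice(4, size=500, p=[0.4, 0.3, 0.2, 0.1]),
          rng.choice(4, size=300, p=[0.25, 0.25, 0.25, 0.25])]

# Plug-in estimate of the population distribution p = sum_h W_h p_h,
# then its Shannon entropy (asymptotically normal as the strata grow).
k = 4
p_hat = sum(w * stratum_probs(s, k) for w, s in zip(weights, strata))
h_hat = -np.sum(p_hat[p_hat > 0] * np.log(p_hat[p_hat > 0]))
print(h_hat)
```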
Abstract
Consideration is given to the problem of optimally choosing a fixed number of experiments, in sequential form, from a class of available experiments. The application of the φ-entropy measure in the sequential design of experiments is studied by defining the φ-terminal entropy. Finally, the behaviour of the process when a sufficient experiment exists is established. An illustrative example, which demonstrates the usefulness of the results obtained, is included.
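The sketch below illustrates the flavour of one greedy step of such a design, taking Shannon entropy as the φ-entropy: among the candidate experiments, pick the one whose expected terminal (posterior) entropy is smallest. The priors and likelihoods are hypothetical, and the paper's general φ and sufficiency analysis are not reproduced.

```python
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(prior, likelihood):
    """Average over outcomes y of the entropy of the posterior p(theta|y);
    likelihood[t, y] = P(y | theta_t) for one candidate experiment."""
    joint = prior[:, None] * likelihood   # p(theta, y)
    p_y = joint.sum(axis=0)               # marginal of the outcome
    post = joint / p_y                    # column y holds p(theta | y)
    return sum(p_y[y] * shannon(post[:, y]) for y in range(len(p_y)))

# Hypothetical: two states of nature, two candidate binary experiments.
prior = np.array([0.5, 0.5])
e1 = np.array([[0.9, 0.1], [0.2, 0.8]])   # informative experiment
e2 = np.array([[0.6, 0.4], [0.5, 0.5]])   # weakly informative experiment

# Greedy sequential step: choose the experiment with the smaller
# expected terminal entropy.
scores = [expected_posterior_entropy(prior, e) for e in (e1, e2)]
print(np.argmin(scores), scores)
```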
Abstract
The theory of possibility (Zadeh, Sugeno) and the theory of relative information (Jumarie) both aim to deal with the meaning of information, but their mathematical frameworks are quite different. In the first approach, possibility is described either by fuzziness (Zadeh) or by generalized measures (Sugeno), and in the second, possibility is obtained as the result of observing probability via an observation process with informational invariance. Shows that a combination of (classical) information theory with generalized maximum likelihood via geometric programming exhibits a link between relative information, fuzziness and possibility. Some consequences are outlined.
M.L. Menéndez, J.A. Pardo, L. Pardo and M.C. Pardo
Abstract
Read (1984) presented an asymptotic expansion for the distribution function of the power divergence statistics whose speed of convergence depends on the parameter of the family. Generalizes that result by considering the family of (h, φ)-divergence measures. Considers two other, closer approximations to the exact distribution. Compares these three approximations for Rényi’s statistic in small samples.
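For concreteness, the sketch below computes Read's power divergence statistic for a hypothetical goodness-of-fit problem together with its first-order chi-squared approximation; the closer approximations compared in the paper, and the general (h, φ) extension, are not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def power_divergence(observed, expected, lam):
    """Read's power divergence statistic 2nI^lambda."""
    if lam == 0:  # likelihood-ratio limit, G^2
        return 2.0 * np.sum(observed * np.log(observed / expected))
    return (2.0 / (lam * (lam + 1))) * np.sum(
        observed * ((observed / expected) ** lam - 1.0))

observed = np.array([18, 22, 31, 29])        # hypothetical cell counts
expected = np.full(4, observed.sum() / 4)    # uniform null hypothesis

stat = power_divergence(observed, expected, lam=2 / 3)  # Cressie-Read choice
print(stat, chi2.sf(stat, df=3))  # first-order chi-square(3) p-value
```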
Gerry Frizelle and Ivian Casali
Abstract
Purpose
The purpose of this paper is to look at how novel measures of supply chain performance can be used to identify unnecessary waste in terms of under-loaded vehicles and extended delivery times, along with their causes. In particular it focuses on problems that can be tackled without the need for capital expenditure. The measures go under the collective name of “turbulence”. This represents the chain deviating from its goals. Quantifying unnecessary waste then allows unnecessary carbon emissions to be estimated while pointing to what changes will have the biggest impact. The measures have been used by three companies and some early results are provided.
Design/methodology/approach
The approach was first to use evidence from the literature to show the value of having a new measure. Next came the creation of one specific new measure, called relative turbulence, a relative form of the more general concept of turbulence. Third, the measure was tested in the field with data from companies. Finally, it is shown how carbon emissions can be derived; a sketch of one possible entropic reading follows this paragraph.
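The abstract does not reproduce the formula for relative turbulence, so the Python sketch below is only one plausible entropic reading, assuming deliveries are classified into an "on plan" state and a set of deviation states (late, under-loaded, and so on): turbulence is taken as the Shannon entropy of the state mix, and relative turbulence as its share of the maximum attainable entropy. All state names and counts are hypothetical.

```python
import numpy as np

def shannon_bits(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical delivery records, each classified by how it deviated
# from the plan (the goal state plus three deviation states).
states = ["on_plan", "late", "under_loaded", "late_and_under_loaded"]
counts = np.array([70, 12, 10, 8])
p = counts / counts.sum()

turbulence = shannon_bits(p)                  # entropy of the state mix
relative_turbulence = turbulence / np.log2(len(states))  # fraction of max
print(turbulence, relative_turbulence)
```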
Findings
The first finding is that the analysis can pinpoint sources of unnecessary emissions. Second, the results suggest excessive emissions arise both through poor planning and through poor practice. Third, there is a need for two models: one from the users’ viewpoint and one from the carriers’ viewpoint. Finally, the approach can be used with field data that is currently available, thus avoiding expensive one-off studies.
Research limitations/implications
The main research implication is that entropic measures are useful and can provide fresh insights. Being generic, they may be applicable in other contexts. However, they can be mathematically tricky to use.
Practical implications
The analysis has been tested in companies and the findings are included in the paper. They provide an insight that is not available solely from current measures. Businesses can not only measure emissions but also start to pinpoint causes.
Originality/value
The main original contributions are the introduction of a new measure based on entropic principles, particularly the one called relative turbulence; the juxtaposition of this measure with standard measures to gain new insights; and the idea that supply networks can be built from what is called the irreducible chain.
Ram B. Ramachandran and Chabi Gupta
Abstract
Purpose: There has been a growing debate on whether generative AI can serve as the modern-day equivalent of white-collar knowledge workers. In a recent post, technology magnate Bill Gates boldly proclaimed that ChatGPT would soon become the quintessential white-collar worker of tomorrow (Dean, 2023). This is indeed an exciting prospect, as generative AI advances at breakneck speed.
Need for the Study: This research delves into the implications of such advancements for industries reliant on skilled employees. It raises questions about how these individuals will adjust their skill sets going forward, given the proliferation of generative AI solutions poised to disrupt traditional roles previously occupied by humans.
Methodology: The study uses an exploratory framework to understand AI’s implications on job roles, productivity, and skill requirements. It introduces generative AI and its relevance, focusing on how it could transform white-collar jobs.
Findings: One thing seems clear: generative AI will have an impact on future employment opportunities. However, the technology still has limitations, which could lead to unintended consequences. While capable of performing certain functions precisely and accurately, it cannot fully replace the reasoning abilities or cognitive flexibility innate in human workers tasked with knowledge-based work.
Practical Implications: The potential implications for workforce development, policy-making, and future research in AI and labor economics are highlighted. The study also offers insights into the integration process, its benefits and challenges, and the changing nature of white-collar work due to generative AI.
Mukul, Sanjay Taneja, Ercan Özen and Neha Bansal
Abstract
Introduction: Skill development is crucial in developing economies, enhancing productivity and creating employment opportunities. At the macro level, it also leads to industrial development and economic growth.
Purpose: The research aims to identify the types of skills required to increase the employability of labour. It also aims to define the challenges and opportunities in skill development that can drive change.
Need of the Study: Studying opportunities and challenges for skill development in developing economies is essential for achieving sustainable economic growth, reducing poverty, increasing employment opportunities, and promoting global competitiveness.
Research Methodology: Published research is reviewed to identify the skill set needed to increase labour productivity. To draw lessons, skill development initiatives by various companies are identified and presented as case studies. Additionally, several government programs are examined to assess the possibilities and prospects for skill development in the Indian market.
Practical Implications: The research will be valuable in micro- and macro-level decision making. At the micro level, it helps a business person initiate the skill development of employees by using government schemes. Nations other than India can draw on it to understand the policy framework for skill development.
Findings: The term ‘skilling’ has become fashionable. Owing to the scarcity of skill-based earnings data, only a few studies examine the return on skill (ROS) in the labour market. Skill development plays a significant role in bringing change at the micro and macro levels. Hence, it is necessary to exploit all opportunities for skill development.