Search results

1 – 10 of over 4000
Book part
Publication date: 14 October 2009

Rune Elvik, Alena Høye, Truls Vaa and Michael Sørensen

Details

The Handbook of Road Safety Measures
Type: Book
ISBN: 978-1-84855-250-0

Article
Publication date: 29 November 2022

H.D. Arora and Anjali Naithani

Abstract

Purpose

The purpose of this paper is to create a numerical technique to tackle the challenge of selecting software reliability growth models (SRGMs).

Design/methodology/approach

A real-time case study in which five SRGMs were tested against a set of four selection indexes was used to demonstrate the functionality of the TOPSIS approach. As a result of the current research, a ranking of the different SRGMs is generated based on their relative closeness.

Findings

An innovative approach to SRGM selection has been developed in the TOPSIS environment by blending the entropy technique and the distance-based approach.

Originality/value

In any multi-criteria decision-making process, ambiguity is a crucial issue, and various tools and methodologies have been proposed to deal with uncertainty in decision-making. Pythagorean fuzzy sets (PFSs) are perhaps the most contemporary tool for dealing with ambiguity. This article introduces novel tangent distance-entropy measures under PFSs. Additionally, a numerical illustration is used to ascertain the strength and validity of the suggested measures.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 7
Type: Research Article
ISSN: 0265-671X

Book part
Publication date: 30 May 2013

Andreas Schwab and William H. Starbuck

Abstract

This chapter reports on a rapidly growing trend in data analysis – analytic comparisons between baseline models and explanatory models. Baseline models estimate values for the dependent variable in the absence of hypothesized causal effects. Thus, the baseline models discussed in this chapter differ from the baseline models commonly used in sequential regression analyses.

Baseline modelling entails iteration: (1) researchers develop baseline models to capture key patterns in the empirical data that are independent of the hypothesized effects; (2) they compare these patterns with the patterns implied by their explanatory models; (3) they use the derived insights to improve their explanatory models; (4) they iterate by comparing their improved explanatory models with modified baseline models.

The chapter draws on methodological literature in economics, applied psychology and the philosophy of science to point out fundamental features of baseline modelling. Examples come from research in international business and management, emerging market economies and developing countries.

Baseline modelling offers substantial advantages for theory development. Although analytic comparisons with baseline models originated in some research fields as early as the 1960s, they have not been widely discussed or applied in international management. Baseline modelling takes a more inductive and iterative approach to modelling and theory development. Because baseline modelling holds substantial potential, international-management scholars should explore its opportunities for advancing scientific progress.

Details

Philosophy of Science and Meta-Knowledge in International Business and Management
Type: Book
ISBN: 978-1-78190-713-9

Article
Publication date: 7 February 2025

Jasmine Maani, A. Dunstan Rajkumar and Nirakar Barik

Abstract

Purpose

Mergers and acquisitions (M&A) are increasingly being adopted as a strategic approach to consolidating financial institutions and banks, with a focus on enhancing capital strength, broadening business operations and maintaining financial stability. Operational efficiency within the banking sector is crucial for effective functioning and delivering quality services to customers. This study analyzes the efficiency of large-scale mergers involving several Indian public sector banks announced between 2016 and 2019.

Design/methodology/approach

The study used Data Envelopment Analysis (DEA), logistic regression, the Malmquist Productivity Index (MPI) and Stochastic Frontier Analysis (SFA) for the analysis.

Findings

The results from DEA indicate that the average efficiency of merged public sector banks improved post-merger, with four out of six banks achieving technical efficiency in the post-merger period. However, efficiency varied, with operational technical efficiency (OTE) scores ranging from 65.8% to 100%. The SFA analysis shows that loanable funds are key drivers of both interest and non-interest income, while significant inefficiencies, particularly in labor, require attention. Physical capital plays a secondary role in income generation. The Malmquist productivity index analysis reveals a 1.6% average productivity growth in the post-merger year Y+1, driven by technological change, with positive TFP in Y+1 and Y+2 and a decline in Y+3. Only four of the six merged banks, namely Bank of Baroda, Union Bank of India, Canara Bank and Punjab National Bank, achieved positive TFP growth, primarily due to improvement in technical efficiency. Additionally, the logistic regression analysis indicates that asset quality and size have statistically significant regression coefficients in predicting OTE.

Originality/value

This paper contributes to the existing literature on banking, mergers and acquisitions, and financial economics.

Details

Journal of Economic Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0144-3585

Book part
Publication date: 1 August 2012

Andreas Schwab and William H. Starbuck

Abstract

Purpose – This chapter reports on a rapidly growing trend in the analysis of data about emerging market (EM) economies – the use of baseline models as comparisons for explanatory models. Baseline models estimate expected values for the dependent variable in the absence of a hypothesized causal effect but set higher standards than do traditional null hypothesis tests, which expect no effect.

Design/methodology/approach – Although the use of baseline models in research originated in the 1960s, it has not been widely discussed, or even acknowledged, in the EM literature. We surveyed published EM studies to determine trends in the use of baseline models.

Findings – We categorize and describe the different types of baseline models that scholars have used in EM studies, and draw inferences about the differences between more effective and less effective uses of baseline models.

Value – We believe that comparisons with baseline models offer distinct methodological advantages for the iterative development of better explanatory models and a deeper understanding of empirical phenomena.

Details

West Meets East: Toward Methodological Exchange
Type: Book
ISBN: 978-1-78190-026-0

Article
Publication date: 23 September 2024

Bernardo Cerqueira de Lima, Renata Maria Abrantes Baracho, Thomas Mandl and Patricia Baracho Porto

Abstract

Purpose

Social media platforms that disseminate scientific information to the public during the COVID-19 pandemic highlighted the importance of the topic of scientific communication. Content creators in the field, as well as researchers who study the impact of scientific information online, are interested in how people react to these information resources and how they judge them. This study aims to devise a framework for extracting large social media datasets and find specific feedback to content delivery, enabling scientific content creators to gain insights into how the public perceives scientific information.

Design/methodology/approach

To collect public reactions to scientific information, the study focused on Twitter users who are doctors, researchers, science communicators or representatives of research institutes, and processed their replies for two years from the start of the pandemic. The study aimed at developing a solution powered by topic modeling, enhanced by manual validation and other machine learning techniques such as word embeddings, that is capable of filtering massive social media datasets in search of documents related to reactions to scientific communication. The architecture developed in this paper can be replicated for finding documents related to any niche topic in social media data. As a final step of our framework, we also fine-tuned a large language model to perform the classification task with even more accuracy, forgoing the need for further human validation after the first step.

Findings

We provide a framework that receives a large document dataset and, with a small degree of human validation at different stages, is able to filter out documents within the corpus that are relevant to a very underrepresented niche theme inside the database, with much higher precision than traditional state-of-the-art machine learning algorithms. Performance was improved even further by fine-tuning a large language model based on BERT, which allows the model to classify even larger unseen datasets in search of reactions to scientific communication without the need for further manual validation or topic modeling.

Research limitations/implications

The challenges of scientific communication are even higher with the rampant increase of misinformation in social media, and the difficulty of competing in a saturated attention economy of the social media landscape. Our study aimed at creating a solution that could be used by scientific content creators to better locate and understand constructive feedback toward their content and how it is received, which can be hidden as a minor subject between hundreds of thousands of comments. By leveraging an ensemble of techniques ranging from heuristics to state-of-the-art machine learning algorithms, we created a framework that is able to detect texts related to very niche subjects in very large datasets, with just a small amount of examples of texts related to the subject being given as input.

Practical implications

With this tool, scientific content creators can sift through their social media following and quickly understand how to adapt their content to their current users' needs and standards of content consumption.

Originality/value

This study aimed to find reactions to scientific communication in social media. We applied three methods with human intervention and compared their performance. This study shows, for the first time, the topics of interest discussed in Brazil during the COVID-19 pandemic.

Details

Data Technologies and Applications, vol. 59 no. 1
Type: Research Article
ISSN: 2514-9288

Book part
Publication date: 25 January 2017

Details

Building Markets for Knowledge Resources
Type: Book
ISBN: 978-1-78635-742-7

Article
Publication date: 30 August 2023

Nitin Arora and Shubhendra Jit Talwar

Abstract

Purpose

Fiscal outlay efficiency matters when the performance-based allocation of funds is made to state governments by the central government in a federal economy like India. The efficiency canon of public expenditure is also a key aspect of the field of public economics. Thus, a study to evaluate the efficiency of the fiscal outlay of Indian states has been conducted.

Design/methodology/approach

The paper offers a three-division paradigm under the Network Data Envelopment Analysis framework to compare the performance of fiscal entities (here, Indian state governments) in converting available fiscal resources into desired short-run and long-run growth and development objectives. The network efficiency score has been taken as a measure of the quality of fiscal outlay management and is trifurcated into divisional efficiencies representing the budgeting process, the fiscal outlay efficiency process and the fiscal outlay effectiveness process.

Findings

It has been noticed that the states are underperforming in achieving short-run growth targets, and so the efficiency process division has been identified as a major source of fiscal underperformance. Suboptimal allocation of fiscal expenditure under various heads within the fiscal resources, as captured by the budgeting process, is another major cause of fiscal underperformance.

Practical implications

The study proposes a three-division paradigm that takes into account the efficiency of a state in (1) planning the budget, (2) achieving short-run growth targets and (3) achieving long-run development targets. These three stages are named budgeting process efficiency, fiscal outlay efficiency and fiscal outlay effectiveness, respectively. Together they constitute a new paradigm, called the BEE paradigm, for evaluating the performance of fiscal entities in terms of fiscal outlay efficiency.

Originality/value

In the existing literature on measuring the efficiency of public expenditure, public sector outputs have been modelled as a function of fiscal expenditure as an input, treating the outlay as an exogenous variable. In the present context, fiscal expenditure is treated as endogenous to the budgeting process. The high inefficiency found in the budgeting process supports this treatment as well.

Details

Benchmarking: An International Journal, vol. 31 no. 9
Type: Research Article
ISSN: 1463-5771

Book part
Publication date: 25 January 2017

Details

Building Markets for Knowledge Resources
Type: Book
ISBN: 978-1-78635-742-7

Book part
Publication date: 30 May 2018

Eliana Barrenho and Marisa Miraldo

Abstract

This chapter aims at providing an understanding of the research and development (R&D) process in the pharmaceutical industry by exploring the methodological challenges and approaches in the assessment of the determinants of innovation in that industry. It (i) discusses possible methodological approaches to model the occurrence of events; (ii) describes in detail competing risks duration models as the best methodological option in light of the nature of pharmaceutical R&D processes and data; and (iii) concludes with an estimation strategy and an overview of potential covariates that have been found to correlate with the likelihood of failure of pharmaceutical R&D projects.

Details

Health Econometrics
Type: Book
ISBN: 978-1-78714-541-2
