Search results
1 – 2 of 2
Alexander Serenko and Nick Bontis
Abstract
Purpose
This study explores the use and perceptions of scholarly journal ranking lists in the management field based on stakeholders’ lived experience.
Design/methodology/approach
The results are based on a survey of 463 active knowledge management and intellectual capital researchers.
Findings
Journal ranking lists have become an integral part of contemporary management academia: 33% of institutions and 37% of individual scholars employ journal ranking lists. The Australian Business Deans Council (ABDC) Journal Quality List and the UK Academic Journal Guide (AJG) by the Chartered Association of Business Schools (CABS) are the most frequently used national lists, and their influence has spread far beyond their national borders. Some institutions and individuals create their own journal rankings.
Practical implications
Management researchers use journal ranking lists in two modes: mandatory and voluntary. In the mandatory mode, researchers comply with institutional pressure that restricts their choice of target outlets. In the voluntary mode, they willingly consult ranking lists to advance their careers, maximize the exposure of their research, gauge the relative standing of unfamiliar journals, and guide their students. Scholars, academic administrators, and policymakers should recognize that journal ranking lists can be a useful tool when applied appropriately, in particular when individuals themselves decide how and for what purpose to employ them in their research practices.
Originality/value
The findings reveal a journal ranking list paradox: management researchers are aware of the limitations of ranking lists and their deleterious impact on scientific progress, yet they generally find the lists useful and continue to employ them.
Olga Blasco-Blasco, Márton Demeter and Manuel Goyanes
Abstract
Purpose
The purpose of this article is to theoretically outline and empirically test two contribution-based indicators: (1) the scholars' annual contribution-based measurement and (2) the annual contribution modified h-index, both computed from six criteria: total number of papers, computed SCImago Journal Rank (SJR) values, total number of authors, total number of citations of a scholar's work, number of years since paper publication and number of annual paper citations.
Design/methodology/approach
Despite widespread scholarly agreement about the relevance of research production in evaluation and recruitment processes, the proposed mechanisms for gauging publication output are still rather elementary, consequently obscuring each individual scholar's contributions. This study utilised the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method, and the authors built two indicators to value each author's contribution.
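As a rough illustration of how TOPSIS ranks alternatives against an ideal solution, the sketch below applies the standard procedure to a small hypothetical decision matrix in which rows are scholars and columns are the six criteria listed above. The scholar data, criterion weights and benefit/cost directions are illustrative assumptions, not values taken from the article.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) by relative closeness to the ideal solution.

    matrix  : (m, n) array of raw criterion values
    weights : (n,) array of criterion weights summing to 1
    benefit : (n,) boolean array, True where larger values are better
    """
    # 1. Vector-normalise each criterion column.
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))
    # 2. Apply the criterion weights.
    weighted = norm * weights
    # 3. Ideal best and ideal worst per criterion, respecting direction.
    ideal_best = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    ideal_worst = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    # 4. Euclidean distances to both ideals.
    d_best = np.sqrt(((weighted - ideal_best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((weighted - ideal_worst) ** 2).sum(axis=1))
    # 5. Relative closeness: 1 = ideal, 0 = anti-ideal.
    return d_worst / (d_best + d_worst)

# Hypothetical data: 3 scholars x 6 criteria (papers, SJR sum, co-authors,
# citations, years since publication, annual citations).
scores = topsis(
    matrix=np.array([[12, 8.4, 30, 410, 3, 140],
                     [ 7, 5.1, 12, 260, 4,  65],
                     [20, 6.0, 55, 300, 2, 150]], dtype=float),
    weights=np.array([1 / 6] * 6),
    benefit=np.array([True, True, False, True, False, True]),
)
print(np.argsort(-scores))  # scholars ordered from highest to lowest closeness
```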
Findings
To test both indicators, this study focussed on the most productive scholars in communication during a specific time period (2017–2020), ranking their annual research contribution and testing it against standard productivity measures (i.e. number of papers and h-index).
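For comparison, the standard h-index used here as a baseline productivity measure can be computed as follows; the citation counts in the example are hypothetical.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3
```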
Originality/value
This article contributes to current scientometric studies by addressing some of the limitations of aggregate-level measurements of research production, providing a much-needed understanding of scholarly productivity based on scholars' actual contribution to research.