Tongyang Zhang, Fang Tan, Chao Yu, Jiexun Wu and Jian Xu
Abstract
Purpose
Proper topic selection is an essential prerequisite for the success of research. To study this, this article proposes topic popularity as a key factor in topic selection and examines the relationship between topic selection and team performance.
Design/methodology/approach
The authors use extracted entities of the gene/protein type as proxies for topics, in order to track the development of topic popularity. A decision tree model classifies the ascending and descending phases of entity popularity based on the temporal trend of entity occurrence frequency. By comparing several dimensions of team performance – academic performance, research funding, the relationship between performance and funding, and the corresponding author's influence – across the phases of topic popularity, the relationship between the selected phase of topic popularity and the academic performance of research teams can be explored.
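The phase-classification step described above can be illustrated with a minimal sketch. The features (mean year-over-year change, most recent change, whether the peak falls in the latest year) and the toy frequency series are illustrative assumptions, not the authors' actual model or data.

```python
# Sketch: classify a topic's popularity phase (ascending vs. descending)
# from its yearly occurrence frequencies using a decision tree.
# Features and training data are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

def trend_features(freqs):
    """Simple temporal-trend features for one entity's yearly counts."""
    diffs = [b - a for a, b in zip(freqs, freqs[1:])]
    return [
        sum(diffs) / len(diffs),   # mean year-over-year change
        diffs[-1],                 # most recent change
        max(freqs) == freqs[-1],   # peak occurs in the latest year
    ]

# Toy training data: frequency series labelled 1 (ascending) or 0 (descending).
series = [
    [2, 5, 9, 14], [1, 3, 6, 10],    # rising occurrence counts
    [12, 9, 5, 2], [20, 15, 8, 3],   # falling occurrence counts
]
labels = [1, 1, 0, 0]

clf = DecisionTreeClassifier(random_state=0)
clf.fit([trend_features(s) for s in series], labels)

# A new, clearly rising series should be classified as ascending (1).
phase = clf.predict([trend_features([3, 6, 12, 20])])[0]
print(phase)
```

In practice the choice of features over the frequency time series matters more than the classifier itself; this sketch only shows the overall shape of the pipeline.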
Findings
First, topic popularity affects team performance in terms of academic productivity and the academic influence of the team's research. Second, topic popularity affects the number and amount of research grants teams receive. Third, topic popularity affects how strongly funding promotes team performance. Fourth, topic popularity affects the corresponding author's influence on team performance.
Originality/value
This is a new attempt at team-oriented analysis of the relationship between topic selection and academic performance. By clarifying the relationships amongst topic popularity, team performance and research funding, the study should help researchers and policy makers make well-reasoned decisions on topic selection.
Abstract
Purpose
The emergence of mobile health (mHealth) products has created the capability to monitor and manage the health of patients with chronic diseases. These mHealth technologies are not beneficial unless they are adopted and used by their target users. This study identifies key factors affecting the usage of mHealth apps based on user usage data collected from an mHealth app.
Design/methodology/approach
Using a dataset collected from an mHealth app named mPower, developed for patients with Parkinson's disease (PD), this paper investigated the effects of disease diagnosis, disease progression and mHealth app difficulty level on app usage, while controlling for user information. App usage is measured by five different activity counts of the app.
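The kind of regression this design implies can be sketched as an ordinary least-squares fit of an app-usage count on diagnosis status, disease progression and a performance score. The variable names and toy data below are illustrative assumptions; the paper's actual specification, controls and five usage measures differ.

```python
# Sketch: OLS regression of an app-usage count on diagnosis, progression
# and performance score. Data are synthetic, generated from known
# coefficients so the fit is exactly recoverable.
import numpy as np

# Columns: intercept, professionally diagnosed (0/1),
# disease-progression score, performance score.
X = np.array([
    [1, 1, 0.2, 0.9],
    [1, 1, 0.5, 0.7],
    [1, 0, 0.3, 0.6],
    [1, 0, 0.8, 0.4],
    [1, 1, 0.1, 0.8],
    [1, 0, 0.9, 0.3],
], dtype=float)

# Generate the outcome from assumed "true" coefficients: diagnosis and
# performance raise usage, progression lowers it (echoing the findings).
true_beta = np.array([5.0, 10.0, -8.0, 12.0])
y = X @ true_beta

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["const", "diagnosed", "progression", "performance"],
               beta.round(2))))
```

With real usage counts one would typically prefer a count model (e.g. Poisson regression) over OLS, but the sign pattern of the coefficients is what carries the substantive interpretation.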
Findings
The results across five measures of mHealth app usage vary slightly. On average, previous professional diagnosis and high user performance scores encourage user participation and engagement, while disease progression hinders app usage.
Research limitations/implications
The findings potentially provide insights into better design and promotion of mHealth products and into improving the health-management capability of patients with chronic diseases.
Originality/value
Studies of mHealth app usage are critical but sparse, because large-scale, reliable mHealth usage data are limited. Unlike earlier works based solely on survey data, this research used a large set of user usage data collected from an mHealth app to study key factors affecting app usage. The methods presented in this study can serve as pioneering work for the design and promotion of mHealth technologies.
David Martín-Moncunill, Miguel-Ángel Sicilia-Urban, Elena García-Barriocanal and Salvador Sánchez-Alonso
Abstract
Purpose
Large terminologies usually contain a mix of terms that are either generic or domain specific, which makes the use of the terminology itself a difficult task that may limit the positive effects of these systems. The purpose of this paper is to systematically evaluate the degree of domain specificity of the AGROVOC controlled vocabulary terms as a representative of a large terminology in the agricultural domain and discuss the generic/specific boundaries across its hierarchy.
Design/methodology/approach
The study combines a user-oriented evaluation with domain experts and a quantitative, systematic analysis. First, an in-depth analysis of AGROVOC was carried out to select suitable terms for the experiment. Then domain experts were asked to classify the terms according to their domain specificity, and their classifications were evaluated. Finally, the resulting data set was automatically compared with the terms in SUMO, an upper ontology, and MILO, a mid-level ontology, to analyse the coincidences.
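The final comparison step amounts to checking which expert-classified vocabulary terms also appear in an ontology's lexicon. The tiny term sets below are illustrative stand-ins, not real AGROVOC, SUMO or MILO content.

```python
# Sketch: overlap between expert-classified terms and ontology lexicons.
# All term lists are assumed, illustrative samples.
expert_generic = {"water", "temperature", "organism"}
expert_specific = {"agroforestry", "silage", "vernalization"}

sumo_terms = {"water", "temperature", "process", "organism"}  # assumed sample
milo_terms = {"agriculture", "plant", "silage"}               # assumed sample

def overlap(terms, ontology):
    """Fraction of the given terms that appear in the ontology's lexicon."""
    return len(terms & ontology) / len(terms)

generic_in_sumo = overlap(expert_generic, sumo_terms)     # 3/3 here
specific_in_milo = overlap(expert_specific, milo_terms)   # 1/3 here
print(f"generic terms in SUMO:  {generic_in_sumo:.2f}")
print(f"specific terms in MILO: {specific_in_milo:.2f}")
```

In practice the matching would need lemmatisation and label-variant handling rather than exact string equality, which is one reason such overlap figures can only validate, not replace, expert judgement.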
Findings
Results show a high number of generic terms. The motivation behind several of the unclear cases is also discussed. The automatic evaluation showed that there is no direct way to assess a term's degree of specificity using the SUMO and MILO ontologies; however, it provided additional validation of the results gathered from the domain experts.
Research limitations/implications
The concept of “domain analysis” has long been discussed and can be addressed from different perspectives. A summary of these perspectives, and an explanation of the approach followed in this experiment, is included in the background section.
Originality/value
The authors propose an approach to identifying the domain specificity of terms in large domain-specific terminologies, and a criterion for measuring the overall domain specificity of a knowledge organisation system, based on domain-expert analysis. The authors also provide a first insight into using automated measures to determine the degree to which a given term can be considered domain specific. The resulting data set from the domain experts' evaluation can be reused as a gold standard for further research on these automatic measures.