Search results
1 – 10 of 12
Somi Lee, Howook (Sean) Chang and Meehee Cho
Abstract
Purpose
Crowdsourcing food delivery holds great potential for the future development and expansion of the restaurant business. Accordingly, the job performance and retention of delivery workers are critical for success. Therefore, this paper aims to investigate how to enhance crowdsourced delivery workers’ job performance and intent to continue working by applying sociotechnical systems theory.
Design/methodology/approach
The data analysis was conducted using responses obtained from crowdsourced food delivery workers. A structural equation model was developed to verify the hypothesized relationships. To test the proposed moderating roles of a three-dimensional concept of social capital within the research model, multi-group analyses were implemented.
Findings
This study confirmed significant relationships between crowdsourcing risks related to workers’ low job commitment and technical systems, which contributed to reduced job performance and intent to continue working. Results documented that social systems, including networks, trust and shared vision, mitigated the negative impact of the perceived difficulty and complexity of technical systems on job performance.
Originality/value
Although technology has contributed significantly to the effectiveness of online food delivery, the literature has mainly focused on its benefits and has ignored the critical aspects derived from a virtual and technology-based workplace. This gap was addressed by verifying the important roles of social factors (networks, trust and shared visions) in reducing the negative impacts of technology-driven risks (perceived difficulty of task requirements and technology complexity) within the crowdsourcing food delivery context.
Issam Bendib, Mohamed Ridda Laouar, Richard Hacken and Mathew Miles
Abstract
Purpose
The overwhelming speed and scale of digital media production greatly outpace conventional indexing methods by humans. The management of Big Data for e-library speech resources requires an automated metadata solution. The paper aims to discuss these issues.
Design/methodology/approach
A conceptual model called semantic ontologies for multimedia indexing (SOMI) allows for assembly of the speech objects, encapsulation of semantic associations between phonic units and the definition of indexing techniques designed to invoke and maximize the semantic ontologies for indexing. A literature review and architectural overview are followed by evaluation techniques and a conclusion.
Findings
This approach is only possible because of recent innovations in automated speech recognition. The introduction of semantic keyword spotting allows for indexing models that disambiguate and prioritize meaning using probability algorithms within a word confusion network. By the use of AI error-training procedures, optimization is sought for each index item.
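The word confusion network mentioned above can be illustrated with a minimal sketch. Here a network is modeled as a list of "slots", each mapping candidate words to posterior probabilities; the representation and function names are illustrative assumptions, not taken from the SOMI paper.

```python
# Minimal sketch of keyword spotting over a word confusion network (WCN).
# A WCN is modeled as a list of slots; each slot is a dict mapping
# candidate words to posterior probabilities (illustrative, not SOMI's API).

def best_path(wcn):
    """Pick the highest-probability word in each slot."""
    return [max(slot, key=slot.get) for slot in wcn]

def keyword_posterior(wcn, keyword):
    """Probability that `keyword` appears in at least one slot."""
    p_absent = 1.0
    for slot in wcn:
        p_absent *= 1.0 - slot.get(keyword, 0.0)
    return 1.0 - p_absent

wcn = [
    {"semantic": 0.7, "systematic": 0.3},
    {"indexing": 0.6, "index": 0.4},
]
print(best_path(wcn))                                 # ['semantic', 'indexing']
print(round(keyword_posterior(wcn, "indexing"), 2))   # 0.6
```

Prioritizing meaning with probability algorithms, as the abstract describes, amounts to ranking keyword candidates by posteriors like these rather than committing to a single transcription.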
Research limitations/implications
Validation and implementation of this approach within the field of digital libraries still remain under development, but rapid developments in technology and research show rich conceptual promise for automated speech indexing.
Practical implications
The SOMI process has been preliminarily tested, showing that hybrid semantic-ontological approaches produce better accuracy than semantic automation alone.
Social implications
Even as testing proceeds on recorded conference talks at the University of Tebessa (Algeria), other digital archives can look toward similar indexing. This will mean greater access to sound file metadata.
Originality/value
Huge masses of spoken data, unmanageable for a human indexer, can prospectively find semantically sorted and prioritized indexing – not transcription, but generated metadata – automatically, quickly and accurately.
Nasim Abedimanesh, Alireza Ostadrahimi, Saeed Abedimanesh, Behrooz Motlagh and Mohammad Hossein Somi
Abstract
Purpose
The purpose of this study is to explore the association of serum retinol and number of circulating inflammatory cells and disease activity in patients with ulcerative colitis.
Design/methodology/approach
A total of 60 patients with ulcerative colitis were enrolled in a cross-sectional pilot study. Patients were recruited from a specialized clinic of Tabriz University of Medical Sciences, Iran, between April and August 2015. The Mayo clinic index was used to assess the clinical disease activity score. Blood samples were collected. Serum retinol was assessed using HPLC to determine vitamin A status. Complete blood count and lymphocyte phenotyping were performed by automated hematology analyzer and flow-cytometric analysis, respectively.
Findings
According to Mayo scoring, 68.33 per cent of patients had mild and 31.66 per cent had moderate or severe disease activity. About 43.33 per cent of patients were vitamin A deficient, with 23.33 per cent having moderate to severe deficiency (serum retinol < 20 µg/dl). Lower levels of serum retinol and higher counts and percentages of CD3+ and CD8+ T cells and neutrophil to lymphocyte ratio were statistically associated with disease activity according to univariate analysis (p = 0.002, 0.037, <0.001, 0.031, 0.002 and 0.039); however, in binary logistic regression, only lower levels of serum retinol were independently associated with disease activity, with an OR of 0.564 (p = 0.021; 95 per cent CI 0.35-0.92).
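The reported odds ratio and confidence interval follow the standard mapping from a logistic-regression coefficient. A minimal sketch, assuming an illustrative coefficient and standard error chosen to roughly reproduce the reported OR of 0.564 (they are not taken from the study's data):

```python
import math

# Sketch: how a logistic-regression coefficient maps to an odds ratio (OR)
# and a 95% confidence interval. beta and se are illustrative assumptions.
beta = math.log(0.564)   # per-unit coefficient for serum retinol
se = 0.25                # assumed standard error

or_point = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)

print(f"OR = {or_point:.3f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

An OR below 1 here means that higher serum retinol is associated with lower odds of greater disease activity, consistent with the abstract's conclusion.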
Originality/value
Vitamin A deficiency was detected in this study population. Patients with moderate to severe disease activity demonstrated lower serum retinol and higher CD8+ T cell counts and neutrophil to lymphocyte ratios compared to patients with mild disease activity.
Hossein Sohrabi and Esmatullah Noorzai
Abstract
Purpose
The present study aims to develop a risk-supported case-based reasoning (RS-CBR) approach for water-related projects by incorporating various uncertainties and risks in the revision step.
Design/methodology/approach
The cases were extracted by studying 68 water-related projects. This research employs earned value management (EVM) factors to consider time and cost features, economic, natural, technical and project risks to account for uncertainties, and supervised learning models to estimate cost overrun. Time-series algorithms were also used to predict construction cost indexes (CCI) and model improvements in future forecasts. Outliers were removed during pre-processing. Next, datasets were split into testing and training sets, and the algorithms were implemented. The accuracy of the different models was measured with the mean absolute percentage error (MAPE) and the normalized root mean square error (NRMSE) criteria.
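The two accuracy criteria named above can be sketched directly. A minimal illustration, normalizing NRMSE by the range of the observed values (other normalizations, e.g. by the mean, also exist):

```python
import math

# Sketch of mean absolute percentage error (MAPE) and normalized root
# mean square error (NRMSE, normalized here by the observed range).

def mape(actual, predicted):
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def nrmse(actual, predicted):
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
    return rmse / (max(actual) - min(actual))

# Illustrative cost figures, not the paper's data.
actual = [100.0, 120.0, 90.0, 110.0]
predicted = [95.0, 125.0, 85.0, 115.0]
print(round(mape(actual, predicted), 2))   # 4.82
print(round(nrmse(actual, predicted), 3))  # 0.167
```

Lower values of either criterion indicate a more accurate cost-overrun estimator, which is how the ensemble and single algorithms in the Findings were compared.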
Findings
The findings show an improvement in the accuracy of predictions using datasets that consider uncertainties, and ensemble algorithms such as Random Forest and AdaBoost had higher accuracy. Also, among the single algorithms, the support vector regressor (SVR) with the sigmoid kernel outperformed the others.
Originality/value
This research is the first attempt to develop a case-based reasoning model based on various risks and uncertainties. The developed model achieved close agreement with machine learning models in predicting cost overruns. The model was implemented on the collected water-related projects and the results have been reported.
Khurram Shahzad and Shakeel Ahmad Khan
Abstract
Purpose
The major objective of this study was to investigate the factors affecting the adoption of integrated semantic digital libraries (SDLs). It attempted to identify the challenges faced in implementing semantic technologies in digital libraries. This study also aimed to develop a framework providing practical solutions for efficiently adopting semantic digital library systems that offer richer data and services.
Design/methodology/approach
To meet the formulated objectives of the study, a systematic literature review was conducted. The authors adhered to the “Preferred Reporting Items for the Systematic Review and Meta-analysis” (PRISMA) guidelines as a research method. The data were retrieved from different tools and databases. In total, 35 key studies were included for systematic review after having applied standard procedures.
Findings
The findings of the study indicated that SDLs are highly significant as they offer context-based information resources. Interoperability of systems, advancement in bilateral transfer modules, machine-controlled indexing and folksonomy were key factors in developing semantic digital libraries. The study identified five types of challenges in building an integrated semantic digital library system: ontologies and interoperability, development of a suitable model, diversity in language, lack of skilled human resources and other technical issues.
Originality/value
This paper provided a framework, based on practical solutions, as a benchmark for policymakers to devise formal standards for initiating the development of integrated semantic digital libraries.
Tressy Thomas and Enayat Rajabi
Abstract
Purpose
The primary aim of this study is to review studies from different dimensions, including the type of methods, experimentation setup and evaluation metrics used in the novel approaches proposed for data imputation, particularly in the machine learning (ML) area. This ultimately provides an understanding of how well the proposed frameworks are evaluated and what type and ratio of missingness are addressed in the proposals. The review questions in this study are: (1) What ML-based imputation methods were studied and proposed during 2010–2020? (2) How are the experimentation setup, characteristics of data sets and missingness employed in these studies? (3) What metrics were used for the evaluation of the imputation methods?
Design/methodology/approach
The review process went through the standard identification, screening and selection stages. The initial search of electronic databases for missing value imputation (MVI) based on ML algorithms returned a large number of papers, totaling 2,883. Most of the papers at this stage were not MVI techniques relevant to this study. Titles were first scanned for relevance, and 306 papers were identified as appropriate. Upon reviewing the abstracts, 151 papers not eligible for this study were dropped, leaving 155 research papers for full-text review. Of these, 117 papers were used to assess the review questions.
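The screening funnel above can be written out as a quick arithmetic check, with all counts taken directly from the abstract:

```python
# Screening funnel for the systematic review (counts from the abstract).
identified = 2883          # initial database search
title_relevant = 306       # retained after title screening
abstract_dropped = 151     # excluded on abstract review
full_text = title_relevant - abstract_dropped   # papers for full-text review
assessed = 117             # papers used to answer the review questions

print(full_text)           # 155, matching the reported full-text set
```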
Findings
This study shows that clustering- and instance-based algorithms are the most proposed MVI methods. Percentage of correct prediction (PCP) and root mean square error (RMSE) are the most used evaluation metrics in these studies. For experimentation, the majority of the studies sourced data sets from publicly available repositories. A common approach is to use the complete data set as a baseline to evaluate the effectiveness of imputation on test data sets with artificially induced missingness. The data set size and missingness ratio varied across the experiments, while the missing data type and mechanism relate to the capability of the imputation method. Computational expense is a concern, and experimentation using large data sets appears to be a challenge.
Originality/value
It is understood from the review that there is no single universal solution to the missing data problem. Variants of ML approaches handle missingness well depending on the characteristics of the data set. Most of the methods reviewed lack generalization with regard to applicability. Another concern related to applicability is the complexity of the formulation and implementation of the algorithm. Imputations based on k-nearest neighbors (kNN) and clustering algorithms, which are simple and easy to implement, are popular across various domains.
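The common evaluation approach described in the Findings (complete data set as baseline, artificially induced missingness, RMSE scoring) can be sketched with a toy kNN imputer. The data, feature layout and function names are illustrative assumptions; real studies use richer kNN and clustering variants.

```python
import math
import random

# Sketch: induce missingness in a complete data set, impute with a simple
# mean-of-k-nearest-neighbors on one feature column, score against the
# complete baseline with RMSE. Illustrative, not any reviewed paper's method.
random.seed(0)
complete = [[float(i), float(i) * 2 + random.random()] for i in range(20)]

# Artificially induce ~20% missingness in column 1.
data = [row[:] for row in complete]
for i in random.sample(range(len(data)), 4):
    data[i][1] = None

def knn_impute(rows, k=3):
    filled = [r[:] for r in rows]
    known = [r for r in rows if r[1] is not None]
    for r in filled:
        if r[1] is None:
            # k nearest neighbors by the observed feature (column 0)
            nearest = sorted(known, key=lambda s: abs(s[0] - r[0]))[:k]
            r[1] = sum(s[1] for s in nearest) / k
    return filled

imputed = knn_impute(data)
rmse = math.sqrt(sum((c[1] - m[1]) ** 2 for c, m in zip(complete, imputed)) / len(complete))
print(f"RMSE vs. complete baseline: {rmse:.3f}")
```

Because the ground truth is known, the same pipeline can be rerun at different missingness ratios to study how imputation quality degrades, which is the experimental pattern the review identifies.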
Shohreh SeyyedHosseini, Asefeh Asemi, Ahmad Shabani and Mozafar CheshmehSohrabi
Abstract
Purpose
According to studies conducted in Iran, breast cancer is the most frequent type of cancer among women. This study aimed to explore the state of health information supply and demand on breast cancer among Iranian medical researchers and Iranian web users from 2011 to 2015.
Design/methodology/approach
A mixed-methods design was used in this study. In the qualitative part, a focus group interview was conducted with users to identify the keywords they searched for breast cancer in Google. The collected data were analyzed using Open Code software. In the quantitative part, data were synthesized using the R software in two stages. First, users’ internet information-seeking behavior (ISB) was analyzed using Google Trends outputs from 2011 to 2015. Second, the scientific publication behavior (SPB) of Iranian breast cancer specialists was surveyed in PubMed during the period of the study.
Findings
The results show that the search volume index of preferred keywords on breast cancer increased from 4,119 in 2011 to 4,772 in 2015. The findings also reveal that Iranian scholars published 873 scientific papers on breast cancer in PubMed from 2011 to 2015. There was a significant positive relationship between Iranian ISB in Google Trends and the SPB of Iranian scholars on breast cancer in PubMed.
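The kind of test behind the reported relationship can be sketched as a Pearson correlation between yearly search-volume and publication counts. Only the 2011 and 2015 search-volume endpoints and the 873-paper total come from the abstract; the intermediate yearly values are illustrative placeholders.

```python
# Sketch: Pearson correlation between yearly search volume and publication
# counts. Intermediate yearly values are assumptions; only the endpoints
# (4,119 and 4,772) and the 873-paper total are from the abstract.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

search_volume = [4119, 4300, 4450, 4600, 4772]   # 2011-2015
publications = [150, 160, 175, 185, 203]          # illustrative split of 873

print(round(pearson(search_volume, publications), 3))
```

A coefficient near 1 over such series is what a "significant positive relationship" between ISB and SPB would look like numerically.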
Research limitations/implications
This study investigates only the state of health information supply and demand in PubMed and Google Trends and not additional databases often used for medical studies and treatment.
Originality/value
This study provides a road map for health policymakers in Iran to direct breast cancer studies.
Hamid Hassani, Azadeh Mohebi, M.J. Ershadi and Ammar Jalalimanesh
Abstract
Purpose
The purpose of this research is to provide a framework in which new data quality dimensions are defined. The new dimensions provide new metrics for the assessment of lecture video indexing. As lecture video indexing involves various steps, the proposed framework, containing the new dimensions, introduces a new integrated approach for evaluating an indexing method or algorithm from beginning to end.
Design/methodology/approach
The emphasis in this study is on the fifth step of the design science research methodology (DSRM), known as evaluation. That is, the methods developed in the field of lecture video indexing, as artifacts, should be evaluated from different aspects. In this research, nine dimensions of data quality, including accuracy, value-added, relevancy, completeness, appropriate amount of data, conciseness, consistency, interpretability and accessibility, have been redefined based on previous studies and the nominal group technique (NGT).
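Two of the nine dimensions can be illustrated with a toy lecture-video index. The definitions below are generic sketches, not the paper's formal NGT-derived metrics, and all names and values are assumptions.

```python
# Toy illustrations of two data quality dimensions for a lecture-video
# index (generic definitions, not the paper's formal ones).

def completeness(indexed_segments, total_segments):
    """Fraction of the lecture's segments that received index entries."""
    return len(indexed_segments) / total_segments

def accuracy(index_terms, ground_truth_terms):
    """Fraction of generated index terms confirmed by a reference index."""
    hits = sum(1 for t in index_terms if t in ground_truth_terms)
    return hits / len(index_terms)

print(completeness({0, 1, 2, 4}, 5))                              # 0.8
print(accuracy(["sorting", "heap", "graph"], {"sorting", "heap"}))
```

Scoring each dimension separately, as here, is what lets the framework expose different weaknesses of the same indexing algorithm.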
Findings
The proposed dimensions are implemented as new metrics to evaluate a newly developed lecture video indexing algorithm, LVTIA, and numerical values have been obtained based on the proposed definitions for each dimension. In addition, the new dimensions are compared with one another in various respects. The comparison shows that each dimension used for assessing lecture video indexing reflects a different weakness or strength of an indexing method or algorithm.
Originality/value
Despite the development of different methods for indexing lecture videos, the issue of data quality and its various dimensions has not been studied. Since low-quality data can affect the process of scientific lecture video indexing, data quality in this process requires special attention.
Seyed Jalil Masoumi, Ali Kohanmoo, Mohammad Ali Mohsenpour, Sanaz Jamshidi and Mohammad Hassan Eftekhari
Abstract
Purpose
Normal-weight obesity (NWO), characterized by normal body mass index (BMI) but excess body fat, is a potential contributor to chronic diseases. This study aims to assess the relationship between this phenomenon and some metabolic factors in a population of Iranian employees.
Design/methodology/approach
This cross-sectional study was conducted on Iranian employees from the baseline data of Employees Health Cohort Study, Shiraz, Iran. Anthropometric measures, including weight, height, waist circumference and percentage of body fat, were obtained from the cohort database. The participants were divided into three groups: healthy, normal-weight obese and overweight/obese. Metabolic variables including blood pressure, fasting blood sugar, lipid profile, liver function enzymes and metabolic syndrome were assessed in relation to the study groups.
Findings
A total of 985 participants aged 25–64 years were included. Males with NWO had significantly higher alanine aminotransferase (ALT) levels compared to the healthy group in the fully adjusted model. Also, high-density lipoprotein (HDL) was significantly lower among females with overweight/obesity than in the healthy group when adjusted for age and energy intake. Furthermore, after adjusting for age and energy intake, both genders in the overweight/obese group showed significantly elevated systolic and diastolic blood pressure, while this was not observed for the NWO group. Lastly, metabolic syndrome was more prevalent in NWO as well as overweight/obesity.
Originality/value
These findings further encourage identification of excess body fat, even in normal-weight individuals, to prevent chronic metabolic diseases. Special attention should be paid to subgroups with sedentary occupations, as they may be at increased risk for NWO-related health issues.