Özge H. Namlı, Seda Yanık, Aslan Erdoğan and Anke Schmeink
Abstract
Purpose
Coronary artery disease is one of the most common cardiovascular disorders in the world, and it can be deadly. Traditional diagnostic approaches are based on angiography, an interventional procedure with side effects such as contrast nephropathy and radiation exposure, as well as significant expense. The purpose of this paper is to propose a novel artificial intelligence (AI) approach for the diagnosis of coronary artery disease as an effective alternative to traditional diagnostic methods.
Design/methodology/approach
In this study, a novel ensemble AI approach based on optimization and classification is proposed. The proposed ensemble structure consists of three stages: feature selection, classification and combining. In the first stage, important features for each classification method are identified using the binary particle swarm optimization algorithm (BPSO). In the second stage, individual classification methods are used. In the final stage, the prediction results obtained from the individual methods are combined in an optimized way using the particle swarm optimization (PSO) algorithm to achieve better predictions.
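To illustrate the combining stage described above, the following is a minimal Python sketch (not the authors' implementation) in which a plain particle swarm optimization searches for classifier weights that maximize the F-measure of a weighted soft-vote ensemble on a validation split. The synthetic dataset, the choice of base classifiers and the PSO hyper-parameters are illustrative assumptions, and the BPSO feature-selection stage is omitted.

```python
# Sketch of the PSO-based combining stage: optimize classifier weights so that
# the weighted soft vote maximizes the F-measure on a validation split.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Placeholder data standing in for the (non-public) hospital dataset; the class
# weights mimic an unbalanced disease-prediction problem.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Stage 2: individual classifiers (any probabilistic classifiers could be used).
models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(random_state=0),
          KNeighborsClassifier()]
probas = np.stack([m.fit(X_tr, y_tr).predict_proba(X_val)[:, 1] for m in models])

def cost(weights):
    """Negative F-measure of the weighted soft vote (PSO minimizes this)."""
    w = np.abs(weights) / (np.abs(weights).sum() + 1e-12)
    y_pred = (w @ probas >= 0.5).astype(int)
    return -f1_score(y_val, y_pred)

# Stage 3: a plain global-best PSO over the weight vector.
n_particles, dim, iters = 30, len(models), 100
pos = rng.uniform(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("optimized weights:", gbest / (gbest.sum() + 1e-12))
print("ensemble F-measure:", -cost(gbest))
```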
Findings
The proposed method has been tested on an up-to-date real dataset collected at Basaksehir Çam and Sakura City Hospital. The disease prediction data are unbalanced; hence, the proposed ensemble approach mainly improves the F-measure and ROC area, which are the more informative measures for unbalanced classification. The comparison shows that the proposed approach improves the F-measure and ROC area results of the individual classification methods by around 14.5% on average and diagnoses with an accuracy rate of 96%.
Originality/value
This study presents a low-cost and low-risk AI-based approach for diagnosing heart disease compared to traditional diagnostic methods. Most of the existing research studies focus on base classification methods. In this study, we mainly investigate an effective ensemble method that uses optimization approaches for the feature selection and combining stages in the medical diagnostic domain. Furthermore, the approaches in the literature are commonly tested on open-access datasets for heart disease diagnosis, whereas we apply our approach to a real and up-to-date dataset.
Meltem Aksoy, Seda Yanık and Mehmet Fatih Amasyali
Abstract
Purpose
When a large number of project proposals are evaluated to allocate available funds, grouping them based on their similarities is beneficial. Current approaches to group proposals are primarily based on manual matching of similar topics, discipline areas and keywords declared by project applicants. When the number of proposals increases, this task becomes complex and requires excessive time. This paper aims to demonstrate how to effectively use the rich information in the titles and abstracts of Turkish project proposals to group them automatically.
Design/methodology/approach
This study proposes a model that effectively groups Turkish project proposals by combining word embedding, clustering and classification techniques. The proposed model uses FastText, BERT and term frequency/inverse document frequency (TF/IDF) word-embedding techniques to extract terms from the titles and abstracts of project proposals in Turkish. The extracted terms were grouped using both the clustering and classification techniques. Natural groups contained within the corpus were discovered using k-means, k-means++, k-medoids and agglomerative clustering algorithms. Additionally, this study employs classification approaches to predict the target class for each document in the corpus. To classify project proposals, various classifiers, including k-nearest neighbors (KNN), support vector machines (SVM), artificial neural networks (ANN), classification and regression trees (CART) and random forest (RF), are used. Empirical experiments were conducted to validate the effectiveness of the proposed method by using real data from the Istanbul Development Agency.
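As an illustration of the TF/IDF branch of this pipeline (not the authors' implementation), the following minimal Python sketch vectorizes a handful of toy Turkish proposal texts, discovers groups with k-means and trains a linear SVM on hypothetical predefined categories. The toy texts, labels and parameters are assumptions, and the FastText/BERT embeddings and the remaining clustering and classification algorithms are omitted.

```python
# Sketch: TF/IDF vectors of Turkish proposal texts fed to k-means (unsupervised
# grouping) and to a linear SVM (supervised grouping into predefined categories).
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy proposal titles/abstract fragments standing in for the real corpus.
proposals = [
    "girişimcilik eğitimi ile istihdamın artırılması",    # entrepreneurship training
    "dezavantajlı gruplar için mesleki eğitim projesi",   # vocational training
    "yenilenebilir enerji ile enerji verimliliği",        # renewable energy
    "güneş enerjisi santrali fizibilite çalışması",       # solar feasibility study
]
labels = ["employment", "employment", "energy", "energy"]  # hypothetical categories

# Term frequency / inverse document frequency representation of each proposal.
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(proposals)

# Unsupervised grouping: k-means over the TF/IDF vectors (k chosen for the toy data).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print("k-means cluster assignments:", clusters)

# Supervised grouping: a linear SVM predicting the predefined category of a new proposal.
clf = LinearSVC().fit(vectors, labels)
new_doc = vectorizer.transform(["kadın istihdamını destekleyen kooperatif projesi"])
print("predicted category:", clf.predict(new_doc)[0])
```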
Findings
The results show that the generated word embeddings can effectively represent proposal texts as vectors, and can be used as inputs for clustering or classification algorithms. Using clustering algorithms, the document corpus is divided into five groups. In addition, the results demonstrate that the proposals can easily be categorized into predefined categories using classification algorithms. SVM-Linear achieved the highest prediction accuracy (89.2%) with the FastText word embedding method. A comparison of manual grouping with automatic classification and clustering results revealed that both classification and clustering techniques have a high success rate.
Research limitations/implications
The proposed model automatically benefits from the rich information in project proposals and significantly reduces numerous time-consuming tasks that managers must perform manually. Thus, it eliminates the drawbacks of the current manual methods and yields significantly more accurate results. In the future, additional experiments should be conducted to validate the proposed method using data from other funding organizations.
Originality/value
This study presents the application of word embedding methods to effectively use the rich information in the titles and abstracts of Turkish project proposals. Existing research studies on the automatic grouping of proposals rely on traditional frequency-based word embedding methods for feature extraction to represent project proposals. Unlike previous research, this study employs two outperforming neural network-based textual feature extraction techniques to obtain terms representing the proposals: BERT as a contextual word embedding method and FastText as a static word embedding method. Moreover, to the best of our knowledge, there has been no research conducted on the grouping of project proposals in Turkish.
Seda Yanık and Abdelrahman Elmorsy
Abstract
Purpose
The purpose of this paper is to generate customer clusters using the self-organizing map (SOM) approach, a machine learning technique, with a big data set of credit card consumptions. The authors aim to use the consumption patterns of the customers over a period of three months, derived from the credit card transactions, specifically the consumption categories (e.g. food, entertainment, etc.).
Design/methodology/approach
The authors use a big data set of almost 40,000 credit card transactions to cluster customers. To deal with the size of the data set and to eliminate the required parametric assumptions, the authors use a machine learning technique, SOMs. The variables used are grouped into three sets: demographic variables, categorical consumption variables and summary consumption variables. The variables are first converted to factors using principal component analysis, and the number of clusters is specified by k-means clustering trials. Clustering with SOM is then conducted, first including only the demographic variables and then all variables. Finally, the two solutions are compared and the significance of the variables is examined by analysis of variance.
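The following minimal Python sketch illustrates this workflow on synthetic data (not the authors' implementation): PCA converts the standardized variables to factors, k-means trials probe the number of clusters, a SOM clusters the customers, and one-way ANOVA tests whether a variable differs across clusters. The MiniSom package, the synthetic customer table and all parameter values are assumptions; the paper does not name a specific library.

```python
# Sketch of the workflow: PCA factors -> k-means trials for the cluster count ->
# SOM clustering -> ANOVA on a consumption variable across the SOM clusters.
import numpy as np
from minisom import MiniSom           # assumed SOM implementation
from scipy.stats import f_oneway
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder customer table: e.g. age, income and monthly spend in a few categories.
customers = rng.gamma(shape=2.0, scale=1.0, size=(500, 6))

# Convert the variables to factors with principal component analysis.
factors = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(customers))

# Probe candidate numbers of clusters with k-means inertia (the paper settles on 8).
for k in (4, 6, 8):
    print(k, KMeans(n_clusters=k, n_init=10, random_state=0).fit(factors).inertia_)

# Cluster with a 2x4 SOM so that each of the 8 nodes acts as one customer cluster.
som = MiniSom(2, 4, factors.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(factors)
som.train_random(factors, 1000)
cluster_of = np.array([som.winner(f)[0] * 4 + som.winner(f)[1] for f in factors])

# One-way ANOVA: does the first consumption variable differ between SOM clusters?
groups = [customers[cluster_of == c, 0] for c in np.unique(cluster_of)]
fstat, pval = f_oneway(*groups)
print("ANOVA F =", round(fstat, 2), "p =", round(pval, 4))
```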
Findings
The appropriate number of clusters is found to be 8 using k-means clustering. The differences in categorical consumption levels between the clusters are then investigated and found to be insignificant, whereas the summary consumption variables, as well as the demographic variables, differ significantly between the clusters.
Originality/value
The originality of the study is to incorporate the credit card consumption variables of customers to cluster the bank customers. The authors use a big data set and handle it with a machine learning technique to deduce the consumption patterns and generate the clusters. Credit card transactions generate a vast amount of data from which valuable information can be deduced; in the literature, such data are mainly used to detect fraud. To the best of the authors’ knowledge, this study is the first to use consumption patterns obtained from credit card transactions to cluster customers.