Search results
1 – 4 of 4
Amir Hosein Keyhanipour and Farhad Oroumchian
Abstract
Purpose
User feedback inferred from users' search-time behavior can improve learning to rank (L2R) algorithms. Click models (CMs) provide probabilistic frameworks for describing and predicting users' clicks during search sessions. Most CMs rest on common assumptions about Attractiveness, Examination and User Satisfaction: Attractiveness and Examination are usually treated as pre- and post-estimators of the actual relevance, and User Satisfaction is assumed to be a function of it. This paper extends the authors' previous work by building a reinforcement learning (RL) model to predict relevance. Attractiveness, Examination and User Satisfaction are estimated from a limited number of features of the utilized benchmark data sets and are then incorporated into the construction of an RL agent. The proposed RL model learns to predict the relevance label of documents with respect to a given query more effectively than the baseline RL models on those data sets.
Design/methodology/approach
In this paper, User Satisfaction is used as an indication of the relevance level of a document to a query. User Satisfaction itself is estimated through Attractiveness and Examination, which in turn are calculated by the random forest algorithm. In this process, only a small subset of top information retrieval (IR) features is used, selected based on their mean average precision (MAP) and normalized discounted cumulative gain (NDCG) values. Based on the authors' observations, the product of the Attractiveness and Examination values of a given query–document pair closely approximates the User Satisfaction and hence the relevance level. In addition, an RL model is designed in which the current state of the RL agent is determined by discretizing the estimated Attractiveness and Examination values, so that each query–document pair is mapped to a specific state. Based on the reward function, the RL agent then tries to choose the action (relevance label) that maximizes the received reward in its current state. Using temporal difference (TD) learning algorithms such as Q-learning and SARSA, the agent gradually learns to identify an appropriate relevance label in each state. The reward given to the RL agent is proportional to the difference between the User Satisfaction and the selected action.
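To make the construction above concrete, here is a minimal sketch of such an agent, assuming Attractiveness and Examination estimates in [0, 1], ten discretization bins, five relevance labels and a reward equal to the negated gap between the rescaled User Satisfaction and the chosen label. All of those choices, and treating each query–document pair as a one-step episode, are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

# Hypothetical sketch of the described agent; bin count, label range and
# hyperparameters are illustrative assumptions, not the paper's values.
N_BINS = 10                   # discretization granularity for A and E
LABELS = np.arange(5)         # assumed graded relevance labels 0..4
ALPHA, EPSILON = 0.1, 0.1     # learning rate and exploration rate

Q = np.zeros((N_BINS, N_BINS, len(LABELS)))  # tabular action values

def to_state(attractiveness, examination):
    """Map estimated A/E values in [0, 1] onto a discrete grid cell."""
    a = min(int(attractiveness * N_BINS), N_BINS - 1)
    e = min(int(examination * N_BINS), N_BINS - 1)
    return a, e

def reward(satisfaction, action):
    """Negated gap between rescaled satisfaction and the chosen label,
    so maximizing reward means matching the satisfaction estimate."""
    return -abs(satisfaction * LABELS[-1] - action)

def learn_step(attr, exam, satisfaction, rng):
    """One epsilon-greedy TD update, treating each query-document pair
    as a one-step (terminal) episode -- an assumption of this sketch."""
    a, e = to_state(attr, exam)
    if rng.random() < EPSILON:
        action = int(rng.choice(LABELS))
    else:
        action = int(np.argmax(Q[a, e]))
    Q[a, e, action] += ALPHA * (reward(satisfaction, action) - Q[a, e, action])
    return action
```

After enough training pairs, `int(np.argmax(Q[a, e]))` returns the learned relevance label for any discretized state.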
Findings
Experimental results on the MSLR-WEB10K and WCL2R benchmark data sets demonstrate that the proposed algorithm, named SeaRank, outperforms baseline algorithms. The improvement is more noticeable in top-ranked results, which usually receive more attention from users.
Originality/value
This research provides a mapping from IR features to CM features and then uses these newly generated features to build an RL model, defined by its states, actions and reward function. By applying TD learning algorithms such as Q-learning and SARSA over several learning episodes, the RL agent learns to choose the most appropriate relevance label for a given query–document pair.
Abstract
Purpose
This study aims to introduce a novel rank aggregation algorithm that leverages graph theory and deep learning to improve the accuracy and relevance of aggregated rankings in metasearch scenarios, particularly when faced with inconsistent and low-quality rank lists. By strategically selecting a subset of base rankers, the algorithm enhances the quality of the aggregated ranking without needing to consult every base ranker.
Design/methodology/approach
The proposed algorithm uses a graph-based model to represent the interrelationships between base rankers. By applying spectral clustering, it identifies a subset of top-performing base rankers based on their retrieval effectiveness. These selected rankers are then integrated into a sequential deep-learning model to estimate relevance labels for query-document pairs.
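As an illustration of the selection step, the sketch below builds a similarity graph over base rankers and partitions it with spectral clustering. Kendall's tau as the similarity measure, the rescaling to [0, 1] and the cluster count are assumptions of this sketch, since the abstract does not specify them.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.cluster import SpectralClustering

def ranker_similarity(score_lists):
    """Pairwise similarity matrix over base rankers, computed from the
    scores each ranker assigns to the same set of documents."""
    n = len(score_lists)
    sim = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            tau, _ = kendalltau(score_lists[i], score_lists[j])
            sim[i, j] = sim[j, i] = (tau + 1) / 2  # map [-1, 1] -> [0, 1]
    return sim

def cluster_rankers(score_lists, n_clusters=3, seed=0):
    """Cluster base rankers on the similarity graph (assumed settings)."""
    sim = ranker_similarity(score_lists)
    return SpectralClustering(
        n_clusters=n_clusters,
        affinity="precomputed",   # use the similarity graph directly
        random_state=seed,
    ).fit_predict(sim)
```

From the resulting clusters, one would keep the rankers whose retrieval effectiveness (e.g., MAP) is highest and pass their score lists on to the sequential deep-learning model.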
Findings
Empirical evaluation on the MQ2007-agg and MQ2008-agg data sets demonstrates the substantial performance gains achieved by the proposed algorithm compared to baseline methods, with an average improvement of 8.7% in MAP and 11.9% in NDCG@1. The algorithm's effectiveness can be attributed to its ability to integrate diverse perspectives from base rankers and to capture complex relationships within the data.
Originality/value
This research presents a novel approach to rank aggregation that integrates graph theory and deep learning. The author proposes a graph-based model that constructs a similarity graph of base rankers and selects the most effective subset of them for metasearch applications. This method addresses the challenges posed by inconsistent and low-quality rank lists, offering a unique solution to the problem.
Amir Hosein Keyhanipour, Behzad Moshiri, Maryam Piroozmand, Farhad Oroumchian and Ali Moeini
Abstract
Purpose
Learning to rank algorithms inherently face many challenges, the most important being the high dimensionality of the training data, the dynamic nature of Web information resources and the lack of click-through data. High dimensionality of the training data affects both the effectiveness and the efficiency of learning algorithms. In addition, most learning to rank benchmark data sets do not include click-through data, a very rich source of information about users' search behavior when dealing with ranked lists of search results. To deal with these limitations, this paper aims to introduce a novel learning to rank algorithm that uses a set of complex click-through features in a reinforcement learning (RL) model. These features are calculated from the click-through information present in the data set, or even from data sets without any explicit click-through information.
Design/methodology/approach
The proposed ranking algorithm (QRC-Rank) applies RL techniques to a set of calculated click-through features. QRC-Rank is a two-step process. In the first step, the Transformation phase, a compact benchmark data set is created that contains a set of click-through features. These features are calculated from the original click-through information available in the data set and constitute a compact representation of it. To find the most effective click-through features, a number of scenarios are investigated. The second phase is Model-Generation, in which an RL model is built to rank the documents. This model is created by applying temporal difference (TD) learning methods such as Q-learning and SARSA, as sketched below.
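The fragment below contrasts the two TD(0) updates named above, as they would be applied in the Model-Generation phase. The dictionary-backed Q-table and the hyperparameter values are illustrative assumptions; the state encoding and reward are left abstract because, in the paper, they come from the Transformation phase.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9        # assumed learning rate and discount factor
Q = defaultdict(float)         # maps (state, action) -> estimated value

def q_learning_update(s, a, r, s_next, actions):
    """Off-policy TD(0): bootstrap on the best action in the next state."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def sarsa_update(s, a, r, s_next, a_next):
    """On-policy TD(0): bootstrap on the action the policy actually took."""
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])
```

The only difference between the two methods is the bootstrap term: Q-learning assumes greedy behavior in the next state, while SARSA follows the agent's actual (e.g., epsilon-greedy) policy.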
Findings
The proposed learning to rank method, QRC-Rank, is evaluated on the WCL2R and LETOR4.0 data sets. Experimental results demonstrate that QRC-Rank outperforms state-of-the-art learning to rank methods such as SVMRank, RankBoost, ListNet and AdaRank on the precision and normalized discounted cumulative gain evaluation criteria. The click-through features calculated from the training data set are a major contributor to the performance of the system.
Originality/value
In this paper, we have demonstrated the viability of the proposed features, which provide a compact representation of the click-through data in a learning to rank application. These compact click-through features are calculated from the original features of the learning to rank benchmark data set. In addition, a Markov decision process model is proposed for the learning to rank problem using RL, including the set of states, the set of actions, the rewarding strategy and the transition function.
Amir Hosein Keyhanipour and Farhad Oroumchian
Abstract
Purpose
Incorporating users' behavior patterns can help in the ranking process. Different click models (CMs) have been introduced to model the sophisticated search-time behavior of users, among which the triple of attractiveness, examination and satisfaction is the most commonly used. Inspired by this fact, and considering the psychological definitions of these concepts, this paper aims to propose a novel learning to rank algorithm built on their redefinition. The attractiveness and examination factors are calculated from a limited subset of information retrieval (IR) features by the random forest algorithm and are then combined to predict the satisfaction factor, which is taken as the relevance level.
Design/methodology/approach
The attractiveness and examination factors of a given document are usually interpreted as its perceived relevance and the quick scan of its snippet, respectively. Here, they are instead regarded as the click count and the investigation rate. The satisfaction of a document is assumed to equal its relevance level for a given query, an assumption supported by the strong correlations between attractiveness and satisfaction and between examination and satisfaction. Applying the random forest algorithm, the attractiveness and examination factors are calculated from a very limited set of primitive features of query–document pairs. Then, using the ordered weighted averaging (OWA) operator, these factors are aggregated to estimate the satisfaction.
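A minimal sketch of this pipeline follows, assuming scikit-learn's RandomForestRegressor for the two factor estimators and an equal-weight OWA over the two factors. The feature matrices, targets and weights are placeholders, not the paper's selected IR features.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def owa(values, weights):
    """Ordered weighted averaging: weights apply to ranks, not inputs."""
    ordered = np.sort(values)[::-1]          # sort inputs descending
    return float(np.dot(ordered, weights))

def estimate_satisfaction(X_attr, y_attr, X_exam, y_exam, X_query_doc,
                          weights=(0.5, 0.5), seed=0):
    """Fit A/E estimators on (assumed) feature subsets, then aggregate
    their predictions per query-document pair with an OWA operator."""
    attr_model = RandomForestRegressor(random_state=seed).fit(X_attr, y_attr)
    exam_model = RandomForestRegressor(random_state=seed).fit(X_exam, y_exam)
    a = attr_model.predict(X_query_doc)
    e = exam_model.predict(X_query_doc)
    w = np.asarray(weights)
    return np.array([owa(np.array([ai, ei]), w) for ai, ei in zip(a, e)])
```

With only two inputs per pair, the OWA weights control how much the larger of the two factor estimates dominates the satisfaction estimate.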
Findings
Experimental results on the MSLR-WEB10K and WCL2R data sets show the superiority of this algorithm over state-of-the-art ranking algorithms in terms of the P@n and NDCG criteria. The enhancement is more noticeable in top-ranked items, which are reviewed more by users.
Originality/value
This paper proposes a novel learning to rank algorithm based on redefining the major building blocks of CMs: attractiveness, examination and satisfaction. It uses a very limited number of selected IR features to estimate the attractiveness and examination factors and then combines these factors to predict the satisfaction, which is regarded as the relevance level of a document with respect to a given query.