Mengni Zhang, Can Wang, Jiajun Bu, Liangcheng Li and Zhi Yu
Abstract
Purpose
As existing studies show that the accuracy of sampling methods in web accessibility evaluation depends heavily on the evaluation metric, the purpose of this paper is to propose a sampling method, OPS-WAQM, optimized for the Web Accessibility Quantitative Metric (WAQM). Furthermore, to support quick accessibility evaluation and real-time website accessibility monitoring, the authors also provide an online extension of the sampling method.
Design/methodology/approach
In the OPS-WAQM method, the authors propose a minimal sampling error model for WAQM and use a greedy algorithm to approximately solve the optimization problem of determining the number of samples in each layer. To make OPS-WAQM online, the authors apply a sampling-in-crawling strategy.
Findings
The sampling method OPS-WAQM and its online extension both achieve good sampling quality by choosing the optimal number of samples in each layer. Moreover, the online extension also supports quick accessibility evaluation by sampling and evaluating pages during crawling.
Originality/value
To the best of the authors’ knowledge, the sampling method OPS-WAQM in this paper is the first attempt to optimize sampling for a specific evaluation metric. Meanwhile, the online extension not only greatly reduces the serious I/O overhead of existing web accessibility evaluation, but also supports quick web accessibility evaluation by sampling during crawling.
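The layer-wise greedy allocation this abstract describes can be illustrated with a short sketch. The per-layer error function below is a hypothetical stand-in (the paper's actual WAQM error model is not given here), assuming that sampling error in a layer of N pages decays roughly as N/√n with sample size n:

```python
import math

def greedy_allocate(layer_sizes, budget):
    """Greedily assign a sampling budget across site layers so as to
    minimize a (hypothetical) total sampling-error estimate.

    Assumed error model: a layer of N pages sampled with n pages
    contributes N / sqrt(n) to the total error, so each extra sample
    goes to the layer with the largest marginal error reduction.
    """
    # start with one sample per layer so every layer is represented
    alloc = [1] * len(layer_sizes)
    remaining = budget - len(layer_sizes)

    def gain(i):
        n = alloc[i]
        return layer_sizes[i] * (1 / math.sqrt(n) - 1 / math.sqrt(n + 1))

    for _ in range(remaining):
        best = max(range(len(layer_sizes)), key=gain)
        alloc[best] += 1
    return alloc

# three site layers with 1000, 200 and 50 pages, budget of 30 samples
print(greedy_allocate([1000, 200, 50], 30))
```

Larger layers receive more samples, but with diminishing returns, which is the qualitative behaviour a minimal-sampling-error formulation produces.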
Mohamed Hammami, Radhouane Guermazi and Abdelmajid Ben Hamadou
Abstract
Purpose
The growth of the web and the increasing number of documents available electronically have been paralleled by the emergence of harmful web page content such as pornography, violence, racism, etc. This emergence has made it necessary to provide filtering systems designed to secure internet access. Most of these systems deal mainly with adult content and focus on blocking pornography, marginalizing violence. The purpose of this paper is to propose a violent web content detection and filtering system that uses textual and structural content‐based analysis.
Design/methodology/approach
The violent web content detection and filtering system uses textual and structural content‐based analysis based on a violent keyword dictionary. The paper focuses on the keyword dictionary preparation, and presents a comparative study of different data mining techniques to block violent content web pages.
Findings
The solution presented in this paper showed its effectiveness by scoring an 89 per cent classification accuracy rate on its test data set.
Research limitations/implications
Several directions for future work can be considered. This paper analyzed only the textual and structural content of web pages; an additional analysis of the visual content is one such direction. Research is also underway to develop effective filtering tools for other types of harmful web pages, such as racist content.
Originality/value
The paper's major contributions are, first, the study and comparison of several decision-tree building algorithms for constructing a violent web content classifier based on textual and structural content‐based analysis, to improve web filtering; and second, easing the laborious task of dictionary building by automatically finding discriminative indicative keywords.
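The pipeline the abstract describes (violent-keyword dictionary → textual/structural features → decision-tree classification) can be sketched roughly as below. The keyword list, structural weights and decision threshold are illustrative placeholders, not the authors' actual dictionary or learned tree:

```python
# Illustrative violent-keyword dictionary (placeholder, not the paper's).
VIOLENT_KEYWORDS = {"gun", "kill", "blood", "fight", "weapon"}

# Structural weighting: a keyword in the title or URL counts more
# than the same keyword in the body text (weights are assumptions).
WEIGHTS = {"title": 3.0, "url": 2.0, "body": 1.0}

def violence_score(page):
    """page is a dict with 'title', 'url' and 'body' strings; returns a
    structure-weighted keyword frequency in [0, max weight]."""
    score, total_words = 0.0, 0
    for field, weight in WEIGHTS.items():
        words = page.get(field, "").lower().split()
        total_words += len(words)
        score += weight * sum(1 for w in words if w in VIOLENT_KEYWORDS)
    return score / max(total_words, 1)

def is_violent(page, threshold=0.05):
    # A real system would learn this split with a decision-tree builder
    # (e.g. C4.5); a single threshold stands in for the tree here.
    return violence_score(page) > threshold

page = {"title": "gun fight footage", "url": "site/fight",
        "body": "blood everywhere after the fight"}
print(is_violent(page))
```

In the paper's setting, the feature vectors produced this way would be fed to several decision-tree building algorithms and the resulting classifiers compared.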
Jon Ezell, J.J. Pionke and Jeremy Gunnoe
Abstract
Purpose
This paper aims to contribute to an understanding of current accessibility efforts and practice in librarianship by providing a broad overview of the information about services, resources and facilities on academic library accessibility pages. By compiling and analyzing data from 85 libraries, this study seeks to facilitate comparisons between current and past accessibility practice and to provide perspective on how libraries communicate with users about accessibility efforts.
Design/methodology/approach
The authors conducted a content analysis of 85 library accessibility pages from a sample population of 98 institutions, consisting of all member institutions of four US academic library consortia. Pages were coded for content elements regarding services, facilities, collections, staffing, assistive technologies and general information. Webpage features, architecture and accessibility/functionality were also assessed.
Findings
Libraries have broadened and strengthened efforts to publicize/provide services and resources to functionally diverse users. Pages most commonly prioritize information about assistive technologies, services and facilities. Pages varied greatly in size, complexity and detail, but public institutions' pages were more prevalent and informative than their private counterparts. Libraries can work to foreground accessibility pages and increase transparency and evidence of currency to improve communication with their users.
Originality/value
This study provides a large-scale content analysis of library accessibility webpages. It allows for comparison of the features and information most commonly featured on these important online points of service.
Joaquim Gabarró, Isabel Vallejo and Fatos Xhafa
Abstract
Purpose
This paper aims to deal with some design issues of web applications using partial orders to enhance their navigability and extensibility.
Design/methodology/approach
The paper uses a static web application model as a deterministic labelled transition system in which states are HTML pages and transitions are URLs.
Findings
By using this model it is possible, on the one hand, to characterize the temporal evolution of a web application and, on the other, to classify web applications into several types according to the way the information is organized over the web application. This classification captures interesting properties related to the navigability and extensibility of web applications.
Practical implications
These ideas are applied to develop a simple web application, namely, a small virtual museum based on approximations of original paintings. Moreover, based on the extensibility characterization, the virtual museum is extended with different painting approximations, while preserving its navigability properties and supporting the browsing of higher-resolution approximations of the paintings.
Originality/value
The results of this work provide useful and practical insights into the design of web applications that ensure navigability and extensibility properties.
Abstract
Purpose
This study aims to propose an experiential model of consumer engagement focusing on Facebook brand pages. Building on the brand experience literature, the study synthesizes the experiential affordances of Facebook brand pages along perceptual, social, epistemic and embodied dimensions and tests their impact on consumer engagement.
Design/methodology/approach
The study operationalized key variables of the proposed model at the brand page level and assembled pertinent data, using systematic content analysis, on a sample of Facebook brand pages (n = 85). Poisson regression tested the proposed model.
Findings
The findings indicate that brands that facilitate a greater number of experiential affordances on their Facebook brand pages generated higher levels of consumer engagement. For both brand post likes and brand post shares, the contributions of experiential affordances were significant and positive.
Practical implications
The findings offer actionable managerial insights for brands seeking to implement an experiential model of consumer engagement on their fan pages.
Originality/value
This study contributes to the literature by proposing and testing an experiential model of consumer engagement in the context of Facebook brand pages. To date, the experiential value of Facebook brand pages has rarely, if at all, been tested in an empirical study.
Abstract
The NP‐complete problem of optimally placing tuples on a hierarchy of secondary storage devices is considered using a heuristic approach. From load specification details captured at database design time, those tuples associated with queries which merit tailored, “set‐in‐concrete”, physical access paths are placed using a two‐level graph partitioning algorithm. Experiments are reported with pages and cylinders as the two hierarchical levels of storage for a centralised database, but the technique is applicable to an n‐level storage hierarchy, extending up to the “different sites” level for distributed databases. The results show up to 39% improvement over single‐level partitioning algorithms for the database considered.
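The two-level placement idea can be illustrated with a much-simplified sketch: tuples that co-occur in the specified query load are packed onto the same page, and consecutive pages (which share query locality) are then grouped onto cylinders. The capacities and the sequential packing rule are illustrative assumptions standing in for the paper's graph-partitioning algorithm:

```python
PAGE_CAP = 4  # tuples per page (illustrative capacity)
CYL_CAP = 2   # pages per cylinder (illustrative capacity)

def place(queries):
    """queries: list of tuple-id lists from the design-time load spec.
    Level 1: pack tuples that co-occur in queries onto the same page.
    Level 2: pack adjacent pages onto the same cylinder.
    """
    pages, current, seen = [], [], set()
    for q in queries:
        for t in q:
            if t in seen:
                continue  # a tuple is placed only once
            seen.add(t)
            current.append(t)
            if len(current) == PAGE_CAP:
                pages.append(current)
                current = []
    if current:
        pages.append(current)

    # second level: group consecutive pages per cylinder
    cylinders = [pages[i:i + CYL_CAP] for i in range(0, len(pages), CYL_CAP)]
    return pages, cylinders

queries = [[1, 2, 3], [3, 4, 5], [6, 7], [8, 9, 1]]
pages, cylinders = place(queries)
print(pages)
print(cylinders)
```

A true graph-partitioning formulation would weight tuple pairs by co-access frequency and cut the graph to minimise cross-page and cross-cylinder accesses; this sketch only conveys the two-level structure.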
John Garofalakis, Panagiotis Kappos and Christos Makris
Abstract
Considers the problem of improving the performance of Web access by proposing a reconstruction of the internal link structure of a Web site in order to match the quality of the pages (measured in terms of their link importance in the Web space – global ranking) with the popularity of the pages (measured in terms of their importance as recognized by Web users – local metrics). Provides a set of simple algorithms for local reorganization of a Web site, which improve users’ access to quality pages quickly and easily.
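The quality/popularity matching idea can be sketched as follows: pages whose global link-based rank greatly exceeds their local popularity are good but hard-to-reach, so a local reorganization should link to them from popular entry pages. The rank-difference scoring and the example data below are illustrative assumptions, not the paper's algorithms:

```python
def promotion_candidates(quality, popularity, k=2):
    """quality: page -> global link-based rank score (e.g. PageRank-like);
    popularity: page -> local access frequency from server logs.
    Returns the k pages whose quality rank most exceeds their
    popularity rank -- candidates to link from popular pages.
    """
    by_quality = sorted(quality, key=quality.get, reverse=True)
    by_popularity = sorted(popularity, key=popularity.get, reverse=True)
    q_rank = {p: i for i, p in enumerate(by_quality)}
    p_rank = {p: i for i, p in enumerate(by_popularity)}
    # positive mismatch = high quality but low popularity
    mismatch = {p: p_rank[p] - q_rank[p] for p in quality}
    return sorted(mismatch, key=mismatch.get, reverse=True)[:k]

quality = {"a": 0.9, "b": 0.5, "c": 0.4, "d": 0.1}
popularity = {"a": 120, "b": 15, "c": 300, "d": 80}
print(promotion_candidates(quality, popularity))
```

Here page "b" has the second-highest quality but the lowest traffic, so it surfaces first as a candidate for better internal linking.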
Abstract
Purpose
Over the past two decades, online booking has become a predominant distribution channel of tourism products. As online sales have become more important, understanding booking conversion behavior remains a critical topic in the tourism industry. The purpose of this study is to model airline search and booking activities of anonymous visitors.
Design/methodology/approach
This study proposes a stochastic approach to explicitly model dynamics of airline customers’ search, revisit and booking activities. A Markov chain model simultaneously captures transition probabilities and the timing of search, revisit and booking decisions. The suggested model is demonstrated on clickstream data from an airline booking website.
Findings
Empirical results show that low prices (captured as discount rates) increase not only booking propensities but also overall stickiness to a website, raising search and revisit probabilities. From the timing of search and revisit decisions, the author observes a learning effect on customers’ browsing time and heterogeneous intentions across website visits.
Originality/value
This study presents both theoretical and managerial implications of online search and booking behavior for airline and tourism marketing. The dynamic Markov chain model provides a systematic framework to predict online search, revisit and booking conversion and the time of the online activities.
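The dynamics described above can be illustrated with a small discrete-time Markov chain over search, revisit, book and leave states. The transition probabilities below are illustrative placeholders, not the paper's estimates, and the sketch omits the timing component and covariates such as discount rates that the actual model captures:

```python
# States: search, revisit, book, leave ('book' and 'leave' are absorbing).
# Transition probabilities are illustrative, not the paper's estimates.
P = {
    "search":  {"search": 0.4, "revisit": 0.3, "book": 0.1, "leave": 0.2},
    "revisit": {"search": 0.3, "revisit": 0.2, "book": 0.2, "leave": 0.3},
    "book":    {"book": 1.0},
    "leave":   {"leave": 1.0},
}

def booking_probability(start="search", steps=50):
    """Propagate the state distribution forward and return the mass
    absorbed in the 'book' state (the booking conversion probability)."""
    dist = {s: 0.0 for s in P}
    dist[start] = 1.0
    for _ in range(steps):
        nxt = {s: 0.0 for s in P}
        for s, mass in dist.items():
            for t, p in P[s].items():
                nxt[t] += mass * p
        dist = nxt
    return dist["book"]

print(round(booking_probability(), 3))  # ~0.359 under these assumed rates
```

Raising the book-transition probabilities for discounted fares in such a chain reproduces the qualitative finding that low prices increase both conversion and revisit behaviour.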
Wan‐Shiou Yang and Yuan‐Shuenn Jan
Abstract
Purpose
Web content has been widely used for personalised webpage recommendation. Despite its popularity, the content‐based approach regards a webpage simply as a piece of text, thereby often resulting in less authoritative webpage recommendations. This paper aims to propose novel approaches that utilise other sources of information pertaining to webpages to facilitate the automatic construction of an authoritative web recommender system.
Design/methodology/approach
In this research, four approaches that exploit hyperlink structure, web content and web‐usage logs for making recommendations are proposed. The proposed approaches have been implemented as a prototype system, called the authoritative web recommender (AWR) system. An evaluation using the web‐usage logs and the corresponding pages of a university web site was performed.
Findings
The results from the evaluations using empirical data demonstrate that the four proposed approaches outperform the traditional content‐only approach.
Originality/value
This paper describes a novel way to combine information retrieval, usage mining and hyperlink structure analysis techniques to find relevant and authoritative webpages for recommendation.
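The combination of information sources this abstract describes can be sketched as a weighted score fusion. The weights, score names and example values are illustrative assumptions, not the AWR system's actual formulation:

```python
def combined_score(page, weights=(0.4, 0.3, 0.3)):
    """page: dict of per-source relevance scores in [0, 1]:
    'content'   - content-based similarity to the user's interest,
    'authority' - hyperlink-based authority (e.g. HITS/PageRank-style),
    'usage'     - co-occurrence strength mined from web-usage logs.
    Weights are illustrative, not the AWR system's values.
    """
    w_c, w_a, w_u = weights
    return (w_c * page["content"]
            + w_a * page["authority"]
            + w_u * page["usage"])

def recommend(pages, k=2):
    """Rank candidate pages by the fused score and return the top k."""
    return sorted(pages, key=lambda name: combined_score(pages[name]),
                  reverse=True)[:k]

pages = {
    "intro.html": {"content": 0.9, "authority": 0.2, "usage": 0.3},
    "paper.html": {"content": 0.6, "authority": 0.9, "usage": 0.7},
    "news.html":  {"content": 0.3, "authority": 0.4, "usage": 0.8},
}
print(recommend(pages))
```

In this toy example, "paper.html" wins despite a lower content score because its hyperlink authority and usage evidence are strong, which is exactly the effect the authoritative approaches aim for over content-only recommendation.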