Search results

1 – 3 of 3
Article
Publication date: 31 May 2019

Melanie A. Kilian, Markus Kattenbeck, Matthias Ferstl, Bernd Ludwig and Florian Alt


Abstract

Purpose

Performing tasks in public spaces can be demanding due to task complexity. Systems that can keep track of the current task state may help their users to successfully fulfill a task. These systems, however, require major implementation effort. The purpose of this paper is to investigate if and how a mobile information assistant with only basic task-tracking capabilities can support users by employing a least-effort approach. That is, the authors examine whether such a system can affect the way a workflow in public space is perceived.

Design/methodology/approach

The authors implement and test AIRBOT, a mobile chatbot application that can assist air passengers in successfully boarding a plane. The authors apply a three-tier approach and, first, conduct expert and passenger interviews to understand the workflow and the information needs occurring therein; second, the authors implement a mobile chatbot application providing minimum task-tracking capabilities to support travelers by providing boarding-relevant information in a proactive manner. Finally, the authors evaluate this application by means of an in situ study (n = 101 passengers) at a major European airport.

Findings

The authors provide evidence that basic task-tracking capabilities are sufficient to affect users' task perception. AIRBOT is able to decrease the perceived workload that airport services impose on users. However, it has a negative impact on satisfaction with the non-personalized information offered by the airport.

Originality/value

The study shows that the number of features is not the most important factor in successfully providing assistance in public space workflows. The study can, moreover, serve as a blueprint for designing task-based assistants in other contexts.

Details

Aslib Journal of Information Management, vol. 71, no. 3
Type: Research Article
ISSN: 2050-3806


Article
Publication date: 20 March 2019

Markus Kattenbeck and David Elsweiler


Abstract

Purpose

It is well known that information behaviour can be biased in countless ways and that users of web search engines have difficulty in assessing the credibility of results. Yet, little is known about how search engine result page (SERP) listings are used to judge credibility and in which ways, if any, such judgements are biased. The paper aims to discuss these issues.

Design/methodology/approach

Two studies are presented. The first collects data by means of a controlled, web-based user study (N=105). Studying judgements for three controversial topics, the paper examines the extent to which users agree on credibility, the extent to which judgements relate to those applied by objective assessors and to what extent judgements can be predicted by the users’ position on and prior knowledge of the topic. A second, qualitative study (N=9) utilises the same setup; however, transcribed think-aloud protocols provide an understanding of the cues participants use to estimate credibility.

Findings

The first study reveals that users are very uncertain when assessing credibility, and their impressions often diverge from those of objective judges who have fact-checked the sources. Little evidence is found indicating that judgements are biased by prior beliefs or knowledge, but differences are observed in the accuracy of judgements across topics. Qualitative analysis of the participants' think-aloud transcripts reveals ten categories of cues, which participants used to determine the credibility of results. Despite the short listings, participants utilised diverse cues for the same listings. Even when the same cues were identified and utilised, different participants often interpreted them differently. Example transcripts show how participants reach varying conclusions, illustrate common mistakes made and highlight problems with existing SERP listings.

Originality/value

This study offers a novel perspective on how the credibility of SERP listings is interpreted when assessing search results. Especially striking is how the same short snippets provide diverse informational cues and how these cues can be interpreted differently depending on the user and his or her background. This finding is significant for how search engine results should be presented and opens up the new challenge of discovering technological solutions that allow users to better judge the credibility of information sources on the web.

Details

Aslib Journal of Information Management, vol. 71, no. 3
Type: Research Article
ISSN: 2050-3806


Article
Publication date: 13 June 2019

Joachim Griesbaum, Dirk Lewandowski and Isabella Peters



Details

Aslib Journal of Information Management, vol. 71, no. 3
Type: Research Article
ISSN: 2050-3806
