Search results
1 – 5 of 5
Abstract
Purpose
This paper aims to give an overview of the history and evolution of commercial search engines. It traces the development of search engines from their early days to their current form as complex technology-powered systems that offer a wide range of features and services.
Design/methodology/approach
In recent years, advancements in artificial intelligence (AI) technology have led to the development of AI-powered chat services. This study explores official announcements and releases of AI-powered chat services by three major search engines: Google, Bing and Baidu.
Findings
Three major players in the search engine market, Google, Microsoft and Baidu, have started to integrate AI chat into their search results. Google has released Bard, later upgraded to Gemini, a LaMDA-powered conversational AI service. Microsoft has launched Bing Chat, later renamed Copilot, a search experience powered by OpenAI’s GPT models. The largest search engine in China, Baidu, has released a similar service called Ernie. New AI-based search engines are also briefly described.
Originality/value
This paper discusses the strengths and weaknesses of traditional algorithm-powered search engines and of modern search with generative AI support, as well as the possibilities of merging them into one service. This study stresses the types of queries submitted to search engines, users’ habits of using search engines and the technological advantage of search engine infrastructure.
Artur Strzelecki and Mariia Rizun
This paper aims to consider the question of changes brought to consumers’ trust and security issues by the implementation of the General Data Protection Regulation (GDPR) in…
Abstract
Purpose
This paper aims to consider the question of changes brought to consumers’ trust and security issues by the implementation of the General Data Protection Regulation (GDPR) in electronic commerce.
Design/methodology/approach
Online shopping policies in Poland and Ukraine are compared from the perspective of four factors: application of terms of service and privacy policy, usage of online payment systems, presence in price comparison engines and grade of Secure Sockets Layer (SSL) security certificates. The comparison is conducted within the framework of three research questions (complemented by eight hypotheses) set to reveal whether: policies of personal data protection and server security for online stores in both countries are the same; all online stores in both countries obey the existing e-commerce rules; and e-commerce policies in the two countries differ significantly. The sample for analysis contains 40 Polish and 40 Ukrainian online stores, representing four industries, namely, electronics, entertainment, fashion and goods for children.
Findings
The research revealed major differences in the privacy policies of the two countries, caused mainly by the absence of the GDPR in Ukraine. It also disclosed much stronger cooperation between online stores and price comparison engines in Poland than in Ukraine. At the same time, the results show that server security in both countries is at the same, rather high, level and that online stores use transparent and safe methods of online payment.
Research limitations/implications
This research opens the way to further, expanded studies covering more countries and larger scopes of data. Its main limitation is that the influence of the GDPR is studied in only two countries, not in all countries where it has been implemented.
Originality/value
This research contributes to the security and trust perspectives by analyzing the situation in two countries: an EU member (Poland) and a non-EU country (Ukraine). The value of exploring Ukrainian e-commerce lies in understanding how online stores function without implementing the GDPR. Observation of shopbot application supports an important conclusion about the necessity for online stores to cooperate with such services. It was also revealed that consumers’ trust in both countries depends greatly on the payment methods applied by an online store and on the ease of use of these methods.
Artur Strzelecki and Andrej Miklosik
The landscape of search engine usage has evolved since the last known data were used to calculate click-through rate (CTR) values. The objective was to provide a replicable method…
Abstract
Purpose
The landscape of search engine usage has evolved since the last known data were used to calculate click-through rate (CTR) values. The objective was to provide a replicable method for accessing data from the Google search engine using programmatic access and calculating CTR values from the retrieved data to show how the CTRs have changed since the last studies were published.
Design/methodology/approach
In this study, the authors present the estimated CTR values in organic search results based on actual clicks and impressions data, and establish a protocol for collecting this data using Google programmatic access. For this study, the authors collected data on 416,386 clicks, 31,648,226 impressions and 8,861,416 daily queries.
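The CTR estimation described above can be illustrated in a minimal sketch. This is not the authors’ actual code; it assumes each record is a (position, clicks, impressions) tuple, such as those retrievable through Google Search Console’s programmatic access, and the function name `ctr_by_position` is hypothetical:

```python
from collections import defaultdict

def ctr_by_position(rows):
    """Aggregate clicks and impressions per ranking position,
    then compute CTR = clicks / impressions for each position."""
    clicks = defaultdict(int)
    impressions = defaultdict(int)
    for position, c, i in rows:
        clicks[position] += c
        impressions[position] += i
    return {p: clicks[p] / impressions[p]
            for p in impressions if impressions[p] > 0}

# Hypothetical daily rows: (position, clicks, impressions)
rows = [(1, 90, 1000), (1, 96, 1000), (2, 58, 1000), (3, 31, 1000)]
print(ctr_by_position(rows))
```

Summing clicks and impressions before dividing, rather than averaging per-day CTRs, weights each day by its impression volume.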
Findings
The results show that CTRs have decreased from previously reported values in both academic research and industry benchmarks. The estimates indicate that the top-ranked result in Google's organic search results features a CTR of 9.28%, followed by 5.82% and 3.11% for positions two and three, respectively. The authors also demonstrate that CTRs vary across various types of devices. On desktop devices, the CTR decreases steadily with each lower ranking position. On smartphones, the CTR starts high but decreases rapidly, with an unexpected increase from position 13 onwards. Tablets have the lowest and most variable CTR values.
Practical implications
The theoretical implications include the generation of a current dataset on search engine results and user behavior, made available to the research community; the creation of a unique methodology for generating new datasets; and the presentation of updated information on CTR trends. The managerial implications include establishing the need for businesses to optimize other forms of Google search results in addition to organic text results, and the possibility of applying this study's methodology to determine CTRs for their own websites.
Originality/value
This study provides a novel method to access real CTR data and estimates current CTRs for top organic Google search results, categorized by device.
Petros Kostagiolas, Artur Strzelecki, Christina Banou and Charilaos Lavranos
The purpose of this paper is to discuss Google visibility of five large STM publishers (Elsevier, Emerald Publishing, Springer, Taylor & Francis and John Wiley & Sons) with the…
Abstract
Purpose
The purpose of this paper is to discuss the Google visibility of five large STM publishers (Elsevier, Emerald Publishing, Springer, Taylor & Francis and John Wiley & Sons), with the aim to investigate various current issues and challenges of the publishing industry regarding discoverability, promotion strategies, competition, information-seeking behavior and the impact of new information technologies on scholarly information.
Design/methodology/approach
The study is based on data retrieved through two commercial online tools specialized in retrieving and saving data on a domain's visibility in search engines: SEMrush (“SEMrush – Online Visibility Management Platform”) and Ahrefs (“Ahrefs – SEO Tools & Resources To Grow Your Search Traffic”). All data gathering took place between April 15 and May 29, 2019.
Findings
The study demonstrates the significance of Google visibility in the STM publishing industry, taking into consideration current issues and challenges of the publishing activity.
Originality/value
This is a “new” trend, certainly of great significance in the publishing industry. The research conducted in this paper and the theoretical background offered will contribute to the study of this issue.
The purpose of this paper is to clarify how many removal requests are made, how often, and who makes these requests, as well as which websites are reported to search engines so…
Abstract
Purpose
The purpose of this paper is to clarify how many removal requests are made, how often, and who makes these requests, as well as which websites are reported to search engines so they can be removed from the search results.
Design/methodology/approach
The paper undertakes a deep analysis of more than 3.2bn pages removed from Google’s search results at the request of reporting organizations from 2011 to 2018 and over 460m pages removed from Bing’s search results at the request of reporting organizations from 2015 to 2017. The paper focuses on pages that belong to the .pl country-code top-level domain (ccTLD).
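Restricting the analysis to the .pl ccTLD amounts to filtering reported URLs by their host suffix. A minimal sketch (the URLs below are hypothetical, and the function name `is_pl_cctld` is illustrative):

```python
from urllib.parse import urlparse

def is_pl_cctld(url):
    """Return True when the URL's host falls under the .pl country-code TLD."""
    host = urlparse(url).hostname or ""
    return host.endswith(".pl")

# Hypothetical removal-request URLs: only the first two belong to .pl
urls = [
    "https://example.pl/page",
    "https://shop.example.pl/item",
    "https://example.com/pl/page",  # 'pl' appears in the path, host is .com
]
print([u for u in urls if is_pl_cctld(u)])
```

Matching on the parsed hostname rather than the raw string avoids counting URLs that merely contain “pl” in the path.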
Findings
Although the number of requests to remove data from search results has been growing year on year, fewer URLs have been reported in recent years. Some of the requests are, however, unjustified and are rejected by teams representing the search engines. In terms of reporting copyright violations, one company in particular stands out (AudioLock.Net), accounting for 28.1 percent of all reports sent to Google (the top ten companies combined were responsible for 61.3 percent of the total number of reports).
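The concentration of reports among a few organizations can be computed as each reporter’s share of the total report count. A minimal sketch with hypothetical counts (only the AudioLock.Net name comes from the findings; the other names, the counts and the function name `reporter_shares` are illustrative):

```python
from collections import Counter

def reporter_shares(report_counts, n=10):
    """Given per-organization report counts, return the n most
    frequent reporters with their share of all reports."""
    counts = Counter(report_counts)
    total = sum(counts.values())
    return [(org, count / total) for org, count in counts.most_common(n)]

# Hypothetical counts, loosely mirroring the finding that one
# reporter dominates the copyright-violation notices
counts = {"AudioLock.Net": 281, "OrgB": 120, "OrgC": 95}
print(reporter_shares(counts, n=3))
```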
Research limitations/implications
As not every request can be published, the study is based only on what is publicly available. Also, the data assigned to Poland are based only on the ccTLD domain name (.pl); other domain extensions used by Polish internet users were not considered.
Originality/value
This is the first global analysis of data from transparency reports published by search engine companies, as prior research has been based on specific notices.