Search results

1 – 10 of over 1000
Article
Publication date: 1 December 2023

Andreas Skalkos, Aggeliki Tsohou, Maria Karyda and Spyros Kokolakis

Search engines, the most popular online services, are associated with several concerns. Users are concerned about the unauthorized processing of their personal data, as well as…

Abstract

Purpose

Search engines, the most popular online services, are associated with several concerns. Users are concerned about the unauthorized processing of their personal data, as well as about search engines keeping track of their search preferences. Various search engines have been introduced to address these concerns, claiming that they protect users’ privacy. The authors call these search engines privacy-preserving search engines (PPSEs). This paper aims to investigate the factors that motivate search engine users to use PPSEs.

Design/methodology/approach

This study adopted protection motivation theory (PMT) and associated its constructs with subjective norms to build a comprehensive research model. The authors tested the research model using survey data from 830 search engine users worldwide.

Findings

The results confirm the interpretive power of PMT in privacy-related decision-making and show that users are more inclined to take protective measures when they perceive data abuse as a more severe risk and consider themselves more vulnerable to it. Furthermore, the results highlight the importance of subjective norms in predicting and determining PPSE use. Because subjective norms refer to perceived social influence from important others to engage in or refrain from protective behavior, the authors reveal that recommendations from people whom users consider important motivate them to take protective measures and use PPSEs.

Research limitations/implications

Despite its interesting results, this research also has some limitations. First, because the survey was conducted online, the study environment was less controlled: participants may have been disrupted or affected, for example, by the presence of others or by background noise during the session. Second, some of the survey items could have been misinterpreted by respondents, as they did not have access to clarifications that a researcher could have provided. Third, another limitation concerns the use of the Amazon Mechanical Turk tool: according to Paolacci and Chandler (2014), MTurk workers are more educated, younger and less religiously and politically diverse than the US population. Fourth, actual use of PPSEs is self-reported by participants, which could introduce bias, because internet users' statements have been shown to contrast with their actions in real life or in experimental scenarios (Berendt et al., 2005; Jensen et al., 2005). Moreover, some limitations of this study emerge from the use of PMT as its background theory. PMT identifies the main factors that affect protection motivation, but other environmental and cognitive factors can also play a significant role in how an individual's attitude is formed. As Rogers (1975) argued, PMT does not attempt to specify all of the possible factors in a fear appeal that may affect persuasion, but rather offers a systematic exposition of a limited set of components and cognitive mediational processes that may account for a significant portion of the variance in acceptance. In addition, as Tanner et al. (1991) argue, PMT's assumption that subjects have not already developed a coping mechanism is one of its limitations. Finally, the sample does not include users from China, one of the world's most populous countries: DuckDuckGo is blocked in China, so it was not feasible to include Chinese users in this study.

Practical implications

The proposed model and, specifically, the subjective norms construct proved successful in predicting PPSE use. This study demonstrates the need for PPSEs to exhibit and advertise the technology and measures they use to protect users' privacy. This will contribute to the effort to persuade internet users to use these tools.

Social implications

This study sought to explore the privacy attitudes of search engine users through PMT and the association of its constructs with subjective norms. It used PMT to elucidate the perceptions that motivate users toward privacy-adoption behavior, as well as how these perceptions influence the type of search engine they use. This research is a first step toward gaining a better understanding of the processes that drive people's motivation to protect, or not to protect, their privacy online by using PPSEs. At the same time, the study shows search engine vendors that users need to be persuaded not only of vendors' privacy policies but also through new diffusion strategies that could enhance the use of PPSEs.

Originality/value

This research is a first step toward gaining a better understanding of the processes that drive people's motivation to protect, or not to protect, their privacy online by using PPSEs.

Details

Information & Computer Security, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2056-4961

Article
Publication date: 4 April 2024

Artur Strzelecki

This paper aims to give an overview of the history and evolution of commercial search engines. It traces the development of search engines from their early days to their current…

Abstract

Purpose

This paper aims to give an overview of the history and evolution of commercial search engines. It traces the development of search engines from their early days to their current form as complex technology-powered systems that offer a wide range of features and services.

Design/methodology/approach

In recent years, advancements in artificial intelligence (AI) technology have led to the development of AI-powered chat services. This study explores official announcements and releases of three major search engines, Google, Bing and Baidu, of AI-powered chat services.

Findings

Three major players in the search engine market, Google, Microsoft and Baidu, have started to integrate AI chat into their search results. Google released Bard, later upgraded to Gemini, a LaMDA-powered conversational AI service. Microsoft launched Bing Chat, later renamed Copilot, a search engine powered by OpenAI's GPT. The largest search engine in China, Baidu, released a similar service called Ernie. New AI-based search engines are also briefly described.

Originality/value

This paper discusses the strengths and weaknesses of traditional, algorithm-powered search engines and of modern search with generative AI support, and the possibilities of merging them into one service. The study highlights the types of queries submitted to search engines, users' habits in using search engines and the technological advantage of search engine infrastructure.

Details

Library Hi Tech News, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 18 March 2024

Raj Kumar Bhardwaj, Ritesh Kumar and Mohammad Nazim

This paper evaluates the precision of four metasearch engines (MSEs) – DuckDuckGo, Dogpile, Metacrawler and Startpage, to determine which metasearch engine exhibits the highest…

Abstract

Purpose

This paper evaluates the precision of four metasearch engines (MSEs) – DuckDuckGo, Dogpile, Metacrawler and Startpage, to determine which metasearch engine exhibits the highest level of precision and to identify the metasearch engine that is most likely to return the most relevant search results.

Design/methodology/approach

The research is divided into two parts: the first phase involves four queries categorized into two segments (4-Q-2-S), while the second phase includes six queries divided into three segments (6-Q-3-S). The queries vary in complexity, falling into three types: simple, phrase and complex. The precision, the average precision and the presence of duplicates across all the evaluated metasearch engines are determined.
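The precision measure described above can be sketched in a few lines. This is a minimal, hypothetical illustration, assuming precision is the share of retrieved results judged relevant and average precision is the mean over queries; the relevance judgments below are invented, not the study's data.

```python
# Hypothetical sketch of the precision measure: 1 marks a retrieved
# result judged relevant, 0 one judged not relevant.

def precision(relevant_flags):
    """Fraction of retrieved results judged relevant."""
    return sum(relevant_flags) / len(relevant_flags)

def average_precision(per_query_flags):
    """Mean precision across a set of queries."""
    return sum(precision(q) for q in per_query_flags) / len(per_query_flags)

# Top-10 relevance judgments for three illustrative queries.
queries = [
    [1, 1, 1, 0, 1, 1, 1, 1, 1, 1],  # simple query
    [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],  # phrase query
    [1, 0, 1, 1, 0, 1, 1, 1, 0, 1],  # complex query
]
print(round(average_precision(queries), 2))
```

With these invented judgments, the sketch yields an average precision of 0.8 across the three queries.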

Findings

The study clearly demonstrated that Startpage returned the most relevant results and achieved the highest precision (0.98) among the four MSEs, while DuckDuckGo exhibited consistent performance across both phases of the study.

Research limitations/implications

The study only evaluated four metasearch engines, which may not be representative of all available metasearch engines. Additionally, a limited number of queries were used, which may not be sufficient to generalize the findings to all types of queries.

Practical implications

The findings of this study can be valuable for accreditation agencies in managing duplicates, improving their search capabilities and obtaining more relevant and precise results. These findings can also assist users in selecting the best metasearch engine based on precision rather than interface.

Originality/value

The study is the first of its kind to evaluate these four metasearch engines; no similar study has previously measured their performance.

Details

Performance Measurement and Metrics, vol. 25 no. 1
Type: Research Article
ISSN: 1467-8047

Article
Publication date: 10 January 2024

Artur Strzelecki and Andrej Miklosik

The landscape of search engine usage has evolved since the last known data were used to calculate click-through rate (CTR) values. The objective was to provide a replicable method…

Abstract

Purpose

The landscape of search engine usage has evolved since the last known data were used to calculate click-through rate (CTR) values. The objective was to provide a replicable method for accessing data from the Google search engine using programmatic access and calculating CTR values from the retrieved data to show how the CTRs have changed since the last studies were published.

Design/methodology/approach

In this study, the authors present the estimated CTR values in organic search results based on actual clicks and impressions data, and establish a protocol for collecting this data using Google programmatic access. For this study, the authors collected data on 416,386 clicks, 31,648,226 impressions and 8,861,416 daily queries.
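The CTR calculation behind these figures can be sketched as follows. This is an illustrative sketch only: CTR per ranking position is total clicks divided by total impressions at that position, and the sample rows below are made up to echo the study's reported values, not drawn from its dataset (real data would come from Google's programmatic access).

```python
# Illustrative CTR-by-position calculation over (position, clicks,
# impressions) rows; the rows are invented examples.
from collections import defaultdict

def ctr_by_position(rows):
    """rows: iterable of (position, clicks, impressions) tuples."""
    clicks = defaultdict(int)
    impressions = defaultdict(int)
    for pos, c, i in rows:
        clicks[pos] += c
        impressions[pos] += i
    return {pos: clicks[pos] / impressions[pos] for pos in impressions}

sample = [
    (1, 928, 10_000),  # position 1
    (2, 582, 10_000),  # position 2
    (3, 311, 10_000),  # position 3
]
for pos, ctr in sorted(ctr_by_position(sample).items()):
    print(f"position {pos}: {ctr:.2%}")
```

On this made-up sample the positions come out at 9.28%, 5.82% and 3.11%, matching the figures reported in the abstract.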

Findings

The results show that CTRs have decreased from previously reported values in both academic research and industry benchmarks. The estimates indicate that the top-ranked result in Google's organic search results features a CTR of 9.28%, followed by 5.82 and 3.11% for positions two and three, respectively. The authors also demonstrate that CTRs vary across various types of devices. On desktop devices, the CTR decreases steadily with each lower ranking position. On smartphones, the CTR starts high but decreases rapidly, with an unprecedented increase from position 13 onwards. Tablets have the lowest and most variable CTR values.

Practical implications

The theoretical implications include the generation of a current dataset on search engine results and user behavior, made available to the research community; the creation of a unique methodology for generating new datasets; and the presentation of updated information on CTR trends. The managerial implications include establishing the need for businesses to optimize other forms of Google search results in addition to organic text results, and the possibility of applying this study's methodology to determine CTRs for their own websites.

Originality/value

This study provides a novel method to access real CTR data and estimates current CTRs for top organic Google search results, categorized by device.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 29 November 2023

Emine Sendurur and Sonja Gabriel

This study aims to discover how domain familiarity and language affect the cognitive load and the strategies applied for the evaluation of search engine results pages (SERP).

Abstract

Purpose

This study aims to discover how domain familiarity and language affect the cognitive load and the strategies applied for the evaluation of search engine results pages (SERP).

Design/methodology/approach

This study used an experimental research design based on repeated measures. Each student was given four SERPs varying in two dimensions: language and content. The repeated measures collected from each participant were the criteria students used to decide on the three best links within the SERP, the reasoning behind their selection and their perceived cognitive load for the given task.

Findings

The evaluation criteria changed according to the language and task type. The cognitive load was reported higher when the content was presented in English or when the content was academic. Regarding the search strategies, a majority of students trusted familiar sources or relied on keywords they found in the short description of the links. A qualitative analysis showed that students can be grouped into different types according to the reasons they stated for their choices. Source seeker, keyword seeker and specific information seeker were the most common types observed.

Originality/value

This study has an international scope with regard to data collection. Moreover, the tasks and findings contribute to the literature on information literacy.

Details

The Electronic Library, vol. 42 no. 2
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 2 May 2023

Carlos Lopezosa, Dimitrios Giomelakis, Leyberson Pedrosa and Lluís Codina

This paper constitutes the first academic study to be made of Google Discover as applied to online journalism.

Abstract

Purpose

This paper constitutes the first academic study to be made of Google Discover as applied to online journalism.

Design/methodology/approach

The study involved conducting 61 semi-structured interviews with experts representative of a range of professional profiles within the fields of journalism and search engine optimization (SEO) in Brazil, Spain and Greece. Based on the data collected, the authors created five semantic categories and compared the experts' perceptions to detect common response patterns.

Findings

The results of this study confirm the existence of different degrees of convergence and divergence in the opinions expressed in these three countries regarding the main dimensions of Google Discover, including specific strategies using the feed, its impact on web traffic, its impact on both quality and sensationalist content, and the degree of responsibility shown by the digital media in its use. The authors also propose a set of best practices that journalists and digital media in-house web visibility teams should take into account to increase their probability of appearing in Google Discover. To this end, the authors consider strategies in the following areas of application: topics, different aspects of publication, elements of user experience, strategic analysis, and diffusion and marketing.

Originality/value

Although research exists on the application of SEO to different areas, there have not, to date, been any studies examining Google Discover.

Peer review

The peer-review history for this article is available at: https://publons.com/publon/10.1108/OIR-10-2022-0574

Details

Online Information Review, vol. 48 no. 1
Type: Research Article
ISSN: 1468-4527

Open Access
Article
Publication date: 30 October 2023

Koraljka Golub, Xu Tan, Ying-Hsang Liu and Jukka Tyrkkö

This exploratory study aims to help contribute to the understanding of online information search behaviour of PhD students from different humanities fields, with a focus on…

Abstract

Purpose

This exploratory study aims to help contribute to the understanding of online information search behaviour of PhD students from different humanities fields, with a focus on subject searching.

Design/methodology/approach

The methodology is based on a semi-structured interview within which the participants are asked to conduct both a controlled search task and a free search task. The sample comprises eight PhD students in several humanities disciplines at Linnaeus University, a medium-sized Swedish university, in 2020.

Findings

Most humanities PhD students in the study have received training in information searching, but it has been too basic. Most rely on web search engines such as Google and Google Scholar for searching publications, and on the university's discovery system for known-item searching. Because these systems do not rely on controlled vocabularies, the participants often struggle with too many retrieved documents that are not relevant. Most only rarely or never use disciplinary bibliographic databases. The controlled search task showed some benefits of using controlled vocabularies in the disciplinary databases, but incomplete synonym or concept coverage, as well as user-unfriendly search interfaces, present hindrances.

Originality/value

The paper illuminates an often-forgotten but pervasive challenge of subject searching, especially for humanities researchers. It demonstrates difficulties and shows how most PhD students have missed finding an important resource in their research. It calls for the need to reconsider training in information searching and the need to make use of controlled vocabularies implemented in various search systems with usable search and browse user interfaces.

Article
Publication date: 27 October 2022

Maryam Tavosi and Nader Naghshineh

This study aims to present a comparative study of university library websites (in the USA) from the standpoint of “Google SEO” and “Accessibility”. Furthermore, correlation…

Abstract

Purpose

This study aims to present a comparative study of university library websites in the USA from the standpoint of "Google SEO" and "Accessibility". Furthermore, a correlation analysis between the two was performed.

Design/methodology/approach

Adopting a webometric approach, the present study analyzed university library websites in the USA. The Lighthouse add-on for the Google Chrome browser was used as the data collection tool, automated through a program written in the Bash language (May 2020). The data analysis tools used were "Libre-Office-Calc", "SPSS22" and "Excel".

Findings

Across all 81 university library websites in the USA, the Google search engine optimization (SEO) score was observed to be higher than 60 (total score = 100). The accessibility rank lay between 0.56 and 1 (total score = 1). A weak correlation between "SEO score" and "accessibility rank" (p-value = 0.02, Spearman correlation coefficient = 0.345) was observed. This weak relationship can be explained by the several components that affect Google's SEO score, only one of which is a high "accessibility rank".
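The Spearman rank correlation reported above can be computed from first principles. This is a minimal, self-contained sketch, assuming average ranks for ties and Pearson's r on the ranks; the SEO scores and accessibility ranks below are invented for illustration only, not the study's data.

```python
# Minimal Spearman rank correlation: rank both variables (ties share
# their mean rank), then compute Pearson's r on the ranks.

def _ranks(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

seo = [62, 75, 81, 90, 68, 95, 70, 88]                   # illustrative SEO scores
acc = [0.60, 0.85, 0.70, 0.95, 0.90, 0.98, 0.56, 0.80]   # illustrative accessibility ranks
print(round(spearman(seo, acc), 3))
```

A perfectly monotone pairing yields a coefficient of 1 (or −1 when reversed); weakly related data such as the invented sample above lands somewhere in between, as in the study's reported 0.345.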

Practical implications

Given the increasing automation of library processes, SEO tools can help libraries in achieving their digital marketing goals.

Originality/value

Accurate measurements of the Google SEO score and accessibility rank for university library websites in the USA were obtained with the Lighthouse add-on for the Google Chrome browser. Moreover, data extraction via a single computer program, without direct human intervention, is an innovation of this study.

Details

Information Discovery and Delivery, vol. 51 no. 2
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 12 July 2022

Karol Król and Dariusz Zdonek

Rural tourism facilities in Poland were very keen on amateur websites to promote their hospitality services from 2000 to 2018. In most cases, the websites were nonprofessional…

Abstract

Purpose

Rural tourism facilities in Poland relied heavily on amateur websites to promote their hospitality services from 2000 to 2018. In most cases, the websites were nonprofessional, hosted on free servers and made by family members or friends of the owners. After search engine algorithms changed in 2015–2019, the websites started to go extinct on a large scale; they were deleted and often replaced with a more modern design and a commercial domain. These resources offered a rare opportunity to gain insight into rural tourism, rural change and socioeconomic and cultural phenomena.

Design/methodology/approach

The paper’s objective is to demonstrate with an analysis of archived Polish rural tourism websites that digital cultural artefacts are generated in rural areas. The study was an analysis of selected development attributes of rural tourism websites found in the Internet Archive. The analysis involved those attributes that are important for determining whether a website or content can be considered digital cultural heritage assets.

Findings

The conclusions demonstrate that rural digital cultural heritage is a set of digital artefacts created in rural areas and bearing their characteristics. Rural digital artefacts are records of ICT, infrastructure, environmental, cultural and socioeconomic changes.

Originality/value

The “digital assets” of rural areas are yet to be discussed in the context of rural cultural heritage, as a set of artefacts created in these areas and characteristic of them.

Details

Global Knowledge, Memory and Communication, vol. 73 no. 3
Type: Research Article
ISSN: 2514-9342

Article
Publication date: 26 March 2024

Wondwesen Tafesse and Anders Wien

ChatGPT is a versatile technology with practical use cases spanning many professional disciplines including marketing. Being a recent innovation, however, there is a lack of…

Abstract

Purpose

ChatGPT is a versatile technology with practical use cases spanning many professional disciplines, including marketing. Being a recent innovation, however, there is a lack of academic insight into its tangible applications in the marketing realm. To address this gap, the current study explores ChatGPT's application in marketing by mining social media data. Additionally, the study employs the stages-of-growth model to assess the current state of ChatGPT's adoption in marketing organizations.

Design/methodology/approach

The study collected tweets related to ChatGPT and marketing using a web-scraping technique (N = 23,757). A topic model was trained on the tweet corpus using latent Dirichlet allocation to delineate ChatGPT’s major areas of applications in marketing.

Findings

The topic model produced seven latent topics that encapsulated ChatGPT's major areas of application in marketing, including content marketing, digital marketing, search engine optimization, customer strategy, B2B marketing and prompt engineering. Further analyses reveal the popularity of and interest in these topics among marketing practitioners.

Originality/value

The findings contribute to the literature by offering empirical evidence of ChatGPT’s applications in marketing. They demonstrate the core use cases of ChatGPT in marketing. Further, the study applies the stages-of-growth model to situate ChatGPT’s current state of adoption in marketing organizations and anticipate its future trajectory.

Details

Marketing Intelligence & Planning, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0263-4503
