Search results

1 – 10 of 68
Book part
Publication date: 11 December 2023

Charlie Gregson and Steve Little

Abstract

Sherwood Forest is a mosaic of heritage, habitats and stakeholder relations. Scheme Manager Steve Little and Senior Lecturer in Museum Studies Charlie Gregson share their story of developing a working methodology in this complex landscape. By evaluating their relationship through the lenses of knowledge brokering and collaborative mentoring, they identify six themes relating to how their working environment evolved and functioned. The discussion finds significant overlap between collaborative mentoring, knowledge exchange (KE) and the attainment of the Sustainable Development Goals in their ability to enable more nuanced and holistic changemaking that is contextualized in a deep understanding of need.

Knowledge brokering, a process by which an individual (or an organization) supports the transfer of research evidence into policy and practice, can improve evidence-based decision-making through KE but is, on the whole, poorly defined in academia (Cvitanovic et al., 2017). This chapter seeks to contribute to the ‘necessary and urgent’ need for evaluation of KE in practice (Rycroft-Smith, 2022) by providing edited snippets of dialogue, analysis and key learning points. It is intended as inspiration and encouragement for academics, professionals, students and volunteers developing human-centric projects or design-thinking methodologies between universities and external partners.

Details

Mentoring Within and Beyond Academia
Type: Book
ISBN: 978-1-83797-565-5

Article
Publication date: 12 January 2024

Akmal Mirsadikov, Ali Vedadi and Kent Marett

Abstract

Purpose

With the widespread use of online communications, users are extremely vulnerable to a myriad of deception attempts. This study aims to extend the literature on deception in computer-mediated communication by investigating whether the manner in which popularity information (PI) is presented and media richness affect users’ judgments.

Design/methodology/approach

This study developed a randomized, within- and 2 × 3 between-subject experimental design and analyzed the main effects of PI and media richness on the imitation magnitude of veracity judges, as well as the effect of their interaction on that magnitude.
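
As an illustration only (not the authors' analysis), a 2 × 3 between-subject design of this kind is typically tested with a two-way ANOVA on the outcome measure. The sketch below uses invented factor levels, hypothetical column names and simulated data to show how the main effects of PI presentation and media richness, and their interaction, on imitation magnitude could be estimated in Python.

```python
# Hypothetical sketch of a 2 x 3 between-subject analysis; factor levels,
# column names and data are invented, not taken from the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(42)
pi_formats = ["absolute", "relative"]            # assumed 2 ways of presenting PI
media_levels = ["text", "audio", "audiovisual"]  # assumed 3 levels of media richness

rows = []
for pf in pi_formats:
    for ml in media_levels:
        # 30 simulated participants per cell
        for score in rng.normal(loc=0.5, scale=0.15, size=30):
            rows.append({"pi_format": pf, "media": ml, "imitation": score})
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of PI format and media richness, plus their interaction
model = ols("imitation ~ C(pi_format) * C(media)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```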

Findings

The manner in which PI is presented to people affects their tendency to imitate others. Media richness also had a main effect: text-only messages resulted in greater imitation magnitude than messages viewed in full audiovisual format. The findings also showed an interaction effect between PI and media richness.

Originality/value

The findings of this study contribute to the information systems literature by introducing the notion of herd behavior to judgments of truthfulness and deception. Also, the medium over which PI was presented significantly impacted the magnitude of imitation tendency: PI delivered through a text-only medium led to a greater extent of imitation than PI delivered in full audiovisual format. This suggests that media richness alters the degree of imitating others’ decisions such that the leaner the medium, the greater the expected extent of imitation.

Details

Information & Computer Security, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2056-4961

Article
Publication date: 12 December 2022

Paul Di Gangi, Robin Teigland and Zeynep Yetis

Abstract

Purpose

This research investigates how the value creation interests and activities of different stakeholder groups within one open source software (OSS) project influence the project's development over time.

Design/methodology/approach

The authors conducted a case study of OpenSimulator using textual and thematic analyses of the first four years of the OpenSimulator developer mailing list to identify each stakeholder group and to guide their analysis of those groups' interests and value creation activities over time.

Findings

The analysis revealed that while each stakeholder group was active within the OSS project's development, the different groups possessed complementary interests that enabled the project to evolve. In the formative period, entrepreneurs were interested in the software's strategic direction in the market, academics and SMEs in software functionality, and large firms and hobbyists in software testing. Each group retained its primary interest in the maturing period, with academics and SMEs separating their focus between server-side and client-side usability. The analysis shed light on how the different stakeholder groups overcame tensions amongst themselves and took specific actions to sustain the project.

Originality/value

The authors extend stakeholder theory by reconceptualizing the focal organization and its stakeholders for OSS projects. To date, OSS research has primarily focused on examining one project relative to its marketplace. Using stakeholder theory, the authors identified stakeholder groups within a single OSS project to demonstrate their distinct interests and how these interests influence their value creation activities over time. Collectively, these interests enable the project's long-term development.

Details

Information Technology & People, vol. 36 no. 7
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 26 December 2023

Annette Markham and Riccardo Pronzato

Abstract

Purpose

This paper aims to explore how critical digital and data literacies are facilitated by testing different methods in the classroom, with the ambition of finding a pedagogical framework for prompting sustained critical literacies.

Design/methodology/approach

This contribution draws on a 10-year set of critical pedagogy experiments conducted in Denmark, the USA and Italy, engaging more than 1,500 young adults. The multi-method pedagogical design trains students to conduct self-oriented guided autoethnography, situational analysis, allegorical mapping and critical infrastructure analysis.

Findings

The techniques of guided autoethnography for facilitating sustained data literacy rely on inviting multiple iterations of self-analysis through sequential prompts, whereby students move through stages of observation, critical thinking and critical theory-informed critique around the lived experience of hegemonic data and artificial intelligence (AI) infrastructures.

Research limitations/implications

Critical digital/data literacy researchers should continue to test models for building sustained critique that not only facilitate changes in behavior over time but also enable citizen social science, whereby participants use these autoethnographic techniques with friends and families to build locally relevant critique of the hegemonic power of data/AI infrastructures.

Originality/value

The proposed literacy model adopts a critical theory stance and shows the value of using multiple modes of intervention at micro and macro levels to prompt self-analysis and meta-level reflexivity for learners. This framework places critical theory at the center of the pedagogy to spark more radical stances, which is contended to be an essential step in moving students from attitudinal change to behavioral change.

Details

Information and Learning Sciences, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 10 October 2023

A. Lynn Matthews and Sarah S.F. Luebke

Abstract

Purpose

Moral transgressions committed by person-brands can negatively impact consumers through the transgression’s diagnosticity (severity, centrality and consistency). This paper aims to test how a transgression’s centrality and consistency impact important consumer perceptions and behavioral intentions toward a person-brand, holding the transgression in question constant. Understanding these outcomes is crucial for person-brands seeking to minimize and manage the impact of a given transgression.

Design/methodology/approach

This paper uses three online consumer experiments to manipulate transgression diagnosticity via centrality and consistency and identifies the resulting impact on consumer-brand identification, trustworthiness and consumer digital engagement intentions through PROCESS models.

Findings

High-diagnosticity transgressions lower consumer digital engagement intentions regarding the person-brand and their endorsed products. This effect is serially mediated by consumer-brand identification, as predicted by social identity theory, and by perceived trustworthiness of the person-brand.
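
The paper reports these results via PROCESS models; purely as a hedged illustration of what a serial mediation pathway of this shape (diagnosticity -> identification -> trustworthiness -> engagement) looks like computationally, the sketch below estimates a serial indirect effect with plain OLS regressions and a bootstrap on simulated data. Variable names and data are invented, and the approach is a simplification, not a reproduction of the PROCESS macro.

```python
# Hypothetical sketch of a serial mediation X -> M1 -> M2 -> Y (in the spirit of a
# PROCESS model 6), estimated with plain OLS and a bootstrapped indirect effect.
# Variable names and data are invented; this is not the paper's dataset or macro.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
x = rng.integers(0, 2, n)                      # transgression diagnosticity (0 = low, 1 = high)
m1 = -0.5 * x + rng.normal(0, 1, n)            # consumer-brand identification
m2 = 0.6 * m1 - 0.2 * x + rng.normal(0, 1, n)  # perceived trustworthiness
y = 0.7 * m2 + 0.2 * m1 + rng.normal(0, 1, n)  # digital engagement intentions
df = pd.DataFrame({"x": x, "m1": m1, "m2": m2, "y": y})

def serial_indirect(d):
    a1 = smf.ols("m1 ~ x", d).fit().params["x"]             # X -> M1
    d21 = smf.ols("m2 ~ m1 + x", d).fit().params["m1"]      # M1 -> M2, controlling X
    b2 = smf.ols("y ~ m1 + m2 + x", d).fit().params["m2"]   # M2 -> Y, controlling X and M1
    return a1 * d21 * b2                                     # serial indirect effect

boot = [serial_indirect(df.sample(frac=1, replace=True)) for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"serial indirect effect = {serial_indirect(df):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```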

Practical implications

Person-brands should emphasize the nondiagnostic nature of any transgressions in which they are involved, including a lack of centrality and consistency with their brand, and guard against the appearance of diagnostic transgressions.

Originality/value

This paper shows that transgression diagnosticity impacts consumer engagement through the pathway of consumer-brand identification and trustworthiness. It also manipulates aspects of diagnosticity that can be influenced by the person-brand (centrality and consistency) while holding the transgression constant. As such, this paper extends the literature on transgressions, on person-branding strategy, and on social identity theory.

Details

Journal of Product & Brand Management, vol. 32 no. 8
Type: Research Article
ISSN: 1061-0421

Article
Publication date: 18 January 2024

Adebowale Jeremy Adetayo, Mariam Oyinda Aborisade and Basheer Abiodun Sanni

Abstract

Purpose

This study aims to explore the collaborative potential of Microsoft Copilot and Anthropic Claude AI as an assistive technology in education and library services. The research delves into technical architectures and various use cases for both tools, proposing integration strategies within educational and library environments. The paper also addresses challenges such as algorithmic bias, hallucination and data rights.

Design/methodology/approach

The study used a literature review approach combined with the proposal of integration strategies across education and library settings.

Findings

The collaborative framework between Copilot and Claude AI offers a comprehensive solution for transforming education and library services. The study identifies the seamless combination of real-time internet access, information retrieval and advanced comprehension features as key findings. In addition, challenges such as algorithmic bias and data rights are addressed, emphasizing the need for responsible AI governance, transparency and continuous improvement.
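
The paper itself proposes integration strategies rather than code; as one hypothetical sketch of the retrieval-plus-comprehension pairing described here, the example below feeds records from a placeholder catalogue search into Claude via Anthropic's Python SDK. The search_catalogue function and its sample records are invented, and the model name is an assumption that may need updating; only the SDK call itself reflects a real API.

```python
# Hypothetical sketch of pairing a retrieval step with Claude for comprehension in a
# library reference workflow. search_catalogue() and its records are invented
# placeholders; only the Anthropic SDK call reflects a real API.
import anthropic

def search_catalogue(query: str) -> list[str]:
    # Placeholder for an OPAC / discovery-layer search (it would call the library's own API).
    return [
        "Record 1: Introduction to Information Retrieval (2008), Manning, Raghavan and Schutze.",
        "Record 2: Search Engines: Information Retrieval in Practice (2010), Croft, Metzler and Strohman.",
    ]

def reference_answer(question: str) -> str:
    records = search_catalogue(question)
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model name; may need updating
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": "Using only these catalogue records:\n"
                       + "\n".join(records)
                       + f"\n\nAnswer the patron's question: {question}",
        }],
    )
    return message.content[0].text

if __name__ == "__main__":
    print(reference_answer("Which of these books covers evaluating web search engines?"))
```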

Originality/value

This study contributes to the field by exploring the unique collaborative framework of Copilot and Claude AI in a specific context, emphasizing responsible AI governance and addressing existing gaps.

Details

Library Hi Tech News, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0741-9058

Expert briefing
Publication date: 8 November 2023
Expert Briefings Powered by Oxford Analytica

Prospects for cybersecurity in 2024

These include the impact of generative artificial intelligence (GenAI) on cyberattacks, espionage and influencer campaigns; supply-chain risks; and the emergence of a software…

Details

DOI: 10.1108/OXAN-DB283227

ISSN: 2633-304X

Open Access
Article
Publication date: 23 May 2023

Kimmo Kettunen, Heikki Keskustalo, Sanna Kumpulainen, Tuula Pääkkönen and Juha Rautiainen

Abstract

Purpose

This study aims to identify user perceptions of texts with different qualities of optical character recognition (OCR). The purpose of this paper is to study the effect of OCR quality on users' subjective perceptions through an interactive information retrieval task with a collection of one digitized historical Finnish newspaper.

Design/methodology/approach

This study is based on the simulated work task model used in interactive information retrieval. Thirty-two users searched an article collection of the Finnish newspaper Uusi Suometar (1869–1918), which consists of ca. 1.45 million auto-segmented articles. The article search database had two versions of each article with different OCR quality. Each user performed six pre-formulated and six self-formulated short queries and subjectively evaluated the top 10 results using a graded relevance scale of 0–3. Users were not informed about the OCR quality differences between the otherwise identical articles.
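
As a rough, hypothetical illustration (not the authors' data or analysis) of how graded 0–3 relevance judgements for the two OCR versions of the same articles might be compared, the sketch below simulates paired judgements and applies a Wilcoxon signed-rank test.

```python
# Hypothetical sketch (not the study's data): comparing graded 0-3 relevance scores
# given to the lower- and higher-quality OCR versions of the same articles.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_judgements = 320                                        # e.g. 32 users x 10 assessed results (illustrative)

old_ocr = rng.integers(0, 4, n_judgements)                # relevance 0-3 for old OCR
improved = (rng.random(n_judgements) < 0.25).astype(int)  # assume some scores rise
new_ocr = np.clip(old_ocr + improved, 0, 3)               # relevance 0-3 for improved OCR

print(f"mean relevance, old OCR: {old_ocr.mean():.2f}")
print(f"mean relevance, new OCR: {new_ocr.mean():.2f}")

# Wilcoxon signed-rank test on the paired differences (zero differences dropped)
stat, p = stats.wilcoxon(new_ocr, old_ocr, zero_method="wilcox")
print(f"Wilcoxon signed-rank: statistic = {stat:.1f}, p = {p:.4f}")
```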

Findings

The main result of the study is that improved OCR quality affects subjective user perception of historical newspaper articles positively: higher relevance scores are given to better-quality texts.

Originality/value

To the best of the authors’ knowledge, this simulated interactive work task experiment is the first one showing empirically that users' subjective relevance assessments are affected by a change in the quality of an optically read text.

Details

Journal of Documentation, vol. 79 no. 7
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 4 April 2024

Artur Strzelecki

Abstract

Purpose

This paper aims to give an overview of the history and evolution of commercial search engines. It traces the development of search engines from their early days to their current form as complex technology-powered systems that offer a wide range of features and services.

Design/methodology/approach

In recent years, advancements in artificial intelligence (AI) technology have led to the development of AI-powered chat services. This study explores the official announcements and releases of AI-powered chat services by three major search engines: Google, Bing and Baidu.

Findings

Three major players in the search engine market, Google, Microsoft and Baidu, have started to integrate AI chat into their search results. Google has released Bard, later upgraded to Gemini, a conversational AI service initially powered by LaMDA. Microsoft has launched Bing Chat, later renamed Copilot, a search chat service powered by OpenAI's GPT models. The largest search engine in China, Baidu, has released a similar service called Ernie. New AI-based search engines are also briefly described.

Originality/value

This paper discusses the strengths and weaknesses of traditional, algorithm-powered search engines and of modern search with generative AI support, and the possibilities of merging them into one service. This study emphasizes the types of queries submitted to search engines, users’ habits of using search engines and the technological advantages of search engine infrastructure.

Details

Library Hi Tech News, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 18 March 2024

Raj Kumar Bhardwaj, Ritesh Kumar and Mohammad Nazim

Abstract

Purpose

This paper evaluates the precision of four metasearch engines (MSEs), DuckDuckGo, Dogpile, Metacrawler and Startpage, to determine which exhibits the highest level of precision and is most likely to return the most relevant search results.

Design/methodology/approach

The research is divided into two parts: the first phase involves four queries categorized into two segments (4-Q-2-S), while the second phase includes six queries divided into three segments (6-Q-3-S). These queries vary in complexity, falling into three types: simple, phrase and complex. The precision, average precision and the presence of duplicates across all the evaluated metasearch engines are determined.
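
The paper does not spell out its formulas. In the sketch below, precision is taken as the share of relevant results among the top 10 retrieved for a query, and "average precision" is read as the mean of those per-query values, which may differ from the authors' exact definition; the relevance judgements are invented.

```python
# Hypothetical sketch with invented relevance judgements (1 = relevant, 0 = not)
# for the top 10 results one metasearch engine returns per query.
judgements = {
    "simple query":  [1, 1, 1, 0, 1, 1, 1, 1, 1, 1],
    "phrase query":  [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
    "complex query": [1, 0, 1, 1, 0, 1, 1, 1, 0, 1],
}

def precision(relevance: list[int]) -> float:
    # Precision = relevant results retrieved / total results retrieved
    return sum(relevance) / len(relevance)

per_query = {query: precision(rels) for query, rels in judgements.items()}
for query, p in per_query.items():
    print(f"{query}: precision = {p:.2f}")

# "Average precision" here is read as the mean of the per-query precision values
print(f"average precision = {sum(per_query.values()) / len(per_query):.2f}")
```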

Findings

The study clearly demonstrated that Startpage returned the most relevant results and achieved the highest precision (0.98) among the four MSEs. DuckDuckGo, meanwhile, exhibited consistent performance across both phases of the study.

Research limitations/implications

The study only evaluated four metasearch engines, which may not be representative of all available metasearch engines. Additionally, a limited number of queries were used, which may not be sufficient to generalize the findings to all types of queries.

Practical implications

The findings of this study can be valuable for accreditation agencies in managing duplicates, improving their search capabilities and obtaining more relevant and precise results. These findings can also assist users in selecting the best metasearch engine based on precision rather than interface.

Originality/value

The study is the first of its kind to evaluate these four metasearch engines; no similar study has previously been conducted to measure the performance of metasearch engines.

Details

Performance Measurement and Metrics, vol. 25 no. 1
Type: Research Article
ISSN: 1467-8047
