Search results

1 – 10 of 482
Article
Publication date: 24 June 2019

Christian Matt, Thomas Hess and Christian Weiß

Abstract

Purpose

The purpose of this paper is to explore the effects of online recommender systems (RS) on three types of diversity: algorithmic recommendation diversity, perceived recommendation diversity and sales diversity. The analysis distinguishes different recommendation algorithms and shows whether user perceptions match the actual effects of RS on sales.

Design/methodology/approach

An online experiment was conducted using a realistic shop design, various recommendation algorithms and a representative consumer sample to ensure the generalizability of the findings.

Findings

Recommendation algorithms show a differential impact on sales diversity, but only collaborative filtering can lead to higher sales diversity. However, some of these effects are subject to how much information firms have about users’ preferences. The level of recommendation diversity perceived by users does not always reflect the factual diversity effects.

Research limitations/implications

Recommendation and consumption patterns might differ for other types of products; future studies should replicate the study with search or credence goods. The authors also recommend that future research move away from unidimensional measures for the assessment of diversity and employ multidimensional measures instead.
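
Where a concrete illustration helps, a minimal Python sketch of what such a multidimensional assessment could look like is given below. It combines two commonly used stand-in measures: intra-list (recommendation) diversity and a Gini-based sales-concentration index. The data shapes, function names and example values are assumptions for illustration, not the authors’ implementation or the measures used in the experiment.

```python
import numpy as np

def intra_list_diversity(item_vectors):
    """Average pairwise dissimilarity (1 - cosine similarity) within one recommendation list."""
    n = len(item_vectors)
    if n < 2:
        return 0.0
    sims = []
    for i in range(n):
        for j in range(i + 1, n):
            a, b = item_vectors[i], item_vectors[j]
            sims.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 1.0 - float(np.mean(sims))

def gini(sales_counts):
    """Gini coefficient of the sales distribution: 0 = evenly spread sales, 1 = fully concentrated."""
    x = np.sort(np.asarray(sales_counts, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return float((n + 1 - 2 * np.sum(cum) / cum[-1]) / n)

# Hypothetical example: a five-item recommendation list described by genre vectors,
# and per-item sales observed after exposure to the recommender.
items = [np.array(v, dtype=float) for v in
         ([1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0])]
sales = [120, 80, 15, 10, 5]

print("recommendation diversity:", round(intra_list_diversity(items), 3))
print("sales concentration (Gini):", round(gini(sales), 3))
```

Reporting both numbers side by side reflects the kind of multidimensional view the authors call for: a recommendation list can look diverse while sales remain highly concentrated, and vice versa.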

Practical implications

Online shops need to conduct a more comprehensive assessment of their RS’ effect on diversity, taking into account not only the effects on their sales distribution, but also on users’ perceptions and faith in the recommendation algorithm.

Originality/value

This study offers a framework for assessing different forms of diversity in online RS. It employs various recommendation algorithms and compares their impact using not just one but three different types of diversity measures. This helps explain some of the contradictory findings in the previous literature.

Open Access
Article
Publication date: 16 January 2024

Ville Jylhä, Noora Hirvonen and Jutta Haider

Abstract

Purpose

This study addresses how algorithmic recommendations and their affordances shape everyday information practices among young people.

Design/methodology/approach

Thematic interviews were conducted with 20 Finnish young people aged 15–16 years. The material was analysed using qualitative content analysis, with a focus on everyday information practices involving online platforms.

Findings

The key finding of the study is that the current affordances of algorithmic recommendations enable users to engage in more passive practices instead of active search and evaluation practices. Two major themes emerged from the analysis: “enabling not searching, inviting high trust”, which highlights how the affordances of algorithmic recommendations enable the delegation of search to a recommender system and, at the same time, invite trust in the system; and “constraining finding, discouraging diversity”, which focuses on the constraining degree of affordances and the breakdowns associated with algorithmic recommendations.

Originality/value

This study contributes new knowledge regarding the ways in which algorithmic recommendations shape information practices in young people's everyday lives, specifically addressing the constraining nature of affordances.

Details

Journal of Documentation, vol. 80 no. 7
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 26 June 2019

Mariella Bastian, Mykola Makhortykh and Tom Dobber

Abstract

Purpose

The purpose of this paper is to develop a conceptual framework for assessing the possibilities and pitfalls of using algorithmic systems of news personalization – i.e. the tailoring of individualized news feeds based on users’ information preferences – for constructive conflict coverage in the context of peace journalism, a journalistic paradigm calling for more diversified and creative war reporting.

Design/methodology/approach

The paper provides a critical review of existing research on peace journalism and algorithmic news personalization, and analyzes the intersections between the two concepts. Specifically, it identifies recurring pitfalls of peace journalism based on empirical research on constructive conflict coverage and then introduces a conceptual framework for analyzing to what degree these pitfalls can be mediated – or worsened – through algorithmic system design.

Findings

The findings suggest that AI-driven distribution technologies can facilitate constructive war reporting, in particular by countering the effects of journalists’ self-censorship and by diversifying conflict coverage. The implementation of these goals, however, depends on multiple system design solutions, thus resonating with current calls for more responsible and value-sensitive algorithmic design in the domain of news media. Additionally, our observations emphasize the importance of developing new algorithmic literacies among journalists both to realize the positive potential of AI for promoting peace and to increase the awareness of possible negative impacts of new systems of content distribution.

Originality/value

The article is the first to provide a comprehensive conceptualization of the impact of new content distribution techniques on constructive conflict coverage in the context of peace journalism. It also offers a novel conceptual framing for assessing the impact of algorithmic news personalization on reporting traumatic and polarizing events, such as wars and violence.

Details

International Journal of Conflict Management, vol. 30 no. 3
Type: Research Article
ISSN: 1044-4068

Keywords

Article
Publication date: 21 December 2021

Luciana Monteiro-Krebs, Bieke Zaman, Sonia Elisa Caregnato, David Geerts, Vicente Grassi-Filho and Nyi-Nyi Htun

Abstract

Purpose

The use of recommender systems is increasing on academic social media (ASM). However, distinguishing the elements that may be influenced and/or exert influence over content that is read and disseminated by researchers is difficult due to the opacity of the algorithms that filter information on ASM. The purpose of this paper is to investigate how algorithmic mediation through recommender systems in ResearchGate may uphold biases in scholarly communication.

Design/methodology/approach

The authors used a multi-method walkthrough approach including a patent analysis, an interface analysis and an inspection of the web page code.

Findings

The findings reveal how the audience influences the recommendations and demonstrate in practice the mutual shaping of the different elements interplaying within the platform (artefact, practices and arrangements). The authors show evidence of the mechanisms of selection, prioritization, datafication and profiling. The authors also substantiate how the algorithm reinforces the reputation of eminent researchers (a phenomenon known as the Matthew effect). As part of defining a future agenda, the authors discuss the need for serendipity and algorithmic transparency.
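
As a toy illustration of the Matthew effect mentioned above (not taken from the paper, and not ResearchGate’s actual algorithm), the short Python sketch below shows how a recommender that surfaces profiles in proportion to their accumulated visibility amplifies early exposure advantages over repeated rounds.

```python
import random

random.seed(42)

# Hypothetical researchers with slightly different starting visibility scores.
visibility = {"A": 10, "B": 9, "C": 8, "D": 7}

for _ in range(1000):
    names = list(visibility)
    weights = [visibility[n] for n in names]
    # The "recommender" surfaces a profile with probability proportional to its visibility...
    shown = random.choices(names, weights=weights, k=1)[0]
    # ...and being surfaced further increases that profile's visibility.
    visibility[shown] += 1

total = sum(visibility.values())
for name, score in sorted(visibility.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score / total:.1%} of total visibility")
```

Because each recommendation feeds back into future visibility, whichever profile happens to be surfaced early accumulates an increasingly disproportionate share of exposure even though all profiles started nearly level, which is the “rich get richer” dynamic the authors identify.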

Research limitations/implications

Algorithms change constantly and are protected by commercial secrecy. Hence, this study was limited to the information that was accessible within a particular period. At the time of publication, the platform, its logic and its effects on the interface may have changed. Future studies might investigate other ASM using the same approach to distinguish potential patterns among platforms.

Originality/value

This study contributes to reflection on algorithmic mediation and the biases in scholarly communication potentially afforded by recommender algorithms. To the best of the authors’ knowledge, it is the first empirical study on automated mediation and biases in ASM.

Details

Online Information Review, vol. 46 no. 5
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 19 December 2023

Susan Gardner Archambault

Abstract

Purpose

Research shows that postsecondary students are largely unaware of the impact of algorithms on their everyday lives. Also, most noncomputer science students are not being taught about algorithms as part of the regular curriculum. This exploratory, qualitative study examines subject-matter experts’ insights and perceptions of the knowledge components, coping behaviors and pedagogical considerations to aid faculty in teaching algorithmic literacy to postsecondary students.

Design/methodology/approach

Eleven semistructured interviews and one focus group were conducted with scholars and teachers of critical algorithm studies and related fields. A content analysis was performed manually on the transcripts using a mixture of deductive and inductive coding. Data analysis was aided by the coding software Dedoose (2021), which was used to determine frequency totals for each code across all participants along with how many times specific participants mentioned a code. The findings were then organized around the three themes of knowledge components, coping behaviors and pedagogy.
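
The per-code frequency reporting described above (total occurrences of a code and how many participants mention it) can be reproduced outside Dedoose with a few lines of Python; the codes and participant labels below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical coded transcript segments: (participant_id, code) pairs.
coded_segments = [
    ("P1", "data privacy"), ("P1", "filter bubbles"), ("P2", "data privacy"),
    ("P2", "data privacy"), ("P3", "algorithmic awareness"), ("P3", "filter bubbles"),
]

# Total occurrences of each code across all transcripts.
occurrences = Counter(code for _, code in coded_segments)

# Number of distinct participants who mentioned each code.
participants_per_code = {
    code: len({pid for pid, c in coded_segments if c == code})
    for code in occurrences
}

for code, total in occurrences.most_common():
    print(f"{code}: {total} occurrence(s) across {participants_per_code[code]} participant(s)")
```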

Findings

The findings suggested a set of 10 knowledge components that would contribute to students’ algorithmic literacy along with seven behaviors that students could use to help them better cope with algorithmic systems. A set of five teaching strategies also surfaced to help improve students’ algorithmic literacy.

Originality/value

This study contributes to improved pedagogy surrounding algorithmic literacy and validates existing multi-faceted conceptualizations and measurements of algorithmic literacy.

Details

Information and Learning Sciences, vol. 125 no. 1/2
Type: Research Article
ISSN: 2398-5348

Keywords

Content available
Article
Publication date: 14 March 2023

Paula Hall and Debbie Ellis

Abstract

Purpose

Gender bias in artificial intelligence (AI) should be solved as a priority before AI algorithms become ubiquitous, perpetuating and accentuating the bias. While the problem has been identified as an established research and policy agenda, a cohesive review of existing research specifically addressing gender bias from a socio-technical viewpoint is lacking. Thus, the purpose of this study is to determine the social causes and consequences of, and proposed solutions to, gender bias in AI algorithms.

Design/methodology/approach

A comprehensive systematic review followed established protocols to ensure accurate and verifiable identification of suitable articles. The process revealed 177 articles in the socio-technical framework, with 64 articles selected for in-depth analysis.

Findings

Most previous research has focused on technical rather than social causes, consequences and solutions to AI bias. From a social perspective, gender bias in AI algorithms can be attributed equally to algorithmic design and training datasets. Social consequences are wide-ranging, with amplification of existing bias the most common at 28%. Social solutions were concentrated on algorithmic design, specifically improving diversity in AI development teams (30%), increasing awareness (23%), human-in-the-loop (23%) and integrating ethics into the design process (21%).
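
The percentages quoted above come from coding the 64 in-depth articles; a minimal sketch of that tallying step is shown below, with invented article-to-category assignments rather than the review’s actual data.

```python
from collections import Counter

# Hypothetical coding: each reviewed article is tagged with the solution categories it proposes.
article_codes = {
    "article_01": ["diverse teams", "awareness"],
    "article_02": ["human-in-the-loop"],
    "article_03": ["diverse teams", "ethics by design"],
    "article_04": ["awareness", "human-in-the-loop"],
}

n_articles = len(article_codes)
counts = Counter(code for codes in article_codes.values() for code in codes)

for category, count in counts.most_common():
    print(f"{category}: {count / n_articles:.0%} of articles")
```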

Originality/value

This systematic review is the first of its kind to focus on gender bias in AI algorithms from a social perspective within a socio-technical framework. Identification of key causes and consequences of bias and the breakdown of potential solutions provides direction for future research and policy within the growing field of AI ethics.

Peer review

The peer review history for this article is available at https://publons.com/publon/10.1108/OIR-08-2021-0452

Details

Online Information Review, vol. 47 no. 7
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 15 November 2022

Sergei Kamolov and Nikita Aleksandrov

Abstract

Purpose

In the context of digital public governance of the 21st century, recommender systems serve as a digital tool to support decision-making and shift toward proactive public services delivery. This paper aims to synthesize an algorithm for public recommender systems deployment coherent with the digital transformation of public services in three Russian regions: the City of Moscow, Moscow region and Astrakhan region.

Design/methodology/approach

The studied regions adequately represent the country’s population coverage while, at the same time, reflecting the diversity of public governance structures in qualitative and quantitative terms. The authors were thus able to identify both commonalities and particularities in locally applied policies and to create an algorithmic model, in management terms, for the deployment of high-tech decision support systems (DSS) in public governance. Structural and functional analysis is used to derive the elements that are then inducted into the algorithmic model.

Findings

The proposed algorithmic model is developed within a framework of automated verification of current public service delivery mechanisms. The practical application of recommender systems, as a special case of DSS, is shown through the example of public service delivery. It is assumed that following the developed algorithm leads to the “digital maturity” of a particular sector of public governance.

Originality/value

The paper offers a novel look at the digital transformation of public services through the application of recommender systems, as evidenced by the approbation of the algorithmic model at the theoretical level.

Details

Transforming Government: People, Process and Policy, vol. 17 no. 1
Type: Research Article
ISSN: 1750-6166

Keywords

Article
Publication date: 15 November 2019

Claude Draude, Goda Klumbyte, Phillip Lücking and Pat Treusch

Abstract

Purpose

The purpose of this paper is to propose that in order to tackle the question of bias in algorithms, a systemic, sociotechnical and holistic perspective is needed. With reference to the term “algorithmic culture,” the interconnectedness and mutual shaping of society and technology are postulated. A sociotechnical approach requires translational work between and across disciplines. This conceptual paper undertakes such translational work. It exemplifies how gender and diversity studies, by bringing in expertise on addressing bias and structural inequalities, provide a crucial source for analyzing and mitigating bias in algorithmic systems.

Design/methodology/approach

After introducing the sociotechnical context, an overview is provided regarding the contemporary discourse around bias in algorithms, debates around algorithmic culture, knowledge production and bias identification as well as common solutions. The key concepts of gender studies (situated knowledges and strong objectivity) and concrete examples of gender bias then serve as a backdrop for revisiting contemporary debates.

Findings

The key concepts reframe the discourse on bias and concepts such as algorithmic fairness and transparency by contextualizing and situating them. The paper includes specific suggestions for researchers and practitioners on how to account for social inequalities in the design of algorithmic systems.

Originality/value

A systemic, gender-informed approach for addressing the issue is provided, and a concrete, applicable methodology toward a situated understanding of algorithmic bias is laid out, providing an important contribution for an urgent multidisciplinary dialogue.

Details

Online Information Review, vol. 44 no. 2
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 27 July 2021

Louisa Ha, Mohammad Hatim Abuljadail, Claire Youngnyo Joa and Kisun Kim

Abstract

Purpose

This study aims to examine the difference between personalized and non-personalized recommendations in influencing YouTube users’ video choices. In addition, it compares whether men and women differ significantly in their use of recommendations and explores the predictors of how frequently recommended videos are used.

Design/methodology/approach

A survey of 524 Saudi Arabian college students was conducted using computer-assisted self-administered interviews to collect their video recommendation sources and how likely they are to follow recommendations from different sources.

Findings

Video links posted on social media were found to be the most effective form of recommendation for these digital natives, showing that social approval is important in influencing trial. Recommendations can succeed in both personalized and non-personalized forms: personalized recommendations, such as YouTube’s recommended videos, are almost as effective as friends’ and family’s non-personalized posting of video links on social media in convincing people to watch the videos. Contrary to expectations, Saudi male college students are more likely to use recommendations than female students.

Research limitations/implications

The use of a non-probability sample is a major limitation and self-reported frequency may result in over- or under-estimation of video use.

Practical implications

Marketers will realize that they may not need personalized recommendations from the large platform; they can instead use social media recommendations from consumers’ friends and family. E-mail is the least effective platform for recommendations.

Social implications

Recommendations are a credible source and can overcome advertising avoidance. Their influence on consumers will increase in the years to come with growing algorithmic recommendation and social media use.

Originality/value

This is the first study to compare the influence of different online recommendation sources and to contrast personalized with non-personalized recommendations. As recommendations grow more important with the development of online algorithms, the results offer high reference value to marketers in Islamic countries and beyond.

Abstract

Details

Sameness and Repetition in Contemporary Media Culture
Type: Book
ISBN: 978-1-80455-955-0
