Search results

1 – 10 of over 1000
Article
Publication date: 7 May 2024

Mingfei Sun and Xu Dong

Abstract

Purpose

The proliferation of health misinformation on social media has increasingly engaged scholarly interest. This research examines the determinants influencing users’ proactive correction of health misinformation, a crucial strategy in combatting health misbeliefs. Grounded in the elaboration likelihood model (ELM), this research investigates how factors including issue involvement, information literacy and active social media use impact health misinformation recognition and intention to correct it.

Design/methodology/approach

A total of 413 social media users completed a national online questionnaire. SPSS 26.0, AMOS 21.0 and PROCESS Macro 4.1 were used to test the research hypotheses and address the research questions.
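
For orientation, the moderated mediation design described above can be sketched with ordinary regression in R. The sketch below uses hypothetical, simulated variables (issue_involvement, info_literacy, active_sm_use, recognition, hmci); it is not the authors' PROCESS Macro analysis or their actual measures.

    # Minimal R sketch of a moderated mediation of this shape.
    # All variables are simulated placeholders, not the study's data.
    set.seed(1)
    n <- 413
    df <- data.frame(
      issue_involvement = rnorm(n),
      info_literacy     = rnorm(n),
      active_sm_use     = rnorm(n)
    )
    df$recognition <- 0.4 * df$info_literacy + rnorm(n)              # mediator
    df$hmci <- 0.3 * df$issue_involvement + 0.2 * df$recognition +
               0.2 * df$info_literacy * df$active_sm_use + rnorm(n)  # outcome

    # Mediator model: information literacy -> misinformation recognition
    m_med <- lm(recognition ~ info_literacy + issue_involvement, data = df)

    # Outcome model: recognition as mediator, active social media use as moderator
    m_out <- lm(hmci ~ recognition + issue_involvement +
                  info_literacy * active_sm_use, data = df)

    summary(m_med)
    summary(m_out)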

Findings

Results indicated that issue involvement and information literacy both contribute to health misinformation correction intention (HMCI), while misinformation recognition acts as a mediator between information literacy and HMCI. Moreover, active social media use moderated the influence of information literacy on HMCI.

Originality/value

This study not only extends the ELM into the research domain of correcting health misinformation on social media but also enriches the perspective of individual fact-checking intention research by incorporating dimensions of users’ motivation, capability and behavioral patterns.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-09-2023-0505

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 28 February 2023

Shinichi Yamaguchi and Tsukasa Tanihara

Abstract

Purpose

In recent years, the social impact of misinformation has intensified. The purpose of this study is to clarify the mechanism by which misinformation spreads in society.

Design/methodology/approach

The authors test two hypotheses through a logit model analysis of survey data on actual fact-checked COVID-19 vaccine and political misinformation: (1) people who believe that some misinformation is true are more likely to spread it than those who do not believe in its truthfulness; and (2) people with lower media and information literacy are more likely to spread misinformation than people with higher media and information literacy.
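
As a point of reference, a logit model of this general shape can be fitted in R with glm(). The variables below (spread_misinfo, believes_true, mil_score) are simulated placeholders, not the authors' survey items or exact specification.

    # Minimal R sketch of a logit model of this shape (simulated data).
    set.seed(1)
    n <- 1000
    survey_df <- data.frame(
      believes_true = rbinom(n, 1, 0.3),  # judged a fact-checked false item to be true
      mil_score     = rnorm(n)            # media and information literacy score
    )
    # Assumed pattern: belief raises, literacy lowers, the probability of spreading
    p <- plogis(-1 + 1.2 * survey_df$believes_true - 0.6 * survey_df$mil_score)
    survey_df$spread_misinfo <- rbinom(n, 1, p)

    fit <- glm(spread_misinfo ~ believes_true + mil_score,
               family = binomial(link = "logit"), data = survey_df)
    summary(fit)
    exp(coef(fit))  # odds ratios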

Findings

Both hypotheses were supported, and the trend was generally robust regardless of the means of diffusion, whether social media or direct conversation.

Social implications

The authors derived four implications from the results: governments need to further promote media and information literacy education; platform service providers should consider mechanisms that facilitate the spread and display of posts by people who are aware of misinformation; fact-checking should be further promoted; and people should acquire information on the assumption that those who believe in some misinformation tend to spread it more.

Originality/value

First, it quantitatively clarifies the relationship between misinformation, true/false judgements and dissemination behaviour. Second, it quantitatively clarifies the relationship between literacy and misinformation dissemination behaviour. Third, it conducts a comprehensive analysis of diffusion behaviours, including those outside of social media.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9342

Keywords

Article
Publication date: 11 July 2024

Bingbing Zhang, Avery E. Holton and Homero Gil de Zúñiga

Abstract

Purpose

In the past few years, research focusing on misinformation, referred to broadly as fake news, has experienced revived attention. Past studies have focused on explaining the ways in which people correct it online and on social media. However, fewer studies have dealt with the ways in which people are able to identify fake news (i.e. fake news literacy). This study contributes to the latter by theoretically connecting people’s general social media use, political knowledge and political epistemic efficacy with individuals’ fake news literacy levels.

Design/methodology/approach

A diverse and representative two-wave panel survey was conducted in the United States (June 2019 for Wave 1, October 2019 for Wave 2). We performed cross-sectional, lagged and autoregressive regression analyses to examine how social media use, people’s political knowledge and political epistemic efficacy are related to their fake news literacy.
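
To illustrate the analytic logic, the cross-sectional, lagged and autoregressive models described above can be sketched in R as follows. The two-wave variables (sm_use_w1, pol_know_w1, epe_w1, fn_lit_w1, fn_lit_w2) are simulated placeholders, not the study's panel data.

    # Minimal R sketch of cross-sectional, lagged and autoregressive regressions (simulated data).
    set.seed(1)
    n <- 500
    panel <- data.frame(
      sm_use_w1   = rnorm(n),  # social media use, Wave 1
      pol_know_w1 = rnorm(n),  # political knowledge, Wave 1
      epe_w1      = rnorm(n),  # epistemic political efficacy, Wave 1
      fn_lit_w1   = rnorm(n)   # fake news literacy, Wave 1
    )
    panel$fn_lit_w2 <- 0.5 * panel$fn_lit_w1 + 0.2 * panel$sm_use_w1 +
                       0.2 * panel$pol_know_w1 + 0.2 * panel$epe_w1 + rnorm(n)

    m_cross <- lm(fn_lit_w1 ~ sm_use_w1 + pol_know_w1 + epe_w1, data = panel)              # cross-sectional
    m_lag   <- lm(fn_lit_w2 ~ sm_use_w1 + pol_know_w1 + epe_w1, data = panel)              # lagged
    m_auto  <- lm(fn_lit_w2 ~ fn_lit_w1 + sm_use_w1 + pol_know_w1 + epe_w1, data = panel)  # autoregressive
    summary(m_auto)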

Findings

Results suggest that the more people used social media, were politically knowledgeable and considered they were able to find the truth in politics (i.e. epistemic political efficacy), the more likely they were to discern whether the news is fake. Implications of helping media outlets and policy makers be better positioned to provide the public with corrective action mechanisms in the struggle against fake news are discussed.

Research limitations/implications

The measurement instrument employed in the study relies on subjects’ self-assessment, as opposed to unobtrusive trace (big) digital data, which may not completely capture the nuances of people’s social media news behaviors.

Practical implications

This study sheds light on how the way people understand politics and gain confidence in finding political truth may be key elements when confronting and discerning fake news. With the help of these results, journalists, media outlets and policymakers may be better positioned to provide citizens with efficient, preemptive and corrective action mechanisms in the struggle against misinformation.

Originality/value

Recent literature highlights the importance of literacy education to contest fake news, but little is known about which specific mechanisms would help foster and reinvigorate people’s fake news literacy. This study helps address this gap.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-03-2024-0140

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 24 June 2024

Lu Chen, Jing Jia, Manling Xiao, Chengzhen Wu and Luwen Zhang

Abstract

Purpose

This research exclusively focuses on China’s elderly Internet users, given how severe a threat disinformation has become for this population group as social media platforms thrive and the number of elderly netizens grows in China. The purpose of this study is to explore the mechanism by which elderly social media users’ intention to identify false information is influenced, which helps supplement the knowledge system of false information governance and provides a basis for correction practices.

Design/methodology/approach

This study focuses on the digital literacy of elderly social media users and builds a theoretical model of their intention to identify false information based on the theory of planned behaviour. It introduces two variables – namely, risk perception and self-efficacy – and clarifies the relationships between the variables. Questionnaires were distributed both online and offline, with a total of 468 collected. A structural equation model was built for empirical analysis.
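
As an illustration of this kind of structural equation model, a theory of planned behaviour-style path model can be specified in R with lavaan roughly as below. The variable names and paths are hypothetical placeholders inferred from the description above, not the authors' measurement items or final model, and the data are random.

    # Minimal lavaan (R) sketch of a TPB-style path model (random placeholder data).
    library(lavaan)
    set.seed(1)
    n <- 468
    d <- as.data.frame(replicate(7, rnorm(n)))
    names(d) <- c("digital_literacy", "risk_perception", "self_efficacy",
                  "subjective_norm", "pbc", "attitude", "intention")

    model <- '
      risk_perception ~ digital_literacy
      self_efficacy   ~ digital_literacy
      subjective_norm ~ digital_literacy + risk_perception
      pbc             ~ digital_literacy + risk_perception + self_efficacy
      attitude        ~ risk_perception + subjective_norm + pbc
      intention       ~ attitude + subjective_norm + pbc + self_efficacy
    '
    fit <- sem(model, data = d)
    summary(fit, fit.measures = TRUE, standardized = TRUE)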

Findings

The results show that digital literacy positively influences risk perception, self-efficacy, subjective norms and perceived behavioural control. Risk perception positively influences subjective norms, perceived behavioural control and the attitude towards the identification of false information. Self-efficacy positively influences perceived behavioural control but does not significantly impact the intention to identify. Subjective norms positively influence the attitude towards identification and the intention to identify. Perceived behavioural control positively influences the attitude towards identification but does not significantly impact the intention to identify. The attitude towards identification positively influences the intention to identify.

Originality/value

Based on relevant theories and the results of the empirical analysis, this study provides suggestions for false information governance from the perspectives of social media platform collaboration and elderly social media users.

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 1 August 2024

Allison Starks and Stephanie Michelle Reich

Abstract

Purpose

This study aims to explore children’s cognitions about data flows online and their understandings of algorithms, often referred to as algorithmic literacy or algorithmic folk theories, in their everyday uses of social media and YouTube. The authors focused on children ages 8 to 11, as these are the ages when most youth acquire their own device and use social media and YouTube, despite platform age requirements.

Design/methodology/approach

Nine focus groups with 34 socioeconomically, racially and ethnically diverse children (8–11 years) were conducted in California. Groups discussed data flows online, digital privacy, algorithms and personalization across platforms.

Findings

Children had several misconceptions about privacy risks, privacy policies, what kinds of data are collected about them online and how algorithms work. Older children had more complex and partially accurate theories about how algorithms determine the content they see online, compared to younger children. All children were using YouTube and/or social media despite age gates and children used few strategies to manage the flow of their personal information online.

Practical implications

The paper includes implications for digital and algorithmic literacy efforts, improving the design of privacy consent practices and user controls, and regulation for protecting children’s privacy online.

Originality/value

Research has yet to explore what socioeconomically, racially and ethnically diverse children understand about datafication and algorithms online, especially in middle childhood.

Details

Information and Learning Sciences, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-5348

Keywords

Article
Publication date: 16 July 2024

Anna R. Oliveri and Jeffrey Paul Carpenter

Abstract

Purpose

The purpose of this conceptual paper is to describe how the affinity space concept has been used to frame learning via social media, and call for and discuss a refresh of the affinity space concept to accommodate changes in social media platforms and algorithms.

Design/methodology/approach

Guided by a sociocultural perspective, this paper reviews and discusses some ways the affinity space concept has been used to frame studies across various contexts, its benefits and disadvantages and how it has already evolved. It then calls for and describes a refresh of the affinity space concept.

Findings

Although conceptualized 20 years ago, the affinity space concept remains relevant to understanding social media use for learning. However, a refresh is needed to accommodate how platforms have changed, algorithms’ evolving role in social media participation and how these technologies influence users’ interactions and experiences. This paper offers three perspectives to expand the affinity space concept’s usefulness in an increasingly platformized and algorithmically mediated world.

Practical implications

This paper underscores the importance of algorithmic literacy for learners and educators, as well as regulations and guidance for social media platforms.

Originality/value

This conceptual paper revisits and updates a widely utilized conceptual framing with consideration for how social media platform design and algorithms impact interactions and shape user experiences.

Details

Information and Learning Sciences, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-5348

Keywords

Article
Publication date: 19 September 2024

Chen Luo, Han Zheng, Yulong Tang and Xiaoya Yang

Abstract

Purpose

The mounting health misinformation on social media triggers heated discussions about how to address it. Anchored by the influence of presumed influence (IPI) model, this study investigates the underlying process of intentions to combat health misinformation. Specifically, we analyzed how presumed exposure of others and presumed influence on others affect intentions to practice pre-emptive and reactive misinformation countering strategies.

Design/methodology/approach

Covariance-based structural equation modeling based on survey data from 690 Chinese participants was performed using the “lavaan” package in R to examine the proposed mechanism.
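
For readers unfamiliar with lavaan, a covariance-based path model in the spirit of the IPI chain described here can be specified roughly as below. The variable names and simulated data are hypothetical placeholders, not the authors' measures or final model.

    # Minimal lavaan (R) sketch of a serial path model with a defined indirect effect
    # (random placeholder data, illustrative paths only).
    library(lavaan)
    set.seed(1)
    n <- 690
    d <- as.data.frame(replicate(4, rnorm(n)))
    names(d) <- c("personal_attention", "presumed_exposure",
                  "presumed_influence", "correction_intention")

    model <- '
      presumed_exposure    ~ a * personal_attention
      presumed_influence   ~ b * presumed_exposure
      correction_intention ~ c * presumed_influence
      indirect := a * b * c   # serial indirect effect of attention on correction intention
    '
    fit <- sem(model, data = d)
    summary(fit, standardized = TRUE)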

Findings

Personal attention to health information on social media is positively associated with presumed others’ attention to the same information, which, in turn, is related to an increased perception of health misinformation’s influence on others. The presumed influence is further positively tied to two pre-emptive countermeasures (i.e. support for media literacy interventions and institutional verification intention) and one reactive countermeasure (i.e. misinformation correction intention). However, the relationship between presumed influence and support for governmental restrictions, as another reactive countering method, is not significant.

Originality/value

This study supplements the misinformation countering literature by examining IPI’s tenability in explaining why individuals engage in combating misinformation. Both pre-emptive and reactive strategies were considered, enabling a panoramic view of the motivators of misinformation countering compared to previous studies. Our findings also inform the necessity of adopting a context-specific perspective and crafting other-oriented messages to motivate users’ initiative in implementing corrective actions.

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 10 July 2024

Ju Hui Kang, Eun-Young Ko and Gi Woong Choi

Abstract

Purpose

This study aims to explore scientific discourses on vaccination in YouTube comments, using connectivism theory as a foundational guide for understanding knowledge seeking and sharing. The authors sought to understand how individuals share and seek information by using external sources, linked via URLs, to validate their arguments.

Design/methodology/approach

Using content analysis, the authors extracted and analysed 584 random comments with URL links from eight YouTube videos scientifically addressing the purpose of vaccines. The comments were coded by stance (pro, anti, and neutral) and the type of resource to observe how their links were used.

Findings

The results showed that comments with URL links were composed of quotes, questions and opinions. Many sources came from research papers, conspiracy websites or other videos. Some of the comments did not accurately reflect the information in the research papers they linked to and showed little scientific reasoning. This suggests the need for individuals to evaluate information critically when finding it online.

Research limitations/implications

The findings can be expanded to explore different types of information literacy practices in the comment section of social media for both informal and formal environments.

Practical implications

YouTube is useful in fostering scientific discourse and information-seeking/sharing practices among individuals. However, given that content is not always conveyed accurately, educators and individuals will need to consider how to teach and apply information literacy skills when using social media for educational purposes.

Originality/value

Only a few studies have conducted research on comments using URL links, the originality of sources and how the sources were used in argumentation.

Details

Information and Learning Sciences, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-5348

Keywords

Article
Publication date: 26 October 2023

Khurram Shahzad, Shakeel Ahmad Khan, Abid Iqbal, Omar Shabbir and Mujahid Latif

Abstract

Purpose

This paper aims to explore the determinants causing fake information proliferation on social media platforms and the challenges in controlling the diffusion of the fake news phenomenon.

Design/methodology/approach

The authors applied the systematic review methodology to conduct a synthetic analysis of 37 articles published in peer-reviewed journals retrieved from 13 scholarly databases.

Findings

The findings of the study showed that dissatisfaction, behavior modifications, trending practices around viral fake stories, a natural inclination toward negativity and political purposes were the key determinants that led individuals to believe fake news shared on digital media. The study also identified the challenges people face in controlling the spread of fake news on social networking websites. Key challenges included individual autonomy, the fast-paced social media ecosystem, fake accounts on social media, cutting-edge technologies, disparities and a lack of media literacy.

Originality/value

The study makes a theoretical contribution as a valuable addition to the existing body of literature and has practical implications for policymakers constructing policies that might prove a successful antidote to fake news spreading via digital media. The study also offers a framework to stop the diffusion of fake news.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9342

Keywords

Article
Publication date: 25 December 2023

Wenyan Yu, Yiping Jiang and Tingting Fu

Abstract

Purpose

This study holistically and systematically consolidates the available research on digital reading to reveal the research trends of the past 20 years. Moreover, it explores the thematic evolution, hotspots and developmental characteristics of digital reading. This study, therefore, has the potential to serve as a research guide to researchers and educators in relevant fields.

Design/methodology/approach

The authors applied a bibliometric approach using Derwent Data Analyzer and VOSviewer to retrieve 2,456 publications for 2003–2022 from the Web of Science (WoS) database.

Findings

The results revealed that most studies’ participants were university students, and that experimental methods and questionnaires were the preferred approaches in digital reading research. Among the influential countries or regions, institutions, journals and authors, the United States of America, the University of London, Electronic Library and Chen, respectively, accounted for the greatest number of publications. Moreover, the authors identified the developmental characteristics and research trends in the field of digital reading by analyzing the evolution of keywords from 2003–2017 to 2018–2022 and the most frequently cited papers by year. “E-books,” “reading comprehension” and “literacy” were the primary research topics. In addition, “attention,” “motivation,” “cognitive load,” “dyslexia,” “engagement,” “eye-tracking,” “eye movement,” “systematic analysis,” “meta-analysis,” “smartphone” and “mobile reading/learning” were potential new research hotspots.

Originality/value

This study provides valuable insights into the current status, research direction, thematic evolution and developmental characteristics in the field of digital reading. Therefore, it has implications for publishers, researchers, librarians, educators and teachers in the digital reading field.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Keywords
