Search results
1 – 10 of over 8,000
Hsien-Tsung Chang, Shu-Wei Liu and Nilamadhab Mishra
Abstract
Purpose
The purpose of this paper is to design and implement new tracking and summarization algorithms for Chinese news content. Based on the proposed methods and algorithms, the authors extract the important sentences that are contained in topic stories and list those sentences according to timestamp order to ensure ease of understanding and to visualize multiple news stories on a single screen.
Design/methodology/approach
This paper takes an investigational approach, implementing a new Dynamic Centroid Summarization algorithm alongside a Term Frequency (TF)-Density algorithm and empirically computing three target parameters, i.e., recall, precision, and F-measure.
Findings
The proposed TF-Density algorithm is implemented and compared with the well-known Term Frequency-Inverse Word Frequency (TF-IWF) and Term Frequency-Inverse Document Frequency (TF-IDF) algorithms. Three test data sets are configured from Chinese news web sites for use during the investigation, and two important findings help the authors recognize the important words in a text with greater precision and efficiency. First, the authors evaluate three topic tracking algorithms, i.e., TF-Density, TF-IDF, and TF-IWF, on the said target parameters and find that the recall, precision, and F-measure of the proposed TF-Density algorithm are better than those of the TF-IWF and TF-IDF algorithms. Second, the authors implement a blind test to obtain topic summarization results and find that the proposed Dynamic Centroid Summarization process selects topic sentences more accurately than the LexRank process.
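The baseline weighting and the evaluation metrics named above are standard, even though the abstract does not specify the TF-Density variant itself. A minimal sketch of plain TF-IDF weighting and the recall/precision/F-measure computation, under the assumption that tracking results are compared as sets of relevant versus retrieved items, might look like:

```python
import math
from collections import Counter

def tf_idf(docs):
    # docs: list of token lists; returns one {term: weight} dict per document
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t]) for t, c in tf.items()})
    return weights

def f_measure(relevant, retrieved):
    # standard precision/recall/F1 over sets of tracked items
    tp = len(relevant & retrieved)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)
```

A term that occurs in every document gets an IDF of log(1) = 0, which is why corpus-wide stopwords drop out of the weighting automatically.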
Research limitations/implications
The results show that the tracking and summarization algorithms for news topics can provide more precise and convenient results for users tracking the news. The analysis and implications are limited to Chinese news content from Chinese news web sites such as Apple Library, UDN, and well-known portals like Yahoo and Google.
Originality/value
The research provides an empirical analysis of Chinese news content through the proposed TF-Density and Dynamic Centroid Summarization algorithms. It focusses on improving how a set of news stories is summarized for browsing on a single screen and carries implications for innovative word measurements in practice.
Donghee Shin, Saifeddin Al-Imamy and Yujong Hwang
Abstract
Purpose
How does algorithmic information processing affect the thoughts and behavior of artificial intelligence (AI) users? In this study, the authors address this question by focusing on algorithm-based chatbots and examine the influence of culture on algorithms as a form of digital intermediation.
Design/methodology/approach
The authors conducted a study comparing the United States (US) and Japan to examine how users in the two countries perceive the features of chatbot services and how the perceived features affect user trust and emotion.
Findings
Clear differences emerged after comparing algorithmic information processes involved in using and interacting with chatbots. Major attitudes toward chatbots are similar between the two cultures, although the weights placed on qualities differ. Japanese users put more weight on the functional qualities of chatbots, and US users place greater emphasis on non-functional qualities of algorithms in chatbots. US users appear more likely to anthropomorphize and accept explanations of algorithmic features than Japanese users.
Research limitations/implications
Different patterns of chatbot news adoption reveal that the acceptance of chatbots involves a cultural dimension as the algorithms reflect the values and interests of their constituencies. How users perceive chatbots and how they consume and interact with the chatbots depends on the cultural context in which the experience is situated.
Originality/value
A comparative juxtaposition of cultural-algorithmic interactions offers a useful way to examine how cultural values influence user behaviors and identify factors that influence attitude and user acceptance. Results imply that chatbots can be a cultural artifact, and chatbot journalism (CJ) can be a socially contextualized practice that is driven by the user's input and behavior, which are reflections of cultural values and practices.
Rajshree Varma, Yugandhara Verma, Priya Vijayvargiya and Prathamesh P. Churi
Abstract
Purpose
The rapid advancement of online communication technology and fingertip access to the Internet have enabled news channels, freelance reporters and websites to disseminate fake news to a global audience at low cost. Amid the coronavirus disease 2019 (COVID-19) pandemic, individuals are confronted with these false and potentially harmful claims and stories, which may harm the vaccination process. Psychological studies reveal that the human ability to detect deception is only slightly better than chance; therefore, there is a growing need to develop automated strategies to combat fake news, which traverses these platforms at an alarming rate. This paper systematically reviews the existing fake news detection technologies by exploring various machine learning and deep learning techniques pre- and post-pandemic, which, to the best of the authors' knowledge, has never been done before.
Design/methodology/approach
The detailed literature review on fake news detection is divided into three major parts. The authors searched for papers on deep learning and machine learning approaches to fake news detection published in 2017 or later. The papers were initially retrieved through the Google Scholar platform and then scrutinized for quality, with "Scopus" and "Web of Science" kept as quality indexing parameters. All research gaps and available databases, data pre-processing steps, feature extraction techniques and evaluation methods for current fake news detection technologies have been explored and illustrated using tables, charts and trees.
Findings
The review is divided into two approaches, machine learning and deep learning, to present a better understanding and a clear objective. The authors then offer a viewpoint on which approach is better, along with future research trends, issues and challenges for researchers, given the relevance and urgency of a detailed and thorough analysis of existing models. The paper also delves into fake news detection during COVID-19, from which it can be inferred that research and modeling are shifting toward the use of ensemble approaches.
Originality/value
The study also identifies several novel automated web-based approaches used by researchers to assess the validity of pandemic news that have proven to be successful, although currently reported accuracy has not yet reached consistent levels in the real world.
Abstract
Purpose
News algorithms not only help users efficiently navigate the sea of available information, but also frame information in ways that influence public discourse and citizenship. Indeed, the likelihood that readers will be exposed to and read given news articles is structured into news algorithms. Thus, ensuring that news algorithms uphold journalistic values is crucial. In this regard, the purpose of this paper is to quantify journalistic values to make them readable by algorithms, taking an exploratory approach to a question that has not been previously investigated.
Design/methodology/approach
The author matched the textual indices (extracted from natural language processing/automated content analysis) with human conceptions of journalistic values (derived from survey analysis) by implementing partial least squares path modeling.
Findings
The results suggest that the number of words or quotes a news article contains has a strong association with survey respondents' assessments of its balance, diversity, importance and factuality. Linguistic polarization was an inverse indicator of respondents' perception of balance, diversity and importance. While linguistic intensity was useful for gauging respondents' perception of sensationalism, it was an ineffective indicator of importance and factuality. The numbers of adverbs and adjectives were useful for estimating respondents' perceptions of factuality and sensationalism. In addition, greater numbers of quotes, paired quotes and exclamation/question marks in news headlines were associated with respondents' perception of lower journalistic values. The author also found that the assessment of journalistic values influences the perception of news credibility.
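The surface-level textual indices described above (word, quote, adverb and punctuation counts) are straightforward to extract before any path modeling. The sketch below is illustrative only: the index names and operationalizations are assumptions, and the `-ly` suffix heuristic is a crude stand-in for the part-of-speech tagging such a study would actually use:

```python
import re

def textual_indices(headline, body):
    # Simple surface counts of the kind matched against survey ratings.
    words = re.findall(r"\w+", body)
    return {
        "word_count": len(words),
        "quote_count": len(re.findall(r'"[^"]*"', body)),
        "headline_quote_count": len(re.findall(r'"[^"]*"', headline)),
        # exclamation/question marks in the headline
        "headline_excl_quest": headline.count("!") + headline.count("?"),
        # English adverbs often end in -ly; a rough proxy without a POS tagger
        "ly_adverb_count": sum(1 for w in words if w.lower().endswith("ly")),
    }
```

Indices like these would then be matched against the survey-derived value ratings, e.g. via partial least squares path modeling as the abstract describes.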
Research limitations/implications
This study has implications for computational journalism, credibility research and news algorithm development.
Originality/value
It represents the first attempt to quantify human conceptions of journalistic values with textual indices.
Krishnadas Nanath, Supriya Kaitheri, Sonia Malik and Shahid Mustafa
Abstract
Purpose
The purpose of this paper is to examine the factors that significantly affect the prediction of fake news from the virality theory perspective. The paper looks at a mix of emotion-driven content, sentimental resonance, topic modeling and linguistic features of news articles to predict the probability of fake news.
Design/methodology/approach
A data set of over 12,000 articles was chosen to develop a model for fake news detection. Machine learning algorithms and natural language processing techniques were used to handle big data with efficiency. Lexicon-based emotion analysis provided eight kinds of emotions used in the article text. The cluster of topics was extracted using topic modeling (five topics), while sentiment analysis provided the resonance between the title and the text. Linguistic features were added to the coding outcomes to develop a logistic regression predictive model for testing the significant variables. Other machine learning algorithms were also executed and compared.
Findings
The results revealed that positive emotions in a text lower the probability of the news being fake. Sensational content, such as illegal activities and crime-related content, was also found to be associated with fake news. News in which the title and the text exhibited similar sentiments was found to have a lower chance of being fake. News titles with more words and content with fewer words were found to significantly affect fake news detection.
Practical implications
Several systems and social media platforms today are trying to implement fake news detection methods to filter the content. This research provides exciting parameters from a virality theory perspective that could help develop automated fake news detectors.
Originality/value
While several studies have explored fake news detection, this study takes a new perspective based on virality theory. It also introduces new parameters, such as sentimental resonance, that could help predict fake news. This study deals with an extensive data set and uses advanced natural language processing to automate the coding techniques in developing the prediction model.
Maria José Baldessar and Regina Zandomênico
Abstract
The production of news through artificial intelligence (AI) is a reality in many countries, including the United States, where two leading companies in this area are based. The purpose of this explanatory bibliographic research is to discuss who or what should bear the ethical responsibility for automated writing in a newsroom, considering that codes of ethics address only the conduct of people. This chapter compares the Code of Ethics of the National Federation of Brazilian Journalists with the code used in the United States and highlights examples of news produced by algorithms, released with incorrect information, and published by US news organizations. Given the current informational scenario, the ethical responsibility for automated news must be attributed to the media, represented by the editor. This conclusion is motivated by the fact that neither the algorithm nor the person who developed it takes part in the decision about whether or not to publish an article.