Search results
1 – 8 of 8
Mike Thelwall and Kayvan Kousha
Abstract
Purpose
Technology is sometimes used to support assessments of academic research in the form of automatically generated bibliometrics for reviewers to consult during their evaluations or by replacing some or all human judgements. With artificial intelligence (AI), there is increasing scope to use technology to assist research assessment processes in new ways. Since transparency and fairness are widely considered important for research assessment and AI introduces new issues, this review investigates their implications.
Design/methodology/approach
This article reviews and briefly summarises transparency and fairness concerns in general terms and through the issues that they raise for various types of Technology Assisted Research Assessment (TARA).
Findings
Whilst TARA can have varying levels of problems with both transparency and bias, in most contexts it is unclear whether it worsens the transparency and bias problems that are inherent in peer review.
Originality/value
This is the first analysis that focuses on algorithmic bias and transparency issues for technology assisted research assessment.
Mike Thelwall, Kayvan Kousha, Mahshid Abdoli, Emma Stuart, Meiko Makita, Paul Wilson and Jonathan M. Levitt
Abstract
Purpose
Scholars often aim to conduct high quality research and their success is judged primarily by peer reviewers. Research quality is difficult for either group to identify, however, and misunderstandings can reduce the efficiency of the scientific enterprise. In response, we use a novel term association strategy to seek quantitative evidence of aspects of research that are associated with high or low quality.
Design/methodology/approach
We extracted the words and 2–5-word phrases most strongly associated with different quality scores in each of 34 Units of Assessment (UoAs) in the Research Excellence Framework (REF) 2021. We extracted the terms from 122,331 journal articles 2014–2020 with individual REF2021 quality scores.
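The term-association strategy can be illustrated with a minimal sketch (the function name, scoring rule and toy data below are illustrative assumptions, not the authors' actual procedure): each term is scored by the difference in its relative document frequency between high- and low-scored articles.

```python
from collections import Counter

def term_quality_association(docs, scores, high=3):
    """Score each term by the difference between its relative document
    frequency in high-scored vs. low-scored articles. Positive values
    associate the term with high quality, negative with low quality."""
    high_docs = [set(d) for d, s in zip(docs, scores) if s >= high]
    low_docs = [set(d) for d, s in zip(docs, scores) if s < high]
    high_df = Counter(t for d in high_docs for t in d)
    low_df = Counter(t for d in low_docs for t in d)
    terms = set(high_df) | set(low_df)
    return {t: high_df[t] / max(len(high_docs), 1)
               - low_df[t] / max(len(low_docs), 1) for t in terms}

# Toy corpus: token lists with REF-style quality scores on a 1-4 scale
docs = [["we", "show"], ["we", "prove"], ["survey", "results"], ["survey", "data"]]
scores = [4, 4, 1, 2]
assoc = term_quality_association(docs, scores)
# "we" appears only in high-scored articles, "survey" only in low-scored ones
```

A real analysis over 122,331 articles would also need phrase extraction (2–5-word n-grams) and per-UoA splits, which this sketch omits.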
Findings
The terms associating with high- or low-quality scores vary between fields but relate to writing styles, methods and topics. We show that the first-person writing style strongly associates with higher quality research in many areas because it is the norm for a set of large prestigious journals. We found methods and topics that associate with both high- and low-quality scores. Worryingly, terms associated with educational and qualitative research attract lower quality scores in multiple areas. REF experts may rarely give high scores to qualitative or educational research because the authors tend to be less competent, because it is harder to do world-leading research with these themes, or because they do not value them.
Originality/value
This is the first investigation of journal article terms associating with research quality.
Mike Thelwall, Kayvan Kousha, Emma Stuart, Meiko Makita, Mahshid Abdoli, Paul Wilson and Jonathan M. Levitt
Abstract
Purpose
To assess whether interdisciplinary research evaluation scores vary between fields.
Design/methodology/approach
The authors investigate whether published refereed journal articles were scored differently by expert assessors (two per output, agreeing a score and norm referencing) from multiple subject-based Units of Assessment (UoAs) in the REF2021 UK national research assessment exercise. The primary raw data was 8,015 journal articles published 2014–2020 and evaluated by multiple UoAs, and the agreement rates were compared to the estimated agreement rates for articles multiply-evaluated within a single UoA.
Findings
The authors estimated a 53% agreement rate on a four-point quality scale between UoAs for the same article and a within-UoA agreement rate of 70%. This suggests that quality scores vary more between fields than within fields for interdisciplinary research. There were also some hierarchies between fields, in the sense of UoAs that tended to give higher scores for the same article than others.
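The between-UoA agreement rate can be sketched as a simple proportion over paired scores (the helper and data below are hypothetical illustrations; the study's actual estimates also rest on assumptions about how often scores were cross-checked):

```python
def agreement_rate(score_pairs):
    """Fraction of score pairs that agree exactly on the
    four-point quality scale."""
    agree = sum(1 for a, b in score_pairs if a == b)
    return agree / len(score_pairs)

# Hypothetical (UoA 1 score, UoA 2 score) pairs for the same articles
pairs = [(4, 4), (3, 2), (2, 2), (4, 3)]
rate = agreement_rate(pairs)  # 0.5
```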
Research limitations/implications
The results apply to one country and type of research evaluation. The agreement rate percentage estimates are both based on untested assumptions about the extent of cross-checking scores for the same articles in the REF, so the inferences about the agreement rates are tenuous.
Practical implications
The results underline the importance of choosing relevant fields for any type of research evaluation.
Originality/value
This is the first evaluation of the extent to which a careful peer-review exercise generates different scores for the same articles between disciplines.
Nushrat Khan, Mike Thelwall and Kayvan Kousha
Abstract
Purpose
This study investigates differences and commonalities in data production, sharing and reuse across the widest range of disciplines yet and identifies types of improvements needed to promote data sharing and reuse.
Design/methodology/approach
The first authors of randomly selected publications from 2018 to 2019 in 20 Scopus disciplines were surveyed for their beliefs and experiences about data sharing and reuse.
Findings
The 3,257 survey responses show that data sharing and reuse are still increasing but not ubiquitous in any subject area, and are more common among experienced researchers. Researchers with previous data reuse experience were more likely to share data than others. Types of data produced and systematic online data sharing varied substantially between subject areas. Although the use of institutional and journal-supported repositories for sharing data is increasing, personal websites are still frequently used. Combining multiple existing datasets to answer new research questions was the most common use. Proper documentation, openness and information on the usability of data continue to be important when searching for existing datasets. However, researchers in most disciplines struggled to find datasets to reuse. Researchers' feedback suggested 23 recommendations to promote data sharing and reuse, including improved data access and usability, formal data citations, new search features and cultural and policy-related disciplinary changes to increase awareness and acceptance.
Originality/value
This study is the first to explore data sharing and reuse practices across the full range of academic discipline types. It expands and updates previous data sharing surveys and suggests new areas of improvement in terms of policy, guidance and training programs.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-08-2021-0423.
Rongying Zhao, Weijie Zhu, He Huang and Wenxin Chen
Abstract
Purpose
Social mediametrics is a subfield of measurement in which the emphasis is placed on social media data. This paper comprehensively analyzes the trends and patterns of paper mentions on Twitter, with a particular focus on Twitter's mention behaviors. It uncovers the dissemination patterns and impact of academic literature on social media. The research has significant theoretical and practical implications.
Design/methodology/approach
This paper explores the fundamental attributes of Twitter mentions by analyzing 9,476 pieces of scholarly literature (5,097 from Nature and 4,379 from Science), 1,474,898 tweets and 451,567 user records collected from the Altmetric.com database and the Twitter API. The study uncovers assorted Twitter mention characteristics, mention behavior patterns and data accumulation patterns.
Findings
The findings illustrate that the top academic journals on Twitter have a wider range of coverage and display similar distribution patterns to other academic communication platforms. A large number of mentioners remain unidentified, and the distribution of follower counts among the mention users exhibits a significant Pareto effect, indicating a small group of highly influential users who generate numerous mentions. Furthermore, the proportion of sharing and exchange mentions positively correlates with the number of user followers, while the incidence of supportive mentions has a negative correlation. In terms of country-specific mention behavior, Thai scholars tend to utilize supportive mentions more frequently, whereas Korean scholars prefer sharing mentions over communicating mentions. The cumulative pattern of Twitter mentions suggests that these occur before official publication, with a half-life of 6.02 days; a considerable reduction in the number of mentions is observed on the seventh day after publication.
Originality/value
Conducting a multi-dimensional and systematic analysis of Twitter mentions of scholarly articles can aid in comprehending and utilizing social media communication patterns. This analysis can uncover literature's distribution patterns, dissemination effects and social significance in social media.
Abstract
Purpose
Despite the widespread studies on attitudes about open access (OA), there exists little comparative evidence about the opinions of author and non-author parties at a global level in a social context. To bridge the gap, this study first investigated the opinions of the users who posted at least one tweet about OA in 2019. Then, it zoomed in to explore the views of the OA-interested tweeters, i.e. the users who have posted five or more tweets about OA.
Design/methodology/approach
Using a content analysis method, with an opinion-mining approach, this study examined a sample of 9,268 OA-related tweets posted by 5,227 tweeters in 2019. The sentiments were analyzed using SentiStrength. A threshold of at least five tweets was set to identify the OA-interested tweeters.
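The two steps described above can be sketched minimally, assuming a toy sentiment lexicon (the study used SentiStrength, which is far more sophisticated) and the five-tweet threshold for OA-interested tweeters; all names and data here are illustrative assumptions.

```python
from collections import Counter

# Toy lexicon; SentiStrength, used in the study, scores positive and
# negative strength separately on richer word lists.
POSITIVE = {"great", "love", "support", "free"}
NEGATIVE = {"expensive", "unfair", "predatory", "paywall"}

def tweet_sentiment(text):
    """Classify a tweet by counting lexicon hits."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"

def oa_interested(tweets_by_user, threshold=5):
    """Users posting at least `threshold` OA tweets, mirroring the
    study's cut-off for OA-interested tweeters."""
    counts = Counter(user for user, _ in tweets_by_user)
    return {u for u, n in counts.items() if n >= threshold}

tweet_sentiment("I love open access")  # "positive"
```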
Findings
Academics and scholars, library and information professionals, and journals and publishers were the main OA-interested tweeters, implying that OA debates have not been widely propagated from their traditional audience to the general public. Despite an overall positive attitude, the tweeters showed negative perspectives about the gold and hybrid models, validity and quality, and costs and funds. The negativity depended on the OA features tweeted, the tweeters' occupations and gender, as well as the trends.
Research limitations/implications
The low societal impact of the OA debates calls for solutions to attract the public's attention and to exploit their potential to achieve the OA ideals. The OA stakeholders' divergence necessitates finding solutions to remedy the pitfalls. It also underlines the need for scrutiny into social layers when studying society's opinions and behaviors in a social network.
Originality/value
This is the first study in estimating the extent of the societal impact of OA debates, comparing the social OA stakeholders' opinions and their dependence on the OA features tweeted, the tweeter roles and gender and the tweet trending status.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-09-2022-0502.
Swagota Saikia, Vinit Kumar and Manoj Kumar Verma
Abstract
Purpose
The purpose of this study was to perform sentiment analysis and analyze the growth and popularity of Drupal, Joomla and WordPress on YouTube over a four-year period. This included identifying the most liked and commented videos for each content management system (CMS), ranking the CMSs based on the number of positive comments they received, and using natural language processing techniques to identify the top ten most frequently appearing words in videos about the CMSs.
Design/methodology/approach
The data for assessing the features of the videos of Drupal, WordPress and Joomla was extracted using Webometric Analyst version 4.4 with the help of the YouTube application programming interface key, for videos on the selected CMSs uploaded from 2019 to 2022. The extraction of comments and sentiment analysis for the relevant videos was done using Mozdeh.
Findings
This study scrutinized 371, 234 and 313 videos of WordPress, Joomla and Drupal on YouTube, respectively. The findings reveal a chronological growth of videos for the three CMSs over the four years; to date, WordPress has the highest number of videos, followed by Drupal and then Joomla. WordPress also tops the ranking of highly liked videos, with the highest number of likes, followed by Drupal and then Joomla. Of the comments extracted for sentiment analysis (123,409 for WordPress, 1,790 for Joomla and 1,783 for Drupal), WordPress received the highest average of positive comments, followed by Drupal and then Joomla. In the word-frequency analysis, "thank" occurs most often, and viewers frequently ask for more tutorial videos.
Originality/value
To the best of the authors’ knowledge, this study is the first attempt to analyze the sentiments of WordPress, Drupal and Joomla using Mozdeh software within the period concerned.