Search results

1 – 10 of 199
Article
Publication date: 10 July 2017

Ann E. Williams

Abstract

Purpose

The purpose of this paper is to provide an overview and evaluation of F1000, a publishing outlet and peer review system for research in the biomedical and life sciences.

Design/methodology/approach

The review chronicles the rise of F1000 and describes the site’s functionalities and use capabilities.

Findings

The findings detail both the strengths and limitations of F1000 and point toward avenues for continued research and development.

Originality/value

This is the first review to provide a substantive evaluation of F1000 for academics to consider when adopting, using and researching the platform.

Details

Information and Learning Science, vol. 118 no. 7/8
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 18 May 2015

Lutz Bornmann

Abstract

Purpose

The purpose of this case study is to investigate the usefulness of altmetrics for measuring the broader impact of research.

Design/methodology/approach

This case study is based on a sample of 1,082 Public Library of Science (PLOS) journal articles recommended in F1000. The data set includes altmetrics provided by PLOS. The F1000 data set contains tags that experts assigned to papers to characterise them.

Findings

The most relevant tag for altmetric research is “good for teaching”, as it is assigned to papers that could interest a wider circle of readers than the peers in a specialised area. One could expect papers with this tag to be mentioned more often on Facebook and Twitter than those without it. Regression models confirmed these expectations: papers with this tag show significantly higher Facebook and Twitter counts than papers without it. This clear association could not be seen with Mendeley or Figshare counts (that is, with counts from platforms that are chiefly of interest in a scientific context).

Originality/value

The results of the current study indicate that Facebook and Twitter, but not Figshare or Mendeley, might provide an indication of which papers are of interest to a broader circle of readers (and not only for the peers in a specialist area), and could therefore be useful for the measurement of the societal impact of research.

Details

Aslib Journal of Information Management, vol. 67 no. 3
Type: Research Article
ISSN: 2050-3806

Book part
Publication date: 12 June 2015

Samir Hachani

Abstract

Peer review has been with humans for a long time. Its effective inception dates back to the information overload that followed World War II, which imposed a quantitative and qualitative screening of publications. Peer review has been beset by accusations and criticism, largely stemming from the biases and subjectivity of the process, including the secrecy that became standard practice. The advent of the Internet in the early 1990s provided a way to open peer review up and make it more transparent, less inequitable and more objective. This chapter investigates whether this openness led to a more objective manner of judging scientific publications. Three sites are examined: Electronic Transactions on Artificial Intelligence (ETAI), Atmospheric Chemistry and Physics (ACP), and Faculty of 1000 (F1000). These sites practice open peer review, wherein reviewers and authors, together with their reviews and rebuttals, are visible to all. The chapter examines the steps taken to allow reviewers and authors to interact and how this allows the entire community to participate. This new prepublication reviewing of papers has, to some extent, alleviated the biases that were previously preponderant and seems to give positive results and feedback. Although recent, these experiments appear to have won scientists’ acceptance, because openness allows a more objective and fair judgment of research and scholarship. Yet it will undoubtedly raise new questions, which are examined in this chapter.

Details

Current Issues in Libraries, Information Science and Related Fields
Type: Book
ISBN: 978-1-78441-637-9

Article
Publication date: 19 January 2015

C. Brooke Dobni, Mark Klassen and W. Thomas Nelson

Abstract

Purpose

The USA is the world’s largest economy, but is it a leading innovation nation? As economies mature and slow in growth, innovation will prove to be a key driver in maintaining transient advantage. This article presents a pulse on innovation in the USA as F1000 C-suite executives weigh in on their organization’s innovation health. It also compares the US score with proxy benchmark measures in other countries, and provides operational and strategic considerations to advance innovation platforms in US organizations. Managers will gain insight into common hurdles faced by some of America’s most prominent companies, as well as how to improve innovation practices in their own organization.

Design/methodology/approach

This current article reports on findings of innovation health in the USA based on responses from 1,127 F1000 executives (manager level and higher). F1000 executives report their innovation culture through completion of an innovation culture model survey developed by the authors. The F1000 is a listing created by Fortune magazine detailing the 1,000 largest companies in the USA based on revenues. This survey is considered one of the largest surveys on innovation culture in the USA to date.

Findings

A leading question this survey set out to answer was the current level of innovation orientation amongst America’s largest organizations. Our findings suggest that US business is just beginning to catch the wave of innovation. Other major findings include: innovation amongst the F1000 is average at best; innovation is random and incremental; innovation strategy is missing in most organizations; there is an executive/employee innovation perception gap; innovation governance is missing; employees cannot be blamed for a lack of innovation; and companies that fail to innovate will struggle even more.

Practical implications

There are a number of operational and strategic considerations presented to support the advancement of innovation in organizations. These include considerations around the leadership, resources, knowledge management and execution to strategically support innovation.

Originality/value

This is an original contribution in that it uses a scientifically developed model to measure innovation culture. It is the largest survey of innovation to date amongst the US Fortune 1000, and the findings present considerations to advance the innovation agendas of organizations.

Details

Journal of Business Strategy, vol. 36 no. 1
Type: Research Article
ISSN: 0275-6668

Article
Publication date: 24 February 2020

Qianjin Zong, Lili Fan, Yafen Xie and Jingshi Huang

Abstract

Purpose

The purpose of this study is to investigate the relationship of the post-publication peer review (PPPR) polarity of a paper to that paper's citation count.

Design/methodology/approach

Papers with PPPRs from Publons.com, serving as the experimental groups, were manually matched 1:2 with related papers without PPPRs as the control group, matching on the same journal, issue (volume), access status (gold open access or not) and document type. None of the papers in the experimental or control groups received any comments or recommendations on ResearchGate, PubPeer or F1000. The polarity of the PPPRs was coded using content analysis. A negative binomial regression analysis was conducted, controlling for the characteristics of the papers.
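The 1:2 exact matching described above can be sketched as follows. This is a minimal illustration only; the field names (`journal`, `issue`, `gold_oa`, `doc_type`) are invented for the sketch and do not reflect the study's actual data schema:

```python
from collections import defaultdict

def match_controls(experimental, candidates, k=2):
    """Match each experimental paper with k control papers that share the
    same journal, issue/volume, open-access status and document type.
    Papers without a full set of k matches are dropped."""
    # Index control candidates by their matching key.
    pool = defaultdict(list)
    for paper in candidates:
        key = (paper["journal"], paper["issue"], paper["gold_oa"], paper["doc_type"])
        pool[key].append(paper)

    matched = {}
    for paper in experimental:
        key = (paper["journal"], paper["issue"], paper["gold_oa"], paper["doc_type"])
        controls = [c for c in pool[key] if c["id"] != paper["id"]][:k]
        if len(controls) == k:  # keep only fully matched cases
            matched[paper["id"]] = [c["id"] for c in controls]
    return matched
```

In practice the matching was done manually; a sketch like this only shows why exact matching on those four attributes holds journal- and access-level confounders constant across the pairs.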

Findings

Four experimental groups were generated: papers with neutral PPPRs, papers with both negative and positive PPPRs, papers with negative PPPRs and papers with positive PPPRs, each with a corresponding control group of papers without PPPRs. While holding the other variables (such as page count, number of authors, etc.) constant in the model, papers that received neutral PPPRs, negative PPPRs, or both negative and positive PPPRs showed no significant difference in citation count compared to their control pairs (papers without PPPRs). Papers that received positive PPPRs had significantly greater citation counts than their control pairs.

Originality/value

Based on a broader range of PPPR sentiments, by controlling many of the confounding factors (including the characteristics of the papers and the effects of the other PPPR platforms), this study analyzed the relationship of various polarities of PPPRs to citation count.

Details

Online Information Review, vol. 44 no. 3
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 1 July 2020

Mike Thelwall, Eleanor-Rose Papas, Zena Nyakoojo, Liz Allen and Verena Weigert

Abstract

Purpose

Peer reviewer evaluations of academic papers are known to be variable in content and overall judgements but are important academic publishing safeguards. This article introduces a sentiment analysis program, PeerJudge, to detect praise and criticism in peer evaluations. It is designed to support editorial management decisions and reviewers in the scholarly publishing process and for grant funding decision workflows. The initial version of PeerJudge is tailored for reviews from F1000Research's open peer review publishing platform.

Design/methodology/approach

PeerJudge uses a lexical sentiment analysis approach with a human-coded initial sentiment lexicon and machine learning adjustments and additions. It was built with an F1000Research development corpus and evaluated on a different F1000Research test corpus using reviewer ratings.
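The core of a lexical sentiment approach of this kind can be sketched as follows. The example lexicon, its weights and the function name are invented for illustration and are unrelated to PeerJudge's actual lexicon or scoring:

```python
# Hypothetical mini-lexicon: positive weights for praise terms,
# negative weights for criticism terms (not PeerJudge's real lexicon).
LEXICON = {"excellent": 2, "clear": 1, "novel": 1,
           "unclear": -1, "flawed": -2, "missing": -1}

def score_review(text, lexicon):
    """Sum the signed weights of lexicon terms found in a review.
    A positive total means praise outweighs criticism; negative, the reverse."""
    words = text.lower().split()
    return sum(lexicon.get(w, 0) for w in words)
```

For example, `score_review("the method is novel but the statistics are flawed", LEXICON)` returns -1, since the single criticism term outweighs the single praise term. A production system layers machine-learned weight adjustments and lexicon additions, as the abstract describes, on top of this basic term-matching idea.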

Findings

PeerJudge can predict F1000Research judgements from negative evaluations in reviewers' comments more accurately than baseline approaches, although not from positive reviewer comments, which seem to be largely unrelated to reviewer decisions. Within the F1000Research mode of post-publication peer review, the absence of any detected negative comments is a reliable indicator that an article will be ‘approved’, but the presence of moderately negative comments could lead to either an approved or approved with reservations decision.

Originality/value

PeerJudge is the first transparent AI approach to peer review sentiment detection. It may be used to identify anomalous reviews whose text does not appear to match their judgements, whether for individual checks or systematic bias assessments.

Details

Online Information Review, vol. 44 no. 5
Type: Research Article
ISSN: 1468-4527

Open Access
Article
Publication date: 31 December 2015

Mike Thelwall, Kayvan Kousha, Adam Dinsmore and Kevin Dolby

Abstract

Purpose

The purpose of this paper is to investigate the potential of altmetric and webometric indicators to aid with funding agencies’ evaluations of their funding schemes.

Design/methodology/approach

This paper analyses a range of altmetric and webometric indicators in terms of suitability for funding scheme evaluations, compares them to traditional indicators and reports some statistics derived from a pilot study with Wellcome Trust-associated publications.

Findings

Some alternative indicators have advantages to usefully complement scientometric data by reflecting a different type of impact or through being available before citation data.

Research limitations/implications

The empirical part of the results is based on a single case study and does not give statistical evidence for the added value of any of the indicators.

Practical implications

A few selected alternative indicators can be used by funding agencies as part of their funding scheme evaluations if they are processed in ways that enable comparisons between data sets. Their evidence value is only weak, however.

Originality/value

This is the first analysis of altmetrics or webometrics from a funding scheme evaluation perspective.

Details

Aslib Journal of Information Management, vol. 68 no. 1
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 29 December 2022

Kianoosh Rashidi, Hajar Sotudeh and Alireza Nikseresht

Abstract

Purpose

This study aimed to investigate how the enrichment of medical documents' index terms by their comments improves the relevance and novelty of the top-ranked results retrieved by an NLP system.

Design/methodology/approach

A semi-experimental pre-test/post-test study was designed to compare NLP-based indexes before and after expansion with comment terms. The experiments were conducted on a test collection of 13,957 documents commented on by F1000-Prime reviewers. They were indexed at the title, abstract, body and full-text levels. In total, 100 seed documents were randomly selected to serve as queries. The textual similarity of documents and queries was calculated using Lucene's more-like-this function and evaluated via the semantic similarity of their MeSH terms. The novelty of the results was measured using maximal marginal relevance and evaluated by their MeSH novelty. Normalized discounted cumulative gain (NDCG) was used to compare the precision of the basic and expanded indexes at the top 10, 20 and 50 ranks.
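As a reference point, NDCG at a cut-off k can be computed from graded relevance scores as below. This is the generic textbook formulation, not the study's exact implementation:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg_at_k(relevances, k):
    """Normalized DCG over the top-k results: DCG of the observed ranking
    divided by the DCG of the ideal (relevance-sorted) ranking."""
    ideal = dcg(sorted(relevances, reverse=True)[:k])
    return dcg(relevances[:k]) / ideal if ideal else 0.0
```

A ranking already sorted by relevance scores 1.0; any mis-ordering of relevant results below less relevant ones pulls the score below 1.0, which is what makes NDCG@10/20/50 suitable for comparing the basic and expanded indexes at the top ranks.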

Findings

The relevance and novelty of the results ranked at the top precision points were improved after expanding the indexes with the comment terms. The finding implies that meta-texts are effective in representing their parent documents, adding dynamic elements to their otherwise static contents. It also provides further evidence of the merits of applying the social intelligence and collective wisdom reflected in users' actions and reactions to tackle the challenges faced by NLP-based systems.

Originality/value

This is the first study to confirm that social comments on scientific papers improve the performance of information systems in terms of relevance and novelty.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-05-2022-0283.

Details

Online Information Review, vol. 47 no. 6
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 29 January 2024

Sheikh Shueb, Sumeer Gul, Aabid Hussain Kharadi, Nahida Tun Nisa and Farzana Gulzar

Abstract

Purpose

The study showcases the social impact (online attention) of funded research compared to nonfunded research for the BRICS nations. The key themes achieving online attention across the funded and nonfunded publications have also been identified.

Design/methodology/approach

A total of 1,507,931 articles published across the BRICS nations over a three-year period were downloaded from Clarivate Analytics' InCites database of Web of Science (WoS). “Funding Acknowledgement Analysis (FAA)” was used to identify the funded and nonfunded publications. The altmetric scores of the top 1% most highly cited publications were gauged from the largest altmetric data provider, Altmetric.com, using each publication's DOI. A one-way ANOVA was used to assess the impact of funding on mentions (altmetrics) across the different data sources covered by Altmetric.com. The most predominant keywords (hotspots) were mapped using the bibliometric software VOSviewer.
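For reference, the F statistic behind a one-way ANOVA comparison of group means (such as funded versus nonfunded mention counts per source) can be sketched as below; this is the standard formula, independent of the study's data:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over lists of observations:
    between-group mean square divided by within-group mean square."""
    all_obs = [x for g in groups for x in g]
    grand = sum(all_obs) / len(all_obs)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F (relative to the F distribution with those degrees of freedom) indicates that group means differ more than within-group variability would explain, which is the basis for concluding that funding status affects mention counts.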

Findings

The mentions across all the altmetric sources are higher for funded research than for nonfunded research for all nations. This indicates an altmetric advantage for funded research: funded publications are discussed, tweeted and shared more, and have more readers and citations, thus acquiring more social impact/online attention than nonfunded publications. The difference in means between funded and nonfunded publications varies across altmetric sources and nations. Further, the authors' keyword analysis reveals the prominence of the respective nation names in the publications of the BRICS.

Research limitations/implications

The study showcases the utility of indexing funding information and examines whether research funding increases the social impact return (online attention). It presents altmetrics as an important indicator for impact assessment and evaluation frameworks, adding one more dimension to research performance. Linking funding information with altmetric scores can be used to assess the online attention and multi-flavoured impact of a particular funding programme and source/agency of a nation, so that strategies can be framed to improve the reach and impact of funded research. It identifies countries that achieve significant online attention for their funded publications compared to nonfunded ones, along with key themes that can be utilised to frame research and investment plans.

Originality/value

The study represents the social impact of funded research compared to nonfunded across the BRICS nations.

Details

Performance Measurement and Metrics, vol. 25 no. 1
Type: Research Article
ISSN: 1467-8047

Content available

Details

Aslib Journal of Information Management, vol. 67 no. 3
Type: Research Article
ISSN: 2050-3806
