Search results

1 – 10 of 61
Article
Publication date: 10 July 2017

Ann E. Williams

Abstract

Purpose

The purpose of this paper is to provide an overview and evaluation of F1000, a publishing outlet and peer review system for research in the biomedical and life sciences.

Design/methodology/approach

The review chronicles the rise of F1000 and describes the site’s functionality and capabilities for use.

Findings

The findings detail both the strengths and limitations of F1000 and point toward avenues for continued research and development.

Originality/value

This is the first review to provide a substantive evaluation of F1000 for academics to consider when adopting, using and researching the platform.

Details

Information and Learning Sciences, vol. 118 no. 7/8
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 18 May 2015

Lutz Bornmann

Downloads
1162

Abstract

Purpose

The purpose of this case study is to investigate the usefulness of altmetrics for measuring the broader impact of research.

Design/methodology/approach

This case study is based on a sample of 1,082 Public Library of Science (PLOS) journal articles recommended in F1000. The data set includes altmetrics provided by PLOS. The F1000 data set contains tags assigned by experts to characterise the papers.

Findings

The most relevant tag for altmetric research is “good for teaching”, as it is assigned to papers likely to interest a wider circle of readers than the peers in a specialised area. One would therefore expect papers with this tag to be mentioned more often on Facebook and Twitter than papers without it. Regression models confirmed this expectation: papers with the tag show significantly higher Facebook and Twitter counts than papers without it. No such clear association was seen for Mendeley or Figshare counts (that is, counts from platforms chiefly of interest in a scientific context).
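
Read as a modelling recipe, the comparison described above amounts to a count regression of tweet or Facebook counts on the “good for teaching” tag. A minimal sketch in Python, not the author’s code: the file and column names are hypothetical, and the negative binomial family is an assumption suited to overdispersed altmetric counts.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical input: one row per F1000-recommended PLOS paper, with an
    # altmetric count and a 0/1 flag for the "good for teaching" tag.
    df = pd.read_csv("f1000_plos_sample.csv")

    # Count regression of Twitter mentions on the tag; altmetric counts are
    # typically overdispersed, so negative binomial is safer than Poisson.
    model = smf.glm(
        "twitter_count ~ good_for_teaching",
        data=df,
        family=sm.families.NegativeBinomial(),
    ).fit()
    print(model.summary())  # a positive, significant tag coefficient matches the reported finding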

Originality/value

The results of the current study indicate that Facebook and Twitter, but not Figshare or Mendeley, might provide an indication of which papers are of interest to a broader circle of readers (and not only for the peers in a specialist area), and could therefore be useful for the measurement of the societal impact of research.

Details

Aslib Journal of Information Management, vol. 67 no. 3
Type: Research Article
ISSN: 2050-3806

Book part
Publication date: 12 June 2015

Samir Hachani

Abstract

Peer review has been with humans for a long time. Its effective inception dates back to the information overload that followed World War II, which imposed a quantitative and qualitative screening of publications. Peer review has faced numerous accusations and criticisms, largely stemming from the biases and subjective aspects of the process, including the secrecy that became standard practice. The advent of the Internet in the early 1990s provided a way to open peer review up and make it more transparent, less inequitable and more objective. This chapter investigates whether this openness has led to a more objective way of judging scientific publications. Three sites are examined: Electronic Transactions on Artificial Intelligence (ETAI), Atmospheric Chemistry and Physics (ACP) and Faculty of 1000 (F1000). These sites practice open peer review, in which reviewers, authors, their reviews and their rebuttals are visible to all. The chapter examines the steps taken to allow reviewers and authors to interact, and how this allows the entire community to participate. This new form of prepublication review has, to some extent, alleviated the biases that were previously preponderant and seems to be producing positive results and feedback. Although recent, these experiments appear to have won scientists’ acceptance, because openness allows a more objective and fair judgment of research and scholarship. Yet it will undoubtedly raise new questions, which are also examined in this chapter.

Details

Current Issues in Libraries, Information Science and Related Fields
Type: Book
ISBN: 978-1-78441-637-9

Article
Publication date: 19 January 2015

C. Brooke Dobni, Mark Klassen and W. Thomas Nelson

Downloads
2040

Abstract

Purpose

The USA is the world’s largest economy, but is it a leading innovation nation? As economies mature and slow in growth, innovation will prove to be a key driver in maintaining transient advantage. This article takes the pulse of innovation in the USA as F1000 C-suite executives weigh in on their organizations’ innovation health. It also compares the US score with proxy benchmark measures in other countries, and provides operational and strategic considerations to advance innovation platforms in US organizations. Managers will gain insight into common hurdles faced by some of America’s most prominent companies, as well as how to improve innovation practices in their own organizations.

Design/methodology/approach

This article reports findings on innovation health in the USA, based on responses from 1,127 F1000 executives (manager level and higher). The F1000 is Fortune magazine’s listing of the 1,000 largest companies in the USA by revenue. The executives reported on their organizations’ innovation culture by completing an innovation culture model survey developed by the authors; it is considered one of the largest surveys on innovation culture in the USA to date.

Findings

A leading question the survey set out to answer is how innovation-oriented America’s largest organizations currently are. Our findings suggest that US business is just beginning to catch the wave of innovation. Other major findings include: innovation amongst the F1000 is average at best; innovation is random and incremental; innovation strategy is missing in most organizations; there is an executive/employee innovation perception gap; innovation governance is missing; employees cannot be blamed for a lack of innovation; and companies that fail to innovate will struggle even more.

Practical implications

A number of operational and strategic considerations are presented to support the advancement of innovation in organizations, including considerations around leadership, resources, knowledge management and execution.

Originality/value

This is an original contribution in that it uses a scientifically developed model to measure innovation culture. It is the largest survey of innovation amongst the US Fortune 1000 to date, and the findings present considerations to advance the innovation agendas of organizations.

Details

Journal of Business Strategy, vol. 36 no. 1
Type: Research Article
ISSN: 0275-6668

Article
Publication date: 24 February 2020

Qianjin Zong, Lili Fan, Yafen Xie and Jingshi Huang

Abstract

Purpose

The purpose of this study is to investigate the relationship of the post-publication peer review (PPPR) polarity of a paper to that paper's citation count.

Design/methodology/approach

Papers with PPPRs from Publons.com (the experimental groups) were manually matched 1:2 with related papers without PPPRs (the control groups) on journal, issue (volume), access status (gold open access or not) and document type. None of the papers in either group had received comments or recommendations on ResearchGate, PubPeer or F1000. The polarity of the PPPRs was coded using content analysis. A negative binomial regression analysis was then conducted, controlling for the characteristics of the papers.
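
As a rough illustration of the matched design (not the authors’ code; the file and column names are hypothetical), exact 1:2 matching can be expressed in a few lines of pandas before the count model is fitted:

    import pandas as pd

    # Hypothetical columns: doi, journal, issue, gold_oa, doc_type, has_pppr (0/1)
    papers = pd.read_csv("papers.csv")

    keys = ["journal", "issue", "gold_oa", "doc_type"]
    treated = papers[papers["has_pppr"] == 1]
    pool = papers[papers["has_pppr"] == 0]

    # Join each PPPR paper to candidate controls sharing all four keys,
    # then keep at most two controls per treated paper (1:2 matching).
    matched = (
        treated.merge(pool, on=keys, suffixes=("_case", "_ctrl"))
               .groupby("doi_case", sort=False)
               .head(2)
    )
    print(matched[["doi_case", "doi_ctrl"] + keys])

The citation outcome would then go into a negative binomial regression with page count, number of authors and the other paper characteristics as covariates.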

Findings

Four experimental groups were formed (papers with neutral PPPRs, papers with both negative and positive PPPRs, papers with negative PPPRs and papers with positive PPPRs), each with a corresponding control group of papers without PPPRs. Holding the other variables (such as page count and number of authors) constant in the model, papers that received neutral PPPRs, negative PPPRs, or both negative and positive PPPRs showed no significant differences in citation count compared to their control pairs. Papers that received positive PPPRs, by contrast, had significantly greater citation counts than their control pairs.

Originality/value

Based on a broader range of PPPR sentiments, and controlling for many of the confounding factors (including the characteristics of the papers and the effects of other PPPR platforms), this study analyzed the relationship of various polarities of PPPRs to citation count.

Details

Online Information Review, vol. 44 no. 3
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 1 July 2020

Mike Thelwall, Eleanor-Rose Papas, Zena Nyakoojo, Liz Allen and Verena Weigert

Abstract

Purpose

Peer reviewer evaluations of academic papers are known to be variable in content and overall judgements, but they are important safeguards in academic publishing. This article introduces a sentiment analysis program, PeerJudge, to detect praise and criticism in peer evaluations. It is designed to support editorial management decisions and reviewers in the scholarly publishing process, as well as grant funding decision workflows. The initial version of PeerJudge is tailored to reviews from F1000Research’s open peer review publishing platform.

Design/methodology/approach

PeerJudge uses a lexical sentiment analysis approach with a human-coded initial sentiment lexicon and machine learning adjustments and additions. It was built with an F1000Research development corpus and evaluated on a different F1000Research test corpus using reviewer ratings.
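
The human-coded lexicon is the heart of such an approach; the scoring step can be as simple as counting matches. A toy sketch in Python follows; the word lists are illustrative stand-ins, not PeerJudge’s actual lexicon or weighting.

    # Tiny lexicon-based scorer in the spirit described above.
    PRAISE = {"clear", "thorough", "novel", "rigorous", "convincing"}
    CRITICISM = {"unclear", "flawed", "incomplete", "unconvincing", "missing"}

    def score_review(text: str) -> dict:
        # Normalise tokens crudely: lower-case and strip surrounding punctuation.
        words = [w.strip(".,;:!?").lower() for w in text.split()]
        return {
            "praise": sum(w in PRAISE for w in words),
            "criticism": sum(w in CRITICISM for w in words),
        }

    print(score_review("The method is novel, but the evaluation is incomplete."))
    # -> {'praise': 1, 'criticism': 1}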

Findings

PeerJudge can predict F1000Research judgements from negative evaluations in reviewers' comments more accurately than baseline approaches, although not from positive reviewer comments, which seem to be largely unrelated to reviewer decisions. Within the F1000Research mode of post-publication peer review, the absence of any detected negative comments is a reliable indicator that an article will be ‘approved’, but the presence of moderately negative comments could lead to either an approved or approved with reservations decision.

Originality/value

PeerJudge is the first transparent AI approach to peer review sentiment detection. It may be used to identify anomalous reviews whose text does not appear to match their judgements, whether for individual checks or for systematic bias assessments.

Details

Online Information Review, vol. 44 no. 5
Type: Research Article
ISSN: 1468-4527

Content available
Article
Publication date: 31 December 2015

Mike Thelwall, Kayvan Kousha, Adam Dinsmore and Kevin Dolby

Downloads
3614

Abstract

Purpose

The purpose of this paper is to investigate the potential of altmetric and webometric indicators to aid with funding agencies’ evaluations of their funding schemes.

Design/methodology/approach

This paper analyses a range of altmetric and webometric indicators in terms of suitability for funding scheme evaluations, compares them to traditional indicators and reports some statistics derived from a pilot study with Wellcome Trust-associated publications.

Findings

Some alternative indicators can usefully complement scientometric data, either by reflecting a different type of impact or by being available before citation data.

Research limitations/implications

The empirical part of the results is based on a single case study and does not give statistical evidence for the added value of any of the indicators.

Practical implications

A few selected alternative indicators can be used by funding agencies as part of their funding scheme evaluations if they are processed in ways that enable comparisons between data sets. However, their evidential value is only weak.

Originality/value

This is the first analysis of altmetrics or webometrics from a funding scheme evaluation perspective.

Details

Aslib Journal of Information Management, vol. 68 no. 1
Type: Research Article
ISSN: 2050-3806

Content available
Downloads
2650

Details

Aslib Journal of Information Management, vol. 67 no. 3
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 29 November 2018

Jose Luis Ortega

Downloads
2344

Abstract

Purpose

The purpose of this paper is to analyse the metrics provided by Publons about the scoring of publications and their relationship with impact measurements (bibliometric and altmetric indicators).

Design/methodology/approach

In January 2018, 45,819 research articles were extracted from Publons, including all their metrics (scores, numbers of pre- and post-publication reviews, reviewers, etc.). Using the DOI, metrics from altmetric providers were gathered to compare those publications’ Publons scores with their bibliometric and altmetric impact in PlumX, Altmetric.com and Crossref Event Data.
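
A minimal sketch of the DOI-keyed comparison in Python. The Publons scores below are invented, and the use of Altmetric.com’s public per-DOI endpoint is an assumption about how such data can be fetched; the paper’s actual pipeline is not described at this level.

    import requests
    from scipy.stats import spearmanr

    def altmetric_score(doi: str) -> float:
        # Altmetric.com's public per-DOI endpoint (rate-limited); 404 for unseen DOIs.
        r = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
        return float(r.json().get("score", 0.0)) if r.ok else 0.0

    # Hypothetical Publons article scores keyed by DOI.
    publons = {
        "10.1371/journal.pone.0000001": 7.5,
        "10.1371/journal.pone.0000002": 6.0,
        "10.1371/journal.pone.0000003": 9.0,
    }

    attention = [altmetric_score(doi) for doi in publons]
    rho, p = spearmanr(list(publons.values()), attention)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # the paper reports r < 0.2, not significant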

Findings

The results show that: there are important biases in the coverage of Publons according to disciplines and publishers; metrics from Publons present several problems as research evaluation indicators; and correlations between bibliometric and altmetric counts and the Publons metrics are very weak (r<0.2) and not significant.

Originality/value

This is the first study about the Publons metrics at article level and their relationship with other quantitative measures such as bibliometric and altmetric indicators.

Details

Aslib Journal of Information Management, vol. 71 no. 1
Type: Research Article
ISSN: 2050-3806

Book part
Publication date: 12 June 2015

Details

Current Issues in Libraries, Information Science and Related Fields
Type: Book
ISBN: 978-1-78441-637-9
