Search results
1 – 10 of over 51,000
Lin He and Zhengbiao Han
Abstract
Purpose
The purpose of this paper is to evaluate the impact of scientific data in order to assess the reliability of data to support data curation, to establish trust between researchers to support reuse of digital data and encourage researchers to share more data.
Design/methodology/approach
The authors compared the correlations between usage counts of associated data in Dryad and citation counts of articles in Web of Science in different subject areas in order to assess the possibility of using altmetric indicators to evaluate scientific data.
Findings
There are high positive correlations between the usage counts of data and the citation counts of associated articles. The citation counts of articles with shared data are higher than the average citation counts in most of the subject areas examined by the authors.
Practical implications
The paper suggests that usage counts of data could be potentially used to evaluate scholarly impact of scientific data, especially for those subject areas without special data repositories.
Originality/value
The study examines the possibility of using usage counts to evaluate the impact of scientific data in Dryad, a generic repository, across different subject categories.
Dominika Kalinowska and Jean-Loup Madre
Abstract
Across Europe, on average more than 95% of all passenger cars and half of all light commercial vehicles are permanently available to a household. This includes both privately owned vehicles and company cars. The profiles of vehicle use can be specified as average annual distance driven per vehicle and for the fleet as a total, purpose of travel (trip destination), infrastructure use (urban, interurban or motorway road transport) and also fuel consumption together with data on CO2 emissions. Indicators on vehicle use can be tracked in various ways:
self-administered panels of households, which permit their vehicles to be followed for several years;
national or local household travel surveys (with a seven-day trip diary);
official vehicle inspection and vehicle registration files;
‘vehicle surveys’ based on vehicle registry data;
traffic counts;
data collected for road-charging purposes.
The paper will present a review of mainly vehicle-based survey methods used in France, Germany, Finland, the United Kingdom, the United States and Canada, describing existing sampling frames with respect to their scope, advantages and limitations, as well as their costs. Issues addressed in this context will be further examined in terms of their methodological challenges and their purpose.
The leading questions underlying this paper as well as the corresponding workshop are: why is it necessary to have data on passenger travel or transportation; and, looking at international experience, how good are vehicle-based surveys in delivering the required information? In discussing problems experienced in the different countries with data collection and evaluation methods, emphasis will be put on potential strategies for methodological and technological improvement and problem solving. One example is the potential use, benefits and constraints of new survey technologies presented by vehicle tracking techniques.
Maruscia Baklizky, Marcelo Fantinato, Lucineia Heloisa Thom, Violeta Sun and Patrick C.K. Hung
Abstract
Purpose
The purpose of this paper is to present business process point analysis (BPPA), a technique to measure business functional process size, based on function point analysis (FPA), and using business process model and notation (BPMN). This paper also discusses the assessment results of BPPA compared with FPA.
Design/methodology/approach
Two experimental studies with participants from academia and industry were conducted. The experimental studies focused on the following aspects: similarity, ease of application, feasibility and application benefits. The purpose of the experiments was to assess BPPA in comparison with FPA, as the BPPA design followed the FPA pattern.
Findings
Experimental results showed that both academia and industry groups highly rated similarity and application benefits for BPPA compared with FPA. However, only participants from industry highly rated BPPA for application easiness and feasibility. The results also showed that participants’ previous experiences did not influence their ratings on BPPA.
Originality/value
BPPA helps project managers to measure functional process size of business process management projects. As BPPA is derived from FPA, its mechanism is easily recognizable by project managers who are used to FPA. These results also show that both techniques are in most cases considered rather similar.
Abstract
The challenge of truckload routing is increased in complexity by the introduction of stochastic demand. Typically, this demand is generalized to follow a Poisson distribution. In this chapter, we cluster the demand data using data mining techniques to establish a more acceptable distribution for predicting demand. We then examine this stochastic truckload demand using an econometric discrete choice model known as a count data model. Using actual truckload demand data and data from the Bureau of Transportation Statistics, we perform count data regressions. Two outcomes are produced from every regression run: the predicted demand between every origin and destination, and the likelihood that this demand will occur. The two allow us to generate an expected-value forecast of truckload demand as input to a truckload routing formulation. The negative binomial distribution produces an improved forecast over the Poisson distribution.
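The chapter's core claim, that a negative binomial outperforms a Poisson for this demand, usually rests on overdispersion: a Poisson forces variance equal to the mean, while real demand counts tend to vary more. A minimal dispersion check, with illustrative counts rather than the chapter's data, might look like this:

```python
# Dispersion check for count data: variance/mean ≈ 1 suggests Poisson;
# variance well above the mean argues for the negative binomial.
# The demand counts below are hypothetical.

def dispersion_index(counts):
    """Sample variance-to-mean ratio of a list of counts."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

def nb_parameters(counts):
    """Method-of-moments negative binomial (r, p); valid when var > mean."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    r = mean ** 2 / (var - mean)  # size (dispersion) parameter
    p = mean / var                # success probability
    return r, p

demand = [0, 2, 1, 7, 0, 12, 3, 0, 5, 9]  # hypothetical weekly loads on one lane
print(dispersion_index(demand) > 1)  # → True: overdispersed, NB preferred
```

When the index is well above 1, as here, a Poisson understates the spread of demand, which is consistent with the chapter's finding that the negative binomial forecast is better.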
Abstract
The Bureau of Economics in the Federal Trade Commission has a three-part role in the Agency, and the strength of its functions has changed over time depending on the preferences and ideology of the FTC's leaders, developments in the field of economics, and the tenor of the times. The overriding current role is to provide well-considered, unbiased economic advice regarding antitrust and consumer protection law enforcement cases to the legal staff and the Commission. The second role, which long ago was primary, is to provide reports on investigations of various industries to the public and public officials. This role was more recently called research or “policy R&D”. A third role is to advocate for competition and markets both domestically and internationally. As a practical matter, the provision of economic advice to the FTC and to the legal staff has required that the economists wear “two hats”: helping the legal staff investigate cases and provide evidence to support law enforcement cases, while also advising the legal bureaus and the Commission on which cases to pursue (thus providing “a second set of eyes” to evaluate cases). There is sometimes a tension in those functions because building a case is not the same as evaluating a case. Economists and the Bureau of Economics have provided such services to the FTC for over 100 years, proving that a sub-organization can survive while playing roles that sometimes conflict. Such a life is not, however, always easy or fun.
Erik W. Johnson, Jonathan P. Schreiner and Jon Agnone
Abstract
We know a great deal about the ways in which routines of news coverage may bias newspaper content, but little about how different article retrieval practices influence newspaper data assembled by scholars. Using the New York Times as a source of data on social movement activity, we compare depictions of protest by the African-American Civil Rights movement over time produced using the two most common article retrieval methods: index versus full-story coding. Full-story coding clearly offers more depth and greater breadth in terms of the events identified. Moreover, many of the same event characteristics associated with selection bias in newspaper reporting (e.g., size and confrontational nature of a protest event, presence of counter-demonstrators or police, and event sponsorship by a recognized social movement organization) are selected upon again when stories are indexed by New York Times staff.
Qianjin Zong, Lili Fan, Yafen Xie and Jingshi Huang
Abstract
Purpose
The purpose of this study is to investigate the relationship of the post-publication peer review (PPPR) polarity of a paper to that paper's citation count.
Design/methodology/approach
Papers with PPPRs from Publons.com as the experimental groups were manually matched 1:2 with the related papers without PPPR as the control group, by the same journal, the same issue (volume), the same access status (gold open access or not) and the same document type. None of the papers in the experimental group or control group received any comments or recommendations from ResearchGate, PubPeer or F1000. The polarity of the PPPRs was coded by using content analysis. A negative binomial regression analysis was conducted to examine the data by controlling the characteristics of papers.
Findings
The four experimental groups and their corresponding control groups were generated as follows: papers with neutral PPPRs, papers with both negative and positive PPPRs, papers with negative PPPRs and papers with positive PPPRs as well as four corresponding control groups (papers without PPPRs). The results are as follows: while holding the other variables (such as page count, number of authors, etc.) constant in the model, papers that received neutral PPPRs, those that received negative PPPRs and those that received both negative and positive PPPRs had no significant differences in citation count when compared to their corresponding control pairs (papers without PPPRs). Papers that received positive PPPRs had significantly greater citation count than their corresponding control pairs (papers without PPPRs) while holding the other variables (such as page count, number of authors, etc.) constant in the model.
Originality/value
Based on a broader range of PPPR sentiments, by controlling many of the confounding factors (including the characteristics of the papers and the effects of the other PPPR platforms), this study analyzed the relationship of various polarities of PPPRs to citation count.
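The 1:2 matched-control design the abstract describes, pairing each PPPR paper with two non-PPPR papers from the same journal, issue, access status and document type, can be sketched as follows. The field names and records are hypothetical, not the study's actual matching code:

```python
# Sketch of 1:k exact matching on (journal, issue, access status,
# document type). Records and field names are illustrative.

def match_controls(treated, pool, k=2):
    """For each treated paper, pick k unused controls sharing the key;
    treated papers without k available matches are dropped."""
    key = lambda p: (p["journal"], p["issue"], p["open_access"], p["doc_type"])
    matched = {}
    used = set()
    for paper in treated:
        candidates = [c for c in pool
                      if key(c) == key(paper) and c["id"] not in used]
        if len(candidates) >= k:
            chosen = candidates[:k]
            used.update(c["id"] for c in chosen)
            matched[paper["id"]] = [c["id"] for c in chosen]
    return matched

treated = [{"id": "T1", "journal": "J", "issue": 4, "open_access": True,
            "doc_type": "article"}]
pool = [{"id": f"C{i}", "journal": "J", "issue": 4, "open_access": True,
         "doc_type": "article"} for i in range(3)]
print(match_controls(treated, pool))  # → {'T1': ['C0', 'C1']}
```

Exact matching of this kind controls journal- and issue-level confounders before the negative binomial regression on citation counts is run.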
Nushrat Khan, Mike Thelwall and Kayvan Kousha
Abstract
Purpose
The purpose of this study is to explore current practices, challenges and technological needs of different data repositories.
Design/methodology/approach
An online survey was designed for data repository managers, and contact information from the re3data, a data repository registry, was collected to disseminate the survey.
Findings
In total, 189 responses were received, including 47% discipline-specific and 34% institutional data repositories. Of the repositories that reported their software, 71% used bespoke technical frameworks, with DSpace, EPrints and Dataverse being commonly used by institutional repositories. Of repository managers, 32% reported tracking secondary data reuse while 50% would like to. Among data reuse metrics, citation counts were considered extremely important by the majority, followed by links to the data from other websites and download counts. Despite their perceived usefulness, repository managers struggle to track dataset citations. Most repository managers support dataset and metadata quality checks via librarians, subject specialists or information professionals. A lack of engagement from users and a lack of human resources are the top two challenges, and outreach is the most common motivator mentioned by repositories across all groups. Ensuring findable, accessible, interoperable and reusable (FAIR) data (49%), providing user support for research (36%) and developing best practices (29%) are the top three priorities for repository managers. The main recommendations for future repository systems are as follows: integration and interoperability between data and systems (30%), better research data management (RDM) tools (19%), tools that allow computation without downloading datasets (16%) and automated systems (16%).
Originality/value
This study identifies the current challenges and needs for improving data repository functionalities and user experiences.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-04-2021-0204
Niyati Aggrawal, Anuja Arora, Adarsh Anand and Yogesh Dwivedi
Abstract
Purpose
The purpose of this study is to propose a mathematical model that is able to predict the future popularity of a particular YouTube video based on its view count. Since the emergence of video-sharing sites in early 2005, YouTube has been pioneering in its performance and holds the largest share of internet traffic. YouTube plays a significant role in popularizing information on social networks. For all social media sites, viewership is an important and vital component for measuring diffusion on a video-sharing site, defined in terms of the number of view counts. In the era of social media marketing, companies demand an efficient system that can predict the popularity of a video in advance. Diffusion prediction for a video can help marketing firms and brand companies to inflate traffic and help the firms in generating revenue.
Design/methodology/approach
In the present work, viewership is studied as an important diffusion-affecting parameter pertaining to YouTube videos. Primarily, a mathematical diffusion model is proposed to predict YouTube video diffusion based on varying situations of viewership. The proposal segregates the total number of viewers into two classes: neoteric viewers, i.e. those viewing a video directly, and followers, i.e. those watching under the influence of earlier viewers.
Findings
The approach is supplemented with numerical illustration done on the real YouTube data set. Results prove that the proposed approach contributes significantly to predict viewership of video. The proposed model brings predicted viewership and its classification highly close to the true value.
Originality/value
A behavioral rationale for the modeling and quantification is thereby offered in terms of two distinct yet connected classes of viewers: “neoterics” and “followers.”
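The neoterics/followers split described above mirrors the innovator/imitator structure of a Bass-type diffusion model. The sketch below is a generic Bass-style simulation under that assumption, not the authors' exact model; the audience size and the rate parameters are purely illustrative:

```python
# Bass-style view diffusion: "neoterics" discover a video directly at
# rate p; "followers" watch under the influence of prior viewers at
# rate q scaled by current penetration. Parameters are illustrative.

def bass_views(m, p, q, periods):
    """Return (cumulative views per period, total neoteric views,
    total follower views). m = potential audience size."""
    cumulative = 0.0
    neoteric_total = follower_total = 0.0
    history = []
    for _ in range(periods):
        remaining = m - cumulative
        neoterics = p * remaining                     # direct viewers
        followers = q * (cumulative / m) * remaining  # influenced viewers
        cumulative += neoterics + followers
        neoteric_total += neoterics
        follower_total += followers
        history.append(cumulative)
    return history, neoteric_total, follower_total

history, direct, influenced = bass_views(m=1_000_000, p=0.03, q=0.4, periods=52)
print(round(history[-1]))  # predicted cumulative views after 52 periods
```

The simulated cumulative-view curve rises in the familiar S-shape, with followers dominating once enough early viewers exist to exert influence, which is the behavioral story the abstract tells.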