Search results
1 – 10 of 28
Kenning Arlitsch, Jonathan Wheeler, Minh Thi Ngoc Pham and Nikolaus Nova Parulian
Abstract
Purpose
This study demonstrates that aggregated data from the Repository Analytics and Metrics Portal (RAMP) have significant potential to analyze visibility and use of institutional repositories (IR) as well as potential factors affecting their use, including repository size, platform, content, device and global location. The RAMP dataset is unique and public.
Design/methodology/approach
The webometrics methodology was followed to aggregate and analyze use and performance data from 35 institutional repositories in seven countries that were registered with the RAMP for a five-month period in 2019. The RAMP aggregates Google Search Console (GSC) data to show IR items that surfaced in search results from all Google properties.
Findings
The analyses demonstrate large performance variances across IR as well as low overall use. The findings also show that device use affects search behavior, that different content types such as electronic thesis and dissertation (ETD) may affect use and that searches originating in the Global South show much higher use of mobile devices than in the Global North.
Research limitations/implications
The RAMP relies on GSC as its sole data source, resulting in somewhat conservative overall numbers. However, the data are also expected to be as free of robot traffic as can reasonably be hoped.
Originality/value
This may be the first analysis of aggregate use and performance data derived from a global set of IR, using an openly published dataset. RAMP data offer significant research potential with regard to quantifying and characterizing variances in the discoverability and use of IR content.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-08-2020-0328
Valerie Spezi, Simon Wakeling, Stephen Pinfield, Claire Creaser, Jenny Fry and Peter Willett
Abstract
Purpose
Open-access mega-journals (OAMJs) represent an increasingly important part of the scholarly communication landscape. OAMJs, such as PLOS ONE, are large-scale, broad-scope journals that operate an open access business model (normally based on article-processing charges), and which employ a novel form of peer review, focussing on scientific "soundness" and eschewing judgement of novelty or importance. The purpose of this paper is to examine the discourses relating to OAMJs and their place within scholarly publishing, and to consider attitudes towards mega-journals within the academic community.
Design/methodology/approach
This paper presents a review of the literature of OAMJs structured around four defining characteristics: scale, disciplinary scope, peer review policy, and economic model. The existing scholarly literature was augmented by searches of more informal outputs, such as blogs and e-mail discussion lists, to capture the debate in its entirety.
Findings
While the academic literature relating specifically to OAMJs is relatively sparse, discussion in other fora is detailed and animated, with debates ranging from the sustainability and ethics of the mega-journal model, to the impact of soundness-only peer review on article quality and discoverability, and the potential for OAMJs to represent a paradigm-shifting development in scholarly publishing.
Originality/value
This paper represents the first comprehensive review of the mega-journal phenomenon, drawing not only on the published academic literature, but also grey, professional and informal sources. The paper advances a number of ways in which the role of OAMJs in the scholarly communication environment can be conceptualised.
Abstract
Purpose
The purpose of this paper is, first, to scrutinize the determinants of key benefits of open educational resources (OER) to faculty. Second, it is to expose how, and in which routines, the variables involved are interrelated.
Design/methodology/approach
An exploratory design is used in this study. Qualitatively, key benefits include integration, opportunity, efficiency, enrichment, and collaboration. These benefits have direct impacts on enhancing student learning, augmenting teaching practice, improving productivity, catalyzing changes in teaching practice, and supporting non-traditional learners. Quantitatively, the key benefits act as the moderating variable: integration, opportunity, efficiency, enrichment, and collaboration are the independent variables, while enhancing student learning, enriching teaching practice, improving productivity, catalyzing changes, and supporting non-traditional learners are the dependent variables. The study population comprised the 721 Universitas Terbuka (UT) faculty members. Respondents were chosen randomly; of the 450 questionnaires distributed, only 203 were completed. Importance performance analysis and customer satisfaction index (IPA-CSI) were used to measure the importance level of the variables involved and their benefits. A structural equation model (SEM) was used to examine the ten hypotheses developed, so that the author could understand the significance level and strength of relations among the variables engaged, with reference to the qualitative outcomes previously obtained.
Findings
Six hypotheses were validated by the analysis. Statistically, efficiency and integration affect the key benefits. Likewise, the moderating variables affect teaching practice enhancement, productivity improvement, catalyzing changes, and supporting non-traditional learners. Conversely, opportunity, enrichment, and collaboration showed no significant effect on the key benefits, nor did the key benefits affect learning enhancement.
Practical implications
This study highlighted that adoption, integration, and implementation of OER in the UT milieu do take place.
Originality/value
This study recognized the variation between the qualitative and quantitative outcomes. A follow-up inquiry with a broader perspective and a larger respondent sample is needed to minimize the difference between the qualitative and quantitative results.
Patiswa Zibani, Mogiveny Rajkoomar and Nalindren Naicker
Abstract
Purpose
This study aims to evaluate faculty research repositories used in higher education institutions, their different levels and functions with regard to research information management. This is revealed through the selected studies reviewed.
Design/methodology/approach
A systematic literature search of journal article studies on research repositories in higher education institutions was carried out on several databases, namely, Ebscohost, Emerald Insight, Science Direct, Sage, Google Scholar, SA e-Publications and citation databases such as Scopus and Web of Science. The systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The time frame for the analysis was 2015 to 2021.
Findings
The findings are presented on the motives for developing faculty research repositories, the services provided and benefits derived from them, and their utilization.
Originality/value
The results show that the development of research repositories at the faculty level enhances sharing, analysis, evaluation and preservation of scholarly research produced.
Roberto Linzalone, Giovanni Schiuma and Salvatore Ammirato
Abstract
Purpose
Studies on academic entrepreneurship (AE) agree on the significant impact that Universities can have on entrepreneurial development. AE deploys through fundamental activities, like the start-up of new companies and the connection of the University with Enterprises. The purpose of this paper is to analyse the role of digital learning platforms (DLP) to connect Universities and Enterprises effectively. Although the literature has extensively investigated DLP, there is a lack of understanding of the role of DLP in supporting digital AE. This paper focuses, in particular, on the functional requirements that have to distinguish the development of DLPs supporting education-based activities of knowledge transfer between academia and enterprise.
Design/methodology/approach
The research is carried out by adopting a case study methodology. A single, holistic case regarding a DLP developed for the strategic and exclusive deployment of AE activities is proposed to describe and discuss the functional requirements of such a platform.
Findings
The DLP is a virtual learning space in which Enterprises and Universities can interact. The definition of design requirements is crucial for the efficacy of DLPs and needs to be carefully supported. Various criteria are proposed with respect to the stakeholders engaged in the DAE learning platform (Universities, Enterprises, students, employees), and according to the short- and long-term objectives of the University–Enterprise connection.
Originality/value
The paper explores an original case of a DLP established in AE to connect Universities and Enterprises. The research also sheds light on an under-focussed typology of AE activities regarding education-based knowledge exchange, which is currently unaddressed by the literature on AE.
Abstract
This chapter analyses the Norwegian Twitter-sphere during and in the aftermath of the terrorist attack in Norway on 22 July 2011. Based on a collection of 2.2 million tweets representing the Twitter-sphere during the period 20 July–28 August 2011, the chapter seeks answers to how the micro-blogging services aided in creating situation awareness (SA) related to the emergency event, what role hashtags played in that process and who the dominant crisis communicators were. The chapter is framed by theories and previous research on SA and social media use in the context of emergency events. The findings reveal that Twitter was important in establishing SA both during and in the aftermath of the terrorist attack, that hashtags were of limited value in this process during the critical phase, and that unexpected actors became key communicators.
Justyna Bandola-Gill, Sotiria Grek and Matteo Ronzani
Abstract
The visualization of ranking information in global public policy is moving away from traditional “league table” formats and toward dashboards and interactive data displays. This paper explores the rhetoric underpinning the visualization of ranking information in such interactive formats, the purpose of which is to encourage country participation in reporting on the Sustainable Development Goals. The paper unpacks the strategies that the visualization experts adopt in the measurement of global poverty and wellbeing, focusing on a variety of interactive ranking visualizations produced by the OECD, the World Bank, the Gates Foundation and the ‘Our World in Data’ group at the University of Oxford. Building on visual and discourse analysis, the study details how the politically and ethically sensitive nature of global public policy, coupled with the pressures for “decolonizing” development, influence how rankings are visualized. The study makes two contributions to the literature on rankings. First, it details the move away from league table formats toward multivocal interactive layouts that seek to mitigate the competitive and potentially dysfunctional pressures of the display of “winners and losers.” Second, it theorizes ranking visualizations in global public policy as “alignment devices” that entice country buy-in and seek to align actors around common global agendas.
Julia Slupska and Leonie Maria Tanczer
Abstract
Technology-facilitated abuse, so-called “tech abuse,” through phones, trackers, and other emerging innovations, has a substantial impact on the nature of intimate partner violence (IPV). The current chapter examines the risks and harms posed to IPV victims/survivors from the burgeoning Internet of Things (IoT) environment. IoT systems are understood as “smart” devices such as conventional household appliances that are connected to the internet. Interdependencies between different products together with the devices' enhanced functionalities offer opportunities for coercion and control. Across the chapter, we use the example of IoT to showcase how and why tech abuse is a socio-technological issue and requires not only human-centered (i.e., societal) but also cybersecurity (i.e., technical) responses. We apply the method of “threat modeling,” which is a process used to investigate potential cybersecurity attacks, to shift the conventional technical focus from the risks to systems toward risks to people. Through the analysis of a smart lock, we highlight insufficiently designed IoT privacy and security features and uncover how seemingly neutral design decisions can constrain, shape, and facilitate coercive and controlling behaviors.
Sara Lafia, David A. Bleckley and J. Trent Alexander
Abstract
Purpose
Many libraries and archives maintain collections of research documents, such as administrative records, with paper-based formats that limit the documents' access to in-person use. Digitization transforms paper-based collections into more accessible and analyzable formats. As collections are digitized, there is an opportunity to incorporate deep learning techniques, such as Document Image Analysis (DIA), into workflows to increase the usability of information extracted from archival documents. This paper describes the authors' approach using digital scanning, optical character recognition (OCR) and deep learning to create a digital archive of administrative records related to the mortgage guarantee program of the Servicemen's Readjustment Act of 1944, also known as the G.I. Bill.
Design/methodology/approach
The authors used a collection of 25,744 semi-structured paper-based records from the administration of G.I. Bill Mortgages from 1946 to 1954 to develop a digitization and processing workflow. These records include the name and city of the mortgagor, the amount of the mortgage, the location of the Reconstruction Finance Corporation agent, one or more identification numbers and the name and location of the bank handling the loan. The authors extracted structured information from these scanned historical records in order to create a tabular data file and link them to other authoritative individual-level data sources.
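The regular-expression step described above can be sketched as follows. This is an illustrative example only, not the authors' code: the line format, field names and sample values are invented for demonstration, and the real G.I. Bill records differ.

```python
import re

# Hypothetical pattern for one OCR'd record line containing a mortgagor
# name, city, mortgage amount and an identification number.
RECORD = re.compile(
    r"(?P<name>[A-Z][a-zA-Z.'\- ]+),\s+"   # mortgagor name (stops at comma)
    r"(?P<city>[A-Z][a-zA-Z ]+)\s+"        # city
    r"\$(?P<amount>[\d,]+)\s+"             # mortgage amount in dollars
    r"No\.\s*(?P<id>\d+)"                  # identification number
)

line = "John A. Smith, Detroit $5,600 No. 104233"
m = RECORD.match(line)
if m:
    fields = m.groupdict()
    # Normalize the amount to an integer for tabular output.
    fields["amount"] = int(fields["amount"].replace(",", ""))
    print(fields)
```

Named capture groups make it straightforward to emit each matched record as a row in the tabular data file described above.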
Findings
The authors compared the flexible character accuracy of five OCR methods. The authors then compared the character error rate (CER) of three text extraction approaches (regular expressions, DIA and named entity recognition (NER)). The authors were able to obtain the highest quality structured text output using DIA with the Layout Parser toolkit by post-processing with regular expressions. Through this project, the authors demonstrate how DIA can improve the digitization of administrative records to automatically produce a structured data resource for researchers and the public.
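The character error rate (CER) used to compare the three extraction approaches has a standard definition: the edit distance between the extracted text and the ground-truth reference, divided by the reference length. A minimal sketch of that metric (not the authors' evaluation code) looks like this:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edits needed, relative to reference length."""
    return levenshtein(hypothesis, reference) / max(len(reference), 1)

# One OCR substitution ("$" misread as "S") in a 15-character reference.
print(round(cer("mortgage $5,000", "mortgage S5,000"), 3))
```

A lower CER means the extracted text is closer to the ground truth, which is how the regular-expression, DIA and NER outputs can be ranked.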
Originality/value
The authors' workflow is readily transferable to other archival digitization projects. Through the use of digital scanning, OCR and DIA processes, the authors created the first digital microdata file of administrative records related to the G.I. Bill mortgage guarantee program available to researchers and the general public. These records offer research insights into the lives of veterans who benefited from loans, the impacts on the communities built by the loans and the institutions that implemented them.