Search results
1 – 10 of over 9000
Sharon Ince, Christopher Hoadley and Paul A. Kirschner
Abstract
Purpose
This paper is a qualitative study of how social sciences faculty construct their research workflows with the help of technological tools. The purpose of this study is to examine faculty scholarly workflows and how both tools and practices support the research process. This paper could inform academic libraries on how to support scholars throughout the research process.
Design/methodology/approach
This is a qualitative case study of ten faculty members from six research universities in the United States and Canada. Semi-structured interviews were conducted and recorded. Atlas.ti was used to code and analyze the transcripts; each participant was treated as a separate case. Descriptive coding was used to identify digital tools used for collaboration; process and descriptive coding were used to examine practices in scholarly workflows.
Findings
Case study analysis examined the role of technology in faculty research workflows. Each workflow was grouped into four categories: information literacy, information management, knowledge management and scholarly communication. The findings show that scholars create simple workflows for efficiency and collaboration, and rely on workarounds.
Research limitations/implications
The study did not observe faculty in the process of doing research and, thus, only reports on what the researchers say that they do.
Originality/value
The research is unique in that there is almost no existing research on how social scientists construct their research workflows, or on the affordances and impasses of this process.
Richard Cull and Tillal Eldabi
Abstract
Purpose
The increase in business process management projects over the past decade has driven demand for business process modelling (BPM) techniques. A rapidly growing aspect of BPM is the use of workflow management systems to automate routine, sequential processes. Workflows tend to depart from traditional definitions of business processes, which are often forced to fit a model that does not suit their nature. Existing process modelling tools tend to be biased towards the informational, behavioural or object‐oriented aspect of the workflow, so models can miss important aspects of a workflow. As well as managing the relationship between the types of model, it is important to consider who will be using a model, since process models are useful in various ways. The paper aims to address these issues.
Design/methodology/approach
This paper reports on a case study in a manufacturing company, in which users were surveyed to identify the notations most commonly used in modelling, grouped into two main categories (behavioural and informational).
Findings
The research outcomes showed that no prevailing set of standards is used for either category, while most users feel the need to use more than one approach to model their system at any given time. Many companies face problems when trying to model the behaviour of human workers in business processes. Existing techniques are mostly designed for modelling either information systems or business processes, and rarely attempt to integrate the two.
Originality/value
This paper proposes a new hybrid modelling methodology built on existing tools and methodologies. Key authors in the literature recommend against developing brand new methodologies, so existing tools from each end of the scale were combined to provide a solution capable of modelling both information systems and business processes.
Tobias Blanke, Michael Bryant and Mark Hedges
Abstract
Purpose
This paper aims to present an evaluation of open source OCR for supporting research on material in small‐ to medium‐scale historical archives.
Design/methodology/approach
The approach was to develop a workflow engine that supports easy customisation of the OCR process for historical materials using open source technologies. Commercial OCR often fails to deliver sufficient results here, as its processing is optimised for large‐scale, commercially relevant collections. The approach presented here allows users to combine the most effective parts of different OCR tools.
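The tool-combination idea above can be sketched as follows. This is a hypothetical illustration only, not the authors' workflow engine: the engine functions, confidence scores and file name are invented stand-ins for real OCR components.

```python
# Hypothetical sketch of "combine the most effective parts of different
# OCR tools": run several engines as interchangeable pipeline stages and
# keep the candidate transcription with the highest confidence score.
# The engines below are stand-ins, not real tool APIs.

def run_pipeline(image, engines):
    """engines: list of callables image -> (text, confidence).
    Returns the best-scoring (text, confidence) pair."""
    results = [engine(image) for engine in engines]
    return max(results, key=lambda r: r[1])

# Stand-in engines returning canned results for demonstration.
engine_a = lambda img: ("D1e Akten", 0.71)   # weaker on this material
engine_b = lambda img: ("Die Akten", 0.93)   # better layout handling
best_text, best_conf = run_pipeline("page-001.tif", [engine_a, engine_b])
```

A real engine would wrap pre-processing, recognition and layout analysis steps, but the selection logic stays the same.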
Findings
The authors demonstrate the application and its flexibility through two case studies, which show how OCR can be embedded into wider digitally enabled historical research. The first case study produces high‐quality, research‐oriented digitisation outputs, using services that the authors developed to allow direct linkage between the digitisation image and the OCR output. The second case study demonstrates what becomes possible when OCR can be customised directly within a larger research infrastructure for history. In such a scenario, further semantics can easily be added to the workflow, significantly enhancing the research browse experience.
Originality/value
There has been little work on the use of open source OCR technologies for historical research. This paper demonstrates that the authors' workflow approach allows users to combine commercial engines' ability to read a wider range of character sets with the flexibility of open source tools in terms of customisable pre‐processing and layout analysis. All this can be done without the need to develop dedicated code.
Sharon Ince, Christopher Hoadley and Paul A. Kirschner
Abstract
Purpose
This paper aims to review current literature pertaining to information literacy and digital literacy skills and practices within the research workflow for doctoral students and makes recommendations for how libraries (and others) can foster skill-sets for graduate student research workflows for the twenty-first century scholarly researcher.
Design/methodology/approach
A review of existing information literacy practices for doctoral students was conducted, and four key areas of knowledge were identified and discussed.
Findings
The findings validate the need for graduate students to have training in information literacy, information management, knowledge management and scholarly communication. The paper recommends that empirical studies be conducted to inform future practices for doctoral students.
Practical implications
This paper offers four areas of training to be considered by librarians and faculty advisers to better prepare scholars for their future.
Originality/value
This paper presents a distinctive synthesis of the types of information literacy and digital literacy skills needed by graduate students.
Abstract
Purpose
Concurrency is a desirable property that enhances workflow efficiency. The purpose of this paper is to propose six polynomial-time algorithms that collectively maximize control flow concurrency for Business Process Model and Notation (BPMN) workflow models. The proposed algorithms perform model-level transformations on a BPMN model during the design phase of the model, thereby improving the workflow model’s execution efficiency.
Design/methodology/approach
The approach is similar to source code optimization, which works solely by syntactic means. The first step makes implicit synchronizations of interdependent concurrent control flows explicit by adding parallel gateways. After that, every control flow can proceed asynchronously. The next step then generates an equivalent sequence of execution hierarchies for every control flow such that they collectively provide maximum concurrency for the control flow. As a whole, the proposed algorithms add a valuable feature to a BPMN modeling tool by maximizing control flow concurrency.
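The first transformation step described above, making an implicit synchronization between concurrent flows explicit with a parallel gateway, can be illustrated with a toy sketch. This is not the paper's algorithms: the graph representation, the `insert_parallel_join` helper and the node names are assumptions made purely for illustration.

```python
# Toy sketch (not the paper's actual algorithms): make an implicit
# synchronization between two concurrent control flows explicit by
# inserting a parallel (AND-join) gateway before the dependent task.

def insert_parallel_join(succ, dependency, join_name="AND_join"):
    """succ: dict node -> list of successor nodes.
    dependency: (src, dst) meaning dst must wait for src, where src and
    dst sit on different concurrent flows.  Returns a new graph in which
    the wait is explicit: src and dst's former predecessor both route
    through a parallel join gateway that then leads to dst."""
    src, dst = dependency
    new = {n: list(s) for n, s in succ.items()}
    # Redirect every edge that pointed at dst through the join gateway.
    for n in new:
        new[n] = [join_name if m == dst else m for m in new[n]]
    # The source of the dependency also feeds the join.
    new.setdefault(src, [])
    if join_name not in new[src]:
        new[src].append(join_name)
    new[join_name] = [dst]
    return new

# Two concurrent flows after a parallel split: A->B and C->D, where D
# implicitly depends on B.  The transformation makes that wait explicit.
model = {"split": ["A", "C"], "A": ["B"], "C": ["D"], "B": [], "D": []}
explicit = insert_parallel_join(model, ("B", "D"))
```

After the transformation, D is reached only via the AND-join fed by both B and C, so the synchronization is visible in the model itself rather than implied by task dependencies.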
Findings
This paper also introduces the concept of control flow independence, a user-determined semantic property of BPMN models that cannot be obtained by any syntactic means. However, if control flow independence holds in a BPMN model, the model's determinism is guaranteed. As a result, the proposed algorithms output a model that can be proved equivalent to the original model.
Originality/value
This paper adds value to BPMN modeling tools by providing polynomial-time algorithms that collectively maximize control flow concurrency in a BPMN model during the design phase of the model. As a result, the model's execution efficiency will increase. Similar to source code optimization, these algorithms perform model-level transformations on a BPMN model through syntactic means, and the transformations performed on each control flow are guaranteed to be equivalent to the original control flow. Furthermore, a case study on a real-life new-employee preparation process demonstrates the proposed algorithms' usefulness in increasing the process's execution efficiency.
Johann Van Wyk, Theo Bothma and Marlene Holmner
Abstract
Purpose
The purpose of this article is to give an overview of the development of a Virtual Research Environment (VRE) conceptual model for the management of research data at a South African university.
Design/methodology/approach
The research design of this article combines empirical and non-empirical research. The non-empirical part consists of a critical literature review that synthesises the strengths, weaknesses (limitations) and omissions of VRE models identified in the literature in order to develop a conceptual VRE model. As part of the critical literature review, concepts were clarified and possible applications of VREs in research lifecycles and research data lifecycles were explored. The empirical part focused on the practical application of this model. This part of the article follows an interpretivist paradigm and a qualitative research approach, using case studies as the method of inquiry. Case studies with a positivist perspective were selected through purposive sampling, and inferences were drawn from the sample to design and test a conceptual VRE model and to investigate the management of research data through a VRE. The investigation was conducted through participatory action research (PAR) and used semi-structured interviews and participant observation as data collection techniques. Findings were evaluated through formative and summative evaluation.
Findings
The article presents a VRE conceptual model, with identified generic component layers and components that could potentially be applied and used in different research settings/disciplines. The article also reveals the role that VREs play in the successful management of research data throughout the research lifecycle. Guidelines for setting up a conceptual VRE model are offered.
Practical implications
This article assisted in clarifying and validating the various components of a conceptual VRE model that could be used in different research settings and disciplines for research data management.
Originality/value
This article confirms/validates generic layers and components that would be needed in a VRE by synthesising these in a conceptual model in the context of a research lifecycle and presents guidelines for setting up a conceptual VRE model.
Abstract
Purpose
This paper aims to present findings from a survey that aimed to identify the issues around the use and linkage of source and output repositories and the chemistry researchers' expectations about their use.
Design/methodology/approach
This survey was performed by means of an online questionnaire and structured interviews with academic and research staff in the field of chemistry. A total of 38 people took part in the online questionnaire survey and 17 participated in face‐to‐face interviews, accounting for 55 responses in total.
Findings
Members of academic and research staff in chemistry from institutions in the UK were, in general, favourably disposed towards the idea of linking research data and published research outputs, believing that this facility would be either a significant advantage or useful for the research conducted in the domain. Further information about the nature of the research that they conduct, the type of data that they produce, the sharing and availability of research data and the use and expectations of source and output repositories is also discussed.
Research limitations/implications
Interpretation of the results must recognise that the majority of the interviewees worked in the area of theoretical/computational chemistry and therefore their views may not be representative of other chemistry research fields.
Originality/value
Such data were essential for the business analysis that described the functional requirements for the development of the key deliverable of the source‐to‐output repositories (StORe) project: the pilot middleware, which aimed to facilitate and demonstrate bi‐directional links between source and output repositories. The data also enabled the identification of workflows in research practice and contributed to the prime aspiration of the StORe project, which was to add new value to the intellectual products of academic research.
Andrew Martin Cox and Winnie Wan Ting Tam
Abstract
Purpose
Visualisations of research and research-related activities including research data management (RDM) as a lifecycle have proliferated in the last decade. The purpose of this paper is to offer a systematic analysis and critique of such models.
Design/methodology/approach
A framework for analysis was synthesised from the literature, presented and applied to nine examples.
Findings
The strengths of the lifecycle representation are that it clarifies stages in research and captures key features of project-based research. Nevertheless, its weakness is that it typically masks various aspects of the complexity of research, constructing it as highly purposive, serial, uni-directional and occurring in a somewhat closed system. Other types of model, such as the spiral of knowledge creation or the data journey, reveal other stories about research. It is suggested that we need to develop other metaphors and visualisations around research.
Research limitations/implications
The paper explores the strengths and weaknesses of the popular lifecycle model for research and RDM, and also considers alternative ways of representing them.
Practical implications
Librarians use lifecycle models to explain service offerings to users so the analysis will help them identify clearly the best type of representation for particular cases. The critique offered by the paper also reveals that because researchers do not necessarily identify with a lifecycle representation, alternative ways of representing research need to be developed.
Originality/value
The paper offers a systematic analysis of visualisations of research and RDM current in the Library and Information Studies literature revealing the strengths and weaknesses of the lifecycle metaphor.
Abstract
Purpose
The purpose of this paper is to outline the key themes and discussions which came out of the 2011 UK Serials Group (UKSG) conference.
Design/methodology/approach
The conference is introduced and some of the key sessions are described and evaluated, then the report is drawn to a close with a brief conclusion which summarizes the main themes.
Findings
The conference highlighted many changes in user demands and in the future role of libraries, librarians and publishers within the scholarly research sector. Libraries need to shift services from "place to space" (physical location to online) to better fulfill their users' needs. The book industry is also changing rapidly, with a sharp increase in e‐book sales, growing use of mobile devices such as e‐readers and tablets, and publishers experimenting with new formats such as print on demand. The lack of library funding, its impact on research output and the ongoing struggle for library survival were recurring themes throughout the conference.
Originality/value
This conference report is relevant to librarians, publishers and information professionals in all sectors.
Kenning Arlitsch, Jonathan Wheeler, Minh Thi Ngoc Pham and Nikolaus Nova Parulian
Abstract
Purpose
This study demonstrates that aggregated data from the Repository Analytics and Metrics Portal (RAMP) have significant potential to analyze visibility and use of institutional repositories (IR) as well as potential factors affecting their use, including repository size, platform, content, device and global location. The RAMP dataset is unique and public.
Design/methodology/approach
The webometrics methodology was followed to aggregate and analyze use and performance data from 35 institutional repositories in seven countries that were registered with the RAMP for a five-month period in 2019. The RAMP aggregates Google Search Console (GSC) data to show IR items that surfaced in search results from all Google properties.
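The kind of cross-repository aggregation described above can be sketched as follows. The field names (`repo`, `device`, `country`, `clicks`) are assumptions made for illustration, not the actual RAMP/GSC schema, and the records are invented.

```python
# Illustrative sketch only: aggregate Google Search Console style
# records (one row per repository/device/country) to compare use across
# device categories, in the spirit of the webometrics analysis above.
# Field names and values are invented stand-ins.
from collections import defaultdict

def clicks_by_device(rows):
    """Sum clicks per device category across all repositories."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["device"]] += row["clicks"]
    return dict(totals)

rows = [
    {"repo": "ir-a", "device": "MOBILE", "country": "ken", "clicks": 120},
    {"repo": "ir-a", "device": "DESKTOP", "country": "usa", "clicks": 300},
    {"repo": "ir-b", "device": "MOBILE", "country": "nga", "clicks": 210},
]
totals = clicks_by_device(rows)  # {"MOBILE": 330, "DESKTOP": 300}
```

The same grouping pattern extends to country or content type, which is how device-use differences between regions could be surfaced from aggregated records.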
Findings
The analyses demonstrate large performance variances across IR as well as low overall use. The findings also show that device type affects search behavior, that different content types, such as electronic theses and dissertations (ETDs), may affect use and that searches originating in the Global South show much higher use of mobile devices than those in the Global North.
Research limitations/implications
The RAMP relies on GSC as its sole data source, resulting in somewhat conservative overall numbers. However, the data are also expected to be as robot-free as can be hoped.
Originality/value
This may be the first analysis of aggregate use and performance data derived from a global set of IR, using an openly published dataset. RAMP data offer significant research potential with regard to quantifying and characterizing variances in the discoverability and use of IR content.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-08-2020-0328