Search results

1–10 of over 17,000
Open Access
Article
Publication date: 31 July 2023

Jingrui Ge, Kristoffer Vandrup Sigsgaard, Bjørn Sørskot Andersen, Niels Henrik Mortensen, Julie Krogh Agergaard and Kasper Barslund Hansen

This paper proposes a progressive, multi-level framework for diagnosing maintenance performance: rapid performance health checks of key performance for different equipment groups…


Abstract

Purpose

This paper proposes a progressive, multi-level framework for diagnosing maintenance performance: rapid health checks of key performance indicators for different equipment groups, and end-to-end process diagnostics to further locate potential performance issues. A question-based performance evaluation approach is introduced to support the selection and derivation of case-specific indicators based on diagnostic aspects.

Design/methodology/approach

The case research method is used to develop the proposed framework. The generic parts of the framework are built on existing maintenance performance measurement theories through a literature review. In the case study, empirical maintenance data of 196 emergency shutdown valves (ESDVs) are collected over a two-year period to support the development and validation of the proposed approach.

Findings

To improve processes, companies need a separate performance measurement structure. This paper suggests a hierarchical model with four layers (objective, domain, aspect and performance measurement) to facilitate the selection and derivation of indicators, which could potentially reduce management complexity and help prioritize continuous performance improvement. Examples of new indicators are derived from a case study of 196 ESDVs at an offshore oil and gas production plant.

Originality/value

Methodological approaches to deriving various performance indicators have rarely been addressed in the maintenance field. The proposed diagnostic framework provides a structured way to identify and locate process performance issues by creating indicators that can bridge generic evaluation aspects and maintenance data. The framework is highly adaptive, as data availability serves as an input for generating indicators instead of passively filtering out non-applicable existing indicators.
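Purely as an illustration (the domains, aspects and indicator names below are invented, not taken from the paper), the four-layer model of objective, domain, aspect and performance measurement could be represented as a nested structure from which case-specific indicators are enumerated:

```python
# Hypothetical sketch of a four-layer indicator hierarchy
# (objective -> domain -> aspect -> performance measurement).
# All entries are invented examples for illustration only.
hierarchy = {
    "objective": "Improve maintenance process performance",
    "domains": {
        "work preparation": {
            "aspects": {
                "timeliness": ["share of work orders prepared on schedule"],
            }
        },
        "execution": {
            "aspects": {
                "backlog": ["open ESDV work orders older than 90 days"],
            }
        },
    },
}

def indicators(h):
    """Flatten the hierarchy into (domain, aspect, indicator) tuples."""
    return [
        (domain, aspect, ind)
        for domain, dv in h["domains"].items()
        for aspect, inds in dv["aspects"].items()
        for ind in inds
    ]

for row in indicators(hierarchy):
    print(row)
```

Deriving indicators by walking such a structure, rather than filtering a fixed indicator catalogue, mirrors the adaptivity the paper describes.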

Details

International Journal of Quality & Reliability Management, vol. 41 no. 2
Type: Research Article
ISSN: 0265-671X


Open Access
Article
Publication date: 11 June 2024

Julian Rott, Markus Böhm and Helmut Krcmar

Process mining (PM) has emerged as a leading technology for gaining data-based insights into organizations’ business processes. As processes increasingly cross-organizational…

Abstract

Purpose

Process mining (PM) has emerged as a leading technology for gaining data-based insights into organizations’ business processes. As processes increasingly cross-organizational boundaries, firms need to conduct PM jointly with multiple organizations to optimize their operations. However, current knowledge on cross-organizational process mining (coPM) is widely dispersed. Therefore, we synthesize current knowledge on coPM, identify challenges and enablers of coPM, and build a socio-technical framework and agenda for future research.

Design/methodology/approach

We conducted a literature review of 66 articles and summarized the findings according to the framework for Information Technology (IT)-enabled inter-organizational coordination (IOC) and the refined PM framework. The former states that within inter-organizational relationships, uncertainty sources determine information processing needs and coordination mechanisms determine information processing capabilities, while the fit between needs and capabilities determines the relationships’ performance. The latter distinguishes three categories of PM activities: cartography, auditing and navigation.

Findings

Past literature focused on coPM techniques, for example, algorithms for ensuring privacy and PM for cartography. Future research should focus on socio-technical aspects and follow four steps: First, determine uncertainty sources within coPM. Second, design, develop and evaluate coordination mechanisms. Third, investigate how the mechanisms assist with handling uncertainty. Fourth, analyze the impact on coPM performance. In addition, we present 18 challenges (e.g. integrating distributed data) and 9 enablers (e.g. aligning different strategies) for coPM application.

Originality/value

This is the first article to systematically investigate the status quo of coPM research and lay out a socio-technical research agenda building upon the well-established framework for IT-enabled IOC.

Details

Business Process Management Journal, vol. 30 no. 8
Type: Research Article
ISSN: 1463-7154


Open Access
Article
Publication date: 20 July 2020

Abdelghani Bakhtouchi

With the progress of new technologies of information and communication, more and more producers of data exist. On the other hand, the web forms a huge support of all these kinds…


Abstract

With the progress of new information and communication technologies, the number of data producers keeps growing, and the web provides a vast repository for all these kinds of data. Unfortunately, existing data are often unreliable: the same information appears in different sources, and data may be erroneous or incomplete. Data integration systems aim to offer the user a single interface for querying a number of sources. A key challenge for such systems is dealing with conflicting information, whether within one source or across different sources. In this paper, we address conflict resolution at the instance level in two stages: reference reconciliation and data fusion. Reference reconciliation methods seek to decide whether two data descriptions refer to the same real-world entity. We define the principles of reconciliation and then distinguish reconciliation methods, first by how they use the descriptions of references and then by how they acquire knowledge, before discussing some open reconciliation issues that are the subject of current research. Data fusion, in turn, merges duplicates into a single representation while resolving conflicts between the data. We first define a classification of conflicts, strategies for dealing with conflicts and ways of implementing conflict-management strategies. We then present relational operators and data fusion techniques, and likewise close this section by discussing some open data fusion issues that are the subject of current research.
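The two stages can be sketched minimally as follows. The record fields, similarity measure and threshold are invented for illustration; the paper surveys many such methods, and this is only one naive instance of each stage:

```python
# Hypothetical two-stage conflict resolution: reference reconciliation
# (are two descriptions the same entity?) followed by data fusion
# (merge duplicates, resolving value conflicts). Illustration only.
from difflib import SequenceMatcher

def same_entity(rec_a, rec_b, threshold=0.85):
    """Reference reconciliation via average string similarity
    over the attributes the two records share."""
    keys = rec_a.keys() & rec_b.keys()
    if not keys:
        return False
    score = sum(
        SequenceMatcher(None, str(rec_a[k]), str(rec_b[k])).ratio()
        for k in keys
    ) / len(keys)
    return score >= threshold

def fuse(records):
    """Data fusion: merge reconciled duplicates into one record,
    resolving conflicts with a 'most frequent value wins' strategy."""
    merged = {}
    for key in {k for r in records for k in r}:
        values = [r[key] for r in records if r.get(key) is not None]
        if values:
            merged[key] = max(set(values), key=values.count)
    return merged

a = {"name": "J. Smith", "city": "Algiers"}
b = {"name": "J. Smith", "city": "Algiers", "phone": "555-0101"}
if same_entity(a, b):
    print(fuse([a, b]))
```

Real systems replace the similarity function with learned or knowledge-based matchers and the fusion rule with source-quality-aware strategies, which is exactly the design space the paper maps out.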

Details

Applied Computing and Informatics, vol. 18 no. 3/4
Type: Research Article
ISSN: 2634-1964


Open Access
Article
Publication date: 9 March 2020

Rebecca Wolf, Joseph M. Reilly and Steven M. Ross

This article informs school leaders and staffs about existing research findings on the use of data-driven decision-making in creating class rosters. Given that teachers are the…


Abstract

Purpose

This article informs school leaders and staffs about existing research findings on the use of data-driven decision-making in creating class rosters. Given that teachers are the most important school-based educational resource, decisions regarding the assignment of students to particular classes and teachers are highly impactful for student learning. Classroom compositions of peers can also influence student learning.

Design/methodology/approach

A literature review was conducted on the use of data-driven decision-making in the rostering process. The review addressed the merits of using various quantitative metrics in the rostering process.

Findings

Findings revealed that, despite often being purposeful about rostering, school leaders and staffs have generally not engaged in data-driven decision-making in creating class rosters. Using data-driven rostering may have benefits, such as limiting the questionable practice of assigning the least effective teachers in the school to the youngest or lowest performing students. School leaders and staffs may also work to minimize negative peer effects due to concentrating low-achieving, low-income, or disruptive students in any one class. Any data-driven system used in rostering, however, would need to be adequately complex to account for multiple influences on student learning. Based on the research reviewed, quantitative data alone may not be sufficient for effective rostering decisions.
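To make the idea concrete, here is a hypothetical sketch of one simple quantitative rostering heuristic, a "snake draft" over prior achievement scores. It is not drawn from the literature reviewed, and, as the findings caution, such purely quantitative balancing would not by itself be sufficient for effective rostering decisions:

```python
# Hypothetical "snake draft" rostering heuristic: rank students by a
# prior achievement score and deal them to classes in alternating
# order so mean scores stay roughly balanced. Illustration only.

def snake_roster(students, n_classes):
    """Assign (name, prior_score) pairs to n_classes rosters."""
    ranked = sorted(students, key=lambda s: s[1], reverse=True)
    rosters = [[] for _ in range(n_classes)]
    for i, student in enumerate(ranked):
        rnd, pos = divmod(i, n_classes)
        # Reverse direction on odd rounds so no class gets only top scorers.
        idx = pos if rnd % 2 == 0 else n_classes - 1 - pos
        rosters[idx].append(student)
    return rosters

students = [("A", 95), ("B", 90), ("C", 85),
            ("D", 80), ("E", 75), ("F", 70)]
for roster in snake_roster(students, 2):
    print([name for name, _ in roster])
```

A production system would need many more inputs (peer effects, teacher assignments, behavioral and demographic balance), which is the added complexity the review calls for.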

Practical implications

Given the rich data available to school leaders and staffs, data-driven decision-making could inform rostering and contribute to more efficacious and equitable classroom assignments.

Originality/value

This article is the first to summarize relevant research across multiple bodies of literature on the opportunities for and challenges of using data-driven decision-making in creating class rosters.

Details

Journal of Research in Innovative Teaching & Learning, vol. 14 no. 2
Type: Research Article
ISSN: 2397-7604


Open Access
Article
Publication date: 8 June 2015

Elisabeth Ilie-Zudor, Anikó Ekárt, Zsolt Kemeny, Christopher Buckingham, Philip Welch and Laszlo Monostori

The purpose of this paper is to examine challenges and potential of big data in heterogeneous business networks and relate these to an implemented logistics solution.


Abstract

Purpose

The purpose of this paper is to examine challenges and potential of big data in heterogeneous business networks and relate these to an implemented logistics solution.

Design/methodology/approach

The paper establishes an overview of challenges and opportunities of current significance in the area of big data, specifically in the context of transparency and processes in heterogeneous enterprise networks. Within this context, the paper presents how existing components and purpose-driven research were combined for a solution implemented in a nationwide network for less-than-truckload consignments.

Findings

Aside from providing an extended overview of today’s big data situation, the findings have shown that technical means and methods available today can comprise a feasible process transparency solution in a large heterogeneous network where legacy practices, reporting lags and incomplete data exist, yet processes are sensitive to inadequate policy changes.

Practical implications

The means introduced in the paper proved valuable in improving process efficiency, transparency and planning in logistics networks. The particular system design choices in the presented solution allow an incremental introduction or evolution of resource-handling practices, incorporating the existing fragmentary, unstructured or tacit knowledge of experienced personnel into the theoretically founded overall concept.

Originality/value

The paper extends the previous high-level view of the potential of big data and presents new applied research and development results in a logistics application.

Details

Supply Chain Management: An International Journal, vol. 20 no. 4
Type: Research Article
ISSN: 1359-8546


Open Access
Article
Publication date: 14 October 2022

Thomas W. Jackson and Ian Richard Hodgkinson

In the pursuit of net-zero, the decarbonization activities of organizations are a critical feature of any sustainability strategy. However, government policy and recent…


Abstract

Purpose

In the pursuit of net-zero, the decarbonization activities of organizations are a critical feature of any sustainability strategy. However, government policy and recent technological innovations do not address the digital carbon footprint of organizations. The paper aims to present the concept of single-use dark data and how knowledge reuse by organizations is a means to digital decarbonization.

Design/methodology/approach

Businesses in all sectors must contribute to reducing digital carbon emissions globally, and to the best of the authors’ knowledge, this paper is the first to examine “how” from a knowledge (re)use perspective. Drawing on insights from the knowledge creation process, the paper presents a set of pathways to greater knowledge reuse for the reduction of organizations’ digital carbon footprint.

Findings

Businesses continually collect, process and store knowledge but generally fail to reuse these knowledge assets, which are referred to as dark data. Consequently, dark data has a huge impact on energy use and global emissions. The model presented is the first to show explicit pathways that businesses can follow towards sustainable knowledge practices.

Practical implications

If businesses are to be proactive in their collective pursuit of net-zero, then it becomes paramount that reducing the digital carbon footprint becomes a key sustainability target. The paper presents how this might be accomplished, offering practical and actionable guidance to businesses for digital decarbonization.

Originality/value

Two critical questions face businesses: how can decarbonization be achieved, and can it be achieved at low cost? Awareness of the damaging impact digitalization may be having on the environment is in its infancy, yet knowledge reuse is a proactive and cost-effective route to reducing carbon emissions, as explored in the paper.

Open Access
Article
Publication date: 2 February 2023

Chiara Bertolin and Elena Sesana

The overall objective of this study is envisaged to provide decision makers with actionable insights and access to multi-risk maps for the most in-danger stave churches (SCs…


Abstract

Purpose

The overall objective of this study is to provide decision-makers with actionable insights and access to high-spatial-resolution multi-risk maps for the most endangered stave churches (SCs) among the 28 existing churches, to better understand, reduce and mitigate single and multiple risks. In addition, the present contribution aims to provide decision-makers with information to face the exacerbation of risk caused by expected climate change.

Design/methodology/approach

Material and data collection started with a review of the available literature related to (1) the SCs' conservation status, (2) methodologies suitable for a multi-hazard approach and (3) the leading vulnerability indicators to consider when dealing with the impact of natural hazards, specifically on immovable cultural heritage.

Findings

The paper contributes to a better understanding of place-based vulnerability through local-scale mapping that also considers future threats posed by climate change. The results highlight the dangers that the SC of Røldal may face in case of floods, and the SCs of Ringebu, Torpo and Øye in case of landslides, and stress the urgency of increasing awareness of and preparedness for these potential hazards.

Originality/value

For the first time, this contribution homogeneously collects and reports together the existing, scattered information on architectural features, conservation status and geographical attributes for the whole group of SCs, accompanying this information with as complete a collection of 2D sections from existing drawings as possible and with novel 3D sketches created for this contribution. The paper thereby contributes to a better understanding of place-based vulnerability through local-scale mapping that also considers future threats posed by climate change. It then highlights the flood and landslide dangers to which the 28 SCs are exposed, and finally reports how these risks will change under the ongoing impact of climate change.

Details

International Journal of Building Pathology and Adaptation, vol. 42 no. 1
Type: Research Article
ISSN: 2398-4708


Open Access
Article
Publication date: 4 July 2024

Bart Lameijer, Elizabeth S.L. de Vries, Jiju Antony, Jose Arturo Garza-Reyes and Michael Sony

Many organizations currently transition towards digitalized process design, execution, control, assurance and improvement, and the purpose of this research is to empirically…

Abstract

Purpose

Many organizations are currently transitioning towards digitalized process design, execution, control, assurance and improvement. The purpose of this research is to demonstrate empirically how data-based operational excellence techniques are useful in digitalized environments, by means of the optimization of a robotic process automation (RPA) deployment.

Design/methodology/approach

An interpretive mixed-method case study approach is applied, combining secondary Lean Six Sigma (LSS) project data with participant-as-observer archival observations. A case report is presented that comprises, for each DMAIC phase, (1) the objectives, (2) the main deliverables, (3) the results and (4) the key actions that led to those results.

Findings

Key findings comprise (1) the importance of understanding how to acquire and prepare large volumes of system-generated data and (2) the need for better validation mechanisms for large system-generated databases. Also emphasized are (3) the importance of the LSS project lead's contextual understanding of the process and (4) the need to develop the foundational LSS curriculum so that it is effective in digitalized environments.

Originality/value

This study provides a rich, prescriptive demonstration of LSS methodology implementation for improving a robotic process automation (RPA) deployment, and is one of the few empirical demonstrations of LSS-based problem-solving methodology in Industry 4.0 contexts.

Details

Business Process Management Journal, vol. 30 no. 8
Type: Research Article
ISSN: 1463-7154


Open Access
Article
Publication date: 20 January 2023

Marisa Agostini, Daria Arkhipova and Chiara Mio

This paper aims to identify, synthesise and critically examine the extant academic research on the relation between big data analytics (BDA), corporate accountability and…


Abstract

Purpose

This paper aims to identify, synthesise and critically examine the extant academic research on the relation between big data analytics (BDA), corporate accountability and non-financial disclosure (NFD) across several disciplines.

Design/methodology/approach

This paper uses a structured literature review methodology and applies the “insight-critique-transformative redefinition” framework to interpret the findings, develop a critique and formulate future research directions.

Findings

This paper identifies and critically examines 12 research themes across four macro categories. The insights presented in this paper indicate that the nature of the relationship between BDA and accountability depends on whether an organisation considers BDA as a value creation instrument or as a revenue generation source. This paper discusses how NFD can effectively increase corporate accountability for ethical, social and environmental consequences of BDA.

Practical implications

This paper presents the results of a structured literature review exploring the state of the art of academic research on the relation between BDA, NFD and corporate accountability. It uses a systematic approach to provide an exhaustive analysis of the phenomenon with rigorous and reproducible research criteria, and presents a series of actionable insights into how corporate accountability for the use of big data and algorithmic decision-making can be enhanced.

Social implications

This paper discusses how NFD can reduce negative social and environmental impact stemming from the corporate use of BDA.

Originality/value

To the best of the authors’ knowledge, this paper is the first one to provide a comprehensive synthesis of academic literature, identify research gaps and outline a prospective research agenda on the implications of big data technologies for NFD and corporate accountability along social, environmental and ethical dimensions.

Details

Sustainability Accounting, Management and Policy Journal, vol. 14 no. 7
Type: Research Article
ISSN: 2040-8021


Content available
Book part
Publication date: 10 February 2023

Amrinder Singh, Geetika Madaan, H R Swapna and Anuj Kumar

Introduction: Coronavirus-19 (COVID-19) global outbreak poses a danger to millions of people’s health and the uncertainty and financial prudence around the world. Without a doubt…

Abstract

Introduction: The global coronavirus disease (COVID-19) outbreak poses a danger to the health of millions of people and creates uncertainty and financial strain around the world. Without a doubt, the disease places a tremendous strain on healthcare systems, which existing or traditional treatments cannot adequately handle. Only intelligence derived from diverse data sources can provide the foundation for rigorous clinical and social responses that optimise the use of constrained healthcare resources, create tailored patient treatment plans, inform policy-makers and accelerate clinical trials.

Purpose: This chapter aims to incorporate innovative artificial intelligence (AI) practices into local, national and global healthcare systems, practices that can save lives and support human capital management, and that can be deployed rapidly and effectively with minimal errors.

Methodology: AI technologies and tools play a crucial part in the COVID-19 crisis response by assisting with virus discovery, early detection and the development of effective medications and therapies. This chapter discusses significant issues related to COVID-19 and how they may be addressed by combining human resource management (HRM) practices with recent advances in AI. In addition, drawing on a literature review of recent studies in similar contexts, an AI solution is proposed in the form of a conceptual model.

Findings: This chapter shows that the latest AI techniques can assist policy-makers in implementing modern human capital management practices in the fight against COVID-19. The goal is to monitor patients remotely, using devices embedded with state-of-the-art medical technology, in order to limit hospital visits, or at least cut them to a minimum, while also delivering reliable health information to doctors before or during virtual consultations.

Details

The Adoption and Effect of Artificial Intelligence on Human Resources Management, Part A
Type: Book
ISBN: 978-1-80382-027-9

