Search results

1 – 10 of 124
Open Access
Article
Publication date: 2 February 2018

Wil van der Aalst

Abstract

Purpose

Process mining provides a generic collection of techniques to turn event data into valuable insights, improvement ideas, predictions, and recommendations. This paper uses spreadsheets as a metaphor to introduce process mining as an essential tool for data scientists and business analysts. The purpose of this paper is to illustrate that process mining can do with events what spreadsheets can do with numbers.

Design/methodology/approach

The paper discusses the main concepts in both spreadsheets and process mining. Using a concrete data set as a running example, the different types of process mining are explained. Where spreadsheets work with numbers, process mining starts from event data with the aim to analyze processes.
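For readers who want to try this kind of analysis on their own event data, a minimal sketch using the open-source pm4py library is shown below; the log file name is a hypothetical placeholder, and the sketch illustrates process discovery in general rather than the paper's own tooling.

```python
# Minimal process-discovery sketch using the open-source pm4py library.
# The event log file name is a hypothetical placeholder.
import pm4py

# Load an event log in the standard XES format (case id, activity, timestamp).
log = pm4py.read_xes("orders.xes")  # hypothetical file

# Discover a Petri net with the Inductive Miner, one of the standard
# discovery algorithms also found in tools such as ProM.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# Render the discovered model.
pm4py.view_petri_net(net, initial_marking, final_marking)
```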

Findings

Differences and commonalities between spreadsheets and process mining are described. Unlike process mining tools such as ProM, spreadsheet programs cannot be used to discover processes, check compliance, analyze bottlenecks, animate event data, or provide operational process support. Pointers to existing process mining tools and their functionality are given.

Practical implications

Event logs and operational processes can be found everywhere and process mining techniques are not limited to specific application domains. Comparable to spreadsheet software widely used in finance, production, sales, education, and sports, process mining software can be used in a broad range of organizations.

Originality/value

The paper provides an original view on process mining by relating it to spreadsheets. The value of spreadsheet-like technology tailored toward the analysis of behavior rather than numbers is illustrated by the over 20 commercial process mining tools available today and the growing adoption in a variety of application domains.

Details

Business Process Management Journal, vol. 24 no. 1
Type: Research Article
ISSN: 1463-7154

Open Access
Article
Publication date: 28 July 2020

Gopi Battineni, Nalini Chintalapudi and Francesco Amenta

Abstract

Medical training is a foundation on which better health care quality is built. Freshly graduated doctors require a good command of practical competencies, which underscores the importance of medical training activities. Accordingly, we propose a methodology to discover a process model that identifies the sequence of medical training activities carried out when installing a Central Venous Catheter (CVC) with the ultrasound technique. A dataset of twenty medical video recordings was compiled from events in the CVC installation. To develop the process model, the process mining technique of the infrequent Inductive Miner (iIM) was applied with a noise threshold of 0.3. A process model combining parallel and sequential events was developed. In addition, process conformance was validated with a replay fitness value of about 61.1%, which provided evidence that four activities did not fit the process model correctly. The present study can assist upcoming doctors involved in CVC surgery by providing continuous training and feedback for better patient care.
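A hedged sketch of this kind of pipeline using the open-source pm4py library follows: the Inductive Miner with a noise threshold of 0.3 approximates the infrequent Inductive Miner (iIM) mentioned above, and token-based replay yields a log-level fitness value. The log file name is a hypothetical placeholder, not the authors' data.

```python
# Sketch of a discovery-and-conformance pipeline with pm4py.
# The log file name is a hypothetical placeholder.
import pm4py

log = pm4py.read_xes("cvc_training.xes")  # hypothetical event log

# A noise threshold of 0.3 makes the Inductive Miner filter infrequent
# behavior, in the spirit of the infrequent Inductive Miner (iIM).
net, im, fm = pm4py.discover_petri_net_inductive(log, noise_threshold=0.3)

# Token-based replay measures how well the log fits the discovered model;
# a value around 0.61 would correspond to the ~61.1% reported above.
fitness = pm4py.fitness_token_based_replay(log, net, im, fm)
print(fitness["log_fitness"])
```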

Details

Applied Computing and Informatics, vol. 18 no. 3/4
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 1 June 2020

Sergey Tsiulin, Kristian Hegner Reinau, Olli-Pekka Hilmola, Nikolay Goryaev and Ahmed Karam

Abstract

Purpose

The purpose of this paper is to examine and categorize the tendencies of blockchain-based applications in the shipping industry and supply chain, as well as the interrelations between them, including possible correlations between the identified categories and the theoretical background and existing concepts. This study also explores whether blockchain can be adopted into existing maritime shipping and port document workflow management.

Design/methodology/approach

The current study builds a conceptual framework through a systematic review of projects, drawing on scientific and grey literature published in journals and conference proceedings during the past decade that provide information or proposals on the issue.

Findings

The results showed that the reviewed projects can be grouped into three main conceptual areas: document workflow management, financial processes and device connectivity. However, despite clear interlinkages, none of the reviewed projects considers all three areas at once. Concepts associated with maritime document workflow received broad support among the reviewed projects. In addition, the reviewed projects unintentionally pursue goals similar to those laid down in port management research before the introduction of blockchain technology.

Originality/value

This study contributes to research by providing a consistent framework for understanding blockchain applications within the maritime port environment, a less-studied part of blockchain implementation in the supply chain field. Moreover, this work is the first to identify conceptual intersections and correlations between existing projects, mapping current tendencies and expanding knowledge of the field.

Details

Review of International Business and Strategy, vol. 30 no. 2
Type: Research Article
ISSN: 2059-6014

Open Access
Article
Publication date: 31 July 2023

Sara Lafia, David A. Bleckley and J. Trent Alexander

Abstract

Purpose

Many libraries and archives maintain collections of research documents, such as administrative records, with paper-based formats that limit the documents' access to in-person use. Digitization transforms paper-based collections into more accessible and analyzable formats. As collections are digitized, there is an opportunity to incorporate deep learning techniques, such as Document Image Analysis (DIA), into workflows to increase the usability of information extracted from archival documents. This paper describes the authors' approach using digital scanning, optical character recognition (OCR) and deep learning to create a digital archive of administrative records related to the mortgage guarantee program of the Servicemen's Readjustment Act of 1944, also known as the G.I. Bill.

Design/methodology/approach

The authors used a collection of 25,744 semi-structured paper-based records from the administration of G.I. Bill Mortgages from 1946 to 1954 to develop a digitization and processing workflow. These records include the name and city of the mortgagor, the amount of the mortgage, the location of the Reconstruction Finance Corporation agent, one or more identification numbers and the name and location of the bank handling the loan. The authors extracted structured information from these scanned historical records in order to create a tabular data file and link them to other authoritative individual-level data sources.
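To illustrate the extraction step, the following sketch applies regular expressions to a line of OCR output; the patterns and the sample record are invented for illustration and are not the authors' actual expressions.

```python
# Illustrative regular-expression post-processing of OCR'd record text.
# The patterns and the sample line are invented for illustration only.
import re

sample = "JOHN A SMITH, DETROIT, MICH. Mortgage $6,500 No. 123-456"

patterns = {
    "name_city": re.compile(r"^(?P<name>[A-Z .]+),\s*(?P<city>[A-Z .,]+?)\s+Mortgage"),
    "amount": re.compile(r"\$(?P<amount>[\d,]+)"),
    "id_number": re.compile(r"No\.\s*(?P<id>[\d-]+)"),
}

record = {}
for field, pattern in patterns.items():
    match = pattern.search(sample)
    if match:
        record.update(match.groupdict())

print(record)  # {'name': 'JOHN A SMITH', 'city': 'DETROIT, MICH.', 'amount': '6,500', 'id': '123-456'}
```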

Findings

The authors compared the flexible character accuracy of five OCR methods. The authors then compared the character error rate (CER) of three text extraction approaches (regular expressions, DIA and named entity recognition (NER)). The authors were able to obtain the highest quality structured text output using DIA with the Layout Parser toolkit by post-processing with regular expressions. Through this project, the authors demonstrate how DIA can improve the digitization of administrative records to automatically produce a structured data resource for researchers and the public.

Originality/value

The authors' workflow is readily transferable to other archival digitization projects. Through the use of digital scanning, OCR and DIA processes, the authors created the first digital microdata file of administrative records related to the G.I. Bill mortgage guarantee program available to researchers and the general public. These records offer research insights into the lives of veterans who benefited from loans, the impacts on the communities built by the loans and the institutions that implemented them.

Details

Journal of Documentation, vol. 79 no. 7
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 11 May 2023

Marco D’Orazio, Gabriele Bernardini and Elisa Di Giuseppe

Abstract

Purpose

This paper aims to develop predictive methods, based on recurrent neural networks, to support facility managers in building maintenance tasks by exploiting information collected in a computerized maintenance management system (CMMS).

Design/methodology/approach

This study applies data-driven and text-mining approaches to a CMMS data set comprising more than 14,500 end-users’ requests for corrective maintenance actions, collected over 14 months. Unidirectional long short-term memory (LSTM) and bidirectional LSTM (Bi-LSTM) recurrent neural networks are trained to predict the priority of each maintenance request and the related technical staff assignment. The data set is also used to depict an overview of corrective maintenance needs and related performances and to verify the most relevant elements in the building and how the current facility management (FM) relates to the requests.
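A minimal Keras sketch of a bidirectional LSTM text classifier of the kind described follows; the vocabulary size, layer widths and number of priority classes are illustrative assumptions rather than the paper's configuration.

```python
# Minimal Bi-LSTM text classifier in the spirit of the approach described;
# vocabulary size, layer widths and the number of priority classes are
# illustrative assumptions, not the paper's actual configuration.
import tensorflow as tf

VOCAB_SIZE = 10_000   # assumed vocabulary of maintenance-request words
NUM_CLASSES = 5       # assumed number of priority levels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None,), dtype="int32"),               # token ids
    tf.keras.layers.Embedding(VOCAB_SIZE, 64, mask_zero=True),  # word embeddings
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),    # Bi-LSTM encoder
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),   # priority class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(padded_token_ids, priority_labels, epochs=10)  # with real CMMS data
```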

Findings

The study shows that LSTM and Bi-LSTM recurrent neural networks can properly recognize the words contained in the requests, thus correctly and automatically assigning the priority of each end-user's maintenance request and predicting the technical staff to be assigned. The obtained global accuracy is very high, reaching 93.3% for priority identification and 96.7% for technical staff assignment. Results also show the main critical building elements for maintenance requests and the related intervention timings.

Research limitations/implications

This work shows that LSTM and Bi-LSTM recurrent neural networks can automate the assignment of end-users' maintenance requests if trained with historical CMMS data. Results are promising; however, the trained LSTM and Bi-LSTM RNNs can be transferred only to other hospitals that adopt a similar request categorization.

Practical implications

The data-driven and text-mining approaches can be integrated into the CMMS to support corrective maintenance management by facilities management contractors, i.e. to identify, properly and in a timely manner, the actions to be carried out and the technical staff to assign.

Social implications

The improvement of the maintenance of the health-care system is a key component of improving health service delivery. This work shows how to reduce health-care service interruptions due to maintenance needs through machine learning methods.

Originality/value

This study develops original methods and tools easily integrable into IT workflow systems (i.e. CMMS) in the FM field.

Open Access
Article
Publication date: 16 October 2017

Xiang T.R. Kong, Ray Y. Zhong, Gangyan Xu and George Q. Huang

Abstract

Purpose

The purpose of this paper is to propose a concept of cloud auction robot (CAR) and its execution platform for transforming perishable food supply chain management. A new paradigm of goods-to-person auction execution model is proposed based on CARs. This paradigm can shift traditional manual operations to automated execution with great savings in space and time. A scalable CAR-enabled execution system (CARES) is presented to manage logistics workflows, tasks and the behavior of CAR-Agents in handling real-time events and associated data.

Design/methodology/approach

An Internet of Things enabled auction environment is designed. The robot is used to pick up and deliver the auction products, and commands are given to the robot in real time. The CARES architecture is proposed, integrating three core services: auction workflow management, auction task management and auction execution control. A system prototype was developed to demonstrate its execution through physical emulations and experiments.
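The scheduling behavior reported in the findings below can be illustrated with a toy greedy policy that always assigns the next auction task to the robot that becomes free earliest; this is a simplification for intuition, not the CARES scheduler itself.

```python
# Toy greedy scheduler: assign each auction task to the robot that becomes
# free earliest, minimizing waiting time. Illustration only, not CARES itself.
import heapq

def assign_tasks(task_durations, num_robots):
    """Return (robot, start_time) for each task, greedily by earliest-free robot."""
    # Heap of (time_robot_becomes_free, robot_id).
    robots = [(0.0, r) for r in range(num_robots)]
    heapq.heapify(robots)
    schedule = []
    for duration in task_durations:
        free_at, robot = heapq.heappop(robots)
        schedule.append((robot, free_at))
        heapq.heappush(robots, (free_at + duration, robot))
    return schedule

print(assign_tasks([4, 2, 3, 1, 5], num_robots=2))
```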

Findings

CARES schedules the tasks for each robot so as to minimize waiting time. The total execution time is reduced by 33 percent on average, and space utilization for each auction studio is improved by about 50 percent per day.

Originality/value

The CAR-enabled execution model and system are simulated and verified in a ubiquitous auction environment, upgrading perishable food supply chain management to a new level that is automated and real-time. The proposed system is flexible enough to cope with different auction scenarios, such as different auction mechanisms and processes, with high reconfigurability and scalability.

Details

Industrial Management & Data Systems, vol. 117 no. 9
Type: Research Article
ISSN: 0263-5577

Open Access
Article
Publication date: 25 October 2023

Christian Novak, Lukas Pfahlsberger, Saimir Bala, Kate Revoredo and Jan Mendling

Abstract

Purpose

Digitalization, innovation and changing customer requirements drive the continuous improvement of an organization's business processes. IT demand management (ITDM) as a methodology supports the holistic governance of IT and the corresponding business process change (BPC), by allocating resources to meet a company's requirements and strategic objectives. As ITDM decision-makers are not fully aware of how the as-is business processes operate and interact, making informed decisions that positively impact the to-be process is a key challenge.

Design/methodology/approach

In this paper, the authors address this challenge by developing a novel approach that integrates process mining and ITDM. To this end, the authors conduct an action research study in which the researchers participated in the design, creation and evaluation of the approach. The proposed approach is illustrated using two sample demands of an insurance claims process; these demands are used to construct the artefact in multiple research cycles and to validate the approach in practice. The authors applied learning and reflection methods to incrementally adjust the approach.

Findings

The study shows that the utilization of process mining activities during process changes on an operational level contributes to (1) increasing accuracy and efficiency of ITDM; (2) timely identification of potential risks and dependencies and (3) support of testing and acceptance of IT demands.

Originality/value

The implementation of this study’s approach improved ITDM practice. It appropriately addressed the information needs of decision-makers and unveiled the effects and consequences of process changes. Furthermore, providing a clearer picture of the process dependencies clarified the responsibilities and the interfaces at the intra- and inter-process level.

Details

Business Process Management Journal, vol. 29 no. 8
Type: Research Article
ISSN: 1463-7154

Open Access
Article
Publication date: 19 June 2019

Gideon Nkurunziza, John Munene, Joseph Ntayi and Will Kaberuka

Abstract

Purpose

The purpose of this paper is to study the relationship between organizational adaptability, institutional leadership and business process reengineering performance using the tested complexity theory in a developing economy setting.

Design/methodology/approach

This study is correlational and cross-sectional and uses institutional-level data collected via questionnaires from reengineered microfinance institutions in Uganda. Cluster analysis, a data mining technique, was used to classify cases into homogeneous clusters based on respondents' opinions. NVivo was used to understand perceptions of business process reengineering performance based on qualitative data. The authors used structural equation modeling to derive a predictive model of business process reengineering performance in a developing-world setting.

Findings

The authors find that organizational adaptability and institutional leadership are key predictors of business process reengineering performance. Results reveal a predictive model of 61 per cent based on structural equation modeling of the study variables. Cluster analysis as a data mining approach explored complex patterns among the reengineered business processes.

Research limitations/implications

The use of cluster analysis is susceptible to problems associated with sampling error and the absence of fit indices. However, the likelihood of these problems is reduced by the interaction with the data, the practical implications and the use of smart partial least squares to generate structural equations based on the derived measurement models of each study variable.

Practical implications

Policymakers at the Bank of Uganda and the Ministry of Finance and Economic Planning should develop sound policies on knowledge management, institutional leadership and adaptive mechanisms to enhance business process reengineering performance and take advantage of new knowledge opportunities for the improvement of their businesses.

Social implications

Given the results of the structural equations generated, managers need to consider institutional leadership and organizational adaptability as key drivers of business process reengineering performance in microfinance institutions. The results confirm the significant role of institutional leadership and organizational adaptability in determining business process reengineering performance outcomes.

Originality/value

Unlike most of the business process reengineering literature, this study contributes by domesticating and testing complexity theory to explain business process reengineering performance in developing economies.

Details

Innovation & Management Review, vol. 16 no. 2
Type: Research Article
ISSN: 2515-8961

Open Access
Article
Publication date: 29 June 2020

Paolo Manghi, Claudio Atzori, Michele De Bonis and Alessia Bardi

Abstract

Purpose

Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate scholarly/scientific communication entities such as publications, authors, datasets, organizations, projects, funders, etc. Depending on the target users, access can vary from searching and browsing content to consuming statistics for monitoring and feedback. Such graphs are populated over time as aggregations of multiple sources and therefore suffer from major entity-duplication problems. Although graph deduplication is a known and pressing problem, existing solutions are dedicated to specific scenarios, operate on flat collections or address local topology-driven challenges, and therefore cannot be re-used in other contexts.

Design/methodology/approach

This work presents GDup, an integrated, scalable, general-purpose system that can be customized to address deduplication over arbitrarily large information graphs. The paper presents its high-level architecture, describes its implementation as a service used within the OpenAIRE infrastructure system and reports figures from real-case experiments.

Findings

GDup provides the functionalities required to deliver a fully fledged entity deduplication workflow over a generic input graph. The system offers out-of-the-box Ground Truth management, acquisition of feedback from data curators and algorithms for identifying and merging duplicates, yielding a disambiguated output graph.
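In general terms, an entity-deduplication workflow of this kind blocks candidate records, scores pairwise similarity and merges matches into groups; the sketch below shows that generic shape with invented data and an assumed similarity threshold, and is not GDup's actual implementation.

```python
# Generic entity-deduplication sketch: blocking, pairwise similarity,
# and merging matches via union-find. Not GDup's actual implementation.
from difflib import SequenceMatcher
from itertools import combinations
from collections import defaultdict

records = {  # hypothetical publication titles keyed by record id
    1: "process mining in healthcare",
    2: "Process Mining in Health Care",
    3: "blockchain for maritime ports",
}

# Blocking: only compare records sharing a cheap key (first token here).
blocks = defaultdict(list)
for rid, title in records.items():
    blocks[title.lower().split()[0]].append(rid)

parent = {rid: rid for rid in records}
def find(x):
    # Union-find with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

for block in blocks.values():
    for a, b in combinations(block, 2):
        sim = SequenceMatcher(None, records[a].lower(), records[b].lower()).ratio()
        if sim > 0.9:                  # assumed similarity threshold
            parent[find(a)] = find(b)  # union: a and b are duplicates

groups = defaultdict(list)
for rid in records:
    groups[find(rid)].append(rid)
print(list(groups.values()))  # e.g. [[1, 2], [3]]
```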

Originality/value

To our knowledge, GDup is the only system in the literature that offers an integrated and general-purpose solution for the deduplication of graphs while targeting big data scalability issues. GDup is today one of the key modules of the OpenAIRE infrastructure production system, which monitors Open Science trends on behalf of the European Commission, national funders and institutions.

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288

Open Access
Article
Publication date: 17 October 2019

Qiong Bu, Elena Simperl, Adriane Chapman and Eddy Maddalena

Abstract

Purpose

Ensuring quality is one of the most significant challenges in microtask crowdsourcing. Aggregating the data collected from the crowd is an important step in inferring the correct answer, but existing studies seem to be limited to single-step tasks. This study looks at multiple-step classification tasks to understand aggregation in such cases; hence, it is useful for assessing classification quality.

Design/methodology/approach

The authors present a model to capture the information of the workflow, questions and answers for both single- and multiple-question classification tasks. They propose an approach, adapted from the classic one, so that the model can handle tasks with several multiple-choice questions in general, rather than a specific domain or a particular hierarchical classification. They evaluate the approach on three representative tasks from existing citizen science projects for which a gold standard created by experts is available.
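As a baseline for intuition, the classic aggregation approach for a multiple-question task can be reduced to a per-question majority vote over worker answers; the example responses below are invented, and the authors' adapted approach refines rather than reproduces this baseline.

```python
# Per-question majority-vote aggregation for a multi-step classification task.
# A baseline for intuition only; the example answers are invented.
from collections import Counter

# worker -> answer chosen at each step of the workflow
responses = {
    "w1": ["bird", "raptor", "eagle"],
    "w2": ["bird", "raptor", "hawk"],
    "w3": ["bird", "raptor", "eagle"],
}

num_steps = len(next(iter(responses.values())))
aggregated = []
for step in range(num_steps):
    votes = Counter(answers[step] for answers in responses.values())
    aggregated.append(votes.most_common(1)[0][0])  # most frequent answer wins

print(aggregated)  # ['bird', 'raptor', 'eagle']
```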

Findings

The results show that the approach can provide significant improvements to the overall classification accuracy. The authors' analysis also demonstrates that, for the same task, all algorithms achieve higher accuracy on volunteer-generated data sets than on paid-generated ones. Furthermore, the authors observed interesting patterns in the relationship between the performance of different algorithms and workflow-specific factors, including the number of steps and the number of available options at each step.

Originality/value

Due to the nature of crowdsourcing, aggregating the collected data is an important step in understanding the quality of crowdsourcing results. Different inference algorithms have been studied for simple microtasks consisting of single questions with two or more answers. As classification tasks typically contain many questions, the proposed method can be applied to a wide range of tasks, including both single- and multiple-question classification tasks.

Details

International Journal of Crowd Science, vol. 3 no. 3
Type: Research Article
ISSN: 2398-7294
