Search results

1 – 10 of over 1000
Article
Publication date: 20 August 2018

Dipty Tripathi, Shreya Banerjee and Anirban Sarkar

Abstract

Purpose

A business process workflow is a design conceptualization that automates a sequence of activities to achieve a business goal with the involved participants and a predefined set of rules. A formal business workflow model is therefore a prime requisite for implementing a consistent and rigorous business process. However, the majority of existing research works formalize structural features only and do not address the functional and behavioral design aspects of business processes. To address this gap, this paper proposes a formal model of business process workflow, called business process workflow using typed attributed graph (BPWATG), enriched with the structural, functional and behavioral characteristics of business processes.

Design/methodology/approach

A typed attributed graph (ATG) and first-order logic are used to formalize the proposed BPWATG, providing rigorous syntax and semantics for business process workflows so that a workflow can be executed on an automated machine. The proposed BPWATG is illustrated with a case study to show the expressiveness of the model, and the graph is initially validated using the generic modelling environment (GME) CASE tool. A comparative study against existing formal approaches, based on several crucial features, demonstrates the effectiveness of the proposed BPWATG.
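
The abstract does not reproduce the formal definitions, but the general idea of a typed attributed graph (nodes and edges carrying types and attribute values, with well-formedness rules over them) can be sketched as follows. This is a minimal illustration: the class names, node types and the single rule checked here are assumptions, not the BPWATG formalism.

```python
# Minimal sketch of a typed attributed graph for a workflow model.
# The type names and the single well-formedness rule are illustrative
# assumptions, not the BPWATG formalism itself.
from dataclasses import dataclass, field

NODE_TYPES = {"start", "activity", "gateway", "end"}

@dataclass(frozen=True)
class Node:
    name: str
    ntype: str         # one of NODE_TYPES
    attrs: tuple = ()  # attribute (key, value) pairs, e.g. a timer

@dataclass
class WorkflowGraph:
    nodes: dict = field(default_factory=dict)
    edges: set = field(default_factory=set)  # (source, target) pairs

    def add_node(self, name, ntype, **attrs):
        assert ntype in NODE_TYPES, f"unknown node type: {ntype}"
        self.nodes[name] = Node(name, ntype, tuple(attrs.items()))

    def add_edge(self, src, dst):
        assert src in self.nodes and dst in self.nodes
        self.edges.add((src, dst))

    def is_well_formed(self):
        # Example rule: every non-end node must have an outgoing edge,
        # a cheap structural check in the spirit of formal well-formedness.
        sources = {s for s, _ in self.edges}
        return all(n.ntype == "end" or n.name in sources
                   for n in self.nodes.values())

g = WorkflowGraph()
g.add_node("s", "start")
g.add_node("check_order", "activity", timeout_s=30)  # timer attribute
g.add_node("e", "end")
g.add_edge("s", "check_order")
g.add_edge("check_order", "e")
print(g.is_well_formed())  # True
```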

Findings

The proposed model is capable of facilitating structural, functional and behavioral aspects of business process workflows using several crucial features such as dependency conceptualization, timer concepts, exception handling and deadlock detection. These features are used to handle real-world problems and ensure the consistency and correctness of business workflows.

Originality/value

BPWATG is proposed to formalize a business workflow that is required to make a model of business process machine-readable. Besides, formalizations of dependency conceptualization, exception handling, deadlock detection and time-out concepts are specified. Moreover, several non-functional properties (reusability, scalability, flexibility, dynamicity, reliability and robustness) are supported by the proposed model.

Details

International Journal of Web Information Systems, vol. 14 no. 3
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 29 June 2020

Paolo Manghi, Claudio Atzori, Michele De Bonis and Alessia Bardi

Abstract

Purpose

Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate scholarly/scientific communication entities such as publications, authors, datasets, organizations, projects, funders, etc. Depending on the target users, access can vary from searching and browsing content to consuming statistics for monitoring and provision of feedback. Such graphs are populated over time as aggregations of multiple sources and therefore suffer from major entity-duplication problems. Although deduplication of graphs is a known and current problem, existing solutions are dedicated to specific scenarios, operate on flat collections or address local topology-driven challenges, and therefore cannot be re-used in other contexts.

Design/methodology/approach

This work presents GDup, an integrated, scalable, general-purpose system that can be customized to address deduplication over arbitrarily large information graphs. The paper presents its high-level architecture and its implementation as a service used within the OpenAIRE infrastructure system, and reports figures from real-case experiments.

Findings

GDup provides the functionalities required to deliver a fully-fledged entity deduplication workflow over a generic input graph. The system offers out-of-the-box Ground Truth management, acquisition of feedback from data curators and algorithms for identifying and merging duplicates, to obtain an output disambiguated graph.
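
The abstract describes the workflow stages (candidate identification, duplicate matching and merging) without implementation detail. The following sketch shows those generic stages (blocking, pairwise similarity matching and union-find merging) on toy records; it is not GDup's actual code or API.

```python
# Generic sketch of an entity-deduplication pass: blocking, pairwise
# matching and merging matched records. This illustrates the workflow
# stages only; it is not GDup's implementation.
from collections import defaultdict
from difflib import SequenceMatcher

records = [
    {"id": 1, "title": "GDup: de-duplicating big research graphs"},
    {"id": 2, "title": "GDup: deduplicating big research graphs"},
    {"id": 3, "title": "Aggregation in microtask crowdsourcing"},
]

# 1. Blocking: candidate pairs share a cheap key (first 4 characters).
blocks = defaultdict(list)
for r in records:
    blocks[r["title"][:4].lower()].append(r)

# 2. Matching: similarity above a threshold marks a duplicate pair.
def similar(a, b, threshold=0.9):
    return SequenceMatcher(None, a["title"], b["title"]).ratio() >= threshold

# 3. Merging: union-find groups transitively matched records.
parent = {r["id"]: r["id"] for r in records}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

for block in blocks.values():
    for i, a in enumerate(block):
        for b in block[i + 1:]:
            if similar(a, b):
                parent[find(a["id"])] = find(b["id"])

groups = defaultdict(list)
for r in records:
    groups[find(r["id"])].append(r["id"])
print(list(groups.values()))  # [[1, 2], [3]]
```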

Originality/value

To our knowledge, GDup is the only system in the literature that offers an integrated and general-purpose solution for the deduplication of graphs while targeting big data scalability issues. GDup is today one of the key modules of the OpenAIRE infrastructure production system, which monitors Open Science trends on behalf of the European Commission, national funders and institutions.

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288

Open Access
Article
Publication date: 17 October 2019

Qiong Bu, Elena Simperl, Adriane Chapman and Eddy Maddalena

Abstract

Purpose

Ensuring quality is one of the most significant challenges in microtask crowdsourcing. Aggregating the data collected from the crowd is an important step in inferring the correct answer, but existing studies appear limited to single-step tasks. This study aims to examine multiple-step classification tasks and understand aggregation in such cases, which is useful for assessing classification quality.

Design/methodology/approach

The authors present a model that captures the workflow, questions and answers of both single- and multiple-question classification tasks. They propose an adapted approach on top of the classic approach so that the model can handle tasks with several multiple-choice questions in general, rather than a specific domain or specific hierarchical classifications. They evaluate the approach on three representative tasks from existing citizen science projects for which a gold standard created by experts is available.
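
As a rough illustration of what per-step aggregation means for a multiple-question task, the sketch below applies plain majority voting at each step of a workflow, with hypothetical worker answers. The paper's adapted algorithm is more sophisticated; this only shows the structure of the problem.

```python
# Minimal sketch: majority-vote aggregation applied per step of a
# multiple-question classification task. The worker data are invented;
# the paper's adapted algorithm goes beyond plain majority voting.
from collections import Counter

# Each worker answers a sequence of dependent questions (steps).
answers = {
    "w1": ["animal", "bird", "owl"],
    "w2": ["animal", "bird", "hawk"],
    "w3": ["animal", "mammal", "bat"],
}

def aggregate(per_worker):
    n_steps = len(next(iter(per_worker.values())))
    result = []
    for step in range(n_steps):
        votes = Counter(seq[step] for seq in per_worker.values())
        label, _ = votes.most_common(1)[0]  # ties broken by first seen
        result.append(label)
    return result

print(aggregate(answers))  # ['animal', 'bird', 'owl']
```

A refinement closer to the paper's setting would condition each step's vote on the answer chosen at the previous step, since the questions in a multi-step workflow are dependent.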

Findings

The results show that the approach can provide significant improvements to the overall classification accuracy. The authors’ analysis also demonstrates that all algorithms can achieve higher accuracy for the volunteer- versus paid-generated data sets for the same task. Furthermore, the authors observed interesting patterns in the relationship between the performance of different algorithms and workflow-specific factors including the number of steps and the number of available options in each step.

Originality/value

Due to the nature of crowdsourcing, aggregating the collected data is an important process for understanding the quality of crowdsourcing results. Different inference algorithms have been studied for simple microtasks consisting of single questions with two or more answers. However, as classification tasks typically contain many questions, the proposed method can be applied to a wide range of tasks, including both single- and multiple-question classification tasks.

Details

International Journal of Crowd Science, vol. 3 no. 3
Type: Research Article
ISSN: 2398-7294

Article
Publication date: 28 June 2023

Gema Bueno de la Fuente, Carmen Agustín-Lacruz, Mariângela Spotti Lopes Fujita and Ana Lúcia Terra

Abstract

Purpose

The purpose of this study is to analyse the recommendations on knowledge organisation in the guidelines, policies and procedure manuals of a sample of institutional repositories and networks within the Latin American area, and to observe how closely international guidelines are followed.

Design/methodology/approach

An exploratory and descriptive study of repositories’ professional documents is presented. The study comprised four steps: definition of the convenience sample; development of the data codebook; coding of the data; and analysis of the data and drawing of conclusions. The convenience sample includes representative sources at three levels: local institutional repositories, national aggregators, and international networks and aggregators. The codebook gathers information about the sampled repositories, such as openly available institutional rules and procedure manuals, or recommendations on the use of controlled vocabularies.

Findings

The results indicate that, at the local repository level, the use of controlled vocabularies is not regulated, leaving the choice of terms to the authors’ discretion. This results in a set of unstructured keywords rather than standardised terms, mixing subject terms with other authorities for persons, institutions or places. National aggregators do not regulate these issues either and limit themselves to pointing to international guidelines and policies, which simply recommend the use of controlled vocabularies with URIs to facilitate interoperability.

Originality/value

The originality of this study lies in identifying how the principles of knowledge organisation are effectively applied by institutional repositories, at local, national and international levels.

Article
Publication date: 18 October 2019

Yongsun Choi, N. Long Ha, Pauline Kongsuwan and Kwan Hee Han

Abstract

Purpose

The refined process structure tree (RPST), the hierarchy of non-overlapping single-entry single-exit (SESE) regions of a process model, has been utilized for better comprehension and more efficient analysis of business process models. Existing RPST methods, based on the triconnected components of edges, fail to identify a certain type of SESE region. The purpose of this paper is to introduce an alternative method for generating a complete RPST utilizing rather simple techniques.

Design/methodology/approach

The proposed method first focuses on the SESE regions of bonds and rigids, from the innermost ones to the outermost ones, utilizing dominance and post-dominance relations. Then, any SESE region of a series nested in a bond or a rigid is identified with a depth-first search variation. Two-phase algorithms and their completeness proofs, a software tool incorporating visualization of stepwise outcomes, and the experimental results of the proposed method are provided.
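
The dominance and post-dominance relations mentioned above can be made concrete with the classic characterization that a node pair (a, b) bounds a SESE region when a dominates b and b post-dominates a. Below is a minimal sketch using a simple iterative dominator computation on a toy graph; it illustrates the relations only and is not the paper's algorithm.

```python
# Sketch of the dominance/post-dominance test behind SESE regions:
# (a, b) is a candidate SESE region when a dominates b and b
# post-dominates a. Simple iterative dominator computation; the toy
# graph and this code are illustrative, not the paper's method.
def dominators(nodes, preds, entry):
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == entry:
                continue
            if preds[n]:
                new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            else:
                new = {n}
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# Tiny linear workflow: s -> a -> b -> e.
nodes = ["s", "a", "b", "e"]
succs = {"s": ["a"], "a": ["b"], "b": ["e"], "e": []}
preds = {n: [m for m in nodes if n in succs[m]] for n in nodes}

dom = dominators(nodes, preds, "s")
# Post-dominance is dominance on the reversed graph.
pdom = dominators(nodes, succs, "e")

def is_sese(a, b):
    return a in dom[b] and b in pdom[a]

print(is_sese("a", "b"))  # True: a dominates b, b post-dominates a
```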

Findings

The proposed method utilizes simple techniques that allow straightforward implementation. Visualization of stepwise outcomes helps process analysts understand the proposed method and the SESE regions. Experiments with 604 SAP reference models demonstrated the limitation of the existing RPST methods. The proposed method, however, completely identified all types of SESE regions, defined by nodes, in less computation time than the existing methods.

Originality/value

Each triconnected component of the undirected version of a process model is associated with a pair of boundary nodes, without discriminating between the entry and the exit. Here, each non-atomic SESE region is associated with two distinct entry and exit nodes taken from the original model in the form of a directed graph. By specifying the properties of SESE regions in more comprehensible ways, this paper facilitates a deeper understanding of SESE regions themselves rather than reliance on the resulting RPST.

Details

Business Process Management Journal, vol. 26 no. 2
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 3 February 2012

Joel H. Helquist, Amit Deokar, Jordan J. Cox and Alyssa Walker

Abstract

Purpose

The purpose of this paper is to propose virtual process simulation as a technique for identifying and analyzing uncertainty in processes. Uncertainty is composed of both risks and opportunities.

Design/methodology/approach

Virtual process simulation involves the creation of graphical models representing the process of interest and its associated tasks. Graphical models representing the resources (e.g. people, facilities and tools) are also created. The members of the resource models are assigned to process tasks in all possible combinations. Secondary calculi, representing uncertainty, are imposed upon these models to determine scores. Based on the scores, changes in process structure or resource allocation can be made to manage uncertainty.
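
A minimal sketch of the combinatorial core of this approach, enumerating every assignment of resources to tasks and scoring each with a toy uncertainty measure, might look as follows. The task names, resource names and scores are all hypothetical; the paper's secondary calculi are richer than this single lookup table.

```python
# Illustrative sketch: enumerate all resource-to-task assignments and
# score each with a toy uncertainty measure (lower is better). All
# names and numbers are invented, not the paper's calculi.
from itertools import product

tasks = ["design", "review"]
resources = ["alice", "bob", "rig_A"]

# Hypothetical per-(resource, task) uncertainty scores.
uncertainty = {
    ("alice", "design"): 0.2, ("alice", "review"): 0.4,
    ("bob",   "design"): 0.5, ("bob",   "review"): 0.1,
    ("rig_A", "design"): 0.9, ("rig_A", "review"): 0.9,
}

def score(assignment):
    return sum(uncertainty[(res, task)]
               for task, res in zip(tasks, assignment))

# All possible combinations of resources over the task list.
assignments = list(product(resources, repeat=len(tasks)))
best = min(assignments, key=score)
print(dict(zip(tasks, best)), score(best))
# {'design': 'alice', 'review': 'bob'} 0.30000000000000004
```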

Findings

The example illustrates the benefits of utilizing virtual process simulation in process pre‐planning. Process pre‐planning can be used as part of strategic or operational uncertainty management.

Practical implications

This paper presents an approach to clarify and assess uncertainty in new processes. This modeling technique enables the quantification of measures and metrics to assist in systematic uncertainty analysis. Virtual process simulation affords process designers the ability to more thoroughly examine uncertainty while planning processes.

Originality/value

This research contributes to the study of uncertainty management by promoting a systematic approach that quantifies metrics and measures according to the objectives of a given process.

Details

Business Process Management Journal, vol. 18 no. 1
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 27 July 2010

Henry H. Bi

Abstract

Purpose

Although software systems used to automate business processes have become rather advanced, the existing practice of developing and modifying graphical process models in those systems is still primitive: users have to manually add, change or delete each node and arc piece by piece. Since such manual operations are typically tedious, time-consuming and prone to errors, an alternative approach is desirable. This paper aims to address this issue.

Design/methodology/approach

In this paper, a novel, human-understandable process manipulation language (PML) for specifying operations (e.g. insertion, deletion, merging and splitting) on process models is developed. A prototype system demonstrating PML is also developed.
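
PML's concrete syntax is not given in the abstract. As a hedged illustration of the idea (users state the operation; the system performs the routine arc rewiring) here is a toy interpreter for two invented commands that are not PML's actual syntax:

```python
# Toy interpreter for hypothetical PML-like commands. The commands are
# invented to illustrate declarative manipulation of a process model;
# they are not the real PML syntax.
model = {"nodes": ["start", "approve", "end"],
         "arcs": {("start", "approve"), ("approve", "end")}}

def apply_command(model, command):
    op, *args = command.split()
    if op == "INSERT":                     # INSERT x BETWEEN a AND b
        x, _, a, _, b = args
        model["nodes"].append(x)
        model["arcs"] -= {(a, b)}
        model["arcs"] |= {(a, x), (x, b)}  # routine arc rewiring is automatic
    elif op == "DELETE":                   # DELETE x (bypass its arcs)
        (x,) = args
        ins = [a for a, t in model["arcs"] if t == x]
        outs = [t for a, t in model["arcs"] if a == x]
        model["nodes"].remove(x)
        model["arcs"] = {(a, t) for a, t in model["arcs"] if x not in (a, t)}
        model["arcs"] |= {(a, t) for a in ins for t in outs}
    return model

apply_command(model, "INSERT review BETWEEN approve AND end")
apply_command(model, "DELETE approve")
print(sorted(model["arcs"]))  # [('review', 'end'), ('start', 'review')]
```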

Findings

The paper finds that manipulation operations on process models can be standardized and, thus, can be facilitated and automated through using a structured language like PML.

Originality/value

PML can improve manipulation operations on process models over the existing manual approach in two aspects: first, using PML, users only need to specify what operations are to be performed on process models, and then a computer carries out specified operations as well as performs other routine operations (e.g. generating nodes and arcs). This feature minimizes user effort to deal with low‐level details on nodes and arcs. Second, using PML, users can systematically specify operations on process models, thus reducing arbitrary operations and problems in process models.

Details

Business Process Management Journal, vol. 16 no. 4
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 12 September 2023

Wenjing Wu, Caifeng Wen, Qi Yuan, Qiulan Chen and Yunzhong Cao

Abstract

Purpose

Learning from safety accidents and sharing safety knowledge have become an important part of accident prevention and of improving construction safety management. Because unstructured data are difficult to reuse in the construction industry, the knowledge they contain is hard to use directly for safety analysis. The purpose of this paper is to explore the construction of a construction safety knowledge representation model and a safety accident knowledge graph through deep learning methods, to extract construction safety knowledge entities with a BERT-BiLSTM-CRF model and to propose a data–knowledge–services data management model.

Design/methodology/approach

An ontology model for the knowledge representation of construction safety accidents is constructed by integrating entity relations and logic evolution. A database of safety incidents in the architecture, engineering and construction (AEC) industry is then established from the collected construction safety incident reports and related dispute cases. A construction method for the construction safety accident knowledge graph is studied, and the precision of the BERT-BiLSTM-CRF algorithm in information extraction is verified through comparative experiments. Finally, a safety accident report is used as an example to construct the AEC domain construction safety accident knowledge graph (AEC-KG), which provides a visual knowledge query service and verifies the operability of the knowledge management approach.

Findings

The experimental results show that the combined BERT-BiLSTM-CRF algorithm has a precision of 84.52%, a recall of 92.35%, and an F1 value of 88.26% in named entity recognition from the AEC domain database. The construction safety knowledge representation model and safety incident knowledge graph realize knowledge visualization.
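
As a quick consistency check, the reported F1 value follows from the stated precision and recall, since F1 is their harmonic mean:

```python
# F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R).
p, r = 0.8452, 0.9235
print(f"{2 * p * r / (p + r):.4f}")  # 0.8826, matching the reported 88.26%
```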

Originality/value

The proposed framework provides a new knowledge management approach to improve practitioners’ safety management and enriches the application scenarios of knowledge graphs. On the one hand, it proposes a data application and knowledge management method for safety accident reports that integrates entity relations and logic evolution. On the other hand, it adds a legal adjudication dimension to the knowledge graph in the construction safety field as the basis for post-incident disposal measures, providing a reference for safety managers' decision-making in all aspects.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 13 June 2019

Grégory Millot, Olivier Scholz, Saïd Ouhamou, Mathieu Becquet and Sébastien Magnabal

Abstract

Purpose

The paper deals with research activities to develop optimization workflows involving computational fluid dynamics (CFD) modelling. The purpose of this paper is to present an industrial, fully automated optimal design tool able to handle objectives, constraints, and multi-parameter, multi-point optimization on a given CATIA CAD model. The work is realized on the RACER (Rapid And Cost-Effective Rotorcraft) compound rotorcraft in the framework of the Fast RotorCraft Innovative Aircraft Demonstrator Platform (IADP) within the Clean Sky 2 programme.

Design/methodology/approach

The proposed solution relies on an automated CAD-CFD workflow called through an optimization process based on surrogate-based optimization (SBO) techniques. A dedicated SBO workflow has been developed for this purpose.

Findings

The methodology is validated on a simple configuration (a bent pipe with two parameters). The process is then applied to a full compound rotorcraft to minimize the flow distortion at the engine entry. The design of experiments and the optimization loop act on seven design parameters of the air inlet, and each individual is evaluated at two operating points, namely cruise flight and hover. Finally, the best design is analyzed and its aerodynamic performance is compared with the initial design.

Originality/value

The added value of the developed process lies in dealing with geometric integration conflicts through a specific CAD module and in implementing a penalty function method to manage the unsuccessful evaluation of any individual.
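
The penalty-function idea can be sketched as a wrapper around the expensive CFD evaluation: when an individual's geometry cannot be regenerated or the run fails, the optimizer receives a large penalized value instead of the loop aborting. Everything below is hypothetical scaffolding under that assumption, not the authors' tool:

```python
# Sketch of the penalty-function idea for failed evaluations in an
# optimization loop: a failed CFD run returns a large penalized value
# instead of stopping the search. All names and values are invented.
import random

PENALTY = 1e6

def cfd_distortion(design):
    # Stand-in for an expensive CFD evaluation of flow distortion.
    if max(design) > 0.9:  # pretend CAD/mesh regeneration fails here
        raise RuntimeError("geometry regeneration failed")
    return sum((x - 0.5) ** 2 for x in design)

def penalized_objective(design):
    try:
        return cfd_distortion(design)
    except RuntimeError:
        return PENALTY     # unsuccessful individual is penalized, not fatal

random.seed(0)
# Seven design parameters, matching the air-inlet case in the abstract.
candidates = [[random.random() for _ in range(7)] for _ in range(20)]
best = min(candidates, key=penalized_objective)
print(round(penalized_objective(best), 4))
```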

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 30 no. 9
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 9 November 2012

Petko Kitanov, Odile Marcotte, Wil H.A. Schilders and Suzanne M. Shontz

Abstract

Purpose

To simulate large parasitic resistive networks, one must reduce the size of the circuit models through methods that are accurate and preserve terminal connectivity and network sparsity. The purpose here is to present such a method, which exploits concepts from graph theory in a systematic fashion.

Design/methodology/approach

The model order reduction problem for parasitic resistive networks is formulated through graph theory concepts, and algorithms based on the notion of a vertex cut are presented to reduce the size of electronic circuit models. Four variants of the basic method are proposed and their respective merits discussed.
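
The notion of a vertex cut (a set of nodes whose removal disconnects two terminals) is standard graph theory and can be illustrated with networkx; the paper's reduction algorithms build on this notion but are not reproduced here:

```python
# Illustration of a vertex cut with networkx: nodes whose removal
# disconnects two terminals. The toy network is invented; the paper's
# reduction algorithms go well beyond this single query.
import networkx as nx

# Small resistive network: terminals t1, t2 joined through internal nodes.
G = nx.Graph()
G.add_edges_from([
    ("t1", "a"), ("t1", "b"),
    ("a", "c"), ("b", "c"),
    ("c", "t2"),
])

cut = nx.minimum_node_cut(G, "t1", "t2")
print(cut)  # {'c'}: removing c separates t1 from t2
```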

Findings

The algorithms proposed enable the production of networks that are significantly smaller than those produced by earlier methods, in particular the method described in the report by Lenaers entitled “Model order reduction for large resistive networks”. The reduction in the number of resistors achieved through the algorithms is even more pronounced in the case of large networks.

Originality/value

The paper seems to be the first to make a systematic use of vertex cuts in order to reduce a parasitic resistive network.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 31 no. 6
Type: Research Article
ISSN: 0332-1649
