Search results

1 – 10 of over 1000
Article
Publication date: 8 October 2020

Kan Ngamakeur and Sira Yongchareon

Abstract

Purpose

The paper aims to study realization requirements for the flexible enactment of artifact-centric business processes in a dynamic, collaborative environment and to develop a workflow execution framework that can effectively address those requirements.

Design/methodology/approach

This study proposed a framework and a contract-based, event-driven architecture design and implementation that can directly execute collaborative artifact-centric business processes in a service-oriented architecture (SOA) without any model conversion.
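
A minimal sketch of the idea, assuming a simple state-machine reading of contracts (all names here are illustrative, not the authors' actual API): each artifact carries data attributes plus a lifecycle state, and a contract maps incoming service events to permitted state transitions, so the process executes directly rather than being translated into a workflow language first.

```python
# Illustrative only: a contract as a state machine over artifact lifecycles.
from dataclasses import dataclass, field


@dataclass
class Artifact:
    """A business artifact: data attributes plus a lifecycle state."""
    name: str
    state: str
    data: dict = field(default_factory=dict)


@dataclass
class Contract:
    """Maps (state, event) pairs to successor states for one artifact type."""
    transitions: dict  # {(state, event): next_state} -- invented structure

    def on_event(self, artifact: Artifact, event: str, payload: dict) -> None:
        """Apply a service event directly, with no workflow-language conversion."""
        key = (artifact.state, event)
        if key not in self.transitions:
            raise ValueError(f"event {event!r} not allowed in state {artifact.state!r}")
        artifact.data.update(payload)
        artifact.state = self.transitions[key]


# Example: an Order artifact progressing as collaborating services emit events.
contract = Contract({("created", "pay"): "paid", ("paid", "ship"): "shipped"})
order = Artifact("order-42", "created")
contract.on_event(order, "pay", {"amount": 99.0})
contract.on_event(order, "ship", {"carrier": "DHL"})
print(order.state)  # shipped
```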

Findings

The results show that the approach is feasible and offers several key benefits over using existing workflow systems to run artifact-centric processes.

Originality/value

Most existing approaches require an artifact-centric model to be transformed into an executable workflow language to run on existing workflow management systems. This study argues that such model conversion can incur information loss and impair the traceability and monitorability of workflows, especially in an SOA where a workflow can span multiple inter-business entities.

Details

International Journal of Web Information Systems, vol. 16 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 26 July 2011

Xiaoping Qiu, Gangqiao Shi, Changlin Song and Yang Xu

Abstract

Purpose

This paper aims to discuss in detail a feasible realization method for a workflow engine for enterprise information management based on database technology.

Design/methodology/approach

Under the guidance of the Workflow Management Coalition (WfMC), the data model of the workflow engine is first presented based on the given process model, marking out the attributes of processes and activities and their relationships with roles, applications, workflow-relevant data and transition conditions. The basic control principles of the workflow engine are then designed around the necessary tables of process instances and activity instances, and the control methods for both kinds of instance are discussed in detail, covering creation, startup, management and status evolution.
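
For illustration only, the following sketch renders the two instance tables and the creation/startup/status-evolution operations in Python, with SQLite standing in for SQL Server 2000; the schema is invented, not the paper's.

```python
# Invented schema for illustration; SQLite stands in for SQL Server 2000.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE process_instance (
    id      INTEGER PRIMARY KEY,
    process TEXT NOT NULL,
    status  TEXT NOT NULL DEFAULT 'created'  -- created / running / completed
);
CREATE TABLE activity_instance (
    id       INTEGER PRIMARY KEY,
    proc_id  INTEGER NOT NULL REFERENCES process_instance(id),
    activity TEXT NOT NULL,
    role     TEXT,                           -- who may execute the activity
    status   TEXT NOT NULL DEFAULT 'waiting' -- waiting / active / done
);
""")

def start_process(process: str) -> int:
    """Creation and startup of a process instance."""
    cur = con.execute(
        "INSERT INTO process_instance (process, status) VALUES (?, 'running')",
        (process,))
    return cur.lastrowid

def activate_activity(proc_id: int, activity: str, role: str) -> None:
    """Status evolution: instantiate an activity once its transition condition holds."""
    con.execute(
        "INSERT INTO activity_instance (proc_id, activity, role, status) "
        "VALUES (?, ?, ?, 'active')", (proc_id, activity, role))

pid = start_process("inventory_check")
activate_activity(pid, "count_stock", "warehouse_clerk")
```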

Findings

In this research, the workflow engine was successfully programmed according to this realization method on SQL Server 2000 and Visual Studio 2005, and the results show the effectiveness of the workflow engine for inventory information management.

Originality/value

The paper gives a feasible realization method for business process management in enterprises using advanced workflow technology, which brings flexibility to information management and improves the overall performance of an enterprise facing changing market requirements.

Details

Journal of Enterprise Information Management, vol. 24 no. 4
Type: Research Article
ISSN: 1741-0398

Article
Publication date: 1 March 1995

Cyndie Tamaki

Abstract

Presents a methodology which library managers can use to make necessary changes to their departments. The business system review is a methodology to develop strategies to make the library's workflow more efficient to meet its goals. The steps outlined address two aspects of change: analysis of the library's processes; and helping the staff to become part of the change.

Details

The Bottom Line, vol. 8 no. 3
Type: Research Article
ISSN: 0888-045X

Article
Publication date: 1 May 2003

Giorgos Papavassiliou and Gregoris Mentzas

Abstract

In this paper we present a new approach for integrating knowledge management and business process management. We focus on the modelling of weakly-structured, knowledge-intensive business processes. We develop a framework for modelling this type of process that explicitly considers knowledge-related tasks and knowledge objects, and present a workflow tool that implements our theoretical meta-model. As an example, we sketch one case study: the process for granting a full old-age pension as performed in the Greek Social Security Institution. Finally, we briefly describe some related approaches, compare them to our work, and draw the main conclusions and further research directions.
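
A hypothetical rendering of such a meta-model in code, with invented class names: knowledge tasks are ordinary tasks extended with the knowledge objects they consume and produce.

```python
# Invented class names; only illustrates the meta-model idea described above.
from dataclasses import dataclass, field
from typing import List


@dataclass
class KnowledgeObject:
    name: str        # e.g. "pension regulations"
    codified: bool   # codified document vs. tacit know-how


@dataclass
class Task:
    name: str
    successors: "List[Task]" = field(default_factory=list)


@dataclass
class KnowledgeTask(Task):
    """A task whose essence is creating or using knowledge."""
    inputs: List[KnowledgeObject] = field(default_factory=list)
    outputs: List[KnowledgeObject] = field(default_factory=list)


# Fragment of a pension-granting process: an assessment task draws on
# regulations and produces a documented eligibility decision.
regs = KnowledgeObject("pension regulations", codified=True)
decision = KnowledgeObject("eligibility assessment", codified=True)
assess = KnowledgeTask("assess eligibility", inputs=[regs], outputs=[decision])
```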

Details

Journal of Knowledge Management, vol. 7 no. 2
Type: Research Article
ISSN: 1367-3270

Content available
Article
Publication date: 26 July 2011

Cengiz Kahraman

Details

Journal of Enterprise Information Management, vol. 24 no. 4
Type: Research Article
ISSN: 1741-0398

Article
Publication date: 1 October 2006

Michael Hafner, Ruth Breu, Berthold Agreiter and Andrea Nowak

Abstract

Purpose

This contribution aims to present the core components of a framework and illustrate the main concepts of a methodology for the systematic design and realization of security-critical inter-organizational workflows, using a portion of a workflow scenario drawn from e-government. It is additionally shown how the framework can be adapted to incorporate advanced security patterns like the Qualified Signature, which extends the concept of the digital signature by requiring a natural person to sign.
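
As a loose illustration of the Qualified Signature pattern (invented names, not the Sectet API): a workflow message's signature is accepted only when the cryptographic check succeeds and the certificate identifies a natural person.

```python
# Invented illustration, not Sectet's API.
from dataclasses import dataclass


@dataclass
class Certificate:
    subject: str
    natural_person: bool  # a qualified certificate binds to a natural person


@dataclass
class Signature:
    cert: Certificate
    valid_crypto: bool    # stands in for real cryptographic verification


def accept_qualified(sig: Signature) -> bool:
    """Enforce the Qualified Signature pattern on a workflow message."""
    return sig.valid_crypto and sig.cert.natural_person


sig = Signature(Certificate("Jane Doe", natural_person=True), valid_crypto=True)
assert accept_qualified(sig)
```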

Design/methodology/approach

The framework is based on a methodology that focuses on the correct implementation of security requirements and consists of a suite of tools that facilitates the cost-efficient realization and management of decentralized, security-critical workflows.

Findings

The framework has been prototypically validated through case studies from the healthcare and e‐government sector. Positive results in pilot applications with industrial partners encourage further steps: the set of supported security requirements is continuously extended (e.g. rights delegation, four eyes principle), a testing environment for industrial settings is being implemented, and the requirements for the efficient management of inter‐organizational workflows are being analysed systematically.

Practical implications

The framework caters to the needs of an industrial audience in need of cost-efficient support for the systematic and correct realization of secure, inter-organizational workflows.

Originality/value

The contribution provides a description of the Sectet framework. It is shown how it can be adapted to incorporate advanced security patterns like the Qualified Signature, which implement a legal requirement specific to e‐government.

Details

Internet Research, vol. 16 no. 5
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 27 August 2014

Paolo Manghi, Michele Artini, Claudio Atzori, Alessia Bardi, Andrea Mannocci, Sandro La Bruzzo, Leonardo Candela, Donatella Castelli and Pasquale Pagano

Abstract

Purpose

The purpose of this paper is to present the architectural principles and the services of the D-NET software toolkit. D-NET is a framework where designers and developers find the tools for constructing and operating aggregative infrastructures (systems for aggregating data sources with heterogeneous data models and technologies) in a cost-effective way. Designers and developers can select from a variety of D-NET data management services, can configure them to handle data according to given data models, and can construct autonomic workflows to obtain personalized aggregative infrastructures.
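
The configuration idea might be sketched roughly as follows, with generic placeholder services rather than actual D-NET component names: an aggregative infrastructure becomes a configured pipeline of harvesting, transformation and indexing steps.

```python
# Placeholder services, not actual D-NET components.
from typing import Callable, Iterable, List

Record = dict
Step = Callable[[Iterable[Record]], Iterable[Record]]


def harvest(endpoint: str) -> Iterable[Record]:
    """Collect records from a heterogeneous data source (stubbed)."""
    yield {"source": endpoint, "title": "example record"}


def transform(records: Iterable[Record]) -> Iterable[Record]:
    """Map harvested records to the target data model."""
    for r in records:
        r["title"] = r["title"].title()
        yield r


def index(records: Iterable[Record]) -> List[Record]:
    """Make the transformed records searchable (here: just store them)."""
    return list(records)


# A personalized aggregative infrastructure as a configured pipeline:
data: Iterable[Record] = harvest("https://example.org/oai")
for step in (transform, index):
    data = step(data)
print(list(data))
```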

Design/methodology/approach

The paper provides a definition of aggregative infrastructures, sketching their architecture and components as inspired by real-case examples. It then describes the limits of current solutions, whose shortcomings lie in the realization and maintenance costs of such complex software. Finally, it proposes D-NET as an optimal solution for designers and developers willing to realize aggregative infrastructures. The D-NET architecture and services are presented, drawing a parallel with those of aggregative infrastructures, and real cases of D-NET use are presented to showcase these claims.

Findings

The D-NET software toolkit is a general-purpose, service-oriented framework with which designers can construct customized, robust, scalable, autonomic aggregative infrastructures in a cost-effective way. D-NET is today adopted by several EC projects, national consortia and communities to create customized infrastructures in diverse application domains, and other organizations are inquiring about or experimenting with its adoption. Its customizability and extensibility make D-NET a suitable candidate for creating aggregative infrastructures that mediate between different scientific domains and therefore support multi-disciplinary research.

Originality/value

D-NET is the first general-purpose framework of this kind. Other solutions are available in the literature but focus on specific use cases and therefore suffer from limited re-use in different contexts. Due to its maturity, D-NET can also be used by third-party organizations not necessarily involved in the software's design and maintenance.

Details

Program, vol. 48 no. 4
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 29 April 2021

Dae-Kyoo Kim and Yeasun K. Chung

Abstract

Purpose

The authors use the extension mechanism provided by the Business Process Model and Notation (BPMN) to define roles, which allows roles to be fully aligned with the BPMN standard. The authors describe how a pattern can be defined in terms of roles and present the formal semantics of pattern realization and refinement to support systematic reuse of patterns in business process development.
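
A hypothetical sketch of pattern realization as role binding (R-BPMN itself works at the BPMN metamodel level; this only illustrates the idea): a pattern leaves its roles abstract, and a realization maps every role to a concrete model element.

```python
# Illustrative only; not the R-BPMN metamodel.
from dataclasses import dataclass
from typing import Dict, List


@dataclass(frozen=True)
class Role:
    name: str          # e.g. "Approver"
    element_kind: str  # BPMN element type the role abstracts over


@dataclass
class Pattern:
    name: str
    roles: List[Role]

    def realize(self, binding: Dict[str, str]) -> Dict[str, str]:
        """Check that every role is bound to a concrete element."""
        missing = [r.name for r in self.roles if r.name not in binding]
        if missing:
            raise ValueError(f"unbound roles: {missing}")
        return binding


approval = Pattern("TwoStepApproval",
                   [Role("Requester", "UserTask"), Role("Approver", "UserTask")])
# A refinement of the pattern in a concrete purchase process:
print(approval.realize({"Requester": "Submit PO", "Approver": "Review PO"}))
```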

Design/methodology/approach

It is widely agreed that the use of business process patterns improves the efficiency and quality of business process development. However, few techniques are available to describe business process patterns at an appropriate level of abstraction to facilitate the reuse of patterns. To address this, this paper presents the role-based Business Process Model and Notation (R-BPMN), an extension of BPMN for abstract modeling of business process patterns based on a novel notion of role.

Findings

The authors apply R-BPMN in case studies for pattern realization and refinement and discuss tool support via an existing tool. The case studies demonstrate the practical benefits of R-BPMN in capturing pattern variability and facilitating pattern reuse.

Practical implications

The findings imply a potential impact of R-BPMN on practical benefits when it is supported at the metamodel level in tool development.

Originality/value

This study addresses the need for abstract modeling of process patterns at the metamodel level, which facilitates the formalization of pattern variability and tool development to support various realizations of process patterns at the model level.

Details

Business Process Management Journal, vol. 27 no. 5
Type: Research Article
ISSN: 1463-7154

Open Access
Article
Publication date: 16 August 2019

Morteza Moradi, Mohammad Moradi, Farhad Bayat and Adel Nadjaran Toosi

Abstract

Purpose

Human or machine: which one is more intelligent and powerful for performing computing and processing tasks? Over the years, researchers and scientists have spent significant amounts of money and effort to answer this question. Nonetheless, despite some outstanding achievements, replacing humans in intellectual tasks is not yet a reality. Instead, to compensate for the weaknesses of machines in some (mostly cognitive) tasks, the idea of putting the human in the loop has been introduced and widely accepted. In this paper, the notion of collective hybrid intelligence is introduced as a new computing framework.

Design/methodology/approach

Given the extensive acceptance and efficiency of the crowdsourcing, hybrid intelligence and distributed computing concepts, the authors have come up with the complementary idea of collective hybrid intelligence. Besides providing a brief review of the efforts made in related contexts, the conceptual foundations and building blocks of the proposed framework are delineated, and some discussion of architectural and realization issues is presented.
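
One speculative reading of the framework's building blocks, with all names invented: cognitive tasks are routed to a crowd of human workers and aggregated by majority vote, while mechanical tasks go to machines.

```python
# Invented sketch of human/machine task routing; not the authors' design.
from collections import Counter
from typing import List


def machine_worker(task: str) -> str:
    return task.upper()  # stands in for fast, exact machine computation


def crowd_workers(task: str, n: int = 3) -> List[str]:
    return [f"label-{i % 2}" for i in range(n)]  # stands in for human judgments


def solve(task: str, cognitive: bool) -> str:
    """Route a task to the crowd or to a machine, then aggregate."""
    if not cognitive:
        return machine_worker(task)
    votes = crowd_workers(task)
    return Counter(votes).most_common(1)[0][0]  # majority-vote aggregation


print(solve("compute checksum", cognitive=False))
print(solve("is this image a cat?", cognitive=True))
```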

Findings

The paper describes the conceptual architecture, workflow and schematic representation of a new hybrid computing concept. Moreover, by introducing three sample scenarios, its benefits, requirements, practical roadmap and architectural notes are explained.

Originality/value

The major contribution of this work is introducing the conceptual foundations for combining and integrating the collective intelligence of humans and machines to achieve higher efficiency and computing performance. To the best of the authors' knowledge, this is the first study in which such an integration is considered in this form. Therefore, it is believed that the proposed computing concept could inspire researchers toward realizing such unprecedented possibilities in practical and theoretical contexts.

Details

International Journal of Crowd Science, vol. 3 no. 2
Type: Research Article
ISSN: 2398-7294

Article
Publication date: 3 April 2017

Adrian Burton, Hylke Koers, Paolo Manghi, Sandro La Bruzzo, Amir Aryani, Michael Diepenbroek and Uwe Schindler

Abstract

Purpose

Research data publishing is today widely regarded as crucial for reproducibility, proper assessment of scientific results, and as a way for researchers to get proper credit for sharing their data. However, several challenges need to be solved to fully realize its potential, one of them being the development of a global standard for links between research data and literature. Current linking solutions are mostly based on bilateral, ad hoc agreements between publishers and data centers. These operate in silos so that content cannot be readily combined to deliver a network graph connecting research data and literature in a comprehensive and reliable way. The Research Data Alliance (RDA) Publishing Data Services Working Group (PDS-WG) aims to address this issue of fragmentation by bringing together different stakeholders to agree on a common infrastructure for sharing links between datasets and literature. The paper aims to discuss these issues.

Design/methodology/approach

This paper presents the synergic effort of the RDA PDS-WG and the OpenAIRE infrastructure toward enabling a common infrastructure for exchanging data-literature links by realizing and operating the Data-Literature Interlinking (DLI) Service. The DLI Service populates and provides access to a graph of data set-literature links (at the time of writing close to five million, and growing) collected from a variety of major data centers, publishers, and research organizations.

Findings

To achieve its objectives, the Service proposes an interoperable exchange data model and format, based on which it collects and publishes links, thereby offering the opportunity to validate such a common approach in real-case scenarios with real providers and consumers. Feedback from these actors will drive continuous refinement of both the data model and the exchange format, supporting the further development of the Service to become an essential part of a universal, open, cross-platform, cross-discipline solution for collecting and sharing data set-literature links.
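
For flavor, a simplified link record in the spirit of such an exchange data model (field names here are illustrative, not the normative Scholix schema):

```python
# Simplified, illustrative link record; not the normative exchange schema.
from dataclasses import dataclass


@dataclass(frozen=True)
class LinkedObject:
    pid: str       # persistent identifier, e.g. a DOI
    pid_type: str  # "doi", "handle", ...
    obj_type: str  # "literature" or "dataset"


@dataclass(frozen=True)
class Link:
    source: LinkedObject
    target: LinkedObject
    relationship: str  # e.g. "references", "isSupplementedBy"
    provider: str      # data center or publisher asserting the link


link = Link(
    source=LinkedObject("10.1234/article.1", "doi", "literature"),
    target=LinkedObject("10.5678/dataset.9", "doi", "dataset"),
    relationship="references",
    provider="ExampleDataCenter",
)
```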

Originality/value

This realization of the DLI Service is the first technical, cross-community, collaborative effort toward establishing a common infrastructure for facilitating the exchange of data set-literature links. As a result of its operation and the underlying community effort, a new activity, named Scholix, has been initiated, involving technology-level stakeholders such as DataCite and CrossRef.

Details

Program, vol. 51 no. 1
Type: Research Article
ISSN: 0033-0337
