Search results

1–10 of over 34,000
Open Access
Article
Publication date: 16 October 2017

Xiang T.R. Kong, Ray Y. Zhong, Gangyan Xu and George Q. Huang

The purpose of this paper is to propose a concept of cloud auction robot (CAR) and its execution platform for transforming perishable food supply chain management. A new paradigm…


Abstract

Purpose

The purpose of this paper is to propose a concept of cloud auction robot (CAR) and its execution platform for transforming perishable food supply chain management. A new paradigm of goods-to-person auction execution model is proposed based on CARs. This paradigm shifts traditional manual operations to automated execution, with substantial savings in space and time. A scalable CAR-enabled execution system (CARES) is presented to manage logistics workflows, tasks, and the behavior of CAR-Agents in handling real-time events and associated data.

Design/methodology/approach

An Internet of Things enabled auction environment is designed. The robot picks up and delivers the auction products, and commands are issued to it in real time. The CARES architecture integrates three core services: auction workflow management, auction task management and auction execution control. A system prototype was developed to demonstrate its execution through physical emulations and experiments.

Findings

CARES schedules the tasks for each robot so as to minimize waiting time. The total execution time is reduced by 33 percent on average, and space utilization for each auction studio is improved by about 50 percent per day.
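
As a rough illustration of the scheduling idea, the sketch below greedily assigns each auction task to whichever robot becomes free earliest, which minimizes waiting time; the task list, durations and robot count are hypothetical, since the abstract does not disclose the actual CARES scheduling algorithm:

```python
import heapq

def schedule_tasks(tasks, num_robots):
    """Greedily assign each task to the robot that becomes free earliest.
    tasks: list of (task_id, duration) pairs; all values are illustrative."""
    # Heap of (time_free, robot_id); every robot starts idle at t = 0.
    robots = [(0.0, r) for r in range(num_robots)]
    heapq.heapify(robots)
    assignment = []
    for task_id, duration in tasks:
        time_free, robot_id = heapq.heappop(robots)
        assignment.append((task_id, robot_id, time_free))  # (task, robot, start)
        heapq.heappush(robots, (time_free + duration, robot_id))
    makespan = max(t for t, _ in robots)  # time when the last robot finishes
    return assignment, makespan

# Example: five picking/delivery tasks spread over two robots.
plan, makespan = schedule_tasks(
    [("t1", 4), ("t2", 2), ("t3", 3), ("t4", 1), ("t5", 2)], num_robots=2)
print(plan, makespan)
```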

Originality/value

The CAR-enabled execution model and system are simulated and verified in a ubiquitous auction environment, upgrading perishable food supply chain management to an automated, real-time level. The proposed system is flexible enough to cope with different auction scenarios, such as different auction mechanisms and processes, with high reconfigurability and scalability.

Details

Industrial Management & Data Systems, vol. 117 no. 9
Type: Research Article
ISSN: 0263-5577

Keywords

Article
Publication date: 1 March 2003

Ying‐Nan Chen, Li‐Ming Tseng and Yi‐Ming Chen

Presents a framework for deciding on a good execution strategy for a given program based on the available data and task parallelism in the program on PC laboratory clusters…

Abstract

Presents a framework for deciding on a good execution strategy for a given program based on the available data and task parallelism in the program on PC laboratory clusters. Proposes a virtual cluster scheduling scheme that takes account of the relationships between tasks for task parallelism, and of processor speed, processor load and the network environment to balance load for data parallelism in a PC cluster environment. The approach is effective in reducing overall execution time, and demonstrates the feasibility of automatic cluster assignment, processor set selection and data partition functions for data and task parallel programs.
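
A minimal sketch of the load-balancing idea for the data-parallel case, weighting each processor's raw speed against its current load; the processor names and values are hypothetical, and the paper's actual scheme also accounts for task relationships and the network environment:

```python
def partition_data(total_items, processors):
    """Split a data-parallel workload across processors in proportion to each
    processor's effective speed (raw speed discounted by current load).
    processors: list of (name, speed, load); all values are illustrative."""
    effective = [(name, speed / (1.0 + load)) for name, speed, load in processors]
    total_eff = sum(e for _, e in effective)
    # Rounding means shares may be off by an item or two; fine for a sketch.
    return {name: round(total_items * e / total_eff) for name, e in effective}

# A fast idle machine gets the largest data partition.
print(partition_data(1000, [("pc1", 2.0, 0.1), ("pc2", 1.0, 0.5), ("pc3", 1.5, 0.0)]))
```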

Details

Campus-Wide Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1065-0741

Keywords

Article
Publication date: 23 August 2013

Sattanathan Subramanian, Paweł Sztromwasser, Pål Puntervoll and Kjell Petersen

eScience workflows use orchestration for integrating and coordinating distributed and heterogeneous scientific resources, which are increasingly exposed as web services. The rate…

Abstract

Purpose

eScience workflows use orchestration for integrating and coordinating distributed and heterogeneous scientific resources, which are increasingly exposed as web services. The rate of growth of scientific data makes eScience workflows data-intensive, challenging existing workflow solutions. Efficient methods of handling large data in scientific workflows based on web services are needed. The purpose of this paper is to address this issue.

Design/methodology/approach

In a previous paper the authors proposed Data-Flow Delegation (DFD) as a means to optimize orchestrated workflow performance, focusing on SOAP web services. To improve performance further, in this paper they propose pipelined data-flow delegation (PDFD) for web service-based eScience workflows, drawing on techniques from parallel programming. Briefly, PDFD allows partitioning of large datasets into independent subsets that can be communicated in a pipelined manner.
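
The following sketch illustrates the general pipelining idea only, not the paper's XML/SOAP implementation: a large dataset is partitioned into independent chunks that flow through processing stages concurrently, so later chunks are communicated while earlier ones are still being processed. All names are illustrative:

```python
import queue
import threading

def pipeline(data, chunk_size, stages):
    """Partition `data` into independent chunks and stream them through
    `stages` (a list of functions), so stage i+1 works on chunk k while
    stage i already processes chunk k+1."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    SENTINEL = object()  # marks end of stream

    def worker(fn, q_in, q_out):
        while True:
            chunk = q_in.get()
            if chunk is SENTINEL:
                q_out.put(SENTINEL)  # propagate shutdown downstream
                return
            q_out.put(fn(chunk))

    threads = [threading.Thread(target=worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for i in range(0, len(data), chunk_size):  # feed independent subsets
        qs[0].put(data[i:i + chunk_size])
    qs[0].put(SENTINEL)
    results = []
    while True:
        out = qs[-1].get()
        if out is SENTINEL:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results

# Two stages: transform each chunk, then aggregate it.
print(pipeline(list(range(10)), 3, [lambda c: [x * 2 for x in c], sum]))
```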

Findings

The results show that PDFD improves the execution time of the workflow considerably and is capable of handling much larger datasets than the non-pipelined approach.

Practical implications

Execution of a web service‐based workflow hampered by the size of data can be facilitated or improved by using services supporting Pipelined Data‐Flow Delegation.

Originality/value

Contributions of this work include the proposed concept of combining pipelining and Data‐Flow Delegation, an XML Schema supporting the PDFD communication between services, and the practical evaluation of the PDFD approach.

Details

International Journal of Web Information Systems, vol. 9 no. 3
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 12 August 2014

Sucha Smanchat and Suchon Sritawathon

This paper aims to propose a scheduling technique for parameter sweep workflows, which are used in parametric study and optimization. When executed in multiple parallel instances…

Abstract

Purpose

This paper aims to propose a scheduling technique for parameter sweep workflows, which are used in parametric study and optimization. When such workflows are executed as multiple parallel instances in a grid environment, bottlenecks and load balancing must be addressed to achieve efficient execution.

Design/methodology/approach

The bottleneck detection approach is based on commonly known performance metrics of grid resources. To address load balancing, a resource requirement similarity metric is introduced to estimate how tasks are likely to be distributed across the available grid resources, referred to as an execution context. The presence of a bottleneck and the execution context are used in the main algorithm, named ABeC, to schedule tasks selectively at run-time and achieve a better overall execution time, or makespan.
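
A minimal sketch of the two building blocks as described, with hypothetical thresholds and metrics (the abstract does not give ABeC's actual formulas): a bottleneck flag derived from common performance metrics, and a resource-requirement similarity measure between tasks:

```python
import math

def is_bottleneck(metrics, cpu_limit=0.9, queue_limit=10):
    """Flag a grid resource as a bottleneck from common performance metrics.
    Thresholds and metric names are illustrative, not the paper's criteria."""
    return metrics["cpu_util"] > cpu_limit or metrics["queued_tasks"] > queue_limit

def requirement_similarity(task_a, task_b):
    """Cosine similarity over resource-requirement vectors (e.g. cpu, mem, io).
    High similarity suggests the tasks would compete for the same resources."""
    dot = sum(a * b for a, b in zip(task_a, task_b))
    norm = (math.sqrt(sum(a * a for a in task_a))
            * math.sqrt(sum(b * b for b in task_b)))
    return dot / norm if norm else 0.0

print(is_bottleneck({"cpu_util": 0.95, "queued_tasks": 4}))      # True
print(requirement_similarity((2.0, 4.0, 1.0), (2.0, 3.5, 1.0)))  # near 1.0
```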

Findings

In simulations against four existing algorithms using several scenarios, the proposed technique performs at least as well as the existing algorithms in most cases and achieves better performance when the scheduled workflows have a parallel structure.

Originality/value

The bottleneck detection and load balancing proposed in this paper require only common resource and task information, rendering them applicable to most workflow systems. Through such selective behaviour, the proposed scheduling technique may help reduce the time required to execute multiple parallel instances of a grid workflow.

Details

International Journal of Web Information Systems, vol. 10 no. 3
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 1 March 2013

Ting Chen, Xiao‐song Zhang, Xu Xiao, Yue Wu, Chun‐xiang Xu and Hong‐tian Zhao

Software vulnerabilities have been the greatest threat to the software industry for a long time. Many detection techniques have been developed to address this kind of issue, such…

Abstract

Purpose

Software vulnerabilities have long been the greatest threat to the software industry. Many detection techniques have been developed to address this issue, such as fuzzing, but fuzz testing alone is not sufficient, because it only alters the program's input randomly and does not consider the semantics of the target software. The purpose of this paper is to introduce a new vulnerability-exploring system, called "SEVE", to explore the target software more deeply and generate test cases with greater accuracy.

Design/methodology/approach

Symbolic execution is the core technique of SEVE. Given a standard input, the SEVE system records the execution path, alters critical branches along it, and generates a different test case that makes the software under test execute a different path. In this way, potential bugs, defects and even exploitable vulnerabilities can be discovered. To alleviate path explosion, the authors propose a heuristic method and function abstraction, which further improve SEVE's performance.
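
A toy sketch of the branch-flipping idea behind this kind of symbolic execution, with brute-force search standing in for the constraint solver a real system such as SEVE would use; the program and helper names are hypothetical:

```python
def program_paths(x):
    """Toy program under test: returns the branch outcomes (the 'path')."""
    path = [("x > 10", x > 10)]
    if x > 10:
        path.append(("x % 2 == 0", x % 2 == 0))
    return path

def flip_last_and_solve(path, search_range=range(-100, 100)):
    """Negate the outcome of the last recorded branch and find a new input
    whose execution follows the same prefix but diverges at that branch.
    Real symbolic execution would solve path constraints, not search."""
    prefix, (cond, taken) = path[:-1], path[-1]
    target = prefix + [(cond, not taken)]
    for candidate in search_range:
        if program_paths(candidate)[:len(target)] == target:
            return candidate
    return None

seed_path = program_paths(4)           # seed input takes the x <= 10 branch
print(flip_last_and_solve(seed_path))  # yields an input with x > 10, e.g. 11
```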

Findings

The authors evaluated the SEVE system, recording data on its efficiency and performance, and tested it against real-world vulnerabilities in file-input programs. The results show that SEVE not only rediscovers these vulnerabilities but also does so at a higher performance level than traditional techniques.

Originality/value

The paper proposes a new vulnerability-exploring system, called "SEVE", to explore the target software and generate test cases automatically, as well as a heuristic method and function abstraction to handle path explosion.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 32 no. 2
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 5 December 2018

Christian Janiesch and Jörn Kuhlenkamp

Changes in workflow relevant data of business processes at run-time can hinder their completion or impact their profitability as they have been instantiated under different…

Abstract

Purpose

Changes in workflow relevant data of business processes at run-time can hinder their completion or impact their profitability as they have been instantiated under different circumstances. The purpose of this paper is to propose a context engine to enhance a business process management (BPM) system’s context-awareness. The generic architecture provides the flexibility to configure processes during initialization as well as to adapt running instances at decision gates or during execution due to significant context change.

Design/methodology/approach

The paper discusses context-awareness as the conceptual background. The technological capabilities of business rules and complex event processing (CEP) are outlined in an architecture design. A reference process is proposed and discussed in an exemplary application.
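
A minimal sketch of the context-engine idea, assuming a simple rule store: CEP-style events update a context store, and business rules fire adaptations when a significant change is detected. The rule, event names and callback here are hypothetical, not the paper's architecture:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ContextEngine:
    """CEP output updates the context; business rules trigger adaptations."""
    context: dict = field(default_factory=dict)
    rules: list = field(default_factory=list)  # (predicate, action) pairs

    def add_rule(self, predicate: Callable[[dict], bool],
                 action: Callable[[dict], None]):
        self.rules.append((predicate, action))

    def on_event(self, key: str, value):
        self.context[key] = value           # event stream updates context
        for predicate, action in self.rules:
            if predicate(self.context):     # business rule evaluation
                action(self.context)        # adapt the running instance

engine = ContextEngine()
engine.add_rule(lambda ctx: ctx.get("fuel_price", 0) > 2.0,
                lambda ctx: print("adapt running instance: reroute shipment"))
engine.on_event("fuel_price", 1.8)  # no rule fires
engine.on_event("fuel_price", 2.4)  # significant context change -> adaptation
```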

Findings

The results provide an improvement over the current situation of static variable instantiation of business processes with local information. The proposed architecture extends the well-known combination of business rules and BPM systems with a context engine based on CEP.

Research limitations/implications

The resulting architecture for a BPM system using a context engine is generic in nature and, hence, needs to be contextualized for situated implementations. Implementation success depends on the availability of context information and process compensation options.

Practical implications

Practitioners receive advice on a reference architecture and technology choices for implementing systems, which can provide and monitor context information for business processes as well as intervene and adapt the execution.

Originality/value

Currently, there is no multi-purpose non-proprietary context engine based on CEP or any other technology available for BPM, which facilitates the adaptation of processes at run-time due to changes in context variables. This paper will stimulate a debate between research and practice on suitable design and technology.

Details

Business Process Management Journal, vol. 25 no. 6
Type: Research Article
ISSN: 1463-7154

Keywords

Article
Publication date: 19 September 2019

Gayatri Nayak and Mitrabinda Ray

Test suite prioritization technique is the process of modifying the order in which tests run to meet certain objectives. Early fault detection and maximum coverage of source code…

Abstract

Purpose

Test suite prioritization is the process of modifying the order in which tests run to meet certain objectives. Early fault detection and maximum coverage of source code are the main objectives of testing. Several test suite prioritization approaches have been proposed for the maintenance phase of the software development life cycle, but few works address prioritizing test suites that satisfy the modified condition/decision coverage (MC/DC) criterion, which is derived for safety-critical systems; MC/DC testing is mandatory for Level A software according to the RTCA/DO-178C standard. The paper aims to discuss this issue.

Design/methodology/approach

This paper provides a novel method to prioritize the test suites for a system that includes MC/DC criteria along with other important criteria that ensure adequate testing.

Findings

In this approach, the authors generate test suites from the input Java program using concolic testing. These test suites are used to measure MC/DC% by means of the coverage calculator algorithm. The MC/DC% and execution time of the test suites are then fed into a basic particle swarm optimization technique with a modified objective function to prioritize the generated test suites.
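
As a rough sketch of the kind of modified objective function such a prioritization might use, the code below rewards high MC/DC coverage early in the order and penalizes early execution time; the weights, position discount and suite names are hypothetical, and the paper's actual PSO formulation is not reproduced here (exhaustive search stands in for the swarm):

```python
from itertools import permutations

def fitness(order, mcdc, exec_time, w_cov=0.7, w_time=0.3):
    """Score a test-suite ordering. mcdc maps suite -> MC/DC fraction;
    exec_time maps suite -> execution time pre-normalized to [0, 1].
    Weights w_cov and w_time are illustrative, not the paper's values."""
    score = 0.0
    for position, suite in enumerate(order, start=1):
        # Earlier positions weigh more (1/position), so high-coverage,
        # fast suites are pushed toward the front of the order.
        score += (w_cov * mcdc[suite] - w_time * exec_time[suite]) / position
    return score

mcdc = {"s1": 0.9, "s2": 0.4, "s3": 0.7}
exec_time = {"s1": 0.5, "s2": 0.1, "s3": 0.3}
best = max(permutations(mcdc), key=lambda o: fitness(o, mcdc, exec_time))
print(best)  # ordering that front-loads coverage per unit time
```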

Originality/value

The proposed approach maximizes MC/DC% and minimizes the execution time of the test suites. The effectiveness of this approach is validated by experiments on 20 moderate-sized Java programs using the average percentage of faults detected (APFD) metric.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 12 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 3 July 2009

Ahmed Attar, Mohamed Amine Boudjakdji, Nadia Bhuiyan, Khaled Grine, Said Kenai and Ali Aoubed

The purpose of this paper is to show how the time frame for the execution of a construction project in Algeria is rarely respected because of organizational problems and…

Abstract

Purpose

The purpose of this paper is to show how the time frame for the execution of a construction project in Algeria is rarely respected because of organizational problems and uncertainties encountered while the execution is underway.

Design/methodology/approach

A case study on the construction of a metro station is used as a pilot project to show the effectiveness of replacing traditional construction processes with more innovative procedures. Concurrent engineering (CE) is applied to optimize the execution time of the underground structure. A numerical simulation is integrated into the construction process in order to update design parameters with the real site conditions observed during construction.

Findings

The results show that the implementation of CE is efficient in reducing the completion time, with an 18 per cent reduction observed in this case study. A cost reduction of 20 per cent on the steel frame support and a total cost reduction of 3 per cent were obtained.

Research limitations/implications

The study demonstrates that the application of CE methods can be quite valuable in large, complex construction projects. However, promoting CE as "the solution" for containing schedule delays and controlling quality and cost might be an issue for local construction enterprises in Algeria.

Originality/value

Using the concept of CE to overlap the different activities involved in a construction project, and making use of simulation tools at different stages of the execution, resulted in modifying the excavation method and hence reducing completion times.

Details

Engineering, Construction and Architectural Management, vol. 16 no. 4
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 6 April 2010

Evi Syukur and Seng Wai Loke

Pervasive computing environments such as a pervasive campus domain, shopping, etc. will become commonplace in the near future. The key to enhancing these system environments with…

Abstract

Purpose

Pervasive computing environments such as a pervasive campus domain, shopping, etc. will become commonplace in the near future. The key to enhancing these environments with services lies in the ability to effectively model and represent contextual information, as well as spontaneity in downloading and executing the service interface on a mobile device. The system needs to provide an infrastructure that handles the interaction between a client device that requests a service and a server that responds to the client's request via Web service calls. The system should relieve end-users of the low-level tasks of matching services with locations or other context information. Mobile users need no knowledge of where a service resides, how to call it, what its API details are, or how to execute it once downloaded; all these low-level tasks are handled implicitly by the system. The aim of this paper is to investigate the notion of context-aware regulated services and how they should be designed and implemented.
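
A minimal sketch of the service-matching idea, assuming a simple registry keyed by context; the domain, locations and URLs are hypothetical, and the actual MHS system resolves and executes services via Web service calls rather than a lookup table:

```python
# Registry keyed by (domain, location); entries are illustrative only.
SERVICE_REGISTRY = {
    ("campus", "library"): "http://example.org/services/quiet-study",
    ("campus", "cafeteria"): "http://example.org/services/menu",
}

def resolve_service(domain: str, location: str):
    """Resolve a service for the user's current context. The caller never
    needs to know where the service resides or what its API looks like."""
    return SERVICE_REGISTRY.get((domain, location))

print(resolve_service("campus", "library"))  # matched implicitly by context
```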

Design/methodology/approach

The paper presents a detailed design, and prototype implementation of the system, called mobile hanging services (MHS), that provides the ability to execute mobile code (service application) on demand and control entities' behaviours in accessing services in pervasive computing environments. Extensive evaluation of this prototype is also provided.

Findings

The framework presented in this paper enables a novel contextual services infrastructure that allows services to be described at a high level of abstraction and to be regulated by contextual policies. This contextual policy governs the visibility and execution of contextual services in the environment. In addition, a range of contextual services is developed to illustrate different types of services used in the framework.

Originality/value

The main contribution of this paper is a high‐level model of a system for context‐aware regulated services, which consists of environments (domains and spaces), contextual software components, entities and computing devices.

Details

International Journal of Pervasive Computing and Communications, vol. 6 no. 1
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 18 October 2011

Nodir Kodirov, Doo‐Hyun Kim, Junyeong Kim, Seunghwa Song and Changjoo Moon

The purpose of this paper is to make performance improvements and timely critical execution enhancements for operational flight program (OFP). The OFP is core software of…

Abstract

Purpose

The purpose of this paper is to improve the performance and time-critical execution of the operational flight program (OFP), the core software of the autonomous control system of a small unmanned helicopter.

Design/methodology/approach

In order to meet the time constraints and enhance control application performance, two major improvements were made in the real-time operating system (RTOS) kernel: the thread scheduling algorithm and a lock-free thread message communication mechanism. Both directly affect system efficiency, and indirectly affect the execution stability of the helicopter control application through improved deadline-keeping characteristics.
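
A minimal sketch of EDF dispatching, assuming a heap of ready threads keyed by absolute deadline; the thread names are illustrative, and the actual kernel work (including the NBB lock-free communication mechanism) is far more involved than this:

```python
import heapq

class EDFScheduler:
    """Earliest-deadline-first: always dispatch the ready thread whose
    absolute deadline is soonest."""
    def __init__(self):
        self._ready = []  # heap of (deadline, thread_name)

    def make_ready(self, name, deadline):
        heapq.heappush(self._ready, (deadline, name))

    def pick_next(self):
        if not self._ready:
            return None  # idle: no thread is ready to run
        _, name = heapq.heappop(self._ready)
        return name

sched = EDFScheduler()
sched.make_ready("telemetry", deadline=30)
sched.make_ready("attitude_control", deadline=10)  # tightest deadline
sched.make_ready("logging", deadline=100)
print(sched.pick_next())  # attitude_control is dispatched first under EDF
```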

Findings

In this paper, the suitability of the earliest deadline first (EDF) scheduling algorithm and the non-blocking buffer (NBB) mechanism is illustrated with experimental and practical applications. The results show that EDF yields around 15 per cent more timely execution, and NBB improves the kernel's responsiveness by around 35 per cent, measured in terms of thread context switches and CPU utilization. These results apply to the OFP implemented over the embedded configurable operating system (eCos) RTOS on an x86 architecture-based board.

Practical implications

This paper illustrates the applicability of a deadline-based real-time scheduling algorithm and a lock-free kernel communication mechanism for performance enhancement and time-critical execution of an autonomous unmanned aerial vehicle control system.

Originality/value

This paper illustrates a novel approach to extending RTOS kernel modules based on an unmanned aerial vehicle control application execution scenario. A lock-free thread communication mechanism is implemented and tested for applicability in the RTOS. The relationship between UAV physical and computation modules is clearly illustrated via appropriate unified modelling language (UML) collaboration and state diagrams. As the experimental tests were conducted not only for a particular application but also for various producer/consumer scenarios, they adequately demonstrate the applicability of the extended kernel modules for general use.
