Search results

1 – 10 of 823
Article
Publication date: 10 August 2010

Qing Yang, Hongwei Wang, Wan Hu and Wang Lijuan

Abstract

Purpose

In grid-based simulation, the resources an application needs are distributed across the grid environment as grid services, and time management is a key problem in the simulation system. Grid workflow makes it convenient for grid users to manage and execute grid services, but it emphasizes process and offers no time management. A temporally constrained grid workflow model is therefore proposed, based on grid workflow with temporal constraints, to schedule resources and manage time.

Design/methodology/approach

The temporally constrained grid workflow model is a distributed model: each federate has local temporal constraints and interactive temporal constraints with other federates. Given the deadlines and durations of the grid services, the time-management problem is a distributed constraint satisfaction problem (DCSP). The multi-asynchronous weak-commitment search (Multi-AWS) algorithm is an approach to solving a DCSP, so a practical example of a simulation-project-based grid system is presented to illustrate the application of the Multi-AWS algorithm.
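
The abstract gives no algorithmic detail, so the following Python sketch only illustrates the kind of temporal constraint check that the DCSP formulation implies; the service names, start times, durations and deadlines are hypothetical, and the comment indicates where an AWS-style revision of start times would apply.

```python
# Minimal sketch (not the authors' implementation): the temporal constraints
# implied by the paper's DCSP formulation. Each grid service has a duration
# and a deadline; precedence constraints link services in different federates.
from dataclasses import dataclass

@dataclass
class GridService:
    name: str
    start: float      # candidate start time chosen by a federate
    duration: float   # execution time of the grid service
    deadline: float   # latest allowed finish time

def violates_local_constraint(s: GridService) -> bool:
    """A service violates its local constraint if it cannot finish by its deadline."""
    return s.start + s.duration > s.deadline

def violates_precedence(pred: GridService, succ: GridService) -> bool:
    """Interactive constraint between federates: the successor may not start
    before its predecessor has finished."""
    return succ.start < pred.start + pred.duration

def consistent(services, precedences) -> bool:
    """True if the current assignment of start times satisfies all constraints.
    An AWS-style search would revise the start times of violating services,
    raising their priority on each weak commitment."""
    if any(violates_local_constraint(s) for s in services):
        return False
    return not any(violates_precedence(p, q) for p, q in precedences)

# Example: two services in different federates linked by a precedence constraint.
a = GridService("simulate", start=0.0, duration=4.0, deadline=5.0)
b = GridService("visualise", start=4.0, duration=2.0, deadline=8.0)
print(consistent([a, b], [(a, b)]))  # True for this assignment
```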

Findings

The temporally constrained grid workflow is based on temporal reasoning and on a grid workflow description of grid services.

Originality/value

A new problem, scheduling resources and managing time in grid-based simulation, is identified, and the approach to resolving it is applied to a practical example.

Details

Kybernetes, vol. 39 no. 8
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 12 August 2014

Sucha Smanchat and Suchon Sritawathon

Abstract

Purpose

This paper aims to propose a scheduling technique for parameter sweep workflows, which are used in parametric studies and optimization. When such workflows are executed as multiple parallel instances in the grid environment, bottlenecks and load balancing must be addressed to achieve efficient execution.

Design/methodology/approach

A bottleneck detection approach is based on commonly known performance metrics of grid resources. To address load balancing, a resource requirement similarity metric is introduced to determine the likely distribution of tasks across available grid resources, referred to as an execution context. The presence of a bottleneck and the execution context are used in the main algorithm, named ABeC, to schedule tasks selectively at run-time and achieve a better overall execution time, or makespan.
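
The abstract does not give the exact similarity formula, so the sketch below uses a cosine similarity over per-task resource-requirement vectors as a stand-in for the resource requirement similarity metric; the requirement dimensions (CPU, memory, I/O) and values are assumptions for illustration only.

```python
# Illustrative sketch only: a cosine similarity over task requirement vectors,
# standing in for the paper's resource requirement similarity metric.
import math

def requirement_similarity(task_a: dict, task_b: dict) -> float:
    """Similarity of two tasks' resource requirements (e.g. CPU, memory, I/O).
    Values near 1.0 mean the tasks would compete for the same kinds of
    resources; values near 0 mean they can be spread across resources."""
    keys = sorted(set(task_a) | set(task_b))
    va = [task_a.get(k, 0.0) for k in keys]
    vb = [task_b.get(k, 0.0) for k in keys]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(y * y for y in vb))
    return dot / norm if norm else 0.0

# Two CPU-heavy tasks are likely to load the same resources ...
print(requirement_similarity({"cpu": 8, "mem": 2}, {"cpu": 6, "mem": 1}))  # high
# ... while a CPU-heavy and an I/O-heavy task can usually be distributed apart.
print(requirement_similarity({"cpu": 8, "mem": 2}, {"io": 9, "mem": 1}))   # low
```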

Findings

In simulations against four existing algorithms across several scenarios, the proposed technique performs at least as well as the existing algorithms in most cases and achieves better performance when the scheduled workflows have a parallel structure.

Originality/value

The bottleneck detection and load balancing proposed in this paper require only common resource and task information, rendering them applicable to most workflow systems. Through this selective behaviour, the proposed scheduling technique may help reduce the time required to execute multiple parallel instances of a grid workflow.

Details

International Journal of Web Information Systems, vol. 10 no. 3
Type: Research Article
ISSN: 1744-0084

Book part
Publication date: 16 January 2024

Ayodeji E. Oke and Seyi S. Stephen

Abstract

The interaction of systems through a designated control channel has improved communication, efficiency, management, storage, processing and more across several industries. The construction industry thrives on a well-planned workflow rhythm; a change in environmental dynamism will have either a positive or a negative impact on the output of the project planned for execution. This raises the need for effective collaboration through workflow and project planning: grid application in construction facilitates the relationship between the project reality and the end users, with the aim of improving resource and value management. However, decentralisation of close-domain control can cause uncertainty and incompleteness of data, which can be a significant factor, especially when a complex project is being executed.

Details

A Digital Path to Sustainable Infrastructure Management
Type: Book
ISBN: 978-1-83797-703-1

Article
Publication date: 30 May 2008

Soha Maad and Brian Coghlan

Abstract

Purpose

The purpose of this paper is to overview key features of grid portals and e‐government portals and assess the potential for using features of the former in the latter. In the context of this paper, grid portals are defined as graphical user interfaces that a user employs to interact with one or more grid infrastructural resources.

Design/methodology/approach

The paper classifies grid portals into five categories and two development frameworks and, based on this classification, overviews ten existing grid portals. The overview covers, where possible, the developers, the objective, the implementation and the features of the considered grid portals. For e-government, the paper focuses on an overview of a typical e-government portal and best design practices. Based on the overviews of grid portals and the typical e-government portal, the paper assesses the potential benefit of grid portals in meeting the critical success factors for e-government, identified as integration, knowledge management, personalization and customer engagement. The results are tabulated, analysed and discussed.

Findings

Many of the features of existing grid portals have the potential to be used within an e-government portal, but the lack of any in-depth study of the nature of the e-government application domain (from a technical and social perspective) in line with grid development makes this potential far from reachable at this stage. This is disappointing but does highlight opportunities.

Practical implications

This paper motivates a deeper analysis and study of the potential use of the grid for e-government. The grid infrastructure promises solutions for various application domains, including e-government.

Originality/value

This paper explores the potential of a technology infrastructure for e-government, based on a novel dual overview and evaluation of the technology and the application domain. The paper can serve as a basis and reference for further research in areas including, among others, technology infrastructures for e-government, grid development for various application domains, benchmarking of grid utility and usability for various application domains, grid gateways, and emerging technologies to meet the critical success factors for e-government.

Details

Transforming Government: People, Process and Policy, vol. 2 no. 2
Type: Research Article
ISSN: 1750-6166

Article
Publication date: 12 June 2009

Alexander Voss and Rob Procter

Abstract

Purpose

The purpose of this paper is to investigate the implications of the emergence of virtual research environments (VREs) and related e‐research tools for scholarly work and communications processes.

Design/methodology/approach

The concepts of VREs and of e‐research more generally are introduced and relevant literature is reviewed. On this basis, the authors discuss the developing role they play in research practices across a number of disciplines and how scholarly communication is beginning to evolve in response to the opportunities these new tools open up and the challenges they raise.

Findings

Virtual research environments are beginning to change the ways in which researchers go about their work and how they communicate with each other and with other stakeholders such as publishers and service providers. The changes are driven by the changing landscape of data production, curation and (re‐)use, by new scientific methods, by changes in technology supply and the increasingly interdisciplinary nature of research in many domains.

Research limitations/implications

The paper is based on observations drawn from a number of projects in which the authors are investigating the uptake of advanced ICT in research. The paper describes the role of VREs as enablers of changing research practices and the ways in which they engender changes in scholarly work and communications.

Practical implications

Librarians and other information professionals need to be aware of how advanced ICTs are being used by researchers to change the ways they work and communicate. Through their experiences with the integration of virtual learning environments within library information services, they are well placed to inform developments that may well change scholarly communications fundamentally.

Originality/value

The paper contributes to emerging discussions about the likely trajectory and impact of advanced ICTs on research and their implications for those, such as librarians and other information professionals, who occupy important support roles.

Details

Library Hi Tech, vol. 27 no. 2
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 5 August 2014

Kamran Munir, Saad Liaquat Kiani, Khawar Hasham, Richard McClatchey, Andrew Branson and Jetendr Shamdasani

Abstract

Purpose

The purpose of this paper is to provide an integrated analysis base to facilitate computational neuroscience experiments, following a user-led approach to provide access to the integrated neuroscience data and to enable the analyses demanded by the biomedical research community.

Design/methodology/approach

The design and development of the N4U analysis base and related information services addresses the existing research and practical challenges by offering an integrated medical data analysis environment with the necessary building blocks for neuroscientists to optimally exploit neuroscience workflows, large image data sets and algorithms to conduct analyses.

Findings

The provision of an integrated e-science environment for computational neuroimaging can enhance the prospects, speed and utility of the data analysis process for neurodegenerative diseases.

Originality/value

The N4U analysis base enables biomedical data analyses by indexing and interlinking the neuroimaging and clinical study data sets stored on the grid infrastructure, together with algorithms, scientific workflow definitions and their associated provenance information.
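
As a loose illustration of what indexing and interlinking data sets, algorithms, workflow definitions and provenance might involve, the sketch below builds a toy in-memory index; the entry names, grid locations and structure are hypothetical and are not taken from the N4U analysis base.

```python
# Hypothetical sketch: a tiny index that interlinks data sets, algorithms and
# workflow runs with provenance records. Not the N4U schema or API.
index = {
    "datasets":   {"mri_baseline": {"location": "lfn:/grid/example/mri/baseline"}},
    "algorithms": {"cortical_thickness": {"location": "lfn:/grid/example/algs/ct"}},
    "workflows":  {"ct_pipeline": {"steps": ["cortical_thickness"]}},
    "provenance": [],
}

def record_run(workflow: str, dataset: str, outputs: list) -> None:
    """Interlink a workflow execution with its input data set and outputs,
    so a later analysis can trace each result back to its sources."""
    index["provenance"].append(
        {"workflow": workflow, "dataset": dataset, "outputs": outputs}
    )

record_run("ct_pipeline", "mri_baseline", ["thickness_map_001"])
print(index["provenance"][0])
```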

Details

Journal of Systems and Information Technology, vol. 16 no. 3
Type: Research Article
ISSN: 1328-7265

Article
Publication date: 29 March 2013

Peter Paul Beran, Elisabeth Vinek and Erich Schikuta

Abstract

Purpose

The optimization of quality‐of‐service (QoS) aware service selection problems is a crucial issue in both grids and distributed service‐oriented systems. When several implementations per service exist, one has to be selected for each workflow step. This paper aims to address these issues.

Design/methodology/approach

The authors propose several heuristics, with a specific focus on blackboard and genetic algorithms. Their applicability and performance have already been assessed for static systems; to cover real-world scenarios, the approaches must also deal with the dynamics of distributed systems.
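
As a rough illustration of the genetic-algorithm heuristic named above, the sketch below selects one implementation per workflow step so that total response time is minimized; the QoS values, fitness function, operators and parameters are all assumptions for illustration, not the authors' configuration.

```python
# Minimal GA sketch for QoS-aware service selection: choose one implementation
# per workflow step to minimise total response time. Illustrative only.
import random

# Hypothetical QoS data: response time of each implementation, per workflow step.
response_time = [
    [1.2, 0.8, 1.5],   # implementations available for step 0
    [2.0, 1.1],        # implementations available for step 1
    [0.5, 0.9, 0.7],   # implementations available for step 2
]

def fitness(selection):
    """Lower total response time over all workflow steps is better."""
    return sum(response_time[step][impl] for step, impl in enumerate(selection))

def random_selection():
    return [random.randrange(len(opts)) for opts in response_time]

def mutate(selection, rate=0.2):
    return [random.randrange(len(response_time[i])) if random.random() < rate else g
            for i, g in enumerate(selection)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [random_selection() for _ in range(20)]
for _ in range(50):                        # generations
    population.sort(key=fitness)
    parents = population[:10]              # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = min(population, key=fitness)
print(best, fitness(best))
```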

Findings

The proposed algorithms prove their feasibility in terms of scalability and runtime performance, taking into account their adaptability to system changes.

Research limitations/implications

In this paper, the authors propose a representation of the dynamic aspects of distributed systems and enhance their algorithms to efficiently capture them.

Practical implications

By combining both algorithms, the authors envision a global approach to QoS‐aware service selection applicable to static and dynamic systems.

Originality/value

The authors prove the feasibility of their hybrid approach by deploying the algorithms in a cloud environment (Google App Engine) that allows different system configurations to be simulated and evaluated.

Details

International Journal of Web Information Systems, vol. 9 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 2 March 2015

Liu Jia and Xie Kefan

Abstract

Purpose

After a disaster occurs, emergency supplies should arrive at the disaster area in the shortest possible time, so it is of pivotal importance to speed up the preparation and scheduling process. In other words, only when the preparation and scheduling processes are well coordinated can emergency supplies arrive at the disaster area in time. The purpose of this paper is therefore to explore a method that can strengthen this coordination in various kinds of situations.

Design/methodology/approach

The paper first elaborates the preparation and scheduling process of emergency supplies in disasters. It then establishes a workflow simulation system of emergency supplies preparation and scheduling based on Petri nets. Afterward, the paper proposes a simplified simulation system of emergency supplies preparation and scheduling that can be employed in actual emergency response. Finally, the paper takes the China Lushan Earthquake as a case study.
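
As a minimal illustration of a Petri net-based workflow model, the sketch below fires transitions over a two-step supplies process; the places and transitions are hypothetical stand-ins, since the paper's actual net is not given in the abstract.

```python
# Toy Petri-net fragment, only to illustrate the kind of workflow model used.
# Marking: number of tokens in each place.
marking = {"disaster_reported": 1, "supplies_prepared": 0, "supplies_dispatched": 0}

# Transitions: (input places, output places). A transition is enabled when every
# input place holds at least one token; firing it moves tokens forward.
transitions = {
    "prepare_supplies":  ({"disaster_reported"}, {"supplies_prepared"}),
    "dispatch_supplies": ({"supplies_prepared"}, {"supplies_dispatched"}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[p] > 0 for p in inputs)

def fire(name):
    inputs, outputs = transitions[name]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

# Run the workflow until no transition is enabled.
while any(enabled(t) for t in transitions):
    fire(next(t for t in transitions if enabled(t)))

print(marking)  # all tokens end up in 'supplies_dispatched'
```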

Findings

By employing the simulation system proposed in this paper, decision makers can simulate the whole emergency supplies preparation and scheduling process, which can help them find a way to optimize the current process. Specifically, by analyzing the simulation results, the government can draw the following conclusions: first, whether the preparation and scheduling process of emergency supplies can be sped up; second, which part of the process should be improved to realize that acceleration; third, the workload of the staff and experts; and fourth, whether it is necessary to add staff or experts to work in parallel.

Originality/value

This paper proposes a system that coordinates the preparation and scheduling processes of emergency supplies in disasters, and employs a Petri net-based workflow model to simulate it. The simulation results show that the designed system is reasonable and can be used in practical decision making on the preparation and scheduling of emergency supplies.

Details

Kybernetes, vol. 44 no. 3
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 16 September 2021

Prashant Kumar Sinha, Sagar Bhimrao Gajbe, Sourav Debnath, Subhranshubhusan Sahoo, Kanu Chakraborty and Shiva Shankar Mahato

Abstract

Purpose

This work provides a generic review of existing data mining ontologies (DMOs) and a base platform for ontology developers and researchers to gauge these ontologies for satisfactory coverage and usage.

Design/methodology/approach

The study uses a systematic literature review approach to identify 35 DMOs published in the domain between 2003 and 2021. Various parameters, such as purpose, design methodology, operations used and language representation, are available in the literature for reviewing ontologies. To these, a few further parameters, such as the semantic reasoner used and the knowledge representation formalism, were added, and a list of 20 parameters was prepared. The list was then segregated into two groups, generic parameters and core parameters, to review the DMOs.

Findings

It was observed that, among the 35 papers under study, 26 were published between 2006 and 2016. Larisa Soldatova, Saso Dzeroski and Pance Panov were the most productive authors of these DMO-related publications. The ontological review indicated that most of the DMOs were domain and task ontologies. The majority of the ontologies were formal, modular and represented using the Web Ontology Language (OWL). The data revealed that Ontology Development 101 and METHONTOLOGY were the preferred design methodologies, and application-based approaches were preferred for evaluation. Around eight ontologies were accessible, and among them three were also available in ontology libraries. The most reused ontologies were OntoDM, BFO, OBO-RO, OBI, IAO, OntoDT, SWO and DMOP. The most preferred ontology editor was Protégé, whereas the most used semantic reasoner was Pellet. Ontology metrics were available for 16 of the DMOs.

Originality/value

This paper carries out a basic-level review of DMOs using a parametric approach, making this study the first of its kind for the review of DMOs.

Details

Data Technologies and Applications, vol. 56 no. 2
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 23 August 2013

Sattanathan Subramanian, Paweł Sztromwasser, Pål Puntervoll and Kjell Petersen

Abstract

Purpose

eScience workflows use orchestration to integrate and coordinate distributed and heterogeneous scientific resources, which are increasingly exposed as web services. The rate of growth of scientific data makes eScience workflows data-intensive and challenges existing workflow solutions, so efficient methods of handling large data in web service-based scientific workflows are needed. The purpose of this paper is to address this issue.

Design/methodology/approach

In a previous paper the authors proposed Data-Flow Delegation (DFD) as a means to optimize orchestrated workflow performance, focusing on SOAP web services. To improve performance further, in this paper they propose Pipelined Data-Flow Delegation (PDFD) for web service-based eScience workflows, drawing on techniques from the domain of parallel programming. Briefly, PDFD allows large datasets to be partitioned into independent subsets that can be communicated in a pipelined manner.
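
As a language-level analogy for this idea (not the authors' XML Schema-based protocol), the sketch below partitions a data set into independent chunks and streams each chunk through the next workflow step as soon as it is ready; the step names and the toy data are hypothetical.

```python
# Illustrative sketch: generators mimic pipelined communication of partitioned
# data between workflow steps, instead of transferring the whole data set first.

def partition(dataset, chunk_size):
    """Split the data set into independent subsets."""
    for i in range(0, len(dataset), chunk_size):
        yield dataset[i:i + chunk_size]

def step_align(chunks):
    """First workflow step, applied chunk by chunk (placeholder computation)."""
    for chunk in chunks:
        yield [record.upper() for record in chunk]

def step_annotate(chunks):
    """Second workflow step; it starts as soon as the first chunk arrives."""
    for chunk in chunks:
        yield [record + "*" for record in chunk]

reads = ["acgt", "ttag", "ggca", "catg"]          # stand-in for a large data set
for result in step_annotate(step_align(partition(reads, 2))):
    print(result)
```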

Findings

The results show that the PDFD improves the execution time of the workflow considerably and is capable of handling much larger data than the non‐pipelined approach.

Practical implications

Execution of a web service‐based workflow hampered by the size of data can be facilitated or improved by using services supporting Pipelined Data‐Flow Delegation.

Originality/value

Contributions of this work include the proposed concept of combining pipelining and Data‐Flow Delegation, an XML Schema supporting the PDFD communication between services, and the practical evaluation of the PDFD approach.

Details

International Journal of Web Information Systems, vol. 9 no. 3
Type: Research Article
ISSN: 1744-0084
