Search results

1 – 10 of over 4000
Article
Publication date: 13 November 2009

Yaojun Han, Changjun Jiang and Xuemei Luo

Abstract

Purpose

The purpose of this paper is to present a scheduling model, scheduling algorithms, and a formal model and analysis techniques for concurrent transactions in a grid database environment.

Design/methodology/approach

Classical transaction models and scheduling algorithms developed for homogeneous distributed architectures will not work in the grid architecture and should be revisited for this new and evolving architecture. The conventional model is improved by a three‐level transaction scheduling model, and the scheduling algorithms for concurrent transactions are improved by treating a transaction's transmission time, the user's priority, and the number of database sites the transaction accesses as components of the transaction's priority. To address the problems of analyzing and modeling transaction scheduling in a grid database, a colored dynamic time Petri net (CDTPN) model is proposed. The reachability of the transaction scheduling model is then analyzed.
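
To make the priority computation concrete, the sketch below (not from the paper; the weights, signs and field names are assumptions) shows one way a scheduler could combine transmission time, user priority, and the number of accessed sites into a single transaction priority:

    from dataclasses import dataclass

    @dataclass
    class Transaction:
        transmission_time: float  # estimated time to transmit the transaction's data between sites
        user_priority: int        # larger value means a more important user
        sites_accessed: int       # number of database sites the transaction touches

    def priority(t, w_user=1.0, w_time=0.5, w_sites=0.2):
        # Illustrative composite: a higher user rank raises the priority, while a longer
        # transmission time and more accessed sites lower it; the actual weighting used
        # by the paper's algorithms is not specified here.
        return w_user * t.user_priority - w_time * t.transmission_time - w_sites * t.sites_accessed

    pending = [Transaction(2.5, 3, 4), Transaction(0.8, 1, 2), Transaction(1.2, 5, 6)]
    schedule = sorted(pending, key=priority, reverse=True)  # highest priority scheduled first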

Findings

The three‐level transaction scheduling model not only supports the autonomy of the grid but also reduces the communication load. Compared with classical transaction scheduling algorithms, the proposed algorithms not only preserve data correctness but also improve the effectiveness of the system. The CDTPN model is convenient for modeling and analyzing the dynamic performance of grid transactions. Important results such as the abort ratio and turnover time are obtained by analyzing the reachability of the CDTPN.

Originality/value

The three‐level transaction scheduling model and improved scheduling algorithms with a more complex priority are presented in the paper. The paper gives a CDTPN model for modeling transaction scheduling in a grid database. In the CDTPN model, the time interval of a transition is a function of the tokens in the transition's input places.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 28 no. 6
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 28 September 2010

Yong Hu, Dianliang Wu, Xiumin Fan and Xijin Zhen

Abstract

Purpose

Owing to the numerous part models and massive datasets used in automobile assembly design, virtual assembly software cannot simulate a whole vehicle smoothly in real time. For this reason, implementing a new virtual assembly environment for massive complex datasets would be a significant achievement. The paper aims to focus on this problem.

Design/methodology/approach

A new system named the “Grid‐enabled collaborative virtual assembly environment” (GCVAE) is proposed in the paper. It comprises three parts: a private grid‐based support platform running on an enterprise's internal network; a service‐based parallel rendering framework with a sort‐last structure; and a multi‐user collaborative virtual assembly environment. Together these components aggregate the idle resources in an enterprise to support assembly simulation of a large, complex whole‐vehicle scene.
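
As a rough illustration of the sort‐last structure mentioned above (a generic sketch, not the GCVAE implementation; the per‐part rasterize interface is hypothetical), each render node draws only its assigned subset of parts into its own colour and depth buffers, and the partial images are then depth‐composited into the final frame:

    import numpy as np

    def render_subset(parts, width, height):
        # Stand-in for one render node: draw only this node's assigned parts.
        colour = np.zeros((height, width, 3))
        depth = np.full((height, width), np.inf)
        for part in parts:
            part.rasterize(colour, depth)  # hypothetical per-part rasterizer
        return colour, depth

    def sort_last_composite(partial_images):
        # Merge the nodes' partial images: at each pixel keep the nearest fragment.
        colour, depth = partial_images[0]
        for c, d in partial_images[1:]:
            nearer = d < depth
            colour[nearer] = c[nearer]
            depth[nearer] = d[nearer]
        return colour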

Findings

The system prototype proposed in the paper has been implemented. Simulations show that it can support a complex scene in real time using existing hardware and software, and that it promotes efficient use of enterprise resources.

Practical implications

Using the GCVAE, it is possible to aggregate the idle resources in an enterprise to run assembly simulations of a whole automobile with massively complex scenes, thus noticeably reducing fault occurrence rates in future manufacturing.

Originality/value

The paper introduces a new grid‐enabled methodology into research on collaborative virtual assembly system which can make the best use of idle resources in the enterprise to support assembly simulations with massively complex product models. A video‐stream‐based method was used to implement the system; this enables designers to participate ubiquitously in the simulation to evaluate the assembly of the whole automobile without hardware limitations.

Details

Assembly Automation, vol. 30 no. 4
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 1 April 1997

Gil McWilliam

Abstract

States that poor brand management has been held responsible for brands with which consumers have low levels of involvement, that is, consumers do not consider them important in decision‐making terms and consequently appear unthinking and even uncaring about their choices. Argues that if this is the case, then the vast amounts of effort and expenditure invested in brands across many grocery and fast‐moving consumer goods categories are potentially misplaced. Discusses the nature of high‐ and low‐involvement decision making for brands. Presents research which shows that the level of involvement is largely determined at the category level, not the brand level. It is therefore beyond the scope of brand management to alter these involvement perceptions, unless managers are able to create new categories or sub‐categories for their brands. Argues that this is the real challenge of brand management.

Details

Marketing Intelligence & Planning, vol. 15 no. 2
Type: Research Article
ISSN: 0263-4503

Open Access
Article
Publication date: 29 June 2020

Paolo Manghi, Claudio Atzori, Michele De Bonis and Alessia Bardi

Abstract

Purpose

Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate scholarly/scientific communication entities such as publications, authors, datasets, organizations, projects, funders, etc. Depending on the target users, access can vary from searching and browsing content to consuming statistics for monitoring and the provision of feedback. Such graphs are populated over time as aggregations of multiple sources and therefore suffer from major entity‐duplication problems. Although deduplication of graphs is a well‐known and current problem, existing solutions are dedicated to specific scenarios, operate on flat collections, address local topology‐driven challenges and cannot therefore be re‐used in other contexts.

Design/methodology/approach

This work presents GDup, an integrated, scalable, general-purpose system that can be customized to address deduplication over arbitrarily large information graphs. The paper presents its high-level architecture and its implementation as a service used within the OpenAIRE infrastructure system, and reports figures from real-case experiments.

Findings

GDup provides the functionalities required to deliver a fully-fledged entity deduplication workflow over a generic input graph. The system offers out-of-the-box Ground Truth management, acquisition of feedback from data curators, and algorithms for identifying and merging duplicates, to obtain a disambiguated output graph.
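
As an illustration of what such a workflow involves (a generic sketch under assumed entity and similarity interfaces, not GDup's actual API), duplicates are typically found by blocking candidate pairs, matching them against a threshold or curator-confirmed ground truth, and grouping the connected duplicates for merging:

    from itertools import combinations

    def deduplicate(entities, blocking_key, similarity, threshold, ground_truth=frozenset()):
        # entities: list of dicts with an "id" field; ground_truth: set of (id, id)
        # pairs already confirmed as duplicates by data curators.
        blocks = {}
        for e in entities:
            blocks.setdefault(blocking_key(e), []).append(e)  # compare only within a block

        parent = {e["id"]: e["id"] for e in entities}  # union-find forest over entity ids

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for block in blocks.values():
            for a, b in combinations(block, 2):
                confirmed = (a["id"], b["id"]) in ground_truth or (b["id"], a["id"]) in ground_truth
                if confirmed or similarity(a, b) >= threshold:
                    parent[find(a["id"])] = find(b["id"])  # link the two duplicate groups

        groups = {}
        for e in entities:
            groups.setdefault(find(e["id"]), []).append(e)
        return list(groups.values())  # each group is later merged into one representative entity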

Originality/value

To our knowledge, GDup is the only system in the literature that offers an integrated and general-purpose solution for the deduplication of graphs while targeting big data scalability issues. GDup is today one of the key modules of the OpenAIRE infrastructure production system, which monitors Open Science trends on behalf of the European Commission, national funders and institutions.

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 22 June 2010

Imam Machdi, Toshiyuki Amagasa and Hiroyuki Kitagawa

Abstract

Purpose

The purpose of this paper is to propose general parallelism techniques for holistic twig join algorithms to process queries against Extensible Markup Language (XML) databases on a multi‐core system.

Design/methodology/approach

The parallelism techniques comprised data and task parallelism. For data parallelism, the paper adopted stream‐based partitioning of XML to partition the XML data as the basis of parallelism across multiple CPU cores. The XML data partitioning was performed at two levels. The first level created buckets to establish data independence and balance loads among CPU cores; each bucket was assigned to a CPU core. Within each bucket, a second level of partitioning created finer partitions to provide finer‐grained parallelism. Each CPU core performed the holistic twig join algorithm on its own finer partitions in parallel with the other cores. For task parallelism, the holistic twig join algorithm was decomposed into two main tasks, which were pipelined to create parallelism. The first task adopted the data parallelism technique, and its outputs were transferred to the second task periodically. Since data transfers incurred overheads, the size of each data transfer needed to be estimated carefully to achieve optimal performance.
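
The sketch below illustrates the two‐level partitioning idea in generic form (this is not the paper's implementation; the chunking granularity, the twig_join placeholder and the use of a thread pool are assumptions made to keep the example self‐contained):

    from concurrent.futures import ThreadPoolExecutor

    def make_buckets(streams, n_cores):
        # First level: spread the XML streams over one bucket per core,
        # placing the largest streams first to roughly balance the load.
        buckets = [[] for _ in range(n_cores)]
        for i, stream in enumerate(sorted(streams, key=len, reverse=True)):
            buckets[i % n_cores].append(stream)
        return buckets

    def finer_partitions(bucket, grain):
        # Second level: cut each stream into fixed-size chunks so partial
        # results become available early and can be pipelined downstream.
        for stream in bucket:
            for i in range(0, len(stream), grain):
                yield stream[i:i + grain]

    def twig_join(chunk, query):
        # Placeholder for the holistic twig join on one chunk.
        return [node for node in chunk if query(node)]

    def parallel_twig_join(streams, query, n_cores=4, grain=1024):
        buckets = make_buckets(streams, n_cores)
        results = []
        with ThreadPoolExecutor(max_workers=n_cores) as pool:
            futures = [pool.submit(twig_join, chunk, query)
                       for bucket in buckets
                       for chunk in finer_partitions(bucket, grain)]
            for f in futures:
                results.extend(f.result())  # downstream task consumes partial outputs
        return results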

Findings

The data and task parallelism techniques contribute to good performance, especially for queries that have complex structures and/or higher query selectivity. The performance of data parallelism can be further improved by task parallelism. Significant performance improvement is attained for queries with higher selectivity, because more of the output computation of the second task is performed in parallel with the first task.

Research limitations/implications

The proposed parallelism techniques primarily deal with executing a single long‐running query with intra‐query parallelism, partitioning XML data on the fly, and allocating partitions to CPU cores statically. It is assumed that no dynamic XML data updates occur during parallel execution.

Practical implications

The effectiveness of the proposed parallel holistic twig joins relies fundamentally on some system parameter values that can be obtained from a benchmark of the system platform.

Originality/value

The paper proposes novel techniques that increase parallelism by combining data and task parallelism to achieve high performance. To the best of the authors' knowledge, this is the first paper to parallelize holistic twig join algorithms on a multi‐core system.

Details

International Journal of Web Information Systems, vol. 6 no. 2
Type: Research Article
ISSN: 1744-0084

Book part
Publication date: 16 January 2024

Ayodeji E. Oke and Seyi S. Stephen

Abstract

The interaction of systems through a designated control channel has improved communication, efficiency, management, storage, processing, etc. across several industries. The construction industry thrives on a well-planned workflow rhythm; a change in environmental dynamics will have either a positive or a negative impact on the output of the project planned for execution. Moreover, by raising the need for effective collaboration through workflow and project planning, grid applications in construction facilitate the relationship between the project reality and the end users, all with the aim of improving resources and value management. However, decentralisation of close-domain control can cause uncertainty and incompleteness of data, which can be a significant factor, especially when a complex project is being executed.

Details

A Digital Path to Sustainable Infrastructure Management
Type: Book
ISBN: 978-1-83797-703-1

Article
Publication date: 5 August 2014

Kamran Munir, Saad Liaquat Kiani, Khawar Hasham, Richard McClatchey, Andrew Branson and Jetendr Shamdasani

Abstract

Purpose

The purpose of this paper is to provide an integrated analysis base that facilitates computational neuroscience experiments, following a user-led approach to give access to integrated neuroscience data and to enable the analyses demanded by the biomedical research community.

Design/methodology/approach

The design and development of the N4U analysis base and related information services addresses the existing research and practical challenges by offering an integrated medical data analysis environment with the necessary building blocks for neuroscientists to optimally exploit neuroscience workflows, large image data sets and algorithms to conduct analyses.

Findings

The provision of an integrated e-science environment of computational neuroimaging can enhance the prospects, speed and utility of the data analysis process for neurodegenerative diseases.

Originality/value

The N4U analysis base enables conducting biomedical data analyses by indexing and interlinking the neuroimaging and clinical study data sets stored on the grid infrastructure, algorithms and scientific workflow definitions along with their associated provenance information.
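
As a simple illustration of what such interlinked index entries might look like (purely schematic; the field names and values are assumptions, not the N4U schema), each analysis record ties a dataset reference to the algorithm and workflow used and to their provenance:

    from dataclasses import dataclass, field

    @dataclass
    class AnalysisRecord:
        # Schematic index entry linking the pieces one analysis needs.
        dataset_uri: str                 # neuroimaging or clinical data set stored on the grid
        algorithm_id: str                # registered analysis algorithm
        workflow_id: str                 # scientific workflow definition
        provenance: dict = field(default_factory=dict)  # who ran what, when, with which inputs

    record = AnalysisRecord(
        dataset_uri="lfn://grid.example/neuro/study-042/mri",   # hypothetical grid path
        algorithm_id="cortical-thickness-v2",
        workflow_id="wf-atrophy-pipeline",
        provenance={"submitted_by": "researcher-17", "date": "2014-08-05"},
    )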

Details

Journal of Systems and Information Technology, vol. 16 no. 3
Type: Research Article
ISSN: 1328-7265

Article
Publication date: 1 November 2006

Lisa Finder, Valeda F. Dent and Brian Lym

Abstract

Purpose

The paper aims to provide details of a study conducted at Hunter College Libraries in fall 2005, the focus of which was how presentation of initial digital resource pages (or gateway pages) on the library's web site impacted students' subsequent steps in the research process.

Design/methodology/approach

A group of 16 students from English and History classes at Hunter College was recruited to participate after having had basic library instruction. The students were given computer‐based key tasks to perform in a proctored classroom setting, using the library's homepage. A second group of students was recruited to participate in two small focus groups. The methodology and exercises were developed in part using guidelines from a taxonomy of user behavior developed by librarians at Hunter College, and recommendations from the usability literature by Krug, Nielsen and Rubin.

Findings

Results from the computer‐based key‐task exercises were bifurcated. Completion rates for tasks using the in‐house‐developed Hunter College Library database grid were inferior, with fewer than 80 percent of students (37 percent to 73 percent, depending on the task) completing all the tasks successfully. Performance was better with the Serials Solutions access page and the Academic Search Premier database, both commercially developed products, for which most of the tasks were completed successfully by at least 80 percent of the students.

Originality/value

This study is unique in that the focus is not on the usability of an entire library web site but rather on the presentation of select, highly visible gateway pages that receive heavy use.

Details

The Electronic Library, vol. 24 no. 6
Type: Research Article
ISSN: 0264-0473

Open Access
Article
Publication date: 5 April 2023

Xinghua Shan, Zhiqiang Zhang, Fei Ning, Shida Li and Linlin Dai

Abstract

Purpose

With the yearly increase of mileage and passenger volume on China's high-speed railway, the problems of traditional paper railway tickets have become increasingly prominent, including the complexity of the business handling process, the low efficiency of ticket inspection, and the high cost of usage and management. Drawing extensively on successful experiences with electronic ticketing both domestically and internationally, this paper reports research on the key technologies and system implementation of a railway electronic ticket with Chinese characteristics.

Design/methodology/approach

Research is conducted on key technologies including a synchronization technique for a distributed heterogeneous database system, a grid-oriented passenger service record (PSR) data storage model, efficient access to massive PSR data under high-concurrency conditions, the linkage between face recognition service platforms and various terminals in large scenarios, and two-factor authentication of the e-ticket identification code based on a key and the user's identity information. Building on the key technologies and the architecture of the existing ticketing system, multiple service resources are expanded and developed, such as electronic ticket clusters, PSR clusters, face recognition clusters and electronic ticket identification code clusters.
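
As a rough illustration of what a key-plus-identity two-factor code can look like (a generic HMAC-based sketch; the actual code format, fields and key management used by the railway system are not described in the abstract), the identification code is derived from a carrier-held secret key and the passenger's identity and journey data, and the gate recomputes and compares it:

    import hashlib
    import hmac

    def make_eticket_code(secret_key: bytes, id_number: str, train: str, seat: str) -> str:
        # Bind the carrier's secret key (first factor) to the passenger's
        # identity and journey details (second factor).
        payload = f"{id_number}|{train}|{seat}".encode()
        return hmac.new(secret_key, payload, hashlib.sha256).hexdigest()[:16]

    def verify_eticket_code(code: str, secret_key: bytes, id_number: str, train: str, seat: str) -> bool:
        # Gate-side check: recompute from the presented identity and compare in constant time.
        expected = make_eticket_code(secret_key, id_number, train, seat)
        return hmac.compare_digest(code, expected)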

Findings

The proportion of paper tickets printed has dropped to 20%, saving more than 2 billion tickets annually since the nationwide launch of e-ticketing. The average time for passengers to pass through the automatic ticket gates has decreased from 3 seconds to 1.3 seconds, significantly improving the efficiency of passenger transport organization. Meanwhile, the problems of paper ticket counterfeiting, reselling and loss have been largely eliminated.

Originality/value

E-ticketing has laid a technical foundation for the further development of railway passenger transport services in the direction of digitalization and intelligence.

Details

Railway Sciences, vol. 2 no. 1
Type: Research Article
ISSN: 2755-0907

Article
Publication date: 1 July 2004

Wei Wang

Abstract

With the ever‐increasing impact of information technology (IT) on society, universities are pushed to deliver courses over the Internet in the form of Web courses or online study. There are relatively few studies of how students construe this new mode of delivery and study. This study aimed at a better understanding of how students construe online learning, within the framework of personal construct psychology. A total of 41 students participated in the study, generating more than 500 elements regarding the online versus traditional learning mode. The grid analysis results in terms of students' constructs are reported in this paper. The implications of understanding and facilitating the “match” between the institutional drive for online delivery and students' constructs are discussed in the context of online learning.

Details

Campus-Wide Information Systems, vol. 21 no. 3
Type: Research Article
ISSN: 1065-0741
