Search results

1 – 10 of 268
Article
Publication date: 6 November 2017

Hind Hamrouni, Fabio Grandi and Zouhaier Brahmia

Abstract

Purpose

A temporal XML database could become an inconsistent model of the represented reality after a retroactive update. Such an inconsistent state must be repaired by performing corrective actions (e.g. payment of arrears after a retroactive salary increase) either immediately (i.e. at inconsistency detection time) or in a deferred manner, at one or several chosen repair times according to application requirements. The purpose of this work is to deal with the deferred and multi-step repair of detected data inconsistencies.

Design/methodology/approach

A general approach for the deferred and stepwise repair of inconsistencies that result from retroactive updates of currency data (e.g. the salary of an employee) in a valid-time or bitemporal XML database is proposed. The approach separates inconsistency repair from the inconsistency detection phase and deals with the execution of corrective actions, which also take into account enterprise business rules that define relationships between the data.
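
As an illustration of such a corrective action, the following minimal sketch computes the arrears owed after a retroactive salary increase over a valid-time history. It is a hedged Python sketch, not the paper's algorithm: the data structure, the whole-month granularity and all names are assumptions made for illustration.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class SalaryVersion:
        amount: float          # monthly salary recorded for this version
        valid_from: date       # inclusive start of validity
        valid_to: date         # exclusive end of validity

    def arrears(history, new_amount, effective_from, repair_time):
        """Arrears owed at repair_time after a retroactive raise to
        new_amount effective from effective_from (whole months only)."""
        owed = 0.0
        for v in history:
            start = max(v.valid_from, effective_from)
            end = min(v.valid_to, repair_time)
            if start < end and new_amount > v.amount:
                months = (end.year - start.year) * 12 + end.month - start.month
                owed += (new_amount - v.amount) * months
        return owed

    # A raise to 2200 retroactive to July, repaired at the end of the year:
    hist = [SalaryVersion(2000.0, date(2023, 1, 1), date(2024, 1, 1))]
    print(arrears(hist, 2200.0, date(2023, 7, 1), date(2024, 1, 1)))  # 1200.0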

Findings

Algorithms, methods and support data structures for deferred and multi-step inconsistency repair of currency data are presented. The feasibility of the approach has been shown through the development and testing of a system prototype, named Deferred-Repair Manager.

Originality/value

The proposed approach implements a new general and flexible strategy for repairing detected inconsistencies in a deferred manner and possibly in multiple steps, according to varying user requirements and to specifications that are customary in the real world.

Details

International Journal of Web Information Systems, vol. 13 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 2 September 2019

Zouhaier Brahmia, Fabio Grandi and Rafik Bouaziz

Abstract

Purpose

Any XML schema definition can be organized according to one of the following design styles: “Russian Doll”, “Salami Slice”, “Venetian Blind” and “Garden of Eden” (with the additional “Bologna” style actually representing absence of style). Conversion from one design style to another can facilitate the reuse and exchange of schema specifications encoded using the XML schema language. Without computer-aided engineering support, style conversions must be performed very carefully, as they are difficult and error-prone operations. The purpose of this paper is to efficiently deal with such XML schema design style conversions.

Design/methodology/approach

A general approach, named StyleVolution, for the automatic management of XML schema design style conversions is proposed. StyleVolution is equipped with a suite of seven procedures: four for converting a valid XML schema from any of the other design styles to the “Garden of Eden” style, which has been chosen as the normalized XML schema format, and three for converting from the “Garden of Eden” style back to any of the other desired design styles.
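
For a flavour of what such a conversion involves, the sketch below lifts anonymous inline complex types to global named types, one step toward the “Garden of Eden” style in which all elements and types are declared globally. The paper's procedures are written in XQuery; this Python fragment is only an illustrative assumption (including its type-naming convention), and a full conversion would also globalize nested element declarations and rewrite references.

    import xml.etree.ElementTree as ET

    XS = "http://www.w3.org/2001/XMLSchema"
    ET.register_namespace("xs", XS)

    def lift_inline_types(schema_root):
        """Move each anonymous inline complexType to the top level and
        replace it with a type reference on the declaring element."""
        for elem in list(schema_root.iter(f"{{{XS}}}element")):
            inline = elem.find(f"{{{XS}}}complexType")
            if inline is not None and elem.get("name") and "type" not in elem.attrib:
                type_name = elem.get("name") + "Type"   # assumed convention
                inline.set("name", type_name)
                elem.remove(inline)                     # detach from element
                elem.set("type", type_name)             # reference it by name
                schema_root.append(inline)              # now a global type
        return schema_root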

Findings

Procedures, algorithms and methods for XML schema design style conversions are presented. The feasibility of the approach has been shown through the encoding (using the XQuery language) and the testing (with the Altova XMLSpy 2019 tool) of a suite of seven ready-to-use procedures. Moreover, four test procedures are provided for checking the conformance of a given input XML schema to a schema design style.

Originality/value

The proposed approach implements a new technique for efficiently managing XML schema design style conversions, which can be used to make any given XML schema file conform to a desired design style.

Details

International Journal of Web Information Systems, vol. 16 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 September 2005

Z.M. Ma

Abstract

Purpose

To provide a selective bibliography for researchers and practitioners interested in database modeling of engineering information, with sources that can help them develop engineering information systems.

Design/methodology/approach

Identifies the requirements for engineering information modeling and then investigates how current database models satisfy these requirements at two levels: conceptual data models and logical database models.

Findings

Presents the relationships among conceptual data models and logical database models for engineering information modeling, viewed from the perspective of database conceptual design.

Originality/value

Currently, few papers provide comprehensive discussions of how engineering information modeling can be supported by database technologies. This paper fills that gap. Its contribution is to identify directions for database research from the viewpoint of engineering applications and to provide guidance on information modeling for engineering design, manufacturing and production management.

Details

Industrial Management & Data Systems, vol. 105 no. 7
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 28 September 2012

Goran Sladić, Branko Milosavljević, Dušan Surla and Zora Konjović

Abstract

Purpose

The goal of this paper is to propose a data access control framework that is used for editing MARC‐based bibliographic databases. In cases where the bibliographic record editing activities carried out in libraries are complex and involve many people with different skills and expertise, a way of managing the workflow and data quality is needed. Enforcing access control can contribute to these goals.

Design/methodology/approach

The proposed solution for data access control enforcement is based on the well‐studied standard role‐based access control (RBAC) model. The bibliographic data, for the purpose of this system, is represented using the XML language. The software architecture of the access control system is modelled using the Unified Modelling Language (UML).
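
As a rough illustration of the kind of enforcement such a framework performs, the sketch below checks whether a role may apply an action to an XPath-addressed part of a MARC-like XML record. The roles, rules and field tags are invented for illustration; they do not reproduce the paper's model or the BISIS implementation.

    import xml.etree.ElementTree as ET
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        action: str   # e.g. "edit", "delete"
        xpath: str    # granularity: any XPath pattern over the record

    ROLE_RULES = {
        "cataloguer": [Rule("edit", ".//datafield[@tag='245']")],  # title field only
        "supervisor": [Rule("edit", ".//datafield"),
                       Rule("delete", ".//datafield")],
    }

    def is_permitted(role, action, record, node):
        """Allow the request if any rule for the role selects the node."""
        return any(r.action == action and node in record.findall(r.xpath)
                   for r in ROLE_RULES.get(role, []))

    record = ET.fromstring(
        "<record><datafield tag='245'><subfield code='a'>A title"
        "</subfield></datafield><datafield tag='100'/></record>")
    title = record.find(".//datafield[@tag='245']")
    print(is_permitted("cataloguer", "edit", record, title))    # True
    print(is_permitted("cataloguer", "delete", record, title))  # False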

Findings

The access control framework presented in this paper represents a successful application of the concepts of role‐based access control to bibliographic databases. The use of the XML language for bibliographic data representation provides the means to integrate this solution into many different library information systems, facilitates data exchange and simplifies the software implementation because of the abundance of available XML tools. The solution presented is not dependent on any particular XML schema for bibliographic records and may be used in different library environments. Its flexibility stems from the fact that access control rules can be defined at different levels of granularity and for different XML schemas.

Research limitations/implications

This access control framework is designed to handle XML documents. Library systems that utilise bibliographic databases in other formats, not easily convertible to XML, would find it difficult to integrate the framework into their environments.

Practical implications

The use of an access control enforcement framework in a bibliographic database can significantly improve the quality of data in organisations where record editing is performed by a large number of people with different skills. The examples of access control enforcement presented in this paper are extracted from the actual workflow for editing bibliographic records in the Belgrade City Library, the largest public city library in Serbia. The software implementation of the proposed framework and its integration in the BISIS library information system prove the practical usability of the framework. BISIS is currently deployed in over 40 university, public, and specialized libraries in Serbia.

Originality/value

A proposal for enforcing access control in bibliographic databases is given, and a software implementation and its integration in a library information system are presented. The proposed framework can be used in library information systems that use MARC‐based cataloguing.

Details

The Electronic Library, vol. 30 no. 5
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 28 September 2012

Dimitris Kanellopoulos

Abstract

Purpose

This paper aims to propose a system for the semantic annotation of audio‐visual media objects in the documentary domain. It presents the system's architecture, a manual annotation tool, an authoring tool and a search engine for documentary experts. The paper discusses the merits of the proposed approach of an evolving semantic network as the basis for audio‐visual content description.

Design/methodology/approach

The author demonstrates how documentary media can be semantically annotated, and how this information can be used for the retrieval of the documentary media objects. Furthermore, the paper outlines the underlying XML schema‐based content description structures of the proposed system.

Findings

Currently, a flexible organization of documentary media content descriptions and the related media data is required. Such an organization calls for an adaptable construction in the form of a semantic network. The proposed approach provides semantic structures with the capability to change and grow, allowing an ongoing, task-specific process of inspection and interpretation of source material. The approach also provides technical memory structures (i.e. information nodes), which represent the size, duration and technical format of the physical audio‐visual material of any media type, such as audio, video and 3D animation.

Originality/value

The proposed approach (architecture) is generic and facilitates the dynamic use of audio‐visual material using links, enabling the connection of multi‐layered information nodes to data on a temporal, spatial and spatial‐temporal level. It enables semantic connections between information nodes using typed relations, thus structuring the information space on a semantic as well as a syntactic level. Since the description of media content holds constant for the associated time interval, the proposed system can handle multiple content descriptions for the same media unit and can also handle gaps. The results of this research will be valuable not only for documentary experts but for anyone who needs to manage audiovisual content dynamically and in an intelligent way.
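
The following minimal sketch suggests how such information nodes and typed relations might look as a data structure. The class, attribute and relation names are assumptions made for illustration, not the paper's design.

    from dataclasses import dataclass, field

    @dataclass
    class InfoNode:
        node_id: str
        media_type: str            # "video", "audio", "3d-animation", "text", ...
        duration_s: float = 0.0    # technical memory: duration of the material
        relations: dict = field(default_factory=dict)  # relation type -> nodes

        def link(self, relation, other):
            """Add a typed relation, structuring the space semantically."""
            self.relations.setdefault(relation, []).append(other)

    interview = InfoNode("n1", "video", duration_s=420.0)
    transcript = InfoNode("n2", "text")
    interview.link("hasTranscript", transcript)   # typed semantic link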

Article
Publication date: 16 February 2010

Ashley Beamer and Mark Gillick

Abstract

Purpose

The purpose of this paper is to investigate web services (in the form of parameterised URLs), specifically in the context of the ScotlandsPlaces project. This involves cross‐domain querying, data retrieval and display via the development of a bespoke XML standard, rather than the use of existing XML formats and mapping between them.
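
To make the parameterised-URL idea concrete, here is a hedged sketch of querying such a web service and parsing an XML response. The endpoint, parameter names and element names are invented for illustration and are not the actual ScotlandsPlaces API.

    import xml.etree.ElementTree as ET
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def search_places(base_url, **params):
        """Query a web service through a parameterised URL and list the
        place names found in the XML records it returns."""
        url = f"{base_url}?{urlencode(params)}"   # e.g. ...?county=Fife&page=1
        with urlopen(url) as resp:
            tree = ET.parse(resp)
        return [rec.findtext("name") for rec in tree.iter("record")]

    # Hypothetical usage:
    # search_places("https://example.org/sp/search", county="Fife", page=1)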

Design/methodology/approach

An examination of the different heritage domain datasets, as well as the metadata formats used for storage and data exchange, reveals the ScotlandsPlaces XML format as the most appropriate for this type of project. The nature of the project itself and the need for dynamic web services are in turn explored.

Findings

It was found that, due to the nature of the project, the combination of a bespoke ScotlandsPlaces XML format and a set of matching web services was the best choice in terms of the retrieval of different domain datasets, as well as the desired extensible nature of the project.

Research limitations/implications

It might have proven useful to investigate the datasets of more ScotlandsPlaces partners, but so far only a limited number of first-phase partners' datasets could be studied, as the second phase of the project has yet to begin.

Originality/value

Rather than an information portal, the ScotlandsPlaces web site aggregates disparate types of record, whether site records, archival or otherwise, into a single web site and makes these records discoverable via geographical searching. Aggregated data are accessed through web service queries (using a bespoke XML format developed specifically for the project for data return) and allow partner organisations to add their datasets regardless of the organisational domain. The service also allows spatially referenced records to be plotted on to a geo‐browser via a KML file, which in turn lets users evaluate the results based on geographical location.

Details

Program, vol. 44 no. 1
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 4 July 2017

Albina Kinga Moscicka

Abstract

Purpose

The purpose of this paper is to propose a way of using already existing archival resources in the geographic information system (GIS).

Design/methodology/approach

The essence of the methodology was to identify the semantic relations of archival documents with geographical space, to develop their metadata into spatially related metadata ready for use in GIS, and to join the geographical names occurring in these metadata with the exact places to which they refer. The research was based on two digital collections from the online service of the Library of Contemporary History in Stuttgart. These collections were related to the First World War and included metadata prepared in the MAB standard.
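
A minimal sketch of the joining step might look as follows: place names found in a document's metadata are matched against a thesaurus of coordinates, turning the record into spatially related, GIS-ready data. The field names and the tiny gazetteer are illustrative assumptions only.

    # Illustrative thesaurus of place names and coordinates (lat, lon).
    GAZETTEER = {
        "Stuttgart": (48.7758, 9.1829),
        "Berlin": (52.5200, 13.4050),
    }

    def spatialize(metadata):
        """Attach coordinates for every recognised place name so the
        record can be loaded into a GIS and queried spatially."""
        places = metadata.get("places", [])
        metadata["geo"] = {p: GAZETTEER[p] for p in places if p in GAZETTEER}
        return metadata

    poster = {"title": "War poster, 1915", "places": ["Stuttgart"]}
    print(spatialize(poster)["geo"])   # {'Stuttgart': (48.7758, 9.1829)}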

Findings

As a result of the research, two sample metadata sets, related to posters and ration coupons, were developed. Thesauruses of the coordinates of places and regions mentioned in document metadata, in their different semantic contexts, were also created. To complete the methodology, assumptions for the GIS structure and a concept for applying the metadata in it have been proposed.

Research limitations/implications

The research also presents limitations to the effective implementation of the proposed solutions, which lie mainly in the lack of rules, and of consistency, in recording geographical names in metadata.

Originality/value

The value of the proposed solution is an easy way of using already existing data in GIS, with the possibility of gathering, managing, presenting and analyzing archives with one parameter more than in traditional databases: spatial information. The added value, and an effective use of already collected data, lies in the strong recommendation to define and implement rules for recording geographical names in archival document metadata. This will support the wide use of collected data in any spatial solution, the automation of the process of joining archives with geographical space and, finally, the dissemination of the collected resources.

Details

Program, vol. 51 no. 2
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 22 June 2010

Imam Machdi, Toshiyuki Amagasa and Hiroyuki Kitagawa

Abstract

Purpose

The purpose of this paper is to propose general parallelism techniques for holistic twig join algorithms to process queries against Extensible Markup Language (XML) databases on a multi‐core system.

Design/methodology/approach

The parallelism techniques comprised data and task parallelism. As for data parallelism, the paper adopted stream‐based partitioning for XML to partition XML data as the basis of parallelism on multiple CPU cores. The XML data partitioning was performed at two levels. The first level created buckets to provide data independence and balance loads among CPU cores; each bucket was assigned to a CPU core. Within each bucket, a second level of XML data partitioning created finer partitions to provide finer-grained parallelism. Each CPU core performed the holistic twig join algorithm on its own finer partitions, in parallel with the other CPU cores. For task parallelism, the holistic twig join algorithm was decomposed into two main tasks, which were pipelined to create parallelism. The first task adopted the data parallelism technique, and its outputs were transferred to the second task periodically. Since data transfers incur overheads, the size of each data transfer needs to be estimated carefully to achieve optimal performance.
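
The following sketch conveys the two-level data-parallel idea under stated assumptions: items are hashed into per-core buckets, each bucket is split into finer partitions, and a worker pool joins the partitions independently. The twig join itself is stubbed out, and the bucket and partition counts are illustrative rather than the paper's tuned values.

    from multiprocessing import Pool

    def twig_join(partition):
        # Placeholder for a holistic twig join over one finer partition;
        # here it just pretends every third item matches the twig pattern.
        return [item for item in partition if item % 3 == 0]

    def partition_two_levels(stream, n_buckets, n_fine):
        # Level 1: one bucket per CPU core, for independence and load balance.
        buckets = [[] for _ in range(n_buckets)]
        for item in stream:
            buckets[hash(item) % n_buckets].append(item)
        # Level 2: finer partitions within each bucket, for finer parallelism.
        return [bucket[i::n_fine] for bucket in buckets for i in range(n_fine)]

    if __name__ == "__main__":
        parts = partition_two_levels(range(1000), n_buckets=4, n_fine=4)
        with Pool(4) as pool:                      # one worker per core
            results = pool.map(twig_join, parts)   # partitions join in parallel
        print(sum(len(r) for r in results))        # 334 dummy "matches"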

Findings

The data and task parallelism techniques contribute to good performance, especially for queries having complex structures and/or higher query selectivity. The performance of data parallelism can be further improved by task parallelism. A significant performance improvement is attained for queries with higher selectivity, because more of the output computation of the second task is performed in parallel with the first task.

Research limitations/implications

The proposed parallelism techniques primarily deal with executing a single long‐running query for intra‐query parallelism, partitioning XML data on the fly and allocating partitions to CPU cores statically. The parallel execution assumes that no dynamic XML data updates occur.

Practical implications

The effectiveness of the proposed parallel holistic twig joins relies fundamentally on some system parameter values that can be obtained from a benchmark of the system platform.

Originality/value

The paper proposes novel techniques that increase parallelism by combining data and task parallelism to achieve high performance. To the best of the authors' knowledge, this is the first paper to parallelize holistic twig join algorithms on a multi‐core system.

Details

International Journal of Web Information Systems, vol. 6 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 4 April 2008

Sami Habib and Maytham Safar

Abstract

Purpose

The purpose of this paper is to propose a four‐level hierarchy model for multimedia documents representation to be used during the dynamic scheduling and altering of multimedia contents.

Design/methodology/approach

The four‐level hierarchy model (object, operation, timing and precedence) offers a fine-grained representation of multimedia content and is embedded within a research tool called WEBCAP. WEBCAP utilizes the four‐level hierarchy to synchronize the retrieval of objects in the multimedia document, employing Allen's temporal relations, and then applies the Bellman‐Ford algorithm to the precedence graph to schedule all operations (fetch, transmit, process and render) while satisfying in-time updating and all web workload resource constraints.
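
As a rough sketch of the scheduling step, the fragment below runs a Bellman-Ford-style relaxation over a precedence graph to compute the earliest start time of each operation: an edge (u, v, d) states that v may start no earlier than start[u] + d. The operation names follow the abstract, but the durations (in milliseconds) and this exact formulation are assumptions for illustration.

    def earliest_starts(nodes, edges):
        """Bellman-Ford-style relaxation on a precedence graph: returns the
        earliest start time of each node (the graph is assumed acyclic)."""
        start = {n: 0 for n in nodes}
        for _ in range(len(nodes) - 1):          # enough passes to converge
            for u, v, d in edges:
                if start[u] + d > start[v]:      # relax precedence constraint
                    start[v] = start[u] + d
        return start

    ops = ["fetch", "transmit", "process", "render"]
    prec = [("fetch", "transmit", 400),          # illustrative durations (ms)
            ("transmit", "process", 1200),
            ("process", "render", 300)]
    print(earliest_starts(ops, prec))
    # {'fetch': 0, 'transmit': 400, 'process': 1600, 'render': 1900}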

Findings

The experimental results demonstrate the effectiveness of the model in scheduling periodically updated multimedia documents while considering a variety of web/TCP workloads.

Research limitations/implications

WEBCAP should be enhanced to automatically measure and/or approximate the available bandwidth of the system using sophisticated measurement of end‐to‐end connectivity. In addition, WEBCAP should be expanded and enhanced to examine system infrastructure for more real‐time applications, such as tele‐medicine and e‐learning.

Practical implications

WEBCAP can be used as an XML markup language for describing multimedia presentations. It can be used to create online presentations similar to PowerPoint presentations in a desktop environment, or as an interactive e‐learning tool. An HTML browser may use a WEBCAP plug‐in to display a WEBCAP document embedded in an HTML/XML page.

Originality/value

This paper proposed the dynamic scheduling of multimedia documents with frequent updates, taking the network's workload into consideration to reduce the packet loss ratio in the TCP flow, especially in the early stages. WEBCAP can be used to guide distributed systems designers and managers in scheduling or tuning their resources for optimal or near-optimal performance, minimizing the cost of document retrieval while satisfying the in-time constraints.

Details

International Journal of Web Information Systems, vol. 4 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 September 2004

Robert Fox

Abstract

In order to make competent decisions, maintain excellent collections and provide patron‐centered services, digital libraries must pay attention to how they store, manipulate and reuse data. This paper examines three critical notions that address this need: data persistence, data warehousing and data repurposing. Each topic is defined and examined, and examples of each are provided. The three topics are harmonious in that each exemplifies a facet of data utilization that is central to digital content preservation, strategic planning and decision making, as well as to the provision of digital services, all of which are central to the mission of digital libraries and of traditional libraries that increasingly deal with digital content.

Details

OCLC Systems & Services: International digital library perspectives, vol. 20 no. 3
Type: Research Article
ISSN: 1065-075X
