Search results
1–10 of over 32,000

Ruey‐Kei Chiu, S.C. Lenny Koh and Chi‐Ming Chang
Abstract
Purpose
The purpose of this paper is to provide a data framework to support the incremental aggregation of, and an effective data refresh model to maintain the data consistency in, an aggregated centralized database.
Design/methodology/approach
It is based on a case study of enterprise distributed database aggregation for Taiwan's National Immunization Information System (NIIS). Selective data replication was used to aggregate the distributed databases into the central database. The data refresh model assumed heterogeneous aggregation activity within the distributed database systems. The algorithm of the data refresh model followed a lazy replication scheme, but update transactions were allowed only on the distributed databases.
Findings
It was found that data refreshment for the aggregation of heterogeneous distributed databases can be achieved more effectively through the design of a refresh algorithm and the standardization of message exchange between the distributed and central databases.
Research limitations/implications
Transaction records are stored and transferred in a standardized XML format. This is more time‐consuming in record transformation and interpretation, but it offers higher transportability and compatibility across different platforms in data refreshment with equal performance. The distributed database designer should manage these trade‐offs as well as assure data quality.
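The refresh scheme described above, in which updates are captured at the distributed sites, shipped as standardized XML transaction records, and replayed lazily at the central database, can be sketched as follows. The table, element names, and schema are illustrative assumptions rather than the NIIS design, and SQLite stands in for the actual database management systems:

```python
import sqlite3
import xml.etree.ElementTree as ET

def export_transactions_as_xml(rows):
    """Serialize pending update transactions from a distributed site
    into a standardized XML message (element names are illustrative)."""
    root = ET.Element("transactions")
    for txn_id, table, key, value in rows:
        txn = ET.SubElement(root, "txn", id=str(txn_id))
        ET.SubElement(txn, "table").text = table
        ET.SubElement(txn, "key").text = str(key)
        ET.SubElement(txn, "value").text = str(value)
    return ET.tostring(root, encoding="unicode")

def refresh_central(central, xml_message):
    """Lazily apply the transferred transactions at the central database.
    Updates originate only at the distributed sites, so the central copy
    is refreshed by replaying their transaction log in order."""
    root = ET.fromstring(xml_message)
    cur = central.cursor()
    for txn in root.findall("txn"):
        table = txn.findtext("table")
        key = txn.findtext("key")
        value = txn.findtext("value")
        cur.execute(
            f"INSERT INTO {table}(key, value) VALUES (?, ?) "
            "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
            (key, value),
        )
    central.commit()

# Demo: one distributed site pushes two updates for the same record;
# the later one wins at the central database.
central = sqlite3.connect(":memory:")
central.execute("CREATE TABLE immunization(key TEXT PRIMARY KEY, value TEXT)")
msg = export_transactions_as_xml([(1, "immunization", "p001", "dose1"),
                                  (2, "immunization", "p001", "dose2")])
refresh_central(central, msg)
print(central.execute(
    "SELECT value FROM immunization WHERE key='p001'").fetchone()[0])
```

Because the message is plain XML, the same payload could be consumed by a central database on any platform, which is the transportability the abstract trades extra parsing time for.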
Originality/value
The data system model presented in this paper may be applied to other similar implementations because its approach is not restricted to a specific database management system and it uses standardized XML message for transaction exchange.
Yanjun Zuo and Brajendra Panda
Abstract
Purpose
Damage assessment and recovery play key roles in the process of secure and reliable computer systems development. Post‐attack assessment in a distributed database system is rather complicated due to the indirect dependencies among sub‐transactions executed at different sites. Hence, the damage assessment procedure in these systems must be carried out in a collaborative way among all the participating sites in order to accurately detect all affected data items. This paper seeks to propose two approaches for achieving this, namely, centralized and peer‐to‐peer damage assessment models.
Design/methodology/approach
Each of the two proposed methods should be applied immediately after an intrusion on a distributed database system is reported. Within the centralized model, three sub‐models are further discussed, each of which is best suited to a particular type of situation in a distributed database system.
Findings
Advantages and disadvantages of the models are analyzed on a comparative basis, and the situations to which each model is best suited are presented. A set of algorithms is developed to formally describe the damage assessment procedure for each model and sub‐model. Synchronization is essential in any system where multiple processes run concurrently, so user‐level synchronization mechanisms are presented to ensure that the damage assessment operations are conducted in the correct order.
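The core assessment idea, tracing how damage spreads through read/write dependencies among committed transactions, can be illustrated with a minimal single‐log sketch. The log format and names here are invented for illustration and do not reproduce the paper's algorithms, which additionally handle dependencies across sites:

```python
def assess_damage(log, corrupted):
    """Given a transaction log in commit order and an initially
    corrupted set of data items, flag every item written by a
    transaction that read a damaged item (transitive spread).
    `log` is a list of (txn_id, read_set, write_set) tuples."""
    damaged = set(corrupted)
    for txn_id, reads, writes in log:
        if damaged & set(reads):      # txn read a damaged item ...
            damaged |= set(writes)    # ... so everything it wrote is suspect
    return damaged

# Demo: x is attacked; T1 reads x and writes y, T2 reads y and writes z,
# so damage spreads x -> y -> z, while T3 (reads a, writes b) stays clean.
log = [("T1", {"x"}, {"y"}),
       ("T2", {"y"}, {"z"}),
       ("T3", {"a"}, {"b"})]
print(sorted(assess_damage(log, {"x"})))  # ['x', 'y', 'z']
```

In a distributed setting the read and write sets of a global transaction are split over sub‐transactions at different sites, which is why the paper's models coordinate this scan either through a central assessor or peer to peer.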
Originality/value
The paper proposes two models, centralized and peer‐to‐peer, for damage assessment in distributed database systems.
Abstract
A framework based on the database entropy concept developed for characterizing the learning process of a database is applied to the dynamic measurement of the promptness and coherence of a distributed database. The results can be of great value in the design and implementation of self‐adaptive distributed databases.
Siva Ganapathy Subramanian Manoharan, Rajalakshmi Subramaniam and Sanjay Mohapatra
Alfred Loo and Y.K. Choi
Abstract
Heretofore, it has been extremely expensive to install and use distributed databases. With the advent of Java, JDBC and other Internet technologies, it has become easy and inexpensive to connect multiple databases and form distributed databases, even where the various host computers run on different platforms. These types of databases can be used in many peer‐to‐peer applications, which are now receiving much attention from researchers. Although it is easy to form a distributed database via the Internet or an intranet, effective sharing of information continues to be problematic. We need to pay more attention to the enabling algorithms, as dedicated links between computers are usually not available in peer‐to‐peer systems. The lack of dedicated links can cause poor performance, especially if the databases are connected via the Internet. Discusses the problems of distributed database operation with reference to an example. Presents two statistical selection algorithms designed to select the jth smallest key from a very large file distributed over many computers. The objective of these algorithms is to minimise the number of communication messages necessary for the selection operation. One algorithm is for intranets with broadcast/multicast facilities, while the other is for the Internet without broadcast/multicast facilities.
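The goal of selecting the jth smallest key while minimising messages can be illustrated with a simplified coordinator sketch over integer keys. This binary‐search scheme is a stand‐in assumption for illustration, not the statistical algorithms the paper presents; each round costs only one pivot broadcast plus one count reply per site, rather than shipping any keys:

```python
def distributed_select(sites, j):
    """Find the j-th smallest key (1-based) among integer keys spread
    across several sites. Each `site` is its local list of keys; only
    a pivot and per-site counts would cross the network each round."""
    lo = min(min(s) for s in sites)
    hi = max(max(s) for s in sites)
    while lo < hi:
        pivot = (lo + hi) // 2                       # broadcast pivot
        count = sum(sum(1 for k in s if k <= pivot)  # each site replies
                    for s in sites)                  # with a single count
        if count >= j:
            hi = pivot           # j-th smallest is <= pivot
        else:
            lo = pivot + 1       # j-th smallest is > pivot
    return lo

# Demo: keys 1..9 scattered over three sites; the 5th smallest is 5.
sites = [[9, 1, 4], [2, 7, 5], [8, 3, 6]]
print(distributed_select(sites, 5))  # 5
```

The paper's statistical algorithms aim to cut the number of such rounds further by estimating where the jth key falls, and the broadcast variant exploits one‐to‐many delivery so a pivot costs a single message instead of one per site.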
Gerti Kappel and Stefan Vieweg
Abstract
Changes in market and production profiles require a more flexible concept in manufacturing. Computer integrated manufacturing (CIM) describes an integrative concept for joining business and manufacturing islands. In this context, database technology is the key technology for implementing the CIM philosophy. However, CIM applications are more complex, and thus more demanding, than traditional database applications such as business and administrative applications. Systematically analyses the database requirements for CIM applications, including business and manufacturing tasks. Special emphasis is given to integration requirements arising from the distributed, partly isolated nature of CIM applications developed over the years. An illustrative sampling of current efforts in the database community to meet the challenge of non‐standard applications such as CIM is presented.
A. Macfarlane, S.E. Robertson and J.A. Mccann
Abstract
The progress of parallel computing in Information Retrieval (IR) is reviewed. In particular we stress the importance of the motivation in using parallel computing for text retrieval. We analyse parallel IR systems using a classification defined by Rasmussen and describe some parallel IR systems. We give a description of the retrieval models used in parallel information processing. We describe areas of research which we believe are needed.
K.K.S. Sarinder, L.H.S. Lim, A.F. Merican and K. Dimyati
Abstract
Purpose
Biodiversity resources are inevitably digital and stored in a wide variety of formats by researchers and stakeholders. In Malaysia, although digitizing biodiversity data has long been stressed, the interoperability of biodiversity data is still an issue that requires attention. This is because, when data are shared, the question of copyright arises, creating a setback among researchers who want to promote or share data through online presentations. To solve this, the aim is to present an approach to integrating data by wrapping datasets stored in relational databases located on networked platforms.
Design/methodology/approach
The approach uses tools such as XML, PHP, ASP and HTML to integrate distributed databases in heterogeneous formats. Five current database integration systems were reviewed and all of them have common attributes such as query‐oriented, using a mediator‐based approach and integrating a structured data model. These common attributes were also adopted in the proposed solution. Distributed Generic Information Retrieval (DiGIR) was used as a model in designing the proposed solution.
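The mediator‐based, query‐oriented pattern common to the reviewed systems can be sketched as follows. The specimen schema, site names, and example records are invented for illustration, with SQLite standing in for the wrapped relational databases that remain under their owners' control:

```python
import sqlite3

def make_site(records):
    """Stand-in for one data owner's relational database. The schema is
    invented; real DiGIR-style wrappers map each local schema onto a
    shared structured data model."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE specimen(species TEXT, locality TEXT)")
    db.executemany("INSERT INTO specimen VALUES (?, ?)", records)
    return db

def mediator_query(sites, species):
    """Mediator: fan the same structured query out to every wrapped
    database and merge the answers, so data are shared without ever
    leaving the owner's jurisdiction."""
    results = []
    for name, db in sites.items():
        for row in db.execute(
                "SELECT species, locality FROM specimen WHERE species = ?",
                (species,)):
            results.append((name, *row))
    return results

# Demo: two labs keep their own specimen tables; one query spans both.
sites = {
    "lab_a": make_site([("Lates calcarifer", "Selangor")]),
    "lab_b": make_site([("Lates calcarifer", "Sabah"),
                        ("Tor tambroides", "Pahang")]),
}
for hit in mediator_query(sites, "Lates calcarifer"):
    print(hit)
```

Keeping each dataset in a distributed warehouse at its source, and only federating queries through the mediator, is what lets the proposed system sidestep the copyright concerns the abstract raises.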
Findings
A new database integration system was developed that is simple and user‐friendly and shares the common attributes found in current integration systems.
Originality/value
The proposed system is unique in that it allows biodiversity data sharing, through the integration of biodiversity databases, hence enabling scientists to share information and generate knowledge. It also solves copyright problems by suggesting distributed warehouses, giving data owners the benefit of having their database under their own jurisdiction. It meets the requirements of querying heterogeneous and remote biodiversity databases.