Search results
21 – 30 of over 14,000
Abstract
Relational Database Management Systems (RDBMS) have a proven track record for storing and managing many different forms of digital content, and new strategies have been defined to provide RDBMS‐based solutions for XML. Some relational databases now offer special mechanisms to accommodate XML, while several technologies have emerged to facilitate the use of XML representations of data housed within an RDBMS. In addition to presenting challenges and opportunities to RDBMS developers, XML and XML‐enabled technologies may find new applications in libraries by combining RDBMS concepts with Web‐based services.
Abstract
Purpose
Operation and maintenance (O&M) process projects, such as identification, assessment, planning and execution, embody a variety of standards: technical (method statement), environmental, economic (campus development) and social (health and wellbeing). Because these standards have proven challenging to integrate, local governments are increasingly experimenting with social innovation (SI) as a bottom-up form of standard integration. This study aims to apply the concept of SI to the O&M processes of facilities management at polytechnics in Malaysia, to identify problems with conventional working practices in this area and to recommend potential solutions.
Design/methodology/approach
The paper reviews evidence that conventional working methods generate significant problems related to paper-based forms, improper database management and flawed decision-making processes. Because of the lack of knowledge about the different ways in which standard integration is achieved, three polytechnic institutions are compared: Rensselaer Polytechnic Institute (RPI) and Southern Polytechnic College of Engineering and Engineering Technology (SPCEET) in the USA, and Seberang Perai Polytechnic, Pulau Pinang (PSP) in Malaysia. All three share the ambition to realise standard integration of O&M through SI.
Findings
The findings reveal that SI leads to four ways of standard integration: computerised maintenance management system, online customer complaint, electronic form and relational database. Application of the concept of SI reveals the need for more sophisticated management solutions in the O&M processes of facilities management.
Originality/value
These standard integration arrangements unfortunately seem mainly to contribute to greater alignment between standards rather than true standard integration. The concept of SI will guide future improvements and developments in maintenance management systems to fulfil requirements in this area.
Jia‐Lang Seng, Yu Lin, Jessie Wang and Jing Yu
Abstract
XML is emerging and evolving rapidly as Web and wireless technology penetrates further into the consumer marketplace. Database technology faces new challenges: it has to change to play a supportive role, as Web and wireless applications drive the technology paradigm shift. XML and database connectivity and transformation become critical, and heterogeneity and interoperability must be tackled explicitly. In this paper, we provide an in‐depth technical review of XML and XML database technology. An analytic and comparative framework is developed, formulated around storage method, mapping technique and transformation paradigm. We collect and compile the IBM, Oracle, Sybase and Microsoft XML database products, and use the framework to analyze each of these XML database techniques. The comparison and contrast aims to provide insight into the structural and methodological paradigm shift in XML database technology.
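One mapping technique commonly covered by such reviews is "shredding": decomposing an XML document into relational rows so it can be stored and queried in an RDBMS. The sketch below is illustrative only; the edge-table schema and names are assumptions, not any vendor's actual mechanism.

```python
# Minimal "shredding" sketch: each XML element becomes a row
# (id, parent_id, tag, text) in an edge table, preserving the tree.
import sqlite3
import xml.etree.ElementTree as ET

def shred(xml_text, conn):
    """Store each XML element as a row (id, parent_id, tag, text)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS node "
        "(id INTEGER PRIMARY KEY, parent_id INTEGER, tag TEXT, text TEXT)"
    )
    root = ET.fromstring(xml_text)
    counter = {"next": 1}

    def walk(elem, parent_id):
        node_id = counter["next"]
        counter["next"] += 1
        conn.execute(
            "INSERT INTO node VALUES (?, ?, ?, ?)",
            (node_id, parent_id, elem.tag, (elem.text or "").strip()),
        )
        for child in elem:
            walk(child, node_id)

    walk(root, None)
    conn.commit()

conn = sqlite3.connect(":memory:")
shred("<book><title>XML and Databases</title><year>2004</year></book>", conn)
rows = conn.execute("SELECT tag, text FROM node ORDER BY id").fetchall()
# rows now holds the shredded tree: book, title, year
```

Once shredded, ordinary SQL can reconstruct or query fragments of the document, which is the trade-off the storage-method axis of such frameworks compares against native XML storage.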
Abbas Tarhini, Manal Yunis and Abdul-Nasser El-Kassar
Abstract
Purpose
The purpose of this paper is to present an innovative agile methodology that proposes fundamental changes in managing the development of in-house information systems in small- and medium-sized enterprises (SMEs) and benchmarks it with one of two database technologies enabling these systems to be both efficient and competitive.
Design/methodology/approach
The objectives are achieved by presenting an elaborated design of the agile methodology that manages the system development process by addressing three basic components: roles played by system players, process needed to fulfill the system development, and artifacts to document the project. A case study is conducted as a proof of the effectiveness of the proposed methodology and measures whether the selection of the database technology affects the effectiveness of the system development process.
Findings
Results show that, compared with traditional methodologies, the proposed methodology reduced the cost of system development and testing by 30 percent and enhanced the IT-business alliance. Further, this work found that the selection of a suitable database technology is strongly related to the complexity of, and interrelationships between, the data used.
Originality/value
Such research has not received the needed attention, even in the past decade (Hunter, 2004). Successful adoption of IT by companies could take the form of customized IS, which can be expensive for SMEs to adopt due to a lack of technical expertise and financial resources. The proposed methodology has the potential to promote sustainable development by helping SMEs reduce the time and cost of IT project development.
Pavel Kostelník and František Dařena
Abstract
Purpose
Current possibilities of accessing business data by regular users usually involve complicated user interfaces or require technical expertise. This results in situations when business owners are separated from their data. The aim of this research is to apply an innovative approach leveraging conversational interfaces to tackle this problem.
Design/methodology/approach
The authors examine the current possibilities of accessing business data by business users, with an emphasis on conversational interfaces employing a chatbot as an alternative to traditional approaches. The authors propose a new concept relying on a guided conversation and, through experiments with a real chatbot and database, demonstrate the benefits of the proposed approach.
Findings
The authors found that the key to the success of their approach is the decomposition of complex database queries and their incremental construction in conversations. This also enables natural discovery of the domain model through constantly provided feedback. Based on the experiments with a real chatbot, the authors demonstrate that defining conversation flows and maintaining the conversation context is crucial to the overall accuracy, together with keeping the conversation within the defined limits in certain parts.
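The incremental query construction described above can be sketched as follows: each conversation turn confirms one filter slot, and the SQL statement is assembled only from slots gathered so far. The table and column names here are invented for illustration and are not from the paper.

```python
# Hedged sketch of incremental query construction in a guided dialogue:
# slots confirmed turn by turn are composed into one parameterized query.
def build_query(slots):
    """Compose a SELECT from the filters gathered during the dialogue."""
    clauses, params = [], []
    for column, value in slots.items():
        clauses.append(f"{column} = ?")
        params.append(value)
    where = " WHERE " + " AND ".join(clauses) if clauses else ""
    return "SELECT * FROM orders" + where, params

# Simulated guided conversation: the chatbot confirms one slot per turn.
slots = {}
slots["region"] = "EMEA"          # turn 1: "Which region?" -> EMEA
slots["status"] = "shipped"       # turn 2: "Which status?" -> shipped
query, params = build_query(slots)
# query: "SELECT * FROM orders WHERE region = ? AND status = ?"
```

Because each turn adds exactly one clause, the user sees intermediate results and feedback after every step, which is what makes the domain model discoverable during the conversation.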
Originality/value
The authors present a novel approach using natural language interfaces for accessing data by business users. In contrast to existing approaches, the authors emphasize incremental construction of queries, predefined conversation flows and constraining the conversations, when necessary.
Sandeep Kumar Singh and Mamata Jenamani
Abstract
Purpose
The purpose of this paper is to design a supply chain database schema for Cassandra to store real-time data generated by Radio Frequency IDentification technology in a traceability system.
Design/methodology/approach
The real-time data generated in such traceability systems are of high frequency and volume, making them difficult to handle with traditional relational database technologies. To overcome this difficulty, a NoSQL database repository based on Cassandra is proposed. The efficacy of the proposed schema is evaluated against two such databases suitable for storing traceability data: document-based MongoDB and column-family-based Cassandra.
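The wide-column idea behind such a Cassandra schema can be illustrated with a toy model (not Cassandra itself): rows are grouped under a partition key, for example the tagged item's EPC, so a tracing query reads one partition rather than scanning the whole table. The key and column names below are assumptions for illustration.

```python
# Toy wide-column store: partition key -> sorted (clustering key, value)
# pairs, mimicking why a partition-keyed schema serves tracing queries
# with a single partition lookup.
from collections import defaultdict

class WideColumnStore:
    """Toy store: partition key -> list of (clustering key, columns)."""
    def __init__(self):
        self.partitions = defaultdict(list)

    def insert(self, epc, read_time, location):
        # Clustering by read_time keeps each item's trace sorted.
        self.partitions[epc].append((read_time, location))
        self.partitions[epc].sort()

    def trace(self, epc):
        # One partition lookup answers the tracing query.
        return self.partitions[epc]

store = WideColumnStore()
store.insert("EPC-001", 2, "warehouse")
store.insert("EPC-001", 1, "factory")
store.insert("EPC-002", 1, "factory")
trace = store.trace("EPC-001")
# trace: [(1, "factory"), (2, "warehouse")]
```

In real Cassandra the same shape is expressed as a table with a partition key and clustering column, which is what makes concurrent reads of per-item traces cheap.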
Findings
The proposed Cassandra-based data repository outperforms the traditional Structured Query Language (SQL)-based and MongoDB systems from the literature in terms of concurrent reading, and works on par with them for the writing and updating of tracing queries.
Originality/value
The proposed schema is able to store the real-time data generated in a supply chain with low latency. To test the performance of the Cassandra-based data repository, a test-bed is designed in the lab and supply chain operations of Indian Public Distribution System are simulated to generate data.
Olusegun Folorunso and Adio Taofeek Akinwale
Abstract
Purpose
In tertiary institutions, some students find it hard to learn database design theory, in particular database normalization. The purpose of this paper is to develop a visualization tool that gives students an interactive, hands‐on experience of the database normalization process.
Design/methodology/approach
The model‐view‐controller architecture is used to alleviate the black-box syndrome associated with the study of algorithm behavior in the database normalization process. The authors propose a visualization "exploratory" tool that assists learners in understanding the actual behavior of the chosen database normalization algorithms and in evaluating the validity/quality of the algorithm. This paper describes the visualization tool and its effectiveness in teaching and learning normal forms and their functional dependencies.
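The kind of algorithm such a tool visualizes can be sketched briefly: computing the closure of an attribute set under a set of functional dependencies, the basic step in finding candidate keys and checking normal forms. The relation and FDs below are textbook-style assumptions, not examples from the paper.

```python
# Attribute-closure sketch: repeatedly apply FDs whose left-hand side
# is already covered, until no new attributes can be derived.
def closure(attrs, fds):
    """Return all attributes determined by `attrs` under the FDs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# R(A, B, C, D) with A -> B and B -> C
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
closed = closure({"A"}, fds)
# closed: {"A", "B", "C"}, so A alone is not a key of R (D is missing)
```

Stepping through the while-loop iterations is exactly the sort of hidden behavior a visualization tool can expose to learners.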
Findings
The effectiveness of the tool has been evaluated in surveys, which show that students generally viewed the tool more positively than the textbook technique. This difference is significant at p<0.05 (t=1.645). Mean interaction precision, calculated using expert-judge relevance ratings, also shows a significant difference between the visualization tool and the textbook (3.74 against 2.61 for precision, t=6.69).
Originality/value
The visualization tool helped students validate/check their learning of normalization process. Consequently, the paper shows that the tool has a positive impact on students' perception.
Fan Yu, Junping Qiu and Wen Lou
Abstract
Purpose
This paper aims to overcome the disadvantages of content-based domain ontology (CBDO) and metadata-based domain ontology (MDO) and to improve the organization and discovery efficiency of library resources through resource ontology (RO).
Design/methodology/approach
The paper constructs an RO model. Methods of informetrics are utilized to reveal semantic relationships among library resources, while methods of ontology, ontology-relational database mapping (O-R mapping) and relational database modelling are utilized to construct the RO. Taking author co-occurrence as an example, the paper demonstrates the capability of the RO model.
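The author co-occurrence example rests on a basic informetric computation: counting how often pairs of authors appear on the same bibliographic record. The sketch below shows that computation; the sample records are invented for illustration.

```python
# Author co-occurrence sketch: count co-authorship pairs across records,
# a basic informetric relationship an ontology model could store.
from itertools import combinations
from collections import Counter

def co_occurrence(records):
    """Count co-authorship pairs across bibliographic records."""
    pairs = Counter()
    for authors in records:
        for a, b in combinations(sorted(authors), 2):
            pairs[(a, b)] += 1
    return pairs

records = [["Yu", "Qiu"], ["Yu", "Qiu", "Lou"], ["Lou"]]
pairs = co_occurrence(records)
# pairs[("Qiu", "Yu")] == 2: Qiu and Yu co-authored two records
```

Pair counts like these can then be mapped into relational tables via O-R mapping, turning a statistical relationship into a queryable semantic one.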
Findings
RO not only revealed deep-level semantic relationships in the metadata of library resources but also realized fully computer-automated processing. RO improved the efficiency of knowledge organization and discovery.
Research limitations/implications
Semantic relationships revealed by RO are limited to simple metadata, which makes it difficult to reveal fine-grained semantic relationships. Ongoing research focuses on the revelation of semantic relationships based on the title and abstract.
Practical implications
The paper includes implications for utilizing methods of informetrics to construct ontologies.
Originality/value
This paper proposed a standardized process of ontology construction in library resources. It may be of potential interest for anyone who needs to effectively organize library resources.
Abstract
As the number of online journals, databases, and indexing and abstracting services continues to grow on the Internet, it is important that libraries find efficient ways to manage and provide access to these resources. By utilizing database-driven dynamic content delivery technology, library Web administrators can obtain numerous management benefits over a static HTML site. Presents an efficient model using Microsoft Access database software and an ASP (Active Server Pages) scripting method to manage and deliver the University of Arkansas Library’s electronic subscription services. Benefits include centralized data management and maintenance, streamlined administration, customized content, and improved response to simultaneous user access. Web server platforms, programming skill levels, and data storage limitations are also discussed. A single MS Access database utilizing two relational tables is used as an example to demonstrate the underlying database organization.
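A two-table organization of the kind the article describes can be sketched as follows, using SQLite as a stand-in for MS Access and plain Python in place of ASP. The table and column names are illustrative assumptions, not the library's actual schema.

```python
# Two relational tables for an e-resources site: categories and the
# resources that belong to them. The dynamic page is generated from one
# join, so edits in the database appear on the site without touching
# static HTML.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE resource (
        id INTEGER PRIMARY KEY,
        category_id INTEGER REFERENCES category(id),
        title TEXT, url TEXT
    );
    INSERT INTO category VALUES (1, 'Online Journals'), (2, 'Databases');
    INSERT INTO resource VALUES
        (1, 1, 'Journal of Documentation', 'https://example.org/jd'),
        (2, 2, 'ERIC', 'https://example.org/eric');
""")
listing = conn.execute(
    "SELECT c.name, r.title FROM resource r "
    "JOIN category c ON r.category_id = c.id ORDER BY r.id"
).fetchall()
# listing pairs each resource with its category for page rendering
```

Centralizing the links in the database is what yields the maintenance benefits the article lists: one UPDATE fixes a URL everywhere it is displayed.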
Mustafa Aljumaili, Ramin Karim and Phillip Tretten
Abstract
Purpose
The purpose of this paper is to develop data quality (DQ) assessment model based on content analysis and metadata analysis.
Design/methodology/approach
A literature review of DQ assessment models has been conducted, along with a study of DQ key performance indicators (KPIs). Finally, the proposed model has been developed and applied in a case study.
Findings
The results of this study show that metadata contain important information about the DQ of a database and can be used to assess DQ, providing decision support for decision makers.
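A metadata-driven DQ check in the spirit of such a model can be sketched briefly: score a table's contents against its declared metadata. The KPI (per-column completeness) and the sample data below are assumptions for illustration, not the paper's actual model.

```python
# Metadata-driven DQ sketch: for each column declared in the metadata,
# compute completeness as the share of non-null values in the data.
def assess(metadata, rows):
    """Return per-column completeness: share of non-null values."""
    scores = {}
    for column in metadata:
        non_null = sum(r.get(column) is not None for r in rows)
        scores[column] = non_null / len(rows) if rows else 1.0
    return scores

metadata = {"asset_id": {"nullable": False}, "last_service": {"nullable": True}}
rows = [
    {"asset_id": "A1", "last_service": "2014-05-01"},
    {"asset_id": "A2", "last_service": None},
]
scores = assess(metadata, rows)
# scores: {"asset_id": 1.0, "last_service": 0.5}
```

Because the check is driven entirely by the metadata, the same assessment can run against any table whose schema is declared, which is the quantitative angle the model aims for.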
Originality/value
There are many DQ assessment models in the literature; however, metadata are not considered in them. The model developed in this study draws on metadata in addition to content analysis to produce a quantitative DQ assessment.