Search results

1–10 of over 14,000
Article
Publication date: 6 November 2017

Hind Hamrouni, Fabio Grandi and Zouhaier Brahmia

Abstract

Purpose

A temporal XML database could become an inconsistent model of the represented reality after a retroactive update. Such an inconsistent state must be repaired by performing corrective actions (e.g. payment of arrears after a retroactive salary increase) either immediately (i.e. at inconsistency detection time) or in a deferred manner, at one or several chosen repair times according to application requirements. The purpose of this work is to deal with deferred and multi-step repair of detected data inconsistencies.

Design/methodology/approach

A general approach is proposed for deferred and stepwise repair of inconsistencies that result from retroactive updates of currency data (e.g. the salary of an employee) in a valid-time or bitemporal XML database. The approach separates inconsistency repair from the inconsistency detection phase and deals with the execution of corrective actions, which also take into account the enterprise's business rules defining relationships between data.
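
The corrective-action idea above (e.g. payment of arrears after a retroactive salary increase) can be sketched in Python. This is a minimal illustration only: the SalaryVersion record, the whole-month accounting and the function name are assumptions for the example, not the authors' data model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SalaryVersion:
    amount: float       # monthly salary recorded for this version
    valid_from: date    # inclusive lower bound of the valid-time interval
    valid_to: date      # exclusive upper bound of the valid-time interval

def arrears_for_retroactive_raise(old: SalaryVersion, new_amount: float) -> float:
    """Corrective action for one overlapped version: arrears owed after a
    retroactive raise, assuming whole months between the interval bounds."""
    months = (old.valid_to.year - old.valid_from.year) * 12 + \
             (old.valid_to.month - old.valid_from.month)
    return (new_amount - old.amount) * months

# A raise to 3200 backdated over January-April 2017 (4 months paid at 3000):
v = SalaryVersion(3000.0, date(2017, 1, 1), date(2017, 5, 1))
print(arrears_for_retroactive_raise(v, 3200.0))  # 800.0
```

In the deferred setting the paper describes, such a computation would run not at detection time but at one or more chosen repair times, possibly splitting the arrears across several steps.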

Findings

Algorithms, methods and support data structures for deferred and multi-step inconsistency repair of currency data are presented. The feasibility of the approach has been shown through the development and testing of a system prototype, named Deferred-Repair Manager.

Originality/value

The proposed approach implements a new general and flexible strategy for repairing detected inconsistencies in a deferred manner and possibly in multiple steps, according to varying user’s requirements and to specifications which are customary in the real world.

Details

International Journal of Web Information Systems, vol. 13 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 8 June 2015

Lihua Lu, Hengzhen Zhang and Xiao-Zhi Gao

Abstract

Purpose

Data integration combines data residing at different sources and provides users with a unified interface to these data. An important issue in data integration is the existence of conflicts among the different data sources. Data sources may conflict with each other at the data level, which is defined as data inconsistency. The purpose of this paper is to address this problem and propose a solution for data inconsistency in data integration.

Design/methodology/approach

A relational data model extended with data source quality criteria is first defined. Then, based on the proposed data model, a data inconsistency solution strategy is provided. To implement the strategy, a fuzzy multi-attribute decision-making (MADM) approach based on data source quality criteria is applied to obtain the results. Finally, user feedback strategies are proposed to optimize the result of the fuzzy MADM approach into the final data inconsistency solution.
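
As a rough illustration of selecting among conflicting source values by quality criteria, the crisp additive-weighting sketch below stands in for the paper's fuzzy MADM approach. The source names, criteria and weights are invented for the example.

```python
# Three sources report conflicting values for the same attribute. Each source
# carries hypothetical quality scores (accuracy, freshness, completeness),
# normalised to [0, 1]; the criterion weights sum to 1.
sources = {
    "sensor_a": {"value": 21.4, "scores": (0.9, 0.6, 0.8)},
    "sensor_b": {"value": 23.1, "scores": (0.7, 0.9, 0.7)},
    "sensor_c": {"value": 21.6, "scores": (0.8, 0.5, 0.9)},
}
weights = (0.5, 0.3, 0.2)

def rank(src):
    # Simple additive weighting: a crisp analogue of the fuzzy MADM scoring.
    return sum(w * s for w, s in zip(weights, src["scores"]))

# Resolve the conflict by keeping the value from the best-ranked source.
best = max(sources.values(), key=rank)
print(best["value"])  # 21.4
```

In the paper's setting, user feedback would then adjust these rankings; here the weighted score alone decides.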

Findings

To evaluate the proposed method, data obtained from sensors are extracted. Experiments are designed and performed to demonstrate the effectiveness of the proposed strategy. The results substantiate that the solution performs better than other methods on correctness, time cost and stability indicators.

Practical implications

Since inconsistent data collected from sensors are pervasive, the proposed method can solve this problem and correct wrong choices to some extent.

Originality/value

In this paper, the authors study for the first time the effect of user feedback on integration results for inconsistent data.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 8 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 6 November 2017

Mohammad Alamgir Hossain, Craig Standing and Caroline Chan

Abstract

Purpose

Grounded on the technology-organization-environment (TOE) framework, the purpose of this paper is to develop a two-stage model of radio frequency identification (RFID) adoption in livestock businesses. RFID adoption is divided into two stages, acceptance and extension. It is argued that RFID adoption in livestock businesses is influenced by technological (interoperability, technology readiness), organizational (readiness, market scope), and environmental (competitive market pressure, data inconsistency) factors.

Design/methodology/approach

From a qualitative field study, along with the support of existing literature, the authors developed a research model, which was then validated with survey data of 318 livestock businesses in Australia. Data analysis used partial least squares structural equation modeling.

Findings

Empirical results showed that interoperability, organizational readiness, competitive market pressure and data inconsistency significantly influence acceptance of RFID technology in livestock businesses. In addition, the extended use of RFID is determined mainly by interoperability, technology readiness, organizational market scope and data inconsistency. The results suggested a differential effect of data inconsistency: it had a negative influence on RFID acceptance but a positive impact on the extent of its use.

Originality/value

This is one of the first studies to examine RFID adoption as a two-stage process. The theoretical basis was the TOE framework, and the factors were developed from a field study. The results of this study provide insights for different livestock industry stakeholders, including technologists, farm managers and market players.

Details

Information Technology & People, vol. 30 no. 4
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 6 March 2017

Pei-Ju Lee, Peng-Sheng You, Yu-Chih Huang and Yi-Chih Hsieh

Abstract

Purpose

Historical data usually consist of overlapping reports, and these reports may contain inconsistent data, which can cause a query to return incorrect results. Moreover, users who issue the query may not learn of this inconsistency even after a data cleaning process (e.g. schema matching or data screening). The inconsistency can exist in different types of data, such as temporal or spatial data. Therefore, this paper aims to introduce an information fusion method that can detect data inconsistency in the early stages of data fusion.

Design/methodology/approach

This paper introduces an information fusion method for multi-robot operations, for which fusion is conducted continuously. When the environment is explored by multiple robots, the robot logs can provide more information about the number and coordination of targets or victims. The information fusion method proposed in this paper generates an underdetermined linear system from the overlapping spatial reports and estimates the case values. Then, the least squares method is applied to the underdetermined linear system. By combining these two methods, conflicts between reports can be detected and the values of the intervals at specific times or locations can be estimated.
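
The linear-system step can be sketched as follows, assuming a toy grid of cells and invented report coverages. NumPy's lstsq returns the minimum-norm solution for such an underdetermined system, and a nonzero residual flags a report that conflicts with the fitted estimates.

```python
import numpy as np

# Each row is one robot's report covering a span of grid cells: the entry is 1
# if that cell lies inside the report's coverage. b holds the reported target
# counts. With more cells than reports, the system A x = b is underdetermined.
A = np.array([
    [1.0, 1.0, 0.0],   # report 1 covers cells 0-1 and saw 3 targets
    [0.0, 1.0, 1.0],   # report 2 covers cells 1-2 and saw 5 targets
])
b = np.array([3.0, 5.0])

# lstsq yields the minimum-norm per-cell estimates for the underdetermined system.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))

# Conflict check: a report that contradicts the fitted estimates would leave a
# nonzero residual; here the reports are mutually consistent.
residual = A @ x - b
print(np.allclose(residual, 0.0))
```

With many overlapping reports, rows can become mutually inconsistent, and the size of the residual on each row indicates which reports conflict.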

Findings

The proposed information fusion method was tested for inconsistency detection and target projection of spatial fusion in sensor networks. The approach examined the values of sensor data from simulations in which robots perform search tasks. The system can be extended to data warehouses with heterogeneous data sources to achieve completeness, robustness and conciseness.

Originality/value

Little research has been devoted to linear systems for information fusion in mobile robot tasks. The proposed information fusion method minimizes the time and comparison costs of data fusion and also minimizes the probability of errors arising from incorrect results.

Details

Engineering Computations, vol. 34 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 20 February 2009

Jaroslaw Woznica and Ken Healy

Abstract

Purpose

This paper seeks to investigate the role of information systems integration in Irish small and medium‐sized enterprises operating in the manufacturing sector.

Design/methodology/approach

Research was conducted through a review of literature and subsequent primary research involving qualitative (semi‐structured interviews) and quantitative (questionnaires) research strategies.

Findings

The paper reveals the sophistication of internal IT infrastructure within Irish manufacturing SMEs and whether the IT systems are integrated with one another, and, if so, how well that integration is done. Moreover, the owner‐managers' and senior managers' attitude to IS integration issues is explored, including the reasons that prompt them to integrate IT systems within their businesses, their expectations of IS integration, the challenges they recognise when integrating the systems and their criteria regarding IS integration.

Research limitations/implications

The research focuses on manufacturing SMEs operating in Ireland; other sectors are not investigated.

Practical implications

The paper helps the owner‐managers and senior managers to understand the issues of IS integration and points towards possible solutions to the problem of disparate IT systems.

Originality/value

The negative impact of disparate systems and the benefits of integrating them in an SMEs environment have not been thoroughly examined to date.

Details

Journal of Small Business and Enterprise Development, vol. 16 no. 1
Type: Research Article
ISSN: 1462-6004

Article
Publication date: 22 November 2011

Jörg Waitelonis, Nadine Ludwig, Magnus Knuth and Harald Sack

Abstract

Purpose

Linking Open Data (LOD) provides a vast amount of well-structured semantic information, but many inconsistencies may occur, especially if the data are generated with the help of automated methods. Data cleansing approaches enable the detection of inconsistencies and the overhauling of affected data sets, but they are difficult to apply automatically. The purpose of this paper is to present WhoKnows?, an online quiz that generates different kinds of questionnaires from DBpedia data sets.

Design/methodology/approach

Besides its playfulness, WhoKnows? has been developed for the evaluation of property relevance ranking heuristics on DBpedia data, with the convenient side effect of detecting inconsistencies and doubtful facts.

Findings

The original purpose for developing WhoKnows? was to evaluate heuristics to rank LOD properties and thus, obtain a semantic relatedness between entities according to the properties by which they are linked. The presented approach is an efficient method to detect popular properties within a limited amount of triples. Ongoing work continues in the development of sound property ranking heuristics for the purpose of detecting the most relevant characteristics of entities.

Originality/value

WhoKnows? uses the approach of “Games with a Purpose” to detect inconsistencies in Linked Data and score properties to rank them for sophisticated semantic search scenarios.

Details

Interactive Technology and Smart Education, vol. 8 no. 4
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 7 June 2013

Alison Graber, Stephanie Alexander, Megan Bresnahan and Jennie Gerke

Abstract

Purpose

Reference data collection tools facilitate the collection of in‐depth data about reference interactions. Since this information may influence decisions, library managers should examine how these tools are used and assess how these data entry behaviors may impact the accuracy of the data. This paper aims to analyze reference staff perceptions and data entry behaviors using a reference data collection tool.

Design/methodology/approach

The two‐year mixed‐methods study analyses reference staff perceptions and data entry behaviors related to the reference data collection tool used at the University of Colorado Boulder Libraries. The authors identified six distinct data entry behaviors for analysis in this study.

Findings

The survey results indicate that staff consider the tool to be both easy to use and useful. Under the technology acceptance model, these findings indicate technology acceptance, which influences adoption and use of the tool. Though rates of adoption and use of the tool are high, the authors' analysis of behaviors indicates that not all users record reference interactions in the same way, and this inconsistency may impact the accuracy of collected data.

Practical implications

Inconsistency in data entry behaviors should inform the design of staff training sessions, best practice guidelines, and the tool's interface.

Social implications

If data are used to justify changes to services and collections, decision makers need to be confident that data accurately reflect activity at library service points.

Originality/value

Previous studies related to reference data collection mention the importance of consistent data entry practices, but no studies have explicitly evaluated how inconsistencies in use may impact the accuracy of data.

Details

Reference Services Review, vol. 41 no. 2
Type: Research Article
ISSN: 0090-7324

Article
Publication date: 25 October 2019

Eunhwa Yang and Ipsitha Bayapu

Abstract

Purpose

This paper aims to investigate data elements, transfer, gaps and the challenges to implement data analytics in facilities management. The goal is not to search for a definite solution but to gather necessary information, understand the challenges faced and develop a proper foundation for future study.

Design/methodology/approach

This paper used a case study approach with a qualitative method. The case of the Georgia Institute of Technology was investigated by having a semi-structured interview with six relevant personnel. The recorded interview content was analyzed and presented based on six work processes.

Findings

Higher education institutions are taking initiatives but facing challenges in implementing data analytics. There were 36 software tools used to manage different aspects of facilities at Georgia Tech. Identified data elements and data processing indicated that major challenges for data-driven decision-making were inconsistency in data input and structure, the issue of interoperability among different software tools and a lack of software training.

Research limitations/implications

The authors only interviewed individuals who work closely with data gathering, transfer and processing. Thus, the study did not explore the perspective of individuals in the leadership level or the user group level.

Originality/value

Facilities management departments in higher education institutions perform multi-disciplinary functions, including building automation, continuous commissioning and preventative maintenance, all of which are data- and technology-intensive. Managing this overwhelming amount of information is often a challenge, but well-planned data analytics can be used to draw keen insights about any aspect of facilities management and operations and assist in evidence-based decision-making.

Details

Facilities, vol. 38 no. 3/4
Type: Research Article
ISSN: 0263-2772

Article
Publication date: 15 January 2019

Elizabeth Shepherd, Jenny Bunn, Andrew Flinn, Elizabeth Lomas, Anna Sexton, Sara Brimble, Katherine Chorley, Emma Harrison, James Lowry and Jessica Page

Abstract

Purpose

Open government data and access to public sector information are commonplace, yet little attention has focussed on the essential roles and responsibilities in practice of the information and records management professionals who enable public authorities to deliver open data to citizens. This paper aims to consider the perspectives of open government and information practitioners in England on the procedural and policy implications of open data across local public authorities.

Design/methodology/approach

Using four case studies from different parts of the public sector in England (local government, higher education, National Health Service and hospital trust), the research involved master’s level students in the data collection and analysis, alongside academics, thus enhancing the learning experience of students.

Findings

There was little consistency in the location of responsibility for open government data policy, the range of job roles involved or the organisational structures, policy and guidance in place to deliver this function. While this may reflect the organisational differences and professional concerns, it makes it difficult to share best practice. Central government policy encourages public bodies to make their data available for re-use. However, local practice is very variable and perhaps understandably responds more to local organisational strategic and resource priorities. The research found a lack of common metadata standards for open data, different choices about which data to open, problems of data redundancy, inconsistency and data integrity and a wide variety of views on the corporate and public benefits of open data.

Research limitations/implications

The research is limited to England and to non-national public bodies and only draws data from a small number of case studies.

Originality/value

The research contributes to the debate about emerging issues around the complexities of open government data and its public benefits, contributing to the discussions around technology-enabled approaches to citizen engagement and governance. It offers new insights into the interaction between open data and public policy objectives, drawing on the experience of local public sectors in England.

Details

Records Management Journal, vol. 29 no. 1/2
Type: Research Article
ISSN: 0956-5698

Article
Publication date: 1 November 2003

Marvin L. Brown and John F. Kros

Abstract

The data mining process deals significantly with prediction, estimation, classification, pattern recognition and the development of association rules. Therefore, the significance of the analysis depends heavily on the accuracy of the database and on the sample data chosen for model training and testing. Data mining is based upon searching the concatenation of multiple databases, which usually contain some amount of missing data along with a variable percentage of inaccurate data, pollution, outliers and noise. The issue of missing data must be addressed, since ignoring it can introduce bias into the models being evaluated and lead to inaccurate data mining conclusions. The objective of this research is to address the impact of missing data on the data mining process.
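
As one minimal illustration of handling missing data before mining (not necessarily the authors' method), mean imputation fills each gap with the average of the observed values for that field. The records and field names below are hypothetical.

```python
import statistics

# Toy records with missing values encoded as None; field names are invented.
rows = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 48000},
    {"age": 41, "income": None},
    {"age": 29, "income": 61000},
]

def mean_impute(rows, field):
    """Replace each missing value of `field` with the mean of observed values."""
    observed = [r[field] for r in rows if r[field] is not None]
    fill = statistics.mean(observed)
    return [dict(r, **{field: r[field] if r[field] is not None else fill})
            for r in rows]

for field in ("age", "income"):
    rows = mean_impute(rows, field)

print(rows[1]["age"])  # mean of the observed ages 34, 41 and 29
```

Dropping incomplete records is the other naive option; both can bias the trained models, which is exactly the concern the abstract raises.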

Details

Industrial Management & Data Systems, vol. 103 no. 8
Type: Research Article
ISSN: 0263-5577
