Search results
1 – 10 of over 2000

Hind Hamrouni, Fabio Grandi and Zouhaier Brahmia
Abstract
Purpose
A temporal XML database can become an inconsistent model of the represented reality after a retroactive update. Such an inconsistent state must be repaired by performing corrective actions (e.g. payment of arrears after a retroactive salary increase) either immediately (i.e. at inconsistency detection time) or in a deferred manner, at one or several chosen repair times, according to application requirements. The purpose of this work is to deal with deferred and multi-step repair of detected data inconsistencies.
Design/methodology/approach
A general approach is proposed for deferred and stepwise repair of inconsistencies that result from retroactive updates of currency data (e.g. the salary of an employee) in a valid-time or bitemporal XML database. The approach separates inconsistency repair from the inconsistency detection phase and deals with the execution of corrective actions, which also take into account an enterprise's business rules that define relationships between data.
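The abstract's own example of a corrective action, payment of arrears after a retroactive salary increase, can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the function name, parameters and monthly granularity are invented for the sketch.

```python
from datetime import date

def arrears(old_salary, new_salary, effective, paid_through):
    """Arrears owed after a retroactive raise: the number of whole months
    already paid at the old rate, from `effective` through `paid_through`
    (inclusive), times the per-month salary difference."""
    months = (paid_through.year - effective.year) * 12 \
             + (paid_through.month - effective.month) + 1
    return max(months, 0) * (new_salary - old_salary)
```

For instance, a raise from 3,000 to 3,300 made retroactively effective from January, when June has already been paid at the old rate, yields six months of arrears.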
Findings
Algorithms, methods and support data structures for deferred and multi-step inconsistency repair of currency data are presented. The feasibility of the approach has been shown through the development and testing of a system prototype, named Deferred-Repair Manager.
Originality/value
The proposed approach implements a new general and flexible strategy for repairing detected inconsistencies in a deferred manner and possibly in multiple steps, according to varying user requirements and to specifications that are customary in the real world.
Jörg Waitelonis, Nadine Ludwig, Magnus Knuth and Harald Sack
Abstract
Purpose
Linking Open Data (LOD) provides a vast amount of well structured semantic information, but many inconsistencies may occur, especially if the data are generated with the help of automated methods. Data cleansing approaches enable detection of inconsistencies and overhauling of affected data sets, but they are difficult to apply automatically. The purpose of this paper is to present WhoKnows?, an online quiz that generates different kinds of questionnaires from DBpedia data sets.
Design/methodology/approach
Besides its playfulness, WhoKnows? has been developed for the evaluation of property relevance ranking heuristics on DBpedia data, with the convenient side effect of detecting inconsistencies and doubtful facts.
Findings
The original purpose in developing WhoKnows? was to evaluate heuristics to rank LOD properties and thus obtain a measure of semantic relatedness between entities according to the properties by which they are linked. The presented approach is an efficient method to detect popular properties within a limited number of triples. Ongoing work continues on the development of sound property ranking heuristics for detecting the most relevant characteristics of entities.
Originality/value
WhoKnows? uses the approach of “Games with a Purpose” to detect inconsistencies in Linked Data and score properties to rank them for sophisticated semantic search scenarios.
Mihaela Dinsoreanu and Rodica Potolea
Abstract
Purpose
The purpose of this paper is to address the challenge of opinion mining in text documents to perform further analysis such as community detection and consistency control. More specifically, we aim to identify and extract opinions from natural language documents and to represent them in a structured manner to identify communities of opinion holders based on their common opinions. Another goal is to rapidly identify similar or contradictory opinions on a target issued by different holders.
Design/methodology/approach
For the opinion extraction problem, we opted for a supervised approach focusing on the feature selection problem to improve our classification results. For the community detection problem, we rely on the Infomap community detection algorithm and the multi-scale community detection framework, applied to a graph representation built from the available opinions and social data.
Findings
The classification performance in terms of precision and recall was significantly improved by adding a set of “meta-features” based on grouping rules over certain parts of speech (POS) instead of the actual words. For the evaluation of the community detection feature, we used two quality metrics: network modularity and normalized mutual information (NMI). We evaluated seven one-target similarity functions and ten multi-target aggregation functions and concluded that linear functions perform poorly on data sets with multiple targets, while functions that calculate the average similarity have greater resilience to noise.
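One of the quality metrics mentioned above, normalized mutual information between two community assignments, can be computed as below. This is a generic NMI sketch over flat label arrays, not the authors' evaluation code.

```python
import numpy as np
from math import log, sqrt

def nmi(a, b):
    """Normalized mutual information between two flat community
    assignments (one label per node), in [0, 1]."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for x in np.unique(a):
        for y in np.unique(b):
            pxy = np.mean((a == x) & (b == y))   # joint probability
            px, py = np.mean(a == x), np.mean(b == y)
            if pxy > 0:
                mi += pxy * log(pxy / (px * py))
    # entropies of each partition, for normalization
    ha = -sum(np.mean(a == x) * log(np.mean(a == x)) for x in np.unique(a))
    hb = -sum(np.mean(b == y) * log(np.mean(b == y)) for y in np.unique(b))
    denom = sqrt(ha * hb)
    return mi / denom if denom > 0 else 0.0
```

Identical partitions score 1 even under label renaming; statistically independent partitions score 0.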
Originality/value
Although our solution relies on existing approaches, we adapted and integrated them in an efficient manner. Based on the initial experimental results, we integrated original enhancements that further improved performance.
Edelweis Rohrer, Regina Motz and Alicia Diaz
Abstract
Purpose
Web site recommendation systems help users obtain high-quality information. Modelling a recommendation system involves combining many features: quality metrics, quality criteria, recommendation criteria, the user profile and specific domain concepts, among others. When a recommendation system is specified, the correct interrelation of all of these features must be guaranteed. The purpose of this paper is to model a web site quality‐based recommendation system as an ontology network.
Design/methodology/approach
In this paper, the authors propose an ontology-network-based process for modelling web site recommendation. The ontology network conceptualizes the different domains (web site, quality assurance, user context, recommendation criteria, specific domain) as a set of interrelated ontologies. In particular, the approach is illustrated for the health domain.
Findings
This work introduces the semantic relationships used to construct the ontology network. Moreover, it shows the usefulness of the ontology network for detecting possible inconsistencies when specifying recommendation criteria.
Originality/value
Recommendation systems based on ontologies that model the user profile and the domain of resources to be recommended are quite common. However, it is uncommon to find models that explicitly represent the criteria used by recommender systems, that express the quality dimensions of the resources to which those criteria are applied, and that consider the user context at the moment of the query.
Pei-Ju Lee, Peng-Sheng You, Yu-Chih Huang and Yi-Chih Hsieh
Abstract
Purpose
Historical data usually consist of overlapping reports, and these reports may contain inconsistent data, which can cause a query to return incorrect results. Moreover, users who issue the query may not learn of this inconsistency even after a data cleaning process (e.g. schema matching or data screening). The inconsistency can exist in different types of data, such as temporal or spatial data. Therefore, this paper aims to introduce an information fusion method that can detect data inconsistency in the early stages of data fusion.
Design/methodology/approach
This paper introduces an information fusion method for multi-robot operations, for which fusion is conducted continuously. When the environment is explored by multiple robots, the robot logs can provide more information about the number and coordination of targets or victims. The information fusion method proposed in this paper generates an underdetermined linear system of overlapping spatial reports and estimates the case values. Then, the least squares method is used for the underdetermined linear system. By using these two methods, the conflicts between reports can be detected and the values of the intervals at specific times or locations can be estimated.
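The core step described above, a minimum-norm least-squares solution of an underdetermined system assembled from overlapping reports, can be sketched with NumPy. The report layout below is a made-up illustration, not the paper's data.

```python
import numpy as np

# Hypothetical illustration: three overlapping robot reports, each
# summing target counts over a subset of four grid cells, yield an
# underdetermined linear system A x = b (3 equations, 4 unknowns).
A = np.array([
    [1.0, 1.0, 0.0, 0.0],   # report 1 covers cells 0 and 1
    [0.0, 1.0, 1.0, 0.0],   # report 2 covers cells 1 and 2
    [0.0, 0.0, 1.0, 1.0],   # report 3 covers cells 2 and 3
])
b = np.array([3.0, 5.0, 4.0])

# np.linalg.lstsq returns the minimum-norm least-squares estimate
# of the per-cell values for an underdetermined system.
x, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)

# Per-report residuals: a large entry flags a report that conflicts
# with the others.
conflict = np.abs(A @ x - b)
```

When the reports are mutually consistent, as here, the residuals vanish; a conflicting report would leave a visibly nonzero residual entry.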
Findings
The proposed information fusion method was tested for inconsistency detection and target projection of spatial fusion in sensor networks. The approach examined sensor data values from simulations in which robots perform search tasks. The system can be extended to data warehouses with heterogeneous data sources to achieve completeness, robustness and conciseness.
Originality/value
Little research has been devoted to linear systems for information fusion in mobile robot tasks. The proposed information fusion method minimizes the time and comparison cost of data fusion and also minimizes the probability of errors from incorrect results.
Abstract
Purpose
This paper provides an introduction to research in the field of image forensics and asks whether advances in the field of algorithm development and digital forensics will facilitate the examination of images in the scientific publication process in the near future.
Design/methodology/approach
This study looks at the status quo of image analysis in the peer review process and evaluates selected articles from the field of Digital Image and Signal Processing that have addressed the discovery of copy-move, cut-paste and erase-fill manipulations.
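One family of manipulations surveyed here, copy-move, can be illustrated with a deliberately naive exact-block matcher. This toy is an assumption-laden sketch, not any tool evaluated in the study: practical detectors use robust features so matches survive recompression.

```python
import numpy as np

def copy_move_blocks(img, block=8):
    """Naive copy-move detection sketch: hash every block x block patch
    of a grayscale image and report coordinate pairs of exact duplicates.
    Real detectors use robust features (e.g. DCT coefficients or
    keypoints) to survive recompression; this only finds verbatim copies."""
    seen, dupes = {}, []
    h, w = img.shape
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                dupes.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return dupes
```

Planting an exact copy of one region elsewhere in an image makes the matcher report the source and destination coordinates of the duplicated block.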
Findings
The article focuses on forensic research and shows that, despite numerous efforts, there is still no applicable tool for the automated detection of image manipulation. Nonetheless, the status quo for examining images in scientific publications remains visual inspection and will likely remain so for the foreseeable future. This study summarizes aspects that make automated detection of image manipulation difficult from a forensic research perspective.
Research limitations/implications
Results of this study underscore the need for a conceptual reconsideration of the problems involving image manipulation with a view toward the need for interdisciplinary collaboration in conjunction with library and information science (LIS) expertise on information integrity.
Practical implications
This study not only identifies a number of conceptual challenges but also suggests areas of action that the scientific community can address in the future.
Originality/value
Image manipulation is often discussed in isolation as a technical challenge. This study takes a more holistic view of the topic and demonstrates the necessity for a multidisciplinary approach.
Emad Khorshid, Abdulaziz Alfadli and Abdulazim Falah
Abstract
Purpose
The purpose of this paper is to present numerical experimentation of three constraint detection methods to explore their main features and drawbacks in infeasibility detection during the design process.
Design/methodology/approach
Three detection methods (deletion filter, additive method and elasticity method) are used to find the minimum intractable subsystem of constraints in conflict. These methods are tested with four enhanced NLP solvers (sequential quadratic programming, multi-start sequential quadratic programming, a global optimization solver and a genetic algorithm method).
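The first of the three methods, the deletion filter, can be sketched generically: tentatively drop each constraint and make the drop permanent whenever the remainder stays infeasible. The one-variable interval oracle below is an illustrative assumption, not one of the paper's NLP test problems.

```python
def deletion_filter(constraints, is_feasible):
    """Deletion filter sketch: shrink an infeasible constraint set toward
    an irreducible infeasible subset. Tentatively delete each constraint;
    if the remainder is still infeasible, the deletion becomes permanent."""
    kept = list(constraints)
    i = 0
    while i < len(kept):
        trial = kept[:i] + kept[i + 1:]
        if not is_feasible(trial):
            kept = trial      # constraint i was not needed for infeasibility
        else:
            i += 1            # constraint i is part of the conflict; keep it
    return kept

# Toy feasibility oracle: each constraint is a (lo, hi) bound on a single
# variable; a set is feasible iff the intervals intersect.
def is_feasible(cs):
    if not cs:
        return True
    return max(lo for lo, _ in cs) <= min(hi for _, hi in cs)
```

Applied to the bounds (0, 10), (2, 8), (9, 12) and (3, 7), the filter isolates the genuinely conflicting pair (9, 12) and (3, 7).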
Findings
The additive filtering method, with both the multi-start sequential quadratic programming and the genetic algorithm solvers, is the most efficient method in terms of computation time and accuracy of infeasibility detection. Meanwhile, the elasticity method has the worst performance.
Research limitations/implications
The research has been carried out for only inequality constraints and continuous design variables. This research work could be extended to develop computer-aided graphical user interface with the capability of including equality constraints and discrete variables.
Practical implications
The proposed methods have great potential to guide designers in detecting infeasibility in ill-posed, complex design problems.
Originality/value
The application of the proposed infeasibility detection methods with their four enhanced solvers on several mechanical design problems reduces the number of constraints to be checked from full set to a much smaller subset.
Kerstin Altmanninger, Martina Seidl and Manuel Wimmer
Abstract
Purpose
The purpose of this paper is to provide a feature‐based characterization of version control systems (VCSs), providing an overview about the state‐of‐the‐art of versioning systems dedicated to modeling artifacts.
Design/methodology/approach
Based on a literature study of existing approaches, a description of the features of versioning systems is established. Special focus is set on three‐way merging, which is an integral component of optimistic versioning. This characterization is applied to current model versioning systems, which allows the derivation of challenges in this research area.
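Three-way merging as characterized here can be sketched for flat key-value models. Treating model elements as dictionary entries is a simplifying assumption for illustration; real model mergers operate on graph-structured artifacts.

```python
def three_way_merge(base, left, right):
    """Merge two descendants of `base` (dicts of element -> value).
    A change applied on only one side wins; identical changes merge
    cleanly; diverging changes on the same element conflict."""
    merged, conflicts = {}, []
    for k in set(base) | set(left) | set(right):
        b, l, r = base.get(k), left.get(k), right.get(k)
        if l == r:
            v = l                 # both sides agree (change or no change)
        elif l == b:
            v = r                 # only the right side changed
        elif r == b:
            v = l                 # only the left side changed
        else:
            conflicts.append(k)   # both sides changed, differently
            v = b                 # keep base value, flag for manual merge
        if v is not None:
            merged[k] = v
    return merged, conflicts
```

Deleting an element on one side while the other leaves it untouched removes it from the merge; changing the same element differently on both sides is reported as a conflict for manual resolution.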
Findings
The results of the evaluation show that several challenges need to be addressed in future developments of VCSs and merging tools in order to allow the parallel development of model artifacts.
Practical implications
Making model‐driven engineering (MDE) a success requires supporting the parallel development of model artifacts as is done nowadays for text‐based artifacts. Therefore, model versioning capabilities are a must for leveraging MDE in practice.
Originality/value
The paper gives a comprehensive overview of the collaboration features of VCSs for software engineering artifacts in general, discusses the state‐of‐the‐art of systems for model artifacts and, finally, lists urgent challenges that have to be considered in future model versioning systems for realizing MDE in practice.
Ian Stott, David Sanders and Giles Tewkesbury
Abstract
Describes a new reliable low‐cost ultrasonic ranging system to assist in steering a powered wheelchair. Detection algorithms have been created and implemented on a microcontroller-based stand‐alone system suitable for a tele‐operated vehicle. Detection uses the gradient of the echo envelope and is resistant to noise and inconsistencies in the detection circuitry. The sensor array was treated as separate sensors working independently, so the system could quickly gather separate sets of range information; these sets were overlaid onto a 2D grid array. The new system is cheaper and simpler than available systems for powered wheelchairs.
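The gradient-based detection idea can be sketched in a few lines. The threshold value, function name and signal shape below are illustrative assumptions, not the authors' embedded implementation.

```python
import numpy as np

def first_echo(envelope, grad_thresh):
    """Return the index of the first sample whose echo-envelope gradient
    exceeds `grad_thresh`, or None if no echo is found. Thresholding the
    gradient rather than the raw amplitude makes detection less sensitive
    to a slowly drifting noise floor."""
    grad = np.gradient(np.asarray(envelope, dtype=float))
    hits = np.flatnonzero(grad > grad_thresh)
    return int(hits[0]) if hits.size else None
```

A slow baseline drift produces a near-zero gradient and triggers nothing, while the sharp rising edge of an echo crosses the gradient threshold immediately.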
Lukman E. Mansuri and D.A. Patel
Abstract
Purpose
Heritage is the latent part of a sustainable built environment. Conservation and preservation of heritage is one of the United Nations' (UN) sustainable development goals. Many social and natural factors seriously threaten heritage structures by deteriorating and damaging their original fabric. Therefore, regular visual inspection of heritage structures is necessary for their conservation and preservation. Conventional practice relies on manual inspection, which demands considerable time and human resources. Inspection therefore calls for an innovative approach that is cheaper, faster, safer and less prone to human error than manual inspection. This study aims to develop an automatic visual inspection system for the built heritage.
Design/methodology/approach
The artificial intelligence-based automatic defect detection system is developed using the faster R-CNN (faster region-based convolutional neural network) object detection model. Images of heritage structures at the English and Dutch cemeteries of Surat (India) were captured with a digital camera to prepare the image data set, which was used for training, validation and testing of the automatic defect detection model. During validation, the model's optimum detection accuracy was recorded as 91.58% for three types of defect: “spalling,” “exposed bricks” and “cracks.”
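Detectors in the faster R-CNN family emit many heavily overlapping candidate boxes per defect, which a standard post-processing step, greedy non-maximum suppression, reduces to one box each. The generic sketch below is not the study's code; the box format and threshold are assumptions.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: visit boxes by descending score
    and keep a box only if it overlaps every already-kept box by at
    most `thresh`."""
    keep = []
    for i in np.argsort(scores)[::-1]:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

Two near-coincident detections of the same defect collapse to the higher-scoring one, while a detection elsewhere in the image survives untouched.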
Findings
This study develops an automatic web-based visual inspection system for heritage structures using the faster R-CNN and demonstrates the detection of spalling, exposed bricks and cracks in heritage structures. A comparison of the conventional (manual) and the developed automatic inspection system reveals that the automatic system requires less time and staff; routine inspection can therefore be faster, cheaper, safer and more accurate than with the conventional method.
Practical implications
The study presented here can improve the inspection of built heritage by reducing inspection time and cost, eliminating the chance of human error and accidents, and providing accurate and consistent information. This study attempts to ensure the sustainability of the built heritage.
Originality/value
To ensure the sustainability of built heritage, this study presents an artificial intelligence-based methodology for developing an automatic visual inspection system. An automatic web-based visual inspection system for the built heritage has not been reported in previous studies.