Search results
1 – 10 of over 3000
Abstract
Purpose
The paper addresses the issue of change in Wikidata ontology by exposing the role of the socio-epistemic processes that take place inside the infrastructure. The subject of the study was the process of extending the Wikidata ontology with a new property as an example of the interplay between the social and technical components of the Wikidata infrastructure.
Design/methodology/approach
In this study, an interpretative approach to the evolution of the Wikidata ontology was used. The interpretation framework was a process-centric approach to changes in the Wikidata ontology. The extension of the Wikidata ontology with a new property was considered a socio-epistemic process where multiple agents interact for epistemic purposes. The decomposition of this process into three stages (initiation, knowledge work and closure) allowed us to reveal the role of the institutional structure of Wikidata in the evolution of its ontology.
Findings
This study has shown that the modification of the Wikidata ontology is an institutionalized process where community-accepted regulations and practices must be applied. These regulations come from the institutional structure of the Wikidata community, which sets the normative patterns for both the process and social roles and responsibilities of the involved agents.
Originality/value
The results of this study enhance our understanding of the evolution of the collaboratively developed Wikidata ontology by exposing the role of socio-epistemic processes, division of labor and normative patterns.
Peter Haase, Johanna Völker and York Sure
Abstract
Purpose
This paper presents a framework for ontology evolution tailored to Digital Libraries, which makes use of two different sources for change detection and propagation, the usage of ontologies by users and the changes of available data.
Design/methodology/approach
After presenting the logical architecture of the evolution framework, we first illustrate how to deal with usage‐driven changes, that is changes derived from the actual usage of ontologies. Second, we describe the generation of data‐driven ontology changes based on the constant flow of documents coming into digital libraries.
Findings
The proposed framework for ontology evolution, which is currently applied and evaluated in case studies, significantly reduces the costs of ontology updates and improves the quality of the ontology with respect to the users' requirements.
Practical implications
The management of dynamic knowledge is crucial for many knowledge management applications. Our approach for usage‐driven and data‐driven change discovery not only assures the consistency of ontologies modeling dynamic knowledge, but also reduces the burden of manual ontology engineering.
Originality/value
This paper presents the first approach towards a common framework for ontology evolution based on usage‐driven and data‐driven change discovery.
Denny Vrandečić, Sofia Pinto, Christoph Tempich and York Sure
Abstract
Purpose
Aims to present the ontology engineering methodology DILIGENT, a methodology focussing on the evolution of ontologies instead of the initial design, thus recognizing that knowledge is a tangible and moving target.
Design/methodology/approach
First describes the methodology as a whole, then details one of the five main steps of DILIGENT. The second part describes case studies, either already performed or planned, and what was learned (or is expected to be learned) from them.
Findings
The case studies revealed the strengths and weaknesses of DILIGENT. During the evolution of ontologies, arguments need to be exchanged about the suggested changes. Identifies the kinds of arguments that work best for the discussion of ontology changes.
Research implications
DILIGENT recognizes ontology engineering methodologies like OnToKnowledge or Methontology as proven useful for the initial design, but expands them with its strong focus on the user‐centric further development of the ontology and the provided integration of automatic agents in the process of ontology evolution.
Practical implications
DILIGENT distils the experience from a number of case studies and offers the knowledge manager a methodology for working in an ever‐changing environment.
Originality/value
DILIGENT is the first methodology to focus not on the initial development of the ontology, but on users and their usage of the ontology, and on the changes they introduce. It takes the users' own view seriously and enables feedback towards the evolution of the ontology, stressing the ontology's role as a shared conceptualisation.
Sidi Mohamed Benslimane, Mimoun Malki and Djelloul Bouchiha
Abstract
Purpose
Web applications are subject to continuous changes and rapid evolution triggered by increasing competition, especially in commercial domains such as electronic commerce. Unfortunately, they are usually implemented without producing any useful documentation for subsequent maintenance and evolution. As a result, the maintenance of such systems becomes a challenging problem as the complexity of the web application grows. Reverse engineering has been heralded as one of the most promising technologies to support effective web application maintenance. This paper aims to present a reverse engineering approach that helps in understanding existing undocumented web applications so that they can be maintained or evolved.
Design/methodology/approach
The proposed approach generates a conceptual schema from a given domain ontology by applying a set of reverse engineering transformation rules. The process consists of four phases: extracting useful information; identifying a set of ontological constructs representing the concepts of interest; enriching the identified set with additional constructs; and finally deriving a conceptual schema.
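The final derivation phase can be sketched as a simple mapping from ontological constructs to conceptual-schema elements. The class and property names below are hypothetical placeholders, and the real approach applies a much richer rule set than this one-rule-per-construct sketch.

```python
# Hypothetical domain-ontology fragment: classes with datatype
# properties (attributes) and object properties (links to other classes).
ontology = {
    "Customer": {"datatype": ["name", "email"], "object": [("places", "Order")]},
    "Order":    {"datatype": ["date", "total"], "object": []},
}

def to_conceptual_schema(onto):
    """Map each ontology class to an entity with attributes; object
    properties become relationships between entities."""
    entities = {cls: props["datatype"] for cls, props in onto.items()}
    relationships = [(cls, rel, target)
                     for cls, props in onto.items()
                     for rel, target in props["object"]]
    return entities, relationships
```

A designer could then review the resulting entities and relationships instead of modeling the domain from scratch.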
Findings
The advantage of using an ontology for conceptual data modeling is the reusability of domain knowledge. As a result, the conceptual data model can be built faster, more easily and with fewer errors than by creating it in the usual way. Designers can use the extracted conceptual schema to gain a better understanding of web applications and to assist in their maintenance.
Originality/value
The strong point of this approach is that it relies on a very rich semantic reference, namely the domain ontology. However, a straightforward transformation of all elements from a domain ontology into a conceptual data model is not possible, because an ontology is semantically richer than a conceptual data model.
Prashant Kumar Sinha, Biswanath Dutta and Udaya Varadarajan
Abstract
Purpose
The current work provides a framework for the ranking of ontology development methodologies (ODMs).
Design/methodology/approach
The framework is a step-by-step approach reinforced by an array of ranking features and a quantitative tool, the weighted decision matrix. An extensive literature investigation revealed a set of aspects that regulate ODMs. These aspects, together with existing state-of-the-art assessments, guided the extraction of the features. To assign a weight to each feature, an online survey was conducted to gather evidence from the Semantic Web community. To demonstrate the framework, the authors perform a pilot study using a collection of domain ODMs reported in 2000–2019.
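The weighted-decision-matrix step can be sketched as follows. All feature names, weights and per-methodology scores here are hypothetical placeholders, not the values used in the study.

```python
# Hypothetical feature weights (e.g. elicited from a community survey);
# weights sum to 1.0.
FEATURES = {"collaboration": 0.3, "tool_support": 0.2,
            "documentation": 0.25, "evaluation": 0.25}

# Hypothetical per-methodology feature scores on a 0-5 scale.
SCORES = {
    "NeOn":         {"collaboration": 5, "tool_support": 4,
                     "documentation": 4, "evaluation": 5},
    "Methontology": {"collaboration": 2, "tool_support": 3,
                     "documentation": 5, "evaluation": 3},
}

def rank(scores, weights):
    """Return (methodology, weighted score) pairs, best first."""
    totals = {m: sum(weights[f] * s[f] for f in weights)
              for m, s in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

The matrix makes the ranking reproducible: changing a weight and re-running `rank` immediately shows how sensitive the ordering is to that feature.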
Findings
A review of the state of the art revealed that ODMs have been collected, surveyed and assessed to prescribe the most suitable ODM for ontology development, but none of the existing studies provides a ranking mechanism for ODMs. The recommended framework overcomes this limitation and offers a systematic and uniform way of ranking ODMs. The pilot study yielded NeOn as the top-ranked ODM of the past two decades.
Originality/value
No prior work in the literature has investigated the ranking of ODMs; hence, this is a first-of-its-kind work in the area of ODM research. The framework supports identifying the topmost ODMs in the literature that possess a substantial set of features for ontology development. It also enables the selection of the best possible ODM for ontology development.
Abstract
Purpose
Applied computational ontologies (ACOs) are increasingly used in data science domains to produce semantic enhancement and interoperability among divergent data. The purpose of this paper is to propose and implement a methodology for researching the sociotechnical dimensions of data-driven ontology work, and to show how applied ontologies are communicatively constituted with ethical implications.
Design/methodology/approach
The underlying idea is to use a data assemblage approach for studying ACOs and the methods they use to add semantic complexity to digital data. The author uses a mixed methods approach, providing an analysis of the widely used Basic Formal Ontology (BFO) through digital methods and visualizations, and presents historical research alongside unstructured interview data with leading experts in BFO development.
Findings
The author found that ACOs are products of communal deliberation and decision making across institutions. While ACOs are beneficial for facilitating semantic data interoperability, ACOs may produce unintended effects when semantically enhancing data about social entities and relations. ACOs can have potentially negative consequences for data subjects. Further critical work is needed for understanding how ACOs are applied in contexts like the semantic web, digital platforms, and topic domains. ACOs do not merely reflect social reality through data but are active actors in the social shaping of data.
Originality/value
The paper presents a new approach for studying ACOs, the social impact of ACO work, and describes methods that may be used to produce further applied ontology studies.
Bernard Rothenburger and Daniel Galarreta
Abstract
Purpose
The aim of this paper is to provide a conceptual and methodological framework in order to prevent knowledge loss in a long duration space project.
Design/methodology/approach
Starting from risk management, the paper considers existing factors that contribute to the success of the mission, such as dependability and safety, and then argues, using a multi‐viewpoint approach, that risk analysis produces knowledge (not simply information or data). Then, the paper describes how the filtering of risky components of a technical documentation is performed. It is based on the confrontation of the vocabulary of the different documents to an ontology of “criticality” built by the authors. The paper also describes how the knowledge evolutions are detected and how the interpretation of these evolutions is carried out.
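The confrontation of document vocabulary with the criticality ontology can be sketched as a simple term-overlap filter. The criticality terms, document names and threshold below are hypothetical placeholders; the actual ontology of "criticality" built by the authors is far richer than a flat term set.

```python
# Hypothetical vocabulary drawn from a criticality ontology.
CRITICALITY_TERMS = {"failure", "overheating", "leak", "single-point"}

def risky_documents(docs, threshold=2):
    """Return names of documents whose vocabulary shares at least
    `threshold` terms with the criticality ontology vocabulary."""
    flagged = []
    for name, text in docs.items():
        overlap = CRITICALITY_TERMS & set(text.lower().split())
        if len(overlap) >= threshold:
            flagged.append(name)
    return flagged
```

Documents flagged this way would then be the candidates for closer inspection when tracking knowledge evolution across project stages.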
Findings
On a conceptual side, a general model of the design process is presented based on a multi‐viewpoints approach and characterised by a value system. On the practical side, an ontology of risk, used as a reference system in order to compare knowledge at different stages of a project, is described.
Research limitations/implications
Some difficulty arises when a very large body of documentation is addressed. Many of the detected evolution clues may be well known to everybody or of little importance.
Practical implications
The paper proposes a preventive strategy against knowledge loss in a long-duration project. By the final stage of a mission, project management should be able to identify the main differences between the technical culture of newcomers and that of the early designers, as reflected in the project documents.
Originality/value
The paper takes a multi‐disciplinary approach, bringing together different domains: space activity, statistics, knowledge management and linguistics.
José L. Navarro‐Galindo and José Samos
Abstract
Purpose
Nowadays, the use of WCMS (web content management systems) is widespread. The conversion of this infrastructure into its semantic equivalent (a semantic WCMS) is a critical issue, as this enables the benefits of the semantic web to be extended. The purpose of this paper is to present FLERSA (Flexible Range Semantic Annotation), a tool for flexible-range semantic annotation.
Design/methodology/approach
FLERSA is presented as a user‐centred annotation tool for web content expressed in natural language. The tool has been built to illustrate how a WCMS called Joomla! can be converted into its semantic equivalent.
Findings
The development of the tool shows that it is possible to build a semantic WCMS through a combination of semantic components and other resources, such as ontologies, and emerging technologies, including XML, RDF, RDFa and OWL.
Practical implications
The paper provides a starting‐point for further research in which the principles and techniques of the FLERSA tool can be applied to any WCMS.
Originality/value
The tool allows both manual and automatic semantic annotations, as well as providing enhanced search capabilities. For manual annotation, a new flexible range markup technique is used, based on the RDFa standard, to support the evolution of annotated Web documents more effectively than XPointer. For automatic annotation, a hybrid approach based on machine learning techniques (Vector‐Space Model + n‐grams) is used to determine the concepts that the content of a Web document deals with (from an ontology which provides a taxonomy), based on previous annotations that are used as a training corpus.
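The vector-space-model side of the hybrid automatic annotator can be sketched as cosine similarity between term-frequency vectors of a document and of each concept's training corpus. The concept labels and corpora below are hypothetical; the actual tool additionally uses n-grams and emits RDFa annotations.

```python
from collections import Counter
from math import sqrt

def tf_vector(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def annotate(doc, concept_corpora):
    """Return the ontology concept whose training corpus is most
    similar to the document."""
    return max(concept_corpora,
               key=lambda c: cosine(tf_vector(doc),
                                    tf_vector(concept_corpora[c])))
```

Previous manual annotations play the role of `concept_corpora` here, which is why the abstract describes them as a training corpus.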
Wenjing Wu, Caifeng Wen, Qi Yuan, Qiulan Chen and Yunzhong Cao
Abstract
Purpose
Learning from safety accidents and sharing safety knowledge have become an important part of accident prevention and of improving construction safety management. Because unstructured data in the construction industry are difficult to reuse, the knowledge they contain is hard to use directly for safety analysis. The purpose of this paper is to explore the construction of a construction safety knowledge representation model and a safety accident knowledge graph through deep learning methods, to extract construction safety knowledge entities with a BERT-BiLSTM-CRF model and to propose a data–knowledge–services data management model.
Design/methodology/approach
The ontology model for knowledge representation of construction safety accidents is constructed by integrating entity relations and evolution logic. Then, a database of safety incidents in the architecture, engineering and construction (AEC) industry is established based on the collected construction safety incident reports and related dispute cases. The construction method for the construction safety accident knowledge graph is studied, and the precision of the BERT-BiLSTM-CRF algorithm in information extraction is verified through comparative experiments. Finally, a safety accident report is used as an example to construct the AEC domain construction safety accident knowledge graph (AEC-KG), which provides a visual query knowledge service and verifies the operability of knowledge management.
Findings
The experimental results show that the combined BERT-BiLSTM-CRF algorithm has a precision of 84.52%, a recall of 92.35%, and an F1 value of 88.26% in named entity recognition from the AEC domain database. The construction safety knowledge representation model and safety incident knowledge graph realize knowledge visualization.
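The reported F1 value can be checked directly from the reported precision and recall, since F1 is their harmonic mean:

```python
# Values taken from the abstract above.
precision, recall = 84.52, 92.35

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 88.26
```

The three reported figures are therefore internally consistent.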
Originality/value
The proposed framework provides a new knowledge management approach to improve the safety management of practitioners and also enriches the application scenarios of knowledge graphs. On the one hand, it proposes a novel data application and knowledge management method for safety accident reports that integrates entity relations and evolution logic. On the other hand, the legal adjudication dimension is innovatively added to the knowledge graph in the construction safety field as a basis for post-incident disposal measures, which provides a reference for safety managers' decision-making in all aspects.