Search results

1 – 10 of over 4000
Article
Publication date: 11 July 2019

M. Priya and Aswani Kumar Ch.

The purpose of this paper is to merge ontologies so as to remove redundancy and improve storage efficiency. The number of ontologies developed in recent years is…

Abstract

Purpose

The purpose of this paper is to merge ontologies so as to remove redundancy and improve storage efficiency. The number of ontologies developed in recent years is noticeably high. With these ontologies available, the needed information can be readily obtained, but the presence of comparably varied ontologies raises the problem of reworking and merging data. Assessment of the existing ontologies exposes superfluous information; hence, ontology merging is the only solution. Existing ontology merging methods focus only on highly relevant classes and instances, whereas somewhat relevant classes and instances are simply dropped. Those somewhat relevant classes and instances may also be useful or relevant to the given domain. In this paper, we propose a new method called hybrid semantic similarity measure (HSSM)-based ontology merging, using formal concept analysis (FCA) and a semantic similarity measure.

Design/methodology/approach

The HSSM categorizes relevancy into three classes, namely highly relevant, moderately relevant and least relevant classes and instances. To achieve high efficiency in merging, HSSM performs both an FCA step and a semantic similarity step.
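
To make the banding idea concrete, here is a minimal sketch, assuming a Jaccard overlap over FCA-style attribute sets combined with a lexical name similarity; the function names, weights and thresholds below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of HSSM-style relevance banding; the weights and
# thresholds are illustrative assumptions, not the authors' values.
from difflib import SequenceMatcher

def hybrid_similarity(attrs_a, attrs_b, name_a, name_b, w_struct=0.6, w_lex=0.4):
    """Combine FCA-style attribute overlap (Jaccard) with lexical name similarity."""
    a, b = set(attrs_a), set(attrs_b)
    structural = len(a & b) / len(a | b) if (a or b) else 0.0
    lexical = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
    return w_struct * structural + w_lex * lexical

def band(score, high=0.75, moderate=0.40):
    """Map a similarity score to one of the three relevance bands."""
    if score >= high:
        return "highly relevant"
    if score >= moderate:
        return "moderately relevant"
    return "least relevant"

# Example: compare one class from each input ontology.
s = hybrid_similarity({"hasAuthor", "hasTitle"}, {"hasAuthor", "hasYear"},
                      "Publication", "Publications")
print(band(s))  # "moderately relevant" under these illustrative thresholds
```

Under such a scheme, a pair landing in the middle band would be retained rather than dropped, which is the behaviour the abstract argues existing merging methods lack.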

Findings

The experimental results showed that HSSM produced better results than existing algorithms in terms of similarity distance and time. An inconsistency check can also be performed for the dissimilar classes and instances within an ontology. The output ontology contains the set of highly relevant and moderately relevant classes and instances, as well as a few least relevant classes and instances, eventually leading to an exhaustive ontology for the particular domain.

Practical implications

In this paper, an HSSM method is proposed and used to merge academic social network ontologies; it is observed to be a powerful methodology compared with earlier studies. The HSSM approach can be applied to various domain ontologies and may offer researchers a new perspective.

Originality/value

To the best of the authors' knowledge, HSSM has not been applied to ontology merging in any previous study.

Details

Library Hi Tech, vol. 38 no. 2
Type: Research Article
ISSN: 0737-8831

Keywords

Article
Publication date: 23 October 2009

Ching‐Chieh Kiu and Chien‐Sing Lee

The purpose of this paper is to present an automated ontology mapping and merging algorithm, namely OntoDNA, which employs data mining techniques (FCA, SOM, K‐means) to resolve…

Abstract

Purpose

The purpose of this paper is to present an automated ontology mapping and merging algorithm, namely OntoDNA, which employs data mining techniques (FCA, SOM, K‐means) to resolve ontological heterogeneities among distributed data sources in organizational memory and subsequently generate a merged ontology to facilitate resource retrieval from distributed resources for organizational decision making.

Design/methodology/approach

OntoDNA employs unsupervised data mining techniques (FCA, SOM, K‐means) to resolve ontological heterogeneities and integrate distributed data sources in organizational memory. Unsupervised methods are needed as an alternative in the absence of prior knowledge for managing this knowledge. Given two ontologies to be merged as the input, the ontologies' conceptual pattern is discovered using FCA. String normalizations are then applied to transform their attributes in the formal context prior to lexical similarity mapping, and mapping rules are applied to reconcile the attributes. Subsequently, SOM and K‐means are applied for semantic similarity mapping based on the conceptual pattern discovered in the formal context, with K‐means reducing the problem size of the SOM clusters as validated by the Davies‐Bouldin index. The mapping rules are then applied to discover semantic similarity between ontological concepts in the clusters, and the ontological concepts of the target ontology are updated into the source ontology based on the merging rules. A merged ontology in the form of a concept lattice is produced.
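
The following sketch illustrates only the normalization-and-clustering idea behind this pipeline; it substitutes plain K-means over character n-gram vectors for the paper's SOM plus K-means stage, and every name, threshold and library call (scikit-learn, difflib) is an assumption made for illustration rather than OntoDNA's implementation.

```python
# Illustrative sketch: normalize concept names from two ontologies, cluster them,
# validate the clustering with the Davies-Bouldin index, then string-match within
# clusters. This is not OntoDNA; the SOM stage is deliberately omitted.
import re
from difflib import SequenceMatcher
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def normalize(name):
    """String normalization: split CamelCase and underscores, then lowercase."""
    name = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", name).replace("_", " ")
    return name.lower().strip()

source = ["JournalArticle", "ConferencePaper", "Author", "Publisher"]
target = ["Article", "Conference_Paper", "Writer", "PublishingHouse"]
names = [normalize(n) for n in source + target]

# Character n-gram vectors stand in for the formal-context attributes.
X = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit_transform(names)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Davies-Bouldin index:", davies_bouldin_score(X.toarray(), labels))

# Within each cluster, map source to target concepts by string similarity.
for i, s in enumerate(source):
    for j, t in enumerate(target):
        if labels[i] == labels[len(source) + j]:
            sim = SequenceMatcher(None, normalize(s), normalize(t)).ratio()
            if sim > 0.6:  # illustrative threshold
                print(f"map {s} -> {t} ({sim:.2f})")
```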

Findings

In experimental comparisons between PROMPT and the OntoDNA ontology mapping and merging tool based on precision, recall and f‐measure, the average mapping result for OntoDNA is 95.97 percent compared to PROMPT's 67.24 percent. In terms of recall, OntoDNA outperforms PROMPT on all paired ontologies except one, for which PROMPT fails to identify the mapping elements. OntoDNA significantly outperforms PROMPT owing to its use of FCA to capture attributes and the inherent structural relationships among concepts. The better performance of OntoDNA is due to the following reasons. First, semantic problems such as synonymy and polysemy are resolved prior to contextual clustering. Second, the unsupervised data mining techniques (SOM and K‐means) reduce the problem size. Third, string matching performs better than PROMPT's linguistic‐similarity matching in addressing semantic heterogeneity, which in this context also contributes to the OntoDNA results. String matching resolves concept names based on the similarity between concept names in each cluster for ontology mapping, whereas linguistic‐similarity matching resolves concept names based on concept‐representation structure and relations between concepts.
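
For reference, the precision, recall and f-measure quoted above are the standard alignment-evaluation measures; a minimal sketch, with a made-up gold alignment rather than the paper's data, is:

```python
# Precision, recall and F-measure over ontology mappings; the alignments
# below are invented examples, not data from the experiments.
def prf(found, gold):
    found, gold = set(found), set(gold)
    tp = len(found & gold)
    precision = tp / len(found) if found else 0.0
    recall = tp / len(gold) if gold else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

gold = {("Author", "Writer"), ("Article", "Paper"), ("Publisher", "Press")}
found = {("Author", "Writer"), ("Article", "Paper"), ("Journal", "Magazine")}
print(prf(found, gold))  # (0.666..., 0.666..., 0.666...)
```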

Originality/value

The OntoDNA automates ontology mapping and merging without the need for any prior knowledge to generate a merged ontology. String matching is shown to perform better than linguistic‐similarity matching in resolving concept names. The OntoDNA will be valuable for organizations interested in merging ontologies from distributed or different organizational memories. For example, an organization might want to merge its organization‐specific ontologies with community standard ontologies.

Details

VINE, vol. 39 no. 4
Type: Research Article
ISSN: 0305-5728

Keywords

Article
Publication date: 16 October 2009

Junwu Zhu, Jiandong Wang and Bin Li

The purpose of this paper is to integrate distributed ontologies on the web and clarify the structure of the integrated ontology.


Abstract

Purpose

The purpose of this paper is to integrate distributed ontologies on the web and clarify the structure of the integrated ontology.

Design/methodology/approach

A formal method based on concept lattices is introduced as a mechanism for forming a more general semantic level. By checking the extension and intension of each concept, the method first extracts the concept pairs satisfying inclusion relations from the Cartesian product of concepts in the distributed ontologies, and then constructs a concept lattice according to these concept pairs. An algorithm to reduce redundant relations is also proposed to clarify the structure of the integrated ontology.
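
The two steps described here, extracting inclusion pairs by comparing concept extensions and then removing relations implied by transitivity, can be sketched as follows; the concept names and extents are invented for illustration, and this is not the authors' published algorithm.

```python
# Sketch: derive inclusion relations from concept extents across ontologies,
# then drop edges implied by transitivity (redundant relations). Illustrative only.
from itertools import product

# Hypothetical extents (sets of instances) for concepts from two ontologies.
extents = {
    "Publication":  {"p1", "p2", "p3", "p4"},
    "Article":      {"p1", "p2"},
    "Review":       {"p3"},
    "ShortArticle": {"p1"},
}

# Step 1: inclusion pairs (a, b) such that extent(a) is a subset of extent(b).
edges = {(a, b) for a, b in product(extents, extents)
         if a != b and extents[a] <= extents[b]}

# Step 2: remove redundant pairs (a, c) already implied by some (a, b) and (b, c).
redundant = {(a, c) for (a, b) in edges for (b2, c) in edges
             if b == b2 and (a, c) in edges}
reduced = edges - redundant

print(sorted(reduced))
# [('Article', 'Publication'), ('Review', 'Publication'), ('ShortArticle', 'Article')]
```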

Findings

The experiments demonstrate the effectiveness of the proposed method in reducing redundant relations, with the Nir‐to‐Ncr ratio falling from 3.13 to 1.05.

Research limitations/implications

Instances of certain concepts are not given completely on the web, so it is difficult to check the extensions of different concepts.

Practical implications

The approach provides a very useful method of integrating distributed ontologies on the web.

Originality/value

Compared with existing methods, this formal method can be performed automatically by a program without any human intervention, and it can extract the inclusion relations between concepts in distributed ontologies completely.

Details

Kybernetes, vol. 38 no. 10
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 13 May 2024

Marcin Roszkowski

The paper addresses the issue of change in Wikidata ontology by exposing the role of the socio-epistemic processes that take place inside the infrastructure. The subject of the…

Abstract

Purpose

The paper addresses the issue of change in Wikidata ontology by exposing the role of the socio-epistemic processes that take place inside the infrastructure. The subject of the study was the process of extending the Wikidata ontology with a new property as an example of the interplay between the social and technical components of the Wikidata infrastructure.

Design/methodology/approach

In this study, an interpretative approach to the evolution of the Wikidata ontology was used. The interpretation framework was a process-centric approach to changes in the Wikidata ontology. The extension of the Wikidata ontology with a new property was considered a socio-epistemic process where multiple agents interact for epistemic purposes. The decomposition of this process into three stages (initiation, knowledge work and closure) allowed us to reveal the role of the institutional structure of Wikidata in the evolution of its ontology.

Findings

This study has shown that the modification of the Wikidata ontology is an institutionalized process where community-accepted regulations and practices must be applied. These regulations come from the institutional structure of the Wikidata community, which sets the normative patterns for both the process and social roles and responsibilities of the involved agents.

Originality/value

The results of this study enhance our understanding of the evolution of the collaboratively developed Wikidata ontology by exposing the role of socio-epistemic processes, division of labor and normative patterns.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 12 February 2018

Parvin Hashemi, Ameneh Khadivar and Mehdi Shamizanjani

The purpose of this paper is to develop a new ontology for knowledge management (KM) technologies, determining the relationships between these technologies and classification of…


Abstract

Purpose

The purpose of this paper is to develop a new ontology for knowledge management (KM) technologies, determining the relationships between these technologies and classifying them.

Design/methodology/approach

The study applies the NOY methodology – named after Natalya F. Noy, who initiated it. Protégé software and the web ontology language (OWL) are used for building the ontology. The presented ontology is evaluated against abbreviation and consistency criteria and through knowledge retrieval of KM technologies by experts.
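
For readers unfamiliar with the tooling mentioned here, the fragment below shows, purely as an assumed illustration (hypothetical IRI and class names, using the owlready2 Python package rather than Protégé), how a small slice of such a class hierarchy can be expressed in OWL programmatically.

```python
# Hypothetical OWL class hierarchy for KM technologies, built with owlready2.
# The IRI and class names are invented; this is not the ontology from the paper.
from owlready2 import get_ontology, Thing

onto = get_ontology("http://example.org/km-technologies.owl")  # hypothetical IRI

with onto:
    class KMTechnology(Thing):                 # top-level domain concept
        pass
    class KnowledgeSharingTool(KMTechnology):  # hypothetical subclass
        pass
    class Wiki(KnowledgeSharingTool):          # hypothetical leaf class
        pass

onto.save(file="km-technologies.owl", format="rdfxml")
```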

Findings

All the main concepts in the scope of KM technologies are extracted from the existing literature. Of 241 terms, 49 are domain concepts, eight describe taxonomic and non-taxonomic relations, one relates to a data property and 183 are instances. These terms are used to develop the KM technologies ontology based on three factors: facilitating KM processes, supporting KM strategies and the position of a technology in the KM technology stage model. The presented ontology creates a common understanding in the field of KM technologies.

Research limitations/implications

There is a lack of specific documentation about the logic behind decision making and the prioritization of criteria when choosing KM technologies.

Practical implications

Uploading the presented ontology to the web provides a platform for knowledge sharing between experts from around the world. In addition, it helps in choosing KM technologies based on KM processes and KM strategy.

Originality/value

Among the many categorizations of KM technologies in the literature, none classifies them according to several criteria simultaneously. This paper contributes to filling this gap by considering KM processes, KM strategy and stages of growth for KM technologies simultaneously when choosing KM technologies. Moreover, no formal ontology of KM technologies exists; this study proposes one.

Article
Publication date: 3 June 2024

Mariam Ben Hassen, Mohamed Turki and Faiez Gargouri

This paper introduces the problem of SBP modeling. Our objective is to provide a conceptual analysis related to the concept of SBP. This facilitates, on the one hand…

Abstract

Purpose

This paper introduces the problem of SBP modeling. Our objective is to provide a conceptual analysis related to the concept of the sensitive business process (SBP). This facilitates, on the one hand, easier understanding by business analysts and end-users and, on the other hand, the integration of the new specific concepts relating to the SBP/BPM-KM domains into the BPMN meta-model (OMG, 2013).

Design/methodology/approach

First, we propose a rigorous characterization of SBPs (sensitive business processes), which distinguishes them from classic, structured and conventional BPs. Second, we propose a multidimensional classification of SBP modeling aspects and requirements in order to develop expressive, comprehensive and rigorous models. We then present an in-depth study of the different modeling approaches and languages, analyzing their expressiveness and their ability to explicitly represent the new specific requirements of SBP modeling; from this study, we select BPMN 2.0, the best positioned today, as the standard best suited for SBP representation. Finally, we propose a semantically rich conceptualization of an SBP organized in a core ontology.

Findings

We defined a rigorous conceptual specification for this type of BP, organized in a multi-perspective formal ontology, the Core Ontology of Sensitive Business Processes (COSBP). This reference ontology will be used to define a generic BP meta-model (BPM4KI) further specifying SBPs. The objective is to obtain an enriched consensus model covering all the generic concepts, semantic relationships and properties needed for the exploitation of SBPs, known as core modeling.

Originality/value

This paper introduces the problem of conceptual analysis of SBPs for (crucial) knowledge identification and management. These processes are highly complex and knowledge-intensive. The originality of this contribution lies in the multi-dimensional approach adopted for SBP modeling as well as in the definition of a Core Ontology of Sensitive Business Processes (COSBP), which is very useful for extending the BPMN notation for knowledge management.

Details

Business Process Management Journal, vol. 30 no. 5
Type: Research Article
ISSN: 1463-7154

Keywords

Article
Publication date: 30 July 2019

Andrew Iliadis

Applied computational ontologies (ACOs) are increasingly used in data science domains to produce semantic enhancement and interoperability among divergent data. The purpose of…

Abstract

Purpose

Applied computational ontologies (ACOs) are increasingly used in data science domains to produce semantic enhancement and interoperability among divergent data. The purpose of this paper is to propose and implement a methodology for researching the sociotechnical dimensions of data-driven ontology work, and to show how applied ontologies are communicatively constituted with ethical implications.

Design/methodology/approach

The underlying idea is to use a data assemblage approach for studying ACOs and the methods they use to add semantic complexity to digital data. The author uses a mixed methods approach, providing an analysis of the widely used Basic Formal Ontology (BFO) through digital methods and visualizations, and presents historical research alongside unstructured interview data with leading experts in BFO development.

Findings

The author found that ACOs are products of communal deliberation and decision making across institutions. While ACOs are beneficial for facilitating semantic data interoperability, ACOs may produce unintended effects when semantically enhancing data about social entities and relations. ACOs can have potentially negative consequences for data subjects. Further critical work is needed for understanding how ACOs are applied in contexts like the semantic web, digital platforms, and topic domains. ACOs do not merely reflect social reality through data but are active actors in the social shaping of data.

Originality/value

The paper presents a new approach for studying ACOs, the social impact of ACO work, and describes methods that may be used to produce further applied ontology studies.

Details

Online Information Review, vol. 43 no. 6
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 16 September 2021

Prashant Kumar Sinha, Sagar Bhimrao Gajbe, Sourav Debnath, Subhranshubhusan Sahoo, Kanu Chakraborty and Shiva Shankar Mahato

This work provides a generic review of the existing data mining ontologies (DMOs) and also provides a base platform for ontology developers and researchers for gauging the…

Abstract

Purpose

This work provides a generic review of the existing data mining ontologies (DMOs) and also provides a base platform for ontology developers and researchers for gauging the ontologies for satisfactory coverage and usage.

Design/methodology/approach

The study uses a systematic literature review approach to identify 35 DMOs published in the domain between 2003 and 2021. Various parameters, such as purpose, design methodology, operations used and language representation, are available in the literature for reviewing ontologies. In addition to these existing parameters, a few new parameters, such as the semantic reasoner used and the knowledge representation formalism, were added, and a list of 20 parameters was prepared. The list was then segregated into two groups, generic parameters and core parameters, to review the DMOs.

Findings

It was observed that, of the 35 papers under study, 26 were published between 2006 and 2016. Larisa Soldatova, Saso Dzeroski and Pance Panov were the most productive authors of these DMO-related publications. The ontological review indicated that most of the DMOs were domain and task ontologies. The majority of the ontologies were formal, modular and represented using the web ontology language (OWL). The data revealed that Ontology Development 101 and METHONTOLOGY were the preferred design methodologies, and application-based approaches were preferred for evaluation. It was also observed that around eight ontologies were accessible, and among them three were also available in ontology libraries. The most reused ontologies were OntoDM, BFO, OBO-RO, OBI, IAO, OntoDT, SWO and DMOP. The most preferred ontology editor was Protégé, whereas the most used semantic reasoner was Pellet. Ontology metrics were also available for 16 DMOs.

Originality/value

This paper carries out a basic-level review of DMOs employing a parametric approach, which makes this study the first of its kind for the review of DMOs.

Details

Data Technologies and Applications, vol. 56 no. 2
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 25 October 2018

Qing Zou and Eun G. Park

This study aims to explore a way of representing historical collections by examining the features of an event in historical documents and building an event-based ontology model.

Abstract

Purpose

This study aims to explore a way of representing historical collections by examining the features of an event in historical documents and building an event-based ontology model.

Design/methodology/approach

To align with a domain-specific and upper ontology, the Basic Formal Ontology (BFO) model is adopted. Based on BFO, an event-based ontology for historical description (EOHD) is designed. To define events, event-related vocabularies are taken from the Library of Congress's event types (2012). Three types of history and six kinds of changes are defined.
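
As a purely hypothetical illustration of the kind of record an event-centred model might carry, the sketch below links an agent, a time and a typed event; the field names are invented and do not reproduce the EOHD or BFO vocabularies.

```python
# Invented event record for historical description; field names are illustrative
# and do not reproduce the EOHD, BFO or Library of Congress vocabularies.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    event_type: str                 # e.g. an event-type term from a controlled list
    agent: str                      # creator or participant
    date: str                       # date or date range as a string
    place: Optional[str] = None
    description: Optional[str] = None

e = Event(event_type="founding", agent="Example Historical Society",
          date="1887", place="Montreal",
          description="Founding of the body that produced the collection.")
print(e)
```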

Findings

The EOHD model demonstrates how to apply the event ontology to biographical sketches of a creator's history in order to link event types.

Research limitations/implications

The EOHD model has great potential to be further expanded to specific events and entities through different types of history in a full set of historical documents.

Originality/value

The EOHD provides a framework for modeling and semantically reforming the relationships of historical documents, which can make historical collections more explicitly connected in Web environments.

Details

Digital Library Perspectives, vol. 34 no. 4
Type: Research Article
ISSN: 2059-5816

Keywords

Article
Publication date: 9 February 2015

Biswanath Dutta, Usashi Chatterjee and Devika P. Madalli

This paper aims to propose a brand new ontology development methodology, called Yet Another Methodology for Ontology (YAMO) and demonstrate, step by step, the building of a…


Abstract

Purpose

This paper aims to propose a brand new ontology development methodology, called Yet Another Methodology for Ontology (YAMO) and demonstrate, step by step, the building of a formally defined large-scale faceted ontology for food.

Design/methodology/approach

YAMO is motivated by facet analysis and an analytico-synthetic classification approach. The approach ensures the quality of the system: it makes the system flexible, hospitable, extensible, sturdy, dense and complete. YAMO combines two approaches: top-down and bottom-up. Based on YAMO, a formally defined large-scale ontology for the food domain is designed. To design the ontology and to define the scope and boundary of the domain, a group of people were interviewed to get a practical overview, which added insight to the theoretical understanding of the domain.

Findings

The results obtained from evaluating the ontology are very impressive: the study found that 94 per cent of users' queries were successfully met, which shows the efficiency and effectiveness of the YAMO methodology. An evaluator opined that the ontology is very deep and exhaustive.

Practical implications

The authors envision that the current work will have significant implications for ontology developers and practitioners. YAMO will allow ontologists to construct very deep, high-quality and large-scale ontologies.

Originality/value

This paper illustrates a brand new ontology development methodology and demonstrates how the methodology can be applied to build a large-scale high-quality domain ontology.

Details

Journal of Knowledge Management, vol. 19 no. 1
Type: Research Article
ISSN: 1367-3270

Keywords
