Search results

1 – 10 of over 1000
Article
Publication date: 31 December 2015

Tayybah Kiren and Muhammad Shoaib

Abstract

Purpose

Ontologies are used to formally describe the concepts within a domain in a machine-understandable way. Matching of heterogeneous ontologies is often essential for many applications like semantic annotation, query answering or ontology integration. Some ontologies may include a large number of entities, which makes the ontology matching process very complex in terms of search space and execution time requirements. The purpose of this paper is to present a technique for finding the degree of similarity between ontologies that trims down the search space by eliminating the ontology concepts that have less likelihood of being matched.

Design/methodology/approach

Algorithms are written for finding key concepts, concept matching and relationship matching. WordNet is used for resolving synonyms during the matching process. The technique is evaluated using the reference alignments between ontologies from the Ontology Alignment Evaluation Initiative (OAEI) benchmark, in terms of degree of similarity, Pearson's correlation coefficient and the IR measures precision, recall and F-measure.
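
As a rough illustration of the two evaluation building blocks named above, here is a minimal Python sketch, assuming NLTK's WordNet interface: two concept labels are treated as matching if they share a synset, and an alignment is scored against a reference with precision, recall and F-measure. The correspondences shown are made up for illustration; this is not the authors' code.

```python
# Minimal sketch, not the authors' implementation.
# Requires: pip install nltk; then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def are_synonyms(label_a: str, label_b: str) -> bool:
    """Treat two concept labels as matching if they share a WordNet synset."""
    return bool(set(wn.synsets(label_a)) & set(wn.synsets(label_b)))

def precision_recall_f(found: set, reference: set):
    """Standard IR measures over sets of concept correspondences."""
    tp = len(found & reference)
    precision = tp / len(found) if found else 0.0
    recall = tp / len(reference) if reference else 0.0
    f = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f

# Hypothetical correspondences, for illustration only.
found = {("car", "automobile"), ("person", "human")}
reference = {("car", "automobile"), ("person", "individual")}
print(are_synonyms("car", "automobile"))      # True: both share the synset car.n.01
print(precision_recall_f(found, reference))   # (0.5, 0.5, 0.5)
```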

Findings

A positive correlation between the computed degree of similarity and the degree of similarity from the reference alignment, together with the computed values of precision, recall and F-measure, showed that if only the key concepts of ontologies are compared, a time- and search-space-efficient ontology matching system can be developed.

Originality/value

On the basis of the present novel approach for ontology matching, it is concluded that using key concepts for ontology matching gives comparable results in reduced time and space.

Details

Aslib Journal of Information Management, vol. 68 no. 1
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 15 May 2023

Dongsheng Li and Jun Li

Abstract

Purpose

Minimizing the impact on the surrounding environment and maximizing the use of production raw materials, while ensuring that the relevant processes and services can be delivered within the specified time, are the core concerns of enterprise supply chain management in the green financial system.

Design/methodology/approach

With the continuous development of China's economy and the continuous deepening of the concept of sustainable development, how to further upgrade enterprise supply chain management is an urgent problem to solve. Maximizing the utilization of resources in the supply chain must be realized across the whole process of raw material purchase, transportation and processing.

Findings

It was proved that digital twin technology played a partial mediating role in the effect of supply chain big data analysis capability on corporate financial, market, operational and other performance.

Originality/value

This paper focused on describing how digital twin technology can be applied to big data analysis of the enterprise supply chain under the green financial system and proved its usability through experiments. The experimental results showed that the indirect effect of the path "big data analysis capability → digital twin technology → enterprise financial performance" was 0.378, the indirect effect of the path "big data analysis capability → digital twin technology → enterprise market performance" was 0.341, and the indirect effect of the path "big data analysis capability → digital twin technology → enterprise operational performance" was 0.374.
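
The reported figures read like product-of-coefficients indirect effects from a mediation analysis. The following is a hedged sketch of that calculation, assuming a simple X → M → Y mediation model; the variable names, synthetic data and coefficients are illustrative, not the paper's.

```python
# Sketch of a product-of-coefficients indirect effect (assumed model, not the paper's data).
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                       # big data analysis capability (X)
m = 0.6 * x + rng.normal(size=n)             # digital twin technology (M)
y = 0.5 * m + 0.2 * x + rng.normal(size=n)   # a performance measure (Y)

def ols_slopes(target, *predictors):
    """Least-squares fit with intercept; returns the slope coefficients."""
    X = np.column_stack([np.ones(len(target)), *predictors])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta[1:]

a = ols_slopes(m, x)[0]        # path X -> M
b = ols_slopes(y, m, x)[0]     # path M -> Y, controlling for X
print("indirect effect a*b:", round(a * b, 3))  # ~0.30 for this synthetic data
```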

Details

Kybernetes, vol. 53 no. 2
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 23 October 2009

Ching‐Chieh Kiu and Chien‐Sing Lee

Abstract

Purpose

The purpose of this paper is to present an automated ontology mapping and merging algorithm, namely OntoDNA, which employs data mining techniques (FCA, SOM, K‐means) to resolve ontological heterogeneities among distributed data sources in organizational memory and subsequently generate a merged ontology to facilitate resource retrieval from distributed resources for organizational decision making.

Design/methodology/approach

The OntoDNA employs unsupervised data mining techniques (FCA, SOM, K-means) to resolve ontological heterogeneities and integrate distributed data sources in organizational memory. Unsupervised methods are needed as an alternative in the absence of prior knowledge for managing this knowledge. Given two ontologies to be merged as the input, their conceptual pattern is discovered using FCA. Then, string normalizations are applied to transform their attributes in the formal context prior to lexical similarity mapping, and mapping rules are applied to reconcile the attributes. Subsequently, SOM and K-means are applied for semantic similarity mapping based on the conceptual pattern discovered in the formal context, reducing the problem size of the SOM clusters as validated by the Davies-Bouldin index. The mapping rules are then applied to discover semantic similarity between ontological concepts in the clusters, and the ontological concepts of the target ontology are updated into the source ontology based on the merging rules. The merged ontology is formed as a concept lattice.
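
Two of the steps named above, label normalization before lexical matching and cluster validation with the Davies-Bouldin index, can be sketched in Python with scikit-learn. This is a simplified stand-in, not the OntoDNA implementation, and the concept labels are hypothetical.

```python
# Simplified sketch: normalize concept labels, cluster them, validate with Davies-Bouldin.
# Requires: pip install scikit-learn
import re
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import davies_bouldin_score

def normalize(label: str) -> str:
    """Lowercase and split CamelCase/underscores/hyphens into plain tokens."""
    label = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", label)
    return re.sub(r"[_\-]+", " ", label).lower().strip()

# Hypothetical concept labels from two ontologies to be merged.
labels = ["AcceptedPaper", "accepted_paper", "Conference", "conference-event",
          "Reviewer", "paper_review", "ProgramCommittee", "committee_member"]
vectors = TfidfVectorizer().fit_transform(normalize(l) for l in labels).toarray()

best_k, best_score = None, float("inf")
for k in range(2, 5):
    clustering = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)
    score = davies_bouldin_score(vectors, clustering.labels_)  # lower is better
    if score < best_score:
        best_k, best_score = k, score
print("chosen k:", best_k, "Davies-Bouldin:", round(best_score, 3))
```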

Findings

In experimental comparisons between the PROMPT and OntoDNA ontology mapping and merging tools based on precision, recall and f-measure, the average mapping result for OntoDNA is 95.97 percent compared to PROMPT's 67.24 percent. In terms of recall, OntoDNA outperforms PROMPT on all paired ontologies except one; for the merging of that pair, PROMPT fails to identify the mapping elements. OntoDNA significantly outperforms PROMPT due to the utilization of FCA to capture attributes and the inherent structural relationships among concepts. OntoDNA's better performance is due to the following reasons. First, semantic problems such as synonymy and polysemy are resolved prior to contextual clustering. Second, the unsupervised data mining techniques (SOM and K-means) reduce the problem size. Third, string matching performs better than PROMPT's linguistic-similarity matching in addressing semantic heterogeneity, which in this context also contributes to the OntoDNA results. String matching resolves concept names based on the similarity between concept names in each cluster for ontology mapping, whereas linguistic-similarity matching resolves concept names based on concept-representation structure and the relations between concepts.

Originality/value

The OntoDNA automates ontology mapping and merging without the need of any prior knowledge to generate a merged ontology. String matching is shown to perform better than linguistic‐similarity matching in resolving concept names. The OntoDNA will be valuable for organizations interested in merging ontologies from distributed or different organizational memories. For example, an organization might want to merge their organization‐specific ontologies with community standard ontologies.

Details

VINE, vol. 39 no. 4
Type: Research Article
ISSN: 0305-5728

Article
Publication date: 21 March 2008

Jürgen Krause

Abstract

Purpose

To demonstrate that newer developments in the semantic web community, particularly those based on ontologies (the Simple Knowledge Organization System (SKOS) and others), mitigate common arguments from the digital library (DL) community against participation in the semantic web.

Design/methodology/approach

The approach is a semantic web discussion focusing on the weak structure of the Web and the lack of consideration given to the semantic content during indexing.

Findings

The points criticised by the semantic web and ontology approaches are the same as those of the DL “Shell model approach” from the mid‐1990s, with emphasis on the centrality of its heterogeneity components (used, for example, in vascoda). The Shell model argument began with the “invisible web”, necessitating the restructuring of DL approaches. The conclusion is that both approaches fit well together and that the Shell model, with its semantic heterogeneity components, can be reformulated on the semantic web basis.

Practical implications

A reinterpretation of the DL approaches of semantic heterogeneity and adapting to standards and tools supported by the W3C should be the best solution. It is therefore recommended that – although most of the semantic web standards are not technologically refined for commercial applications at present – all individual DL developments should be checked for their adaptability to the W3C standards of the semantic web.

Originality/value

A unique conceptual analysis of the parallel developments emanating from the digital library and semantic web communities.

Details

Library Review, vol. 57 no. 3
Type: Research Article
ISSN: 0024-2535

Article
Publication date: 6 April 2010

James Z. Wang, Farha Ali and Pradip K. Srimani

Abstract

Purpose

With the recent availability of a large number of bioinformatics data sources, queries over such databases and rigorous annotation of experimental results often use semantic frameworks in the form of an ontology. With growing access to heterogeneous and independent data repositories, determining the semantic similarity or difference of two ontologies is critical for information retrieval, information integration and semantic web services. The purpose of this paper is to propose a new sense refinement algorithm to construct a refined sense set (RSS) for an ontology, so that the senses (synonym words) in this refined sense set represent the semantic meanings of the terms used by the ontology.

Design/methodology/approach

A new concept of a semantic set is introduced that combines the refined sense set of an ontology with the relationship edges connecting the terms in the ontology to represent its semantics. With semantic sets, measuring the semantic similarity or difference of two ontologies is simplified to comparing the commonality or difference of two sets.
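
The set-comparison idea can be sketched briefly: once each ontology is reduced to a semantic set, similarity becomes plain set overlap. The sketch below uses WordNet synsets as the senses and Jaccard overlap as one plausible measure; the paper's exact refinement algorithm and similarity formula may differ.

```python
# Sketch of set-based ontology similarity (Jaccard is an assumed measure, not the paper's).
# Requires: pip install nltk; then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def sense_set(terms):
    """Collect WordNet synset names for an ontology's terms (its 'senses')."""
    return {s.name() for t in terms for s in wn.synsets(t)}

def set_similarity(a: set, b: set) -> float:
    """Jaccard overlap: commonality of the two semantic sets over their union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical term lists from two small bioinformatics ontologies.
onto1 = sense_set(["gene", "protein", "cell"])
onto2 = sense_set(["gene", "protein", "tissue"])
print(round(set_similarity(onto1, onto2), 3))
```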

Findings

The experimental studies show that the proposed method of measuring the semantic similarity or difference of two ontologies is efficient and accurate; comparisons with existing methods show the efficacy of using the new method.

Originality/value

The concepts introduced in this paper will improve automation of bioinformatics databases to serve queries based on heterogeneous ontologies.

Details

International Journal of Pervasive Computing and Communications, vol. 6 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 1 July 2014

Janina Fengel

Abstract

Purpose

The purpose of this paper is to propose a solution for automating the task of matching business process models and search for correspondences with regard to the model semantics, thus improving the efficiency of such works.

Design/methodology/approach

A method is proposed based on combining several semantic technologies. The research follows a design-science-oriented approach in that a method, together with its supporting artifacts, has been engineered. Its application allows for reusing legacy models and automatically determining semantic similarity.

Findings

The method has been applied, and the first findings suggest the effectiveness of the approach; the results show its feasibility and significance. The suggested heuristic computation of semantic correspondences between semantically heterogeneous business process models is flexible and can support domain users.

Research limitations/implications

Even though a directly usable solution can be offered, the full complexity of the natural language in model element labels is not yet completely resolvable. Further research could contribute to potential optimizations and refinement of the automatic matching and linguistic procedures. Nevertheless, an open research question could be solved.

Practical implications

The method presented is aimed at adding to the methods in the field of business process management and could extend the possibilities of automating support for business analysis.

Originality/value

The suggested combination of semantic technologies is innovative and addresses the aspect of semantic heterogeneity in a holistic manner, which is novel to the field.

Article
Publication date: 10 December 2018

Bruno C.N. Oliveira, Alexis Huf, Ivan Luiz Salvadori and Frank Siqueira

Abstract

Purpose

This paper describes a software architecture that automatically adds semantic capabilities to data services. The proposed architecture, called OntoGenesis, is able to semantically enrich data services, so that they can dynamically provide both semantic descriptions and data representations.

Design/methodology/approach

The enrichment approach is designed to intercept the requests from data services. A domain ontology is constructed and evolved in accordance with the syntactic representations provided by such services in order to define the data concepts. In addition, a property matching mechanism is proposed to exploit the potential data intersection observed between data service representations and external data sources, so as to enhance the domain ontology with new equivalence triples. Finally, the enrichment approach is capable of deriving, on demand, a semantic description and data representations that link to the domain ontology concepts.
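
A minimal sketch of the equivalence-triple idea, assuming rdflib: when two properties from different sources share enough values, an equivalence statement is asserted in the domain ontology. The property names, namespace and overlap threshold here are hypothetical, not taken from OntoGenesis.

```python
# Sketch only, not the OntoGenesis implementation.
# Requires: pip install rdflib
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

EX = Namespace("http://example.org/onto/")  # hypothetical domain ontology namespace

# Hypothetical value sets observed for a service property and an external-source property.
service_values = {"Berlin", "Paris", "Rome", "Madrid"}
external_values = {"Berlin", "Paris", "Rome", "Lisbon"}

# Data intersection as a Jaccard-style overlap ratio.
overlap = len(service_values & external_values) / len(service_values | external_values)

g = Graph()
if overlap >= 0.5:  # illustrative threshold, not from the paper
    g.add((EX.cityName, OWL.equivalentProperty, EX.locatedIn))
print(g.serialize(format="turtle"))
```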

Findings

Experiments were performed using real-world datasets such as DBpedia and GeoNames, as well as open government data. The results obtained show the applicability of the proposed architecture and that it can boost the development of semantic data services. Moreover, the matching approach achieved better performance than other existing approaches found in the literature.

Research limitations/implications

This work only considers services designed as data providers, i.e. services that provide an interface for accessing data sources. In addition, the approach assumes that both data services and the external sources used to enhance the domain ontology have some potential for data intersection. This assumption only requires that services and external sources share particular property values.

Originality/value

Unlike most of the approaches found in the literature, the architecture proposed in this paper is meant to semantically enrich data services in such a way that human intervention is minimal. Furthermore, an automata-based index is presented as a novel method that significantly improves the performance of the property matching mechanism.

Details

International Journal of Web Information Systems, vol. 15 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 30 July 2019

Andrew Iliadis

Abstract

Purpose

Applied computational ontologies (ACOs) are increasingly used in data science domains to produce semantic enhancement and interoperability among divergent data. The purpose of this paper is to propose and implement a methodology for researching the sociotechnical dimensions of data-driven ontology work, and to show how applied ontologies are communicatively constituted with ethical implications.

Design/methodology/approach

The underlying idea is to use a data assemblage approach for studying ACOs and the methods they use to add semantic complexity to digital data. The author uses a mixed methods approach, providing an analysis of the widely used Basic Formal Ontology (BFO) through digital methods and visualizations, and presents historical research alongside unstructured interview data with leading experts in BFO development.

Findings

The author found that ACOs are products of communal deliberation and decision making across institutions. While ACOs are beneficial for facilitating semantic data interoperability, ACOs may produce unintended effects when semantically enhancing data about social entities and relations. ACOs can have potentially negative consequences for data subjects. Further critical work is needed for understanding how ACOs are applied in contexts like the semantic web, digital platforms, and topic domains. ACOs do not merely reflect social reality through data but are active actors in the social shaping of data.

Originality/value

The paper presents a new approach for studying ACOs, the social impact of ACO work, and describes methods that may be used to produce further applied ontology studies.

Details

Online Information Review, vol. 43 no. 6
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 8 May 2017

Amed Leiva-Mederos, Jose A. Senso, Yusniel Hidalgo-Delgado and Pedro Hipola

Abstract

Purpose

Information from Current Research Information Systems (CRIS) is stored in different formats, on platforms that are not compatible, or even in independent networks. It would be helpful to have a well-defined methodology that allows management data to be processed from a single site, so as to take advantage of the capacity to link dispersed data found in different systems, platforms, sources and/or formats. Based on the functionalities and materials of the VLIR project, the purpose of this paper is to present a model that provides interoperability by means of semantic alignment techniques and metadata crosswalks, and facilitates the fusion of information stored in diverse sources.

Design/methodology/approach

After reviewing the state of the art regarding the diverse mechanisms for achieving semantic interoperability, the paper analyzes the following: the specific coverage of the data sets (type of data, thematic coverage and geographic coverage); the technical specifications needed to retrieve and analyze a distribution of the data set (format, protocol, etc.); the conditions of re-utilization (copyright and licenses); and the "dimensions" included in the data set, as well as the semantics of these dimensions (the syntax and the taxonomies of reference). The semantic interoperability framework presented here implements semantic alignment and metadata crosswalks to convert information from three different systems (ABCD, Moodle and DSpace) and integrate all the databases into a single RDF file.
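
The crosswalk-to-RDF step can be sketched as follows, assuming rdflib and Dublin Core as the common vocabulary: records exported from different systems are mapped field by field onto one set of properties and written into a single graph. The field names, URIs and sample records are illustrative, not the paper's specification.

```python
# Sketch of a metadata crosswalk into one RDF graph (illustrative fields and URIs).
# Requires: pip install rdflib
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC, RDF

EX = Namespace("http://example.org/cris/")  # hypothetical CRIS namespace

# Hypothetical records as exported from two of the source systems.
abcd_record = {"titulo": "Estudio de caso", "autor": "Perez, J."}
dspace_record = {"dc.title": "A case study", "dc.contributor.author": "Smith, A."}

# Crosswalk: source field name -> common (Dublin Core) property.
crosswalk = {
    "titulo": DC.title, "autor": DC.creator,
    "dc.title": DC.title, "dc.contributor.author": DC.creator,
}

g = Graph()
for i, record in enumerate([abcd_record, dspace_record]):
    subject = EX[f"record/{i}"]
    g.add((subject, RDF.type, EX.Record))
    for field, value in record.items():
        g.add((subject, crosswalk[field], Literal(value)))
print(g.serialize(format="turtle"))  # the single integrated RDF output
```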

Findings

The paper also includes an evaluation based on comparing – by means of recall and precision calculations – the proposed model with identical queries made on Open Archives Initiative and SQL systems, in order to estimate its efficiency. The results have been satisfactory, since semantic interoperability facilitates the exact retrieval of information.

Originality/value

The proposed model enhances management of the syntactic and semantic interoperability of the CRIS system designed. In a real usage setting, it achieves very positive results.

Article
Publication date: 1 October 2005

Tim Finin, Li Ding, Lina Zhou and Anupam Joshi

Abstract

Purpose

Aims to investigate the way that the semantic web is being used to represent and process social network information.

Design/methodology/approach

The Swoogle semantic web search engine was used to construct several large data sets of Resource Description Framework (RDF) documents with social network information that were encoded using the “Friend of a Friend” (FOAF) ontology. The datasets were analyzed to discover how FOAF is being used and investigate the kinds of social networks found on the web.
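
As a small illustration of this kind of analysis, the following sketch (assuming rdflib) parses a hand-written FOAF document and extracts the foaf:knows network it encodes. The inline data is a made-up example, not Swoogle output.

```python
# Sketch: load a FOAF document as RDF and extract its social network edges.
# Requires: pip install rdflib
from rdflib import Graph
from rdflib.namespace import FOAF

foaf_doc = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
_:a foaf:name "Alice" ; foaf:knows _:b , _:c .
_:b foaf:name "Bob" ; foaf:knows _:c .
_:c foaf:name "Carol" .
"""

g = Graph()
g.parse(data=foaf_doc, format="turtle")

# Map each person node to its name, then list the foaf:knows edges by name.
names = {s: str(o) for s, o in g.subject_objects(FOAF.name)}
edges = [(names[s], names[o]) for s, o in g.subject_objects(FOAF.knows)]
print(edges)  # e.g. [('Alice', 'Bob'), ('Alice', 'Carol'), ('Bob', 'Carol')]
```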

Findings

The FOAF ontology is the most widely used domain ontology on the semantic web. People are using it in an open and extensible manner by defining new classes and properties to use with FOAF.

Research limitations/implications

RDF data was only obtained from public RDF documents published on the web. Some RDF FOAF data may be unavailable because it is behind firewalls, on intranets or stored in private databases. The ways in which the semantic web languages RDF and OWL are being used (and abused) are dynamic and still evolving. A similar study done two years from now may show very different results.

Originality/value

This paper describes how social networks are being encoded and used on the world wide web in the form of RDF documents and the FOAF ontology. It provides data on large social networks as well as insights on how the semantic web is being used in 2005.

Details

The Learning Organization, vol. 12 no. 5
Type: Research Article
ISSN: 0969-6474
