Search results

1 – 10 of over 5000
Article
Publication date: 15 June 2015

Ya-Ning Chen

The purpose of this paper is to propose a Resource Description Framework (RDF)-based approach to transform metadata crosswalking from equivalent lexical element mapping into…


Abstract

Purpose

The purpose of this paper is to propose a Resource Description Framework (RDF)-based approach to transform metadata crosswalking from equivalent lexical element mapping into semantic mapping with various contextual relationships. RDF is used as a crosswalk model to represent the contextual relationships implicitly embedded between described objects and their elements, including semantic, hierarchical, granular, syntactic and multiple object relationships to achieve semantic metadata interoperability at the data element level.
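The paper's own vocabulary and mapping rules are not reproduced here; as a hedged sketch of what expressing a crosswalk as RDF (rather than as a flat lexical pair) can look like in Python's rdflib, consider the following, in which the ex: namespace, its property names and the archival-schema stand-in are all invented:

```python
# Hedged sketch: a crosswalk mapping expressed as RDF triples so that the
# contextual relationship travels with the mapping. All names are invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/crosswalk#")    # hypothetical mapping vocabulary
DC = Namespace("http://purl.org/dc/elements/1.1/")
EAD = Namespace("http://example.org/ead#")         # stand-in for an archival schema

g = Graph()
g.bind("ex", EX)
g.bind("dc", DC)

m = URIRef(EX["map-001"])
g.add((m, RDF.type, EX.ElementMapping))
g.add((m, EX.sourceElement, EAD.origination))
g.add((m, EX.targetElement, DC.creator))
# The relationship type is stated explicitly instead of being implied
# by a flat "origination = creator" lexical pair:
g.add((m, EX.relationType, EX.hierarchicalRelation))
g.add((m, RDFS.comment, Literal(
    "origination maps to creator only at the collection level")))

print(g.serialize(format="turtle"))
```

Because the mapping is itself an RDF resource, conditions such as the hierarchical scope above can be queried and validated like any other data, which is the sense in which the approach moves beyond equivalent lexical element mapping.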

Design/methodology/approach

This paper uses RDF to translate metadata elements and their relationships into semantic expressions, and also as a data model to define the syntax for element mapping. The feasibility of the proposed approach for semantic metadata crosswalking is examined based on two use cases – the Archives of Navy Ships Project and the Digital Artifacts Project of National Palace Museum in Taipei – both from the Taiwan e-Learning and Digital Archives Program.

Findings

As the developed model is based on RDF expressions, unsolved crosswalking issues, such as sets of shared terms and the contextual relationships embedded between described objects and their metadata elements, can be made explicit in a semantic representation. The corresponding element mappings and mapping rules can then be specified without ambiguity to achieve semantic metadata interoperability.

Research limitations/implications

Five steps were developed to clarify the details of the RDF-based crosswalk. The RDF-based expressions can also serve as a basis for developing linked data and Semantic Web applications. More use cases, including biodiversity artifacts from natural history museums and literary works from libraries, as well as the conditions, constraints and cardinality of metadata elements, will be required to revise and fine-tune the proposed RDF-based metadata crosswalk.

Originality/value

In addition to recovering the contextual relationships embedded between described objects and their metadata elements, nine types of mapping rules were developed to achieve a semantic metadata crosswalk, which will facilitate the design of related mapping software. Furthermore, the proposed approach complements existing crosswalking documents provided by authoritative organizations and enriches the mapping language developed by the CIDOC community.

Details

Library Hi Tech, vol. 33 no. 2
Type: Research Article
ISSN: 0737-8831

Keywords

Article
Publication date: 19 June 2017

Janusz Marian Bedkowski and Timo Röhling

This paper aims to focus on real-world mobile systems, and thus proposes a relevant contribution to the special issue on “Real-world mobile robot systems”. This work on 3D laser…

Abstract

Purpose

This paper aims to focus on real-world mobile systems, and thus proposes a relevant contribution to the special issue on “Real-world mobile robot systems”. This work on 3D laser semantic mobile mapping and particle filter localization, dedicated to robots patrolling urban sites, is elaborated with a focus on applying parallel computing to semantic mapping and particle filter localization. The real robotic application of patrolling urban sites is the goal; thus, it is shown that the crucial robotic components have reached a high Technology Readiness Level (TRL).

Design/methodology/approach

Three robotic platforms equipped with different 3D laser measurement systems were compared. Each system provides different data in terms of measured distance, point density and noise; thus, the influence of these data on the final semantic maps was compared. The practical problem is to use these semantic maps for robot localization; thus, the influence of the different maps on particle filter localization was elaborated. A new approach to particle filter localization based on 3D semantic information is proposed, and the behavior of the particle filter under different realistic conditions is elaborated. The process of using the proposed robotic components for patrolling an urban site, such as the robot checking for geometrical changes in the environment, is detailed.
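The paper's localization runs in parallel on GPUs over 3D semantic maps, none of which is reproduced here. As a rough, hypothetical sketch of the underlying idea only, the following CPU-only Python fragment weights each particle by whether a semantically labelled observation agrees with a semantic grid map and then resamples; the map, the observation model and all names are invented.

```python
# Hypothetical sketch of semantic particle filter weighting (CPU-only);
# the map, labels and observation model are all invented.
import numpy as np

rng = np.random.default_rng(0)
H = W = 100
semantic_map = rng.integers(0, 3, size=(H, W))   # 0 free, 1 wall, 2 vegetation

def likelihood(pose, label, offset):
    """Crude model: high weight if the map cell the observation implies
    carries the same semantic label as the sensed point."""
    x, y = (pose + offset).astype(int)
    if not (0 <= x < H and 0 <= y < W):
        return 1e-6
    return 0.9 if semantic_map[x, y] == label else 0.1

n = 500                                  # the single-GPU figure from the abstract
particles = rng.uniform(0, H, size=(n, 2))
weights = np.full(n, 1.0 / n)

# One measurement update: a "wall" point sensed 3 cells ahead in x
label, offset = 1, np.array([3.0, 0.0])
for i in range(n):
    weights[i] *= likelihood(particles[i], label, offset)
weights /= weights.sum()

# Systematic resampling concentrates particles on map-consistent poses
pos = (np.arange(n) + rng.random()) / n
idx = np.minimum(np.searchsorted(np.cumsum(weights), pos), n - 1)
particles = particles[idx]
```

In the paper's setting, the per-particle weight loop is the part that parallelizes naturally across GPU threads, which is why the particle count is bounded by the processor rather than by the algorithm.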

Findings

The focus on real-world mobile systems requires a different point of view for scientific work. This study is focused on robust and reliable solutions that can be integrated into real applications. Thus, a new parallel computing approach for semantic mapping and particle filter localization is proposed. Based on the literature, semantic 3D particle filter localization had not yet been elaborated; thus, innovative solutions for this issue are proposed. The semantic mapping framework used here was developed and published by the authors previously; for this reason, the study claims that the applied work carried out during real-world trials with this mapping system is added value relevant to this special issue.

Research limitations/implications

The main problem is the compromise between computing power and the energy consumed by heavy calculations; thus, the main focus is on using modern GPGPUs, specifically the NVIDIA Pascal parallel processor architecture. Recent advances in GPGPUs show great potential for mobile robotic applications; thus, this study focuses on increasing mapping and localization capabilities by improving the algorithms. The current limitation is the number of particles processed by a single processor, with a performance of 500 particles in real time. The implication is that multi-GPU architectures can be used to increase the number of processed particles; thus, further studies are required.

Practical implications

The research focus is on real-world mobile systems; thus, the practical aspects of the work are crucial. The main practical application is semantic mapping, which could be used for many robotic applications. The authors claim that their particle filter localization is ready to be integrated with real robotic platforms using a modern 3D laser measurement system. For this reason, the authors claim that their system can improve existing autonomous robotic platforms. The proposed components can be used to detect geometrical changes in the scene; thus, many practical functionalities can be implemented, such as the detection of cars or of opened/closed gates. […] These functionalities are crucial elements of the safety and security domain.

Social implications

Improving the safety and security domain is a crucial aspect of modern society. Protecting critical infrastructure plays an important role; thus, introducing autonomous mobile platforms capable of supporting the human operators of safety and security systems could have a positive impact from many points of view.

Originality/value

This study elaborates a novel approach to particle filter localization based on 3D data and semantic mapping. This original work could have a great impact on the mobile robotics domain, and the study claims that many algorithmic and implementation issues were solved under real-task experimental conditions. The originality of this work also stems from the use of modern advanced robotic systems as a relevant set of technologies for properly evaluating the proposed approach. Such a combination of experimental hardware with original algorithms and implementation is a definite added value.

Details

Industrial Robot: An International Journal, vol. 44 no. 4
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 2 November 2023

Julaine Clunis

This paper aims to delve into the complexities of terminology mapping and annotation, particularly within the context of the COVID-19 pandemic. It underscores the criticality of…

Abstract

Purpose

This paper aims to delve into the complexities of terminology mapping and annotation, particularly within the context of the COVID-19 pandemic. It underscores the criticality of harmonizing clinical knowledge organization systems (KOS) through a cohesive clinical knowledge representation approach. Central to the study is the pursuit of a novel method for integrating emerging COVID-19-specific vocabularies with existing systems, focusing on simplicity, adaptability and minimal human intervention.

Design/methodology/approach

A design science research (DSR) methodology is used to guide the development of a terminology mapping and annotation workflow. The KNIME data analytics platform is used to implement and test the mapping and annotation techniques, leveraging its powerful data processing and analytics capabilities. The study incorporates specific ontologies relevant to COVID-19, evaluates mapping accuracy and tests performance against a gold standard.
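KNIME workflows are assembled graphically, so the paper's pipeline cannot be shown as code; the evaluation step it describes, scoring candidate mappings against a gold standard, reduces to ordinary precision, recall and F1. A hypothetical Python sketch with invented mapping pairs:

```python
# Hedged sketch: scoring proposed term mappings against a gold standard,
# as the abstract describes. The code pairs below are invented examples.
proposed = {("covid-19", "SNOMED:840539006"), ("fever", "SNOMED:386661006"),
            ("cough", "SNOMED:11111111")}            # last pair is wrong
gold     = {("covid-19", "SNOMED:840539006"), ("fever", "SNOMED:386661006"),
            ("anosmia", "SNOMED:44169009")}

tp = len(proposed & gold)                            # true positives
precision = tp / len(proposed)
recall = tp / len(gold)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```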

Findings

The study demonstrates the potential of the developed solution to map and annotate specific KOS efficiently. This method effectively addresses the limitations of previous approaches by providing a user-friendly interface and streamlined process that minimizes the need for human intervention. Additionally, the paper proposes a reusable workflow tool that can streamline the mapping process. It offers insights into semantic interoperability issues in health care as well as recommendations for work in this space.

Originality/value

The originality of this study lies in its use of the KNIME data analytics platform to address the unique challenges posed by the COVID-19 pandemic in terminology mapping and annotation. The novel workflow developed in this study addresses known challenges by combining mapping and annotation processes specifically for COVID-19-related vocabularies. The use of DSR methodology and relevant ontologies with the KNIME tool further contribute to the study’s originality, setting it apart from previous research in the terminology mapping and annotation field.

Details

The Electronic Library, vol. 41 no. 6
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 23 October 2009

Ching‐Chieh Kiu and Chien‐Sing Lee

The purpose of this paper is to present an automated ontology mapping and merging algorithm, namely OntoDNA, which employs data mining techniques (FCA, SOM, K‐means) to resolve…

Abstract

Purpose

The purpose of this paper is to present an automated ontology mapping and merging algorithm, namely OntoDNA, which employs data mining techniques (FCA, SOM and K-means) to resolve ontological heterogeneities among distributed data sources in organizational memory and subsequently generate a merged ontology that facilitates the retrieval of resources from distributed sources for organizational decision making.

Design/methodology/approach

OntoDNA employs unsupervised data mining techniques (FCA, SOM and K-means) to resolve ontological heterogeneities and integrate distributed data sources in organizational memory. Unsupervised methods are needed as an alternative in the absence of prior knowledge for managing this knowledge. Given two ontologies to be merged as the input, the ontologies' conceptual pattern is first discovered using FCA. Then, string normalizations are applied to transform their attributes in the formal context prior to lexical similarity mapping, and mapping rules are applied to reconcile the attributes. Subsequently, SOM and K-means are applied for semantic similarity mapping based on the conceptual pattern discovered in the formal context, reducing the problem size of the SOM clusters, as validated by the Davies-Bouldin index. The mapping rules are then applied to discover semantic similarity between ontological concepts in the clusters, and the ontological concepts of the target ontology are merged into the source ontology according to the merging rules. The result is a merged ontology in the form of a concept lattice.
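As a hedged illustration of just the string-normalization and lexical-similarity step described above (the FCA, SOM and K-means stages are not reproduced), concept labels might be normalized and paired by edit-distance similarity roughly like this; the labels and threshold are invented:

```python
# Hedged sketch of lexical similarity mapping between two concept lists.
from difflib import SequenceMatcher
import re

def normalize(label: str) -> str:
    """Crude string normalization: split camelCase, lower-case, strip punctuation."""
    label = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", label)
    return re.sub(r"[^a-z0-9 ]", " ", label.lower()).strip()

def lexical_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

source = ["JournalArticle", "Author", "PubDate"]      # invented concept labels
target = ["Article", "Creator", "PublicationDate"]

# Pair each source concept with its best lexical match above a threshold
for s in source:
    best = max(target, key=lambda t: lexical_similarity(s, t))
    score = lexical_similarity(s, best)
    if score > 0.5:
        print(f"{s} -> {best}  ({score:.2f})")
```

A purely lexical pass pairs JournalArticle with Article but misses Author/Creator, which is exactly the gap the subsequent semantic (SOM and K-means) stage is meant to close.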

Findings

In experimental comparisons between PROMPT and the OntoDNA ontology mapping and merging tool based on precision, recall and f-measure, the average mapping result for OntoDNA is 95.97 percent compared with PROMPT's 67.24 percent. In terms of recall, OntoDNA outperforms PROMPT on all paired ontologies except one. For the merging of one paired ontology, PROMPT fails to identify the mapping elements. OntoDNA significantly outperforms PROMPT owing to its use of FCA to capture attributes and the inherent structural relationships among concepts. The better performance of OntoDNA is due to the following reasons. First, semantic problems such as synonymy and polysemy are resolved prior to contextual clustering. Second, the unsupervised data mining techniques (SOM and K-means) reduce the problem size. Third, string matching performs better than PROMPT's linguistic-similarity matching in addressing semantic heterogeneity, which also contributes to the OntoDNA results: string matching resolves concept names based on the similarity between concept names in each cluster, whereas linguistic-similarity matching resolves concept names based on concept-representation structure and the relations between concepts.

Originality/value

OntoDNA automates ontology mapping and merging without the need for any prior knowledge to generate a merged ontology. String matching is shown to perform better than linguistic-similarity matching in resolving concept names. OntoDNA will be valuable for organizations interested in merging ontologies from distributed or different organizational memories. For example, an organization might want to merge its organization-specific ontologies with community standard ontologies.

Details

VINE, vol. 39 no. 4
Type: Research Article
ISSN: 0305-5728

Keywords

Article
Publication date: 21 June 2011

Yi‐ling Lin, Peter Brusilovsky and Daqing He

The goal of the research is to explore whether the use of higher‐level semantic features can help us to build better self‐organising map (SOM) representation as measured from a…

Abstract

Purpose

The goal of the research is to explore whether the use of higher-level semantic features can help to build better self-organising map (SOM) representations, as measured from a human-centred perspective. The authors also explore an automatic evaluation method that utilises the human expert knowledge encapsulated in the structure of traditional textbooks to determine map representation quality.

Design/methodology/approach

Two types of document representations involving semantic features were explored: using a single semantic feature alone, and mixing a semantic feature with keywords. Experiments were conducted to investigate the impact of semantic representation quality on the map. The experiments were performed on data collections from a single-book corpus and a multiple-book corpus.
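As a hedged illustration of the mixing idea only (the paper's actual feature extraction, corpora and evaluation are not reproduced), a keyword vector and a semantic-feature vector can be concatenated with a tunable ratio and fed to a small self-organising map. All data below are invented, and the SOM is a minimal hand-rolled one:

```python
# Hedged sketch: mixing keyword and semantic feature vectors for a tiny SOM.
import numpy as np

rng = np.random.default_rng(1)
keywords = rng.random((50, 20))    # 50 docs x 20 keyword dims (invented data)
semantics = rng.random((50, 5))    # 50 docs x 5 semantic-feature dims

alpha = 0.7                        # hypothetical mixing ratio
docs = np.hstack([alpha * keywords, (1 - alpha) * semantics])

# A minimal 2D SOM with online updates
grid_w, grid_h = 6, 6
weights = rng.random((grid_w * grid_h, docs.shape[1]))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])

for epoch in range(200):
    lr = 0.5 * (1 - epoch / 200)                 # decaying learning rate
    radius = 3.0 * (1 - epoch / 200) + 0.5       # shrinking neighbourhood
    for x in docs[rng.permutation(len(docs))]:
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)      # grid distances
        h = np.exp(-d2 / (2 * radius ** 2))                 # neighbourhood kernel
        weights += lr * h[:, None] * (x - weights)

# Map each document to its cell on the trained SOM
cells = [int(np.argmin(((weights - x) ** 2).sum(axis=1))) for x in docs]
```

Varying alpha is the knob the experiments turn: it shifts how much the map's organisation is driven by keywords versus the higher-level semantic feature.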

Findings

Combining keywords with certain semantic features achieves a significant improvement in representation quality over the keywords-only approach in a relatively homogeneous single-book corpus. Changing the ratios when combining different features also affects performance. While semantic mixtures work well in a single-book corpus, they lose their advantage over keywords in the multiple-book corpus. This raises the concern of whether the semantic representations in the multiple-book corpus are homogeneous and coherent enough for applying semantic features. Terminology differences among textbooks limit the ability of the SOM to generate a high-quality map for heterogeneous collections.

Originality/value

The authors explored the use of higher-level document representation features for the development of better-quality SOMs. In addition, the authors piloted a specific method for evaluating SOM quality based on the organisation of information content in the map.

Details

Online Information Review, vol. 35 no. 3
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 12 March 2019

Prafulla Bafna, Shailaja Shirwaikar and Dhanya Pramod

Text mining is growing in importance in proportion to the growth of unstructured data, and its applications are expanding daily, from knowledge management to social media…

Abstract

Purpose

Text mining is growing in importance in proportion to the growth of unstructured data, and its applications are expanding daily, from knowledge management to social media analysis. Mapping the skillset of a candidate to the requirements of a job profile is crucial both for new recruitment and for internal task allocation in an organization. Automating the candidate selection process is essential to avoid the bias or subjectivity that may occur while shuffling through thousands of resumes and other informative documents. The system takes skillsets in the form of documents to build the semantic space, then takes appraisals or resumes as input and suggests the persons appropriate to complete a task or fill a job position, as well as the employees needing additional training. The purpose of this study is to extend the term-document matrix and achieve refined clusters that produce improved recommendations. The study also focuses on achieving consistent cluster quality in spite of increasing data set size, to solve scalability issues.

Design/methodology/approach

In this study, a synset-based document matrix construction method is proposed in which semantically similar terms are grouped to mitigate the curse of dimensionality. An automated Task Recommendation System is proposed, comprising synset-based feature extraction, iterative semantic clustering and mapping based on semantic similarity.
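A hedged sketch of the synset-grouping idea using NLTK's WordNet interface: tokens are collapsed to their first synset before TF-IDF, so near-synonyms can share one feature and the matrix shrinks. This is not the paper's implementation; the documents are invented, and nltk with the wordnet corpus plus scikit-learn are assumed available.

```python
# Requires: pip install nltk scikit-learn; then nltk.download("wordnet")
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer

def to_synset(token: str) -> str:
    """Collapse a token to the name of its first WordNet synset, if any,
    so that near-synonyms (e.g. 'managed', 'oversaw') may share a feature."""
    synsets = wn.synsets(token)
    return synsets[0].name() if synsets else token

docs = [  # invented resume-like snippets
    "managed software projects and supervised developers",
    "oversaw engineering teams and directed releases",
]

vectorizer = TfidfVectorizer(analyzer=lambda d: [to_synset(t) for t in d.split()])
matrix = vectorizer.fit_transform(docs)
print(matrix.shape, sorted(vectorizer.vocabulary_)[:5])
```

The resulting synset-based TF-IDF matrix is then what the clustering stage consumes in place of the raw term-document matrix.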

Findings

The first step in knowledge extraction from unstructured textual data is converting it into a structured form, either as a term frequency–inverse document frequency (TF-IDF) matrix or as a synset-based TF-IDF matrix. Once in structured form, a range of mining algorithms, from classification to clustering, can be applied. The proposed algorithm gives a better feature vector representation and improved cluster quality. The synset-based grouping and feature extraction for resume data optimizes the candidate selection process by reducing entropy and error and by improving precision and scalability.

Research limitations/implications

The productivity of any organization is enhanced by assigning tasks to employees with the right set of skills. Efficient recruitment and task allocation can not only improve productivity but also help satisfy employee aspirations and identify training requirements.

Practical implications

Industries can use the approach to support different processes related to human resource management such as promotions, recruitment and training and, thus, manage the talent pool.

Social implications

The task recommender system creates knowledge by following the steps of the knowledge management cycle, and this methodology can be adopted in other, similar knowledge management applications.

Originality/value

The efficacy of the proposed approach and its enhancement are validated by carrying out experiments on a benchmarked dataset of resumes. The results are compared with existing techniques and show refined clusters: absolute error is reduced by 30 per cent, precision is increased by 20 per cent and dimensionality is lowered by 60 per cent relative to the existing technique. The proposed approach also solves the issue of scalability by producing improved recommendations for 1,000 resumes with reduced entropy.

Details

VINE Journal of Information and Knowledge Management Systems, vol. 49 no. 2
Type: Research Article
ISSN: 2059-5891

Keywords

Article
Publication date: 11 July 2008

Chimay J. Anumba, Raja R.A. Issa, Jiayi Pan and Ivan Mutis

There is an increasing recognition of the value of effective information and knowledge management (KM) in the construction project delivery process. Many architecture, engineering…


Abstract

Purpose

There is an increasing recognition of the value of effective information and knowledge management (KM) in the construction project delivery process. Many architecture, engineering and construction (AEC) organisations have invested heavily in information technology and KM systems that help in this regard. While these have been largely successful in supporting intra‐organisational business processes, interoperability problems still persist at the project organisation level due to the heterogeneity of the systems used by the different organisations involved. Ontologies are seen as an important means of addressing these problems. The purpose of this paper is to explore the role of ontologies in the construction project delivery process, particularly with respect to information and KM.

Design/methodology/approach

A detailed technical review of the fundamental concepts and related work has been undertaken, with examples and case studies of ontology‐based information and KM presented to illustrate the key concepts. The specific issues and technical difficulties in the design and construction context are highlighted, and the approaches adopted in two ontology‐based applications for the AEC sector are presented.

Findings

The paper concludes that there is considerable merit in ontology-based approaches to information and KM, but that significant technical challenges remain. Middleware applications, such as semantic web-based information management systems, are contributing in this regard, but more needs to be done, particularly on integrating or merging ontologies.
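As the findings note, integrating or merging ontologies is where work remains. At the crudest level, merging two RDF-serialized ontologies is just a graph union, as in the hedged rdflib sketch below; the genuinely hard part, deciding which concepts from different AEC partners are equivalent, still has to be supplied by a matcher or a domain expert. The file names and URIs are hypothetical.

```python
# Hedged sketch: the trivially easy part of ontology merging is the union;
# concept alignment still has to be asserted. Files and URIs are invented.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g1 = Graph().parse("partner_a.owl")   # hypothetical ontology of one partner
g2 = Graph().parse("partner_b.owl")   # hypothetical ontology of another

merged = g1 + g2                      # union of all triples

# One manually (or matcher-) supplied equivalence between the vocabularies:
merged.add((URIRef("http://a.example/Door"),
            OWL.equivalentClass,
            URIRef("http://b.example/DoorElement")))
merged.serialize("merged.owl", format="xml")
```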

Originality/value

The value of the paper lies in the detailed exploration of ontology‐based information and KM within a design and construction context, and the use of appropriate examples and applications to illustrate the key issues.

Details

Construction Innovation, vol. 8 no. 3
Type: Research Article
ISSN: 1471-4175

Keywords

Article
Publication date: 21 March 2008

Philipp Mayr, Peter Mutschke and Vivien Petras

The general science portal “vascoda” merges structured, high‐quality information collections from more than 40 providers on the basis of search engine technology (FAST) and a…

Abstract

Purpose

The general science portal “vascoda” merges structured, high-quality information collections from more than 40 providers on the basis of search engine technology (FAST) and a concept that treats semantic heterogeneity between different controlled vocabularies. First experiences with the portal reveal weaknesses of this approach that appear in most metadata-driven digital libraries (DLs) and subject-specific portals. The purpose of the paper is to propose models that reduce the semantic complexity in heterogeneous DLs. The aim is to introduce value-added services (treatment of term vagueness and document re-ranking) that bring a gain in quality to DLs when combined with the heterogeneity components established in the project “Competence Center Modeling and Treatment of Semantic Heterogeneity”.

Design/methodology/approach

Two methods derived from scientometrics and network analysis will be implemented with the objective of re-ranking result sets by the following structural properties: ranking results by core journals (so-called Bradfordizing) and ranking by the centrality of authors in co-authorship networks.
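As a hedged sketch of the two re-ranking ideas with invented data (networkx assumed available; neither vascoda's data nor its implementation is shown): Bradfordizing sorts a result set by the within-set frequency of each document's journal, and author centrality scores each document by the most central of its authors in the co-authorship graph.

```python
# Hedged sketch of Bradfordizing and co-authorship-centrality re-ranking.
from collections import Counter
import networkx as nx

# Invented result set: (doc id, journal, authors)
results = [
    ("d1", "J. Informetrics", ["Mayr", "Petras"]),
    ("d2", "Rare Journal",    ["Smith"]),
    ("d3", "J. Informetrics", ["Mutschke"]),
    ("d4", "Library Review",  ["Mayr", "Mutschke"]),
]

# Bradfordizing: documents from the set's high-frequency (core) journals first
journal_freq = Counter(j for _, j, _ in results)
bradfordized = sorted(results, key=lambda r: journal_freq[r[1]], reverse=True)

# Author centrality: build the co-authorship graph, score each document by
# the highest degree centrality among its authors
g = nx.Graph()
for _, _, authors in results:
    for i, a in enumerate(authors):
        for b in authors[i + 1:]:
            g.add_edge(a, b)
centrality = nx.degree_centrality(g)
by_centrality = sorted(results, reverse=True,
                       key=lambda r: max(centrality.get(a, 0.0) for a in r[2]))
print([d for d, _, _ in bradfordized], [d for d, _, _ in by_centrality])
```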

Findings

The methods to be implemented focus on both the query side and the result side of a search and are designed to positively influence each other. Conceptually, they will improve search quality and guarantee that the most relevant documents in result sets are ranked higher.

Originality/value

The central impact of the paper focuses on the integration of three structural value‐adding methods, which aim at reducing the semantic complexity represented in distributed DLs at several stages in the information retrieval process: query construction, search and ranking and re‐ranking.

Details

Library Review, vol. 57 no. 3
Type: Research Article
ISSN: 0024-2535

Keywords

Article
Publication date: 1 September 2002

John Driver and Panos Louvieris

A marketing‐centric view of the connected enterprise implies that qualitative information in its systems and general document structures share a marketing‐based vocabulary – we…

Abstract

A marketing-centric view of the connected enterprise implies that the qualitative information in its systems and general document structures shares a marketing-based vocabulary, which we propose should be founded on POSIT. As any system needs to be accessed and understood by people, the basis of its construction and navigation principles should be transparent, even though many component processes will be automated. Based on the use of natural language, a user-defined glossary stems from a selection of primitives and the relationships between them. Semantic mapping employing the reciprocal text-to-graphical capability of EXPRESS and EXPRESS-G is outlined. The significance of XML and related developments is introduced in the context of searching for and extracting qualitative information from documents. A consensual language also aids the connectivity of intranets and extranets to the Internet.
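As a hedged sketch of the XML-based extraction the abstract alludes to (the tags, attributes and document below are invented, and the POSIT vocabulary and EXPRESS/EXPRESS-G modeling are not shown), glossary-defined markup lets qualitative statements be retrieved selectively:

```python
# Hedged sketch: extracting qualitative information from a document tagged
# with a user-defined (invented) marketing glossary.
import xml.etree.ElementTree as ET

doc = """<report>
  <customer segment="SME">
    <feedback sentiment="positive">Fast onboarding was praised.</feedback>
    <feedback sentiment="negative">The pricing page is confusing.</feedback>
  </customer>
  <customer segment="Enterprise">
    <feedback sentiment="negative">Export formats are too limited.</feedback>
  </customer>
</report>"""

root = ET.fromstring(doc)
# Glossary-driven extraction: all negative feedback, grouped by segment
for cust in root.iterfind("customer"):
    for fb in cust.iterfind("feedback[@sentiment='negative']"):
        print(cust.get("segment"), "->", fb.text)
```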

Details

Qualitative Market Research: An International Journal, vol. 5 no. 3
Type: Research Article
ISSN: 1352-2752

Keywords

Article
Publication date: 1 April 2002

Uri Fidelman

It is suggested that the left hemispheric neurons and the magnocellular visual system are specialized in tasks requiring a relatively small number of large neurons having a fast…

Abstract

It is suggested that the left-hemispheric neurons and the magnocellular visual system are specialized in tasks requiring a relatively small number of large neurons with fast reaction times, owing to a high firing rate or to many dendritic synapses of the same neuron being activated simultaneously. The right-hemispheric neurons and the neurons of the parvocellular visual system, on the other hand, are specialized in tasks requiring a relatively larger number of short-term memory (STM) Hebbian engrams (neural networks). This larger number of engrams is achieved by a combination of two strategies. The first is evolving a larger number of neurons, which may be smaller and have a lower firing rate. The second is evolving longer and more branching axons, thus producing more engrams, including engrams comprising neurons located in cortical areas distant from each other. This model explains why the verbal functions of the brain are related to the left hemisphere, as well as the division of semantic tasks between the left hemisphere and the right. The explanation is extended to other cognitive functions, such as visual search, ontological cognition, the detection of temporal order and the dual cognitive interpretation of perceived physical phenomena.

Details

Kybernetes, vol. 31 no. 3/4
Type: Research Article
ISSN: 0368-492X

Keywords
