Search results

1 – 10 of over 3000
Article

Tho Thanh Quan, Xuan H. Luong, Thanh C. Nguyen and Hui Siu Cheung

Abstract

Purpose

Most digital libraries (DLs) are now available online. They also provide the Z39.50 standard protocol, which allows computer-based systems to effectively retrieve information stored in the DLs. The major difficulty lies in the inconsistency between the database schemas of multiple DLs. The purpose of this paper is to present a system known as Argumentation-based Digital Library Search (ADLSearch), which facilitates information retrieval across multiple DLs.

Design/methodology/approach

The proposed approach is based on argumentation theory for schema matching reconciliation from multiple schema matching algorithms. In addition, a distributed architecture is proposed for the ADLSearch system for information retrieval from multiple DLs.
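A minimal sketch of how results from several schema matchers might be reconciled, in the spirit of the argumentation-based approach described above. The matcher names, scores, and the simple support-minus-attack aggregation are invented for illustration; the paper's actual argumentation framework is more elaborate.

```python
def reconcile(matcher_results, threshold=0.5):
    """Accept a source-target mapping only when the aggregated support
    across matchers outweighs the attacks against it."""
    support = {}
    for result in matcher_results:
        for pair, score in result.items():
            # A score above the threshold argues for the mapping,
            # a score below it argues against it.
            support[pair] = support.get(pair, 0.0) + (score - threshold)
    return {pair for pair, s in support.items() if s > 0}

# Three hypothetical matchers scoring candidate attribute mappings
matchers = [
    {("author", "creator"): 0.9, ("title", "name"): 0.4},  # string matcher
    {("author", "creator"): 0.7, ("title", "name"): 0.6},  # structural matcher
    {("author", "creator"): 0.8, ("title", "name"): 0.3},  # instance matcher
]
accepted = reconcile(matchers)  # only ("author", "creator") survives
```

The point of the aggregation is that a single dissenting matcher cannot veto a mapping the others strongly support, while a mapping with mostly weak evidence is dropped.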

Findings

Initial performance results are promising. First, schema matching improves retrieval performance on DLs compared with the baseline technique. Second, argumentation-based retrieval yields better matching accuracy and retrieval efficiency than individual schema matching algorithms.

Research limitations/implications

The work discussed in this paper has been implemented as a prototype supporting scholarly retrieval from about 800 DLs around the world. However, due to the complexity of the argumentation algorithm, the process of adding new DLs to the system cannot be performed in real time.

Originality/value

In this paper, an argumentation-based approach is proposed for reconciling the conflicts arising from multiple schema matching algorithms in the context of information retrieval from multiple DLs. Moreover, the proposed approach can also be applied to similar applications that require automatic mapping between multiple database schemas.

Details

Online Information Review, vol. 39 no. 1
Type: Research Article
ISSN: 1468-4527

Article

Chao Wang, Jie Lu and Guangquan Zhang

Abstract

Purpose

Matching relevant ontology data for integration is vitally important as the amount of ontology data increases along with the evolving Semantic web, in which data are published by different individuals or organizations in a decentralized environment. For any domain that has developed a suitable ontology, its ontology-annotated data (or simply ontology data) from different sources often overlap and need to be integrated. The purpose of this paper is to develop an intelligent web ontology data matching method and framework for data integration.

Design/methodology/approach

This paper develops an intelligent matching method to solve the issue of ontology data matching. Based on the matching method, it also proposes a flexible peer‐to‐peer framework to address the issue of ontology data integration in a distributed Semantic web environment.
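The multi-measurement matching described above can be sketched as follows. The feature set (label similarity plus property overlap) and the hand-set weights are illustrative stand-ins for the learned model and richer ontology features the paper describes.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    # String similarity between instance labels
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def property_overlap(props_a, props_b):
    # Fraction of shared properties whose values agree
    keys = set(props_a) & set(props_b)
    if not keys:
        return 0.0
    same = sum(1 for k in keys if props_a[k] == props_b[k])
    return same / len(keys)

def match_score(inst_a, inst_b, weights=(0.6, 0.4)):
    # A trained classifier would replace this fixed linear combination
    f1 = name_similarity(inst_a["label"], inst_b["label"])
    f2 = property_overlap(inst_a["props"], inst_b["props"])
    return weights[0] * f1 + weights[1] * f2

a = {"label": "J. R. R. Tolkien",
     "props": {"birthYear": "1892", "country": "UK"}}
b = {"label": "John Ronald Reuel Tolkien",
     "props": {"birthYear": "1892", "country": "UK"}}
score = match_score(a, b)
```

In practice the weights would be learned from labelled match/non-match pairs rather than fixed by hand.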

Findings

The proposed matching method differs from existing data matching or merging methods applied to data warehouses in that it employs a machine learning approach and a richer set of similarity measurements that exploit ontology features.

Research limitations/implications

The proposed method and framework will be further tested for some more complicated real cases in the future.

Originality/value

The experiments show that this proposed intelligent matching method increases ontology data matching accuracy.

Details

International Journal of Web Information Systems, vol. 5 no. 2
Type: Research Article
ISSN: 1744-0084

Article

Laurent Remy, Dragan Ivanović, Maria Theodoridou, Athina Kritsotaki, Paul Martin, Daniele Bailo, Manuela Sbarra, Zhiming Zhao and Keith Jeffery

Abstract

Purpose

The purpose of this paper is to boost multidisciplinary research by building an integrated catalogue of research assets metadata. Such an integrated catalogue should enable researchers to solve problems or analyse phenomena that require a view across several scientific domains.

Design/methodology/approach

There are two main approaches for integrating metadata catalogues provided by different e-science research infrastructures (e-RIs): centralised and distributed. The authors decided to implement a central metadata catalogue that describes, provides access to and records actions on the assets of a number of e-RIs participating in the system. The authors chose the CERIF data model for description of assets available via the integrated catalogue. Analysis of popular metadata formats used in e-RIs has been conducted, and mappings between popular formats and the CERIF data model have been defined using an XML-based tool for description and automatic execution of mappings.
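As a rough illustration of executing a declarative field mapping over an XML metadata record, the sketch below maps a Dublin Core record into a CERIF-flavoured structure. The mapping table and the target field names are simplified assumptions for illustration, not the actual X3ML/3M mapping syntax used in the paper.

```python
import xml.etree.ElementTree as ET

DC_NS = "{http://purl.org/dc/elements/1.1/}"

# Declarative mapping: source DC element -> CERIF-like target attribute
MAPPING = {
    f"{DC_NS}title": "cfTitle",
    f"{DC_NS}creator": "cfPersonName",
    f"{DC_NS}date": "cfStartDate",
}

def map_record(xml_text):
    """Walk the XML record and collect values for mapped fields."""
    root = ET.fromstring(xml_text)
    target = {}
    for elem in root.iter():
        field = MAPPING.get(elem.tag)
        if field and elem.text:
            target.setdefault(field, []).append(elem.text.strip())
    return target

record = """<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Volcanic activity dataset</dc:title>
  <dc:creator>EPOS</dc:creator>
  <dc:date>2019-01-15</dc:date>
</record>"""
cerif = map_record(record)
```

Separating the mapping table from the execution code is what lets new source formats be supported by adding a mapping description rather than new code.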

Findings

An integrated catalogue of research assets metadata has been created. Metadata from e-RIs supporting Dublin Core, ISO 19139, DCAT-AP, EPOS-DCAT-AP, OIL-E and CKAN formats can be integrated into the catalogue. Metadata are stored in CERIF RDF in the integrated catalogue. A web portal for searching this catalogue has been implemented.

Research limitations/implications

Only five formats are supported at this moment. However, description of mappings between other source formats and the target CERIF format can be defined in the future using the 3M tool, an XML-based tool for describing X3ML mappings that can then be automatically executed on XML metadata records. The approach and best practices described in this paper can thus be applied in future mappings between other metadata formats.

Practical implications

The integrated catalogue is a part of the eVRE prototype, which is a result of the VRE4EIC H2020 project.

Social implications

The integrated catalogue should boost the performance of multi-disciplinary research; thus it has the potential to enhance the practice of data science and so contribute to an increasingly knowledge-based society.

Originality/value

A novel approach for creation of the integrated catalogue has been defined and implemented. The approach includes definition of mappings between various formats. Defined mappings are effective and shareable.

Details

The Electronic Library, vol. 37 no. 6
Type: Research Article
ISSN: 0264-0473

Article

Stefan Dietze, Salvador Sanchez‐Alonso, Hannes Ebner, Hong Qing Yu, Daniela Giordano, Ivana Marenzi and Bernardo Pereira Nunes

Abstract

Purpose

Research in the area of technology‐enhanced learning (TEL) throughout the last decade has largely focused on sharing and reusing educational resources and data. This effort has led to a fragmented landscape of competing metadata schemas and interface mechanisms. More recently, semantic technologies have been taken into account to improve interoperability, and the linked data approach has emerged as the de facto standard for sharing data on the web. The application of linked data principles therefore offers great potential for solving interoperability issues in the field of TEL. This paper aims to address this issue.

Design/methodology/approach

In this paper, approaches are surveyed that are aimed towards a vision of linked education, i.e. education which exploits educational web data. It particularly considers the exploitation of the wealth of already existing TEL data on the web by allowing its exposure as linked data and by taking into account automated enrichment and interlinking techniques to provide rich and well‐interlinked data for the educational domain.
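Exposing existing TEL data as linked data, as surveyed above, amounts to describing resources as RDF triples under stable URIs. The sketch below emits N-Triples by hand for a hypothetical course resource; the URIs are invented, only the Dublin Core terms vocabulary is real, and production systems would typically use an RDF library such as rdflib instead.

```python
def ntriples(triples):
    """Serialize (subject, predicate, object) tuples as N-Triples lines.
    Objects starting with 'http' are treated as URIs, others as literals."""
    lines = []
    for s, p, o in triples:
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

DCT = "http://purl.org/dc/terms/"
course = "http://example.org/course/ml101"  # hypothetical resource URI
triples = [
    (course, DCT + "title", "Introduction to Machine Learning"),
    (course, DCT + "language", "en"),
    # Interlinking: point at an existing linked-data resource
    (course, DCT + "subject", "http://dbpedia.org/resource/Machine_learning"),
]
doc = ntriples(triples)
```

The last triple illustrates the interlinking step: reusing an external URI is what connects the local dataset to the wider web of data.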

Findings

So far, web‐scale integration of educational resources has not been achieved, mainly due to the lack of take‐up of shared principles, datasets and schemas. However, linked data principles are increasingly recognised by the TEL community. The paper provides a structured assessment and classification of existing challenges and approaches, serving as a potential guideline for researchers and practitioners in the field.

Originality/value

Being one of the first comprehensive surveys on the topic of linked data for education, the paper has the potential to become a widely recognized reference publication in the area.

Article

Tayybah Kiren and Muhammad Shoaib

Abstract

Purpose

Ontologies are used to formally describe the concepts within a domain in a machine-understandable way. Matching of heterogeneous ontologies is often essential for many applications like semantic annotation, query answering or ontology integration. Some ontologies may include a large number of entities which make the ontology matching process very complex in terms of the search space and execution time requirements. The purpose of this paper is to present a technique for finding degree of similarity between ontologies that trims down the search space by eliminating the ontology concepts that have less likelihood of being matched.

Design/methodology/approach

Algorithms are presented for finding key concepts, concept matching and relationship matching. WordNet is used to resolve synonyms during the matching process. The technique is evaluated using the reference alignments between ontologies from the Ontology Alignment Evaluation Initiative benchmark, in terms of degree of similarity, Pearson's correlation coefficient and the IR measures precision, recall and F-measure.
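The pruning idea above, comparing only key concepts and consulting a synonym resource, can be sketched as follows. The tiny hand-made synonym table is a stand-in for WordNet lookups, and the "count of relationships" criterion for key concepts is an illustrative assumption.

```python
SYNONYMS = {"author": {"writer", "creator"}, "paper": {"article"}}

def are_synonyms(a, b):
    a, b = a.lower(), b.lower()
    return a == b or b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set())

def select_key_concepts(ontology, min_links=2):
    # Treat richly connected concepts as "key"; the rest are pruned
    # from the search space before any pairwise comparison happens.
    return {c for c, links in ontology.items() if len(links) >= min_links}

def match(onto_a, onto_b):
    keys_a = select_key_concepts(onto_a)
    keys_b = select_key_concepts(onto_b)
    return {(a, b) for a in keys_a for b in keys_b if are_synonyms(a, b)}

onto1 = {"Author": ["writes", "affiliatedWith"], "Keyword": ["tags"]}
onto2 = {"Writer": ["writes", "memberOf"], "Topic": ["labels"]}
pairs = match(onto1, onto2)  # {("Author", "Writer")}
```

Because the quadratic comparison runs only over key concepts, the search space shrinks before the expensive matching step, which is the efficiency gain the paper reports.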

Findings

The positive correlation between the computed degree of similarity and the reference-alignment degree of similarity, together with the computed values of precision, recall and F-measure, showed that if only the key concepts of ontologies are compared, a time- and search-space-efficient ontology matching system can be developed.

Originality/value

On the basis of the present novel approach for ontology matching, it is concluded that using key concepts for ontology matching gives comparable results in reduced time and space.

Details

Aslib Journal of Information Management, vol. 68 no. 1
Type: Research Article
ISSN: 2050-3806

Article

Timothy W. Cole, Myung-Ja K. Han, Maria Janina Sarol, Monika Biel and David Maus

Abstract

Purpose

Early Modern emblem books are primary sources for scholars studying the European Renaissance. Linked Open Data (LOD) is an approach for organizing and modeling information in a data-centric manner compatible with the emerging Semantic Web. The purpose of this paper is to examine ways in which LOD methods can be applied to facilitate emblem resource discovery, better reveal the structure and connectedness of digitized emblem resources, and enhance scholar interactions with digitized emblem resources.

Design/methodology/approach

This research encompasses an analysis of the existing XML-based Spine (emblem-specific) metadata schema; the design of a new, domain-specific, Resource Description Framework compatible ontology; the mapping and transformation of metadata from Spine to both the new ontology and (separately) to the pre-existing Schema.org ontology; and the (experimental) modification of the Emblematica Online portal as a proof of concept to illustrate enhancements supported by LOD.
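A toy illustration of the Spine-to-Schema.org mapping step described above: a simplified emblem record is transformed into Schema.org JSON-LD. The Spine-side field names (motto, author, year) are invented for illustration; only the Schema.org terms are real vocabulary.

```python
import json

def to_jsonld(spine):
    """Map a simplified Spine-style emblem record to Schema.org JSON-LD."""
    return {
        "@context": "https://schema.org",
        "@type": "CreativeWork",
        "name": spine["motto"],
        "creator": {"@type": "Person", "name": spine["author"]},
        "datePublished": spine["year"],
    }

spine_record = {"motto": "Festina lente",
                "author": "Andrea Alciato",
                "year": "1531"}
jsonld = json.dumps(to_jsonld(spine_record), indent=2)
```

The real workflow additionally needs the URI enrichment the paper mentions: each person and emblem would be identified by a resolvable URI rather than a bare literal.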

Findings

LOD is viable as an approach for facilitating discovery and enhancing the value to scholars of digitized emblem books; however, metadata must first be enriched with additional uniform resource identifiers and the workflow upgrades required to normalize and transform existing emblem metadata are substantial and still to be fully worked out.

Practical implications

The research described demonstrates the feasibility of transforming existing, special collections metadata to LOD. Although considerable work and further study will be required, preliminary findings suggest potential benefits of LOD for both users and libraries.

Originality/value

This research is unique in the context of emblem studies and adds to the emerging body of work examining the application of LOD best practices to library special collections.

Details

Library Hi Tech, vol. 35 no. 1
Type: Research Article
ISSN: 0737-8831

Article

Tseng-Lung Huang and Yi-Mu Chen

Abstract

Purpose

This study aims to determine whether smartphones create the best communication fit with a young audience.

Design/methodology/approach

To validate the hypotheses, a task-based laboratory study was conducted in which both a smartphone film and a television (TV) film were shown. Young respondents were recruited in the classroom and given a brief introduction before the films were broadcast. After watching each film, the respondents' levels of emotional experience were measured via questionnaire.

Findings

The results indicate that when the text of the film matches the young audience’s schema, the young audience uses, mainly, imagery coding to interpret the text and achieve an emotional experience. Conversely, when the text and schema do not match, the young audience uses both proposition coding and imagery coding.

Practical implications

Based on the results of this study, companies should use different texts to match the different schemas of young audiences, ensuring that audiences can process the coding and enjoy emotional experiences when using smartphones.

Originality/value

Dual-coding theory is applied to determine which coding system audiences use to interpret new-media texts such as smartphone films.

Details

Young Consumers, vol. 15 no. 2
Type: Research Article
ISSN: 1747-3616

Article

Chimay J. Anumba, Raja R.A. Issa, Jiayi Pan and Ivan Mutis

Abstract

Purpose

There is an increasing recognition of the value of effective information and knowledge management (KM) in the construction project delivery process. Many architecture, engineering and construction (AEC) organisations have invested heavily in information technology and KM systems that help in this regard. While these have been largely successful in supporting intra‐organisational business processes, interoperability problems still persist at the project organisation level due to the heterogeneity of the systems used by the different organisations involved. Ontologies are seen as an important means of addressing these problems. The purpose of this paper is to explore the role of ontologies in the construction project delivery process, particularly with respect to information and KM.

Design/methodology/approach

A detailed technical review of the fundamental concepts and related work has been undertaken, with examples and case studies of ontology‐based information and KM presented to illustrate the key concepts. The specific issues and technical difficulties in the design and construction context are highlighted, and the approaches adopted in two ontology‐based applications for the AEC sector are presented.

Findings

The paper concludes that there is considerable merit in ontology‐based approaches to information and KM, but that significant technical challenges remain. Middleware applications, such as semantic web‐based information management system, are contributing in this regard but more needs to be done particularly on integrating or merging ontologies.

Originality/value

The value of the paper lies in the detailed exploration of ontology‐based information and KM within a design and construction context, and the use of appropriate examples and applications to illustrate the key issues.

Details

Construction Innovation, vol. 8 no. 3
Type: Research Article
ISSN: 1471-4175

Article

Abdelghani Bakhtouchi

Abstract

With the progress of new information and communication technologies, more and more producers of data exist, and the web forms a huge repository for all these kinds of data. Unfortunately, existing data is often unreliable, because the same information appears in different sources alongside erroneous and incomplete data. The aim of data integration systems is to offer the user a unique interface for querying a number of sources. A key challenge for such systems is dealing with conflicting information from the same source or from different sources. We present, in this paper, the resolution of conflicts at the instance level in two stages: reference reconciliation and data fusion. Reference reconciliation methods seek to decide whether two data descriptions refer to the same real-world entity. We define the principles of reconciliation methods, then distinguish them first by how they use the descriptions of references and then by how they acquire knowledge, and finish this section by discussing some current reference reconciliation issues that are the subject of ongoing research. Data fusion, in turn, aims to merge duplicates into a single representation while resolving conflicts between the data. We first define the classification of conflicts, the strategies for dealing with conflicts and the implementation of conflict management strategies. We then present the relational operators and data fusion techniques, and likewise finish this section by discussing some current data fusion issues that are the subject of ongoing research.
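The two-stage process described above can be sketched minimally: (1) reference reconciliation decides whether two records refer to the same real-world entity; (2) data fusion merges the duplicates while resolving conflicts. The similarity threshold and the "fill gaps, keep the most recent date" fusion strategy are invented illustrations of one possible conflict-handling policy.

```python
from difflib import SequenceMatcher

def same_entity(r1, r2, threshold=0.85):
    """Stage 1: decide whether two references describe the same entity."""
    ratio = SequenceMatcher(None, r1["name"].lower(), r2["name"].lower()).ratio()
    return ratio >= threshold

def fuse(records):
    """Stage 2: merge duplicates into a single representation."""
    merged = dict(records[0])
    for rec in records[1:]:
        for key, value in rec.items():
            if not merged.get(key):
                merged[key] = value            # fill missing/empty values
            elif key == "updated" and value > merged[key]:
                merged[key] = value            # keep the most recent date
    return merged

a = {"name": "Jean Dupont", "city": "", "updated": "2019-03-01"}
b = {"name": "jean dupont", "city": "Lyon", "updated": "2020-07-15"}
merged = fuse([a, b]) if same_entity(a, b) else None
```

Other fusion strategies from the literature (majority voting, source-trust weighting) would slot into `fuse` in place of the simple recency rule.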

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2210-8327

Article

Keng Hoon Gan and Keat Keong Phang

Abstract

Purpose

When accessing structured content in XML form, information requests are formulated in special query languages such as NEXI, XQuery, etc. However, it is not easy for end users to compose such information requests because of the complexity of these languages. Hence, the purpose of this paper is to automate the construction of such queries from common inputs such as keyword or form-based queries.

Design/methodology/approach

In this paper, the authors address the problem of constructing queries for XML retrieval by proposing a semantic-syntax query model that can be used to construct different types of structured queries. First, a generic query structure known as the semantic query structure is designed to store the query contents given by the user. Then, a target language is generated by mapping the contents of the semantic query structure to query syntax templates stored in a knowledge base.
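The template-mapping step above can be sketched by filling a NEXI syntax template from a language-neutral query structure. The structure's field names are invented for illustration; the `//element[about(., keywords)]` form is genuine NEXI syntax, and an additional template per language (e.g. XQuery) would reuse the same structure.

```python
# One syntax template per target language, stored in a "knowledge base"
NEXI_TEMPLATE = "//{element}[about(., {keywords})]"

def to_nexi(semantic_query):
    """Map a generic semantic query structure onto the NEXI template."""
    return NEXI_TEMPLATE.format(
        element=semantic_query["target_element"],
        keywords=" ".join(semantic_query["keywords"]),
    )

query = {"target_element": "article", "keywords": ["ontology", "matching"]}
nexi = to_nexi(query)  # "//article[about(., ontology matching)]"
```

Keeping the semantic structure separate from the syntax templates is what gives the model its flexibility: supporting a new query language means adding a template, not changing how user input is captured.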

Findings

Evaluations were carried out based on how well information needs are captured and transformed into a target query language. In summary, the proposed model is able to express information needs specified in queries such as NEXI; XQuery records a lower percentage because of its language complexity. The authors also achieve a satisfactory query construction rate with an example-based method, i.e. 86 per cent (for NEXI IMDB topics) and 87 per cent (for NEXI Wiki topics), respectively, compared to the benchmark of 78 per cent reported by Sumita and Iida in language translation.

Originality/value

The proposed semantic-syntax query model allows flexibility of accommodating new query language by separating the semantic of query from its syntax.

Details

International Journal of Web Information Systems, vol. 13 no. 2
Type: Research Article
ISSN: 1744-0084
