Search results

1 – 10 of over 77,000
Article
Publication date: 6 November 2017

Ademar Crotti Junior, Christophe Debruyne, Rob Brennan and Declan O’Sullivan

Abstract

Purpose

This paper aims to evaluate the state-of-the-art in CSV uplift tools. Based on this evaluation, a method that incorporates data transformations into uplift mapping languages by means of functions is proposed and evaluated. Typically, tools that map non-resource description framework (RDF) data into RDF format rely on the technology native to the data source when data transformation is required. Depending on the data format, data manipulation can be performed with that underlying technology, such as a relational database management system (RDBMS) for relational databases or XPath for XML. CSV/Tabular data has no such underlying technology; instead, it requires either transforming the source data into another format or applying pre-/post-processing techniques.

Design/methodology/approach

To evaluate the state-of-the-art in CSV uplift tools, the authors present a comparison framework and apply it to such tools. A key feature evaluated in the comparison framework is data transformation functions. The authors argue that existing approaches to transformation functions are complex, in that a number of steps and tools are required. The proposed method, FunUL, in contrast, defines functions as resources within the mapping itself, independent of the source data being mapped into RDF.
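To make the contrast concrete, the sketch below illustrates the general idea of carrying a transformation function inside a CSV-to-RDF uplift mapping rather than pre-processing the file. It is a minimal illustration in Python with rdflib, not FunUL's actual syntax; the EX namespace, the column names and the to_iso_date helper are hypothetical.

```python
# Illustrative sketch: a CSV-to-RDF uplift mapping that carries its own
# transformation functions, instead of requiring a pre-processing step.
import csv
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")  # hypothetical vocabulary

def to_iso_date(value: str) -> str:
    """Illustrative transformation: '31/12/2017' -> '2017-12-31'."""
    day, month, year = value.split("/")
    return f"{year}-{month}-{day}"

# Mapping: CSV column -> (target predicate, transformation function or None).
MAPPING = {
    "name": (EX.name, None),
    "birth_date": (EX.birthDate, to_iso_date),
}

def uplift(csv_path: str) -> Graph:
    """Emit one RDF triple per mapped column, applying any function inline."""
    g = Graph()
    with open(csv_path, newline="") as f:
        for row_id, row in enumerate(csv.DictReader(f)):
            subject = EX[f"person/{row_id}"]
            for column, (predicate, func) in MAPPING.items():
                value = func(row[column]) if func else row[column]
                g.add((subject, predicate, Literal(value)))
    return g
```

Because the function is part of the mapping table, the same function can be reused for several columns, which mirrors the reuse property claimed for FunUL below.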

Findings

The approach was evaluated using two typical real-world use cases. The authors compared how well their approach and others (that include transformation functions as part of the uplift mapping) could implement an uplift mapping from CSV/Tabular data into RDF. This comparison indicates that the authors' approach performs well for these use cases.

Originality/value

This paper presents a comparison framework and applies it to the state-of-the-art in CSV uplift tools. Furthermore, the authors describe FunUL, which, unlike other related work, defines functions as resources within the uplift mapping itself, integrating data transformation functions and mapping definitions. This makes the generation of RDF from source data transparent and traceable. Moreover, as functions are defined as resources, these can be reused multiple times within mappings.

Details

International Journal of Web Information Systems, vol. 13 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 7 June 2013

Guy A. Bingham and Richard Hague

Abstract

Purpose

The purpose of this paper is to investigate, develop and validate a three‐dimensional modelling strategy for the efficient generation of conformal textile data suitable for additive manufacture.

Design/methodology/approach

A series of additive manufactured (AM) textile samples was modelled using currently available computer-aided design software to understand the limitations associated with the generation of conformal data. Results of the initial three-dimensional modelling processes informed the exploration and development of a new dedicated efficient modelling strategy, which was tested to understand its capabilities.

Findings

The research demonstrates the dramatically improved capabilities of the developed three-dimensional modelling strategy over existing approaches, accurately mapping complex geometries described as STL data to a mapping mesh without distortion and correctly matching the orientation and surface normal.
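As a rough illustration of the geometric core of such a strategy, the Python sketch below places copies of a unit-cell geometry onto the faces of a mapping mesh, oriented to each face's surface normal via Rodrigues' rotation formula. It is an assumption-laden sketch of the general operation, not the authors' modelling strategy; the array shapes and function names are invented for the example.

```python
# Hedged sketch: orienting copies of an STL unit cell to mesh face normals.
import numpy as np

def rotation_to_normal(normal: np.ndarray) -> np.ndarray:
    """Rotation matrix aligning the +Z axis with a unit face normal."""
    z = np.array([0.0, 0.0, 1.0])
    n = normal / np.linalg.norm(normal)
    v = np.cross(z, n)
    c = float(np.dot(z, n))
    if np.isclose(c, 1.0):        # already aligned
        return np.eye(3)
    if np.isclose(c, -1.0):       # opposite direction: flip about X
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues' formula, simplified using |v|^2 = (1 - c)(1 + c).
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def map_cell_to_faces(cell_vertices, face_centres, face_normals):
    """Return one transformed copy of the (N, 3) unit cell per mesh face."""
    return [cell_vertices @ rotation_to_normal(n).T + centre
            for centre, n in zip(face_centres, face_normals)]
```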

Originality/value

To date, the generation of data for AM textiles has been seen as a manual and time-consuming process. The research presents a new dedicated methodology for the efficient generation of complex and conformal AM textile data that will underpin further research in this area.

Details

Rapid Prototyping Journal, vol. 19 no. 4
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 20 June 2008

Kai Yang, Amanda Lo and Robert Steele

Abstract

Purpose

The purpose of this paper is to address problems that exist in the context of XML to ontology translation. Existing research results dealing with XML to ontology translation do not facilitate bidirectional data translation due to the fundamental differences between XML schema and ontologies. This paper proposes a mapping representation ontology for modeling concept mappings defined between XML schema and ontology, enabling data translation without any information loss.

Design/methodology/approach

A two-step compensation approach is proposed that aims to prevent the loss of data type, structural and relational information during any single-trip data translation. The proposed mapping representation ontology is capable of capturing enough information to compensate for the loss of information during translation, hence allowing bidirectional conversion between XML and ontology.
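A minimal sketch of what such compensation can look like, assuming a deliberately simplified setting: each XML leaf becomes a fact, and a side record keeps the XPath, sibling order and XSD type that a plain ontology cannot express, so a reverse pass could rebuild the document. All names are illustrative, not the paper's ontology.

```python
# Hedged sketch: translate XML leaves into facts plus "compensation"
# records that preserve what the ontology side would otherwise lose.
from dataclasses import dataclass
from xml.etree import ElementTree as ET

@dataclass
class Compensation:
    xpath: str     # where the value lived in the XML tree
    order: int     # position among siblings (lost in an unordered ontology)
    xsd_type: str  # original datatype, e.g. "xs:date"

def xml_to_facts(xml_text: str):
    """Return (subject, property, value) facts and their compensation records."""
    root = ET.fromstring(xml_text)
    facts, comp = [], []
    for order, leaf in enumerate(root):
        facts.append((root.tag, leaf.tag, leaf.text))
        comp.append(Compensation(f"/{root.tag}/{leaf.tag}", order,
                                 leaf.get("type", "xs:string")))
    return facts, comp

facts, comp = xml_to_facts(
    '<person><name>Ada</name><born type="xs:date">1815</born></person>')
```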

Findings

Fundamental differences between XML schema and ontology are identified as the main reason for the loss of information during data translation. A compensation approach that captures a sufficient amount of concept mapping information during data translation is found to be successful in enabling lossless data transformation.

Practical implications

Outcomes from this work allow for seamless data translation between XML documents, demonstrating how web applications can communicate and exchange data with each other without needing to conform to a predefined data standard. This enhances interoperability between distributed systems.

Originality/value

This paper presents a mapping ontology that captures concept mappings defined between XML schema and ontology. Two algorithms facilitating the bidirectional XML to ontology translation are also proposed.

Details

International Journal of Web Information Systems, vol. 4 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 May 1999

Sunny Baker and Kim Baker

Abstract

You can collect huge masses of data about your customers, but if you don't sort it, it won't do you any good. Mapping software may be just what you need.

Details

Journal of Business Strategy, vol. 20 no. 5
Type: Research Article
ISSN: 0275-6668

Article
Publication date: 28 October 2014

Kyle Dillon Feuz and Diane J. Cook

Abstract

Purpose

The purpose of this paper is to study heterogeneous transfer learning for activity recognition using heuristic search techniques. Many pervasive computing applications require information about the activities currently being performed, but activity recognition algorithms typically require substantial amounts of labeled training data for each setting. One solution to this problem is to leverage transfer learning techniques to reuse available labeled data in new situations.

Design/methodology/approach

This paper introduces three novel heterogeneous transfer learning techniques that reverse the typical transfer model and map the target feature space to the source feature space, and applies them to activity recognition in a smart apartment. The techniques are evaluated on data from 18 different smart apartments located in an assisted-care facility, and the results are compared against several baselines.
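The reversed direction can be sketched as follows: learn a map W from the target feature space into the source feature space, then reuse the source-trained classifier unchanged on mapped target data. This toy version fits W by least squares from a few aligned instance pairs; the paper's contribution is precisely to find such a mapping by heuristic search without relying on that kind of co-occurrence data, so treat this only as an illustration of the setting, with all data synthetic.

```python
# Hedged sketch of reversed heterogeneous transfer: map target features
# into the source feature space and reuse the source-trained classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_src, y_src = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)
X_tgt = rng.normal(size=(50, 6))               # different feature space

clf = LogisticRegression().fit(X_src, y_src)   # trained once, on source data

# Toy shortcut: fit W from a few aligned pairs so X_tgt[:k] @ W ~ X_src[:k].
# The paper's techniques avoid needing such pairs; this is only illustrative.
k = 30
W, *_ = np.linalg.lstsq(X_tgt[:k], X_src[:k], rcond=None)

preds = clf.predict(X_tgt @ W)   # target data, mapped into source space
```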

Findings

The three transfer learning techniques are all able to outperform the baseline comparisons in several situations. Furthermore, the techniques are successfully used in an ensemble approach to achieve even higher levels of accuracy.

Originality/value

The techniques in this paper represent a considerable step forward in heterogeneous transfer learning by removing the need to rely on instance-instance or feature-feature co-occurrence data.

Details

International Journal of Pervasive Computing and Communications, vol. 10 no. 4
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 26 May 2021

Yuhan Luo and Mingwei Lin

Abstract

Purpose

The purpose of this paper is to provide an overview of 474 publications and 512 patents on the flash translation layer (FTL) from 1987 to 2020, offering a conclusive and comprehensive analysis for researchers in this field, as well as preliminary knowledge of FTL for interested researchers.

Design/methodology/approach

Firstly, the FTL algorithms are classified and their functions are introduced in detail. Secondly, the structure of the publications is analyzed in terms of fundamental information and the output of the most productive countries/regions, institutions and authors. After that, co-citation networks of institutions, authors and papers, illustrated with VOSviewer, are given to show the relationships among them, and the most influential are further analyzed. Then, the characteristics of the patents are analyzed based on basic patent information, patent classification and the most productive inventors. To identify research hotspots and trends in this field, a timeline review and citation burst detection of keywords are visualized with CiteSpace. Finally, based on the above analysis, further conclusions are drawn and the development trend of this field is outlined.
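For readers unfamiliar with the mechanics behind such maps, a keyword co-occurrence network of the kind VOSviewer and CiteSpace visualize can be built from nothing more than pair counting: every pair of keywords appearing on the same publication increments one edge weight. The sketch below is a generic illustration with made-up keywords, not the paper's data or tooling.

```python
# Illustrative sketch: building keyword co-occurrence edge weights.
from collections import Counter
from itertools import combinations

publications = [                     # made-up keyword sets, one per paper
    {"FTL", "SSD", "wear leveling"},
    {"FTL", "garbage collection"},
    {"SSD", "garbage collection", "FTL"},
]

edges = Counter()
for keywords in publications:
    for pair in combinations(sorted(keywords), 2):
        edges[pair] += 1             # one co-occurrence edge per keyword pair

print(edges.most_common(3))          # strongest co-occurrence links
```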

Findings

Research on FTL algorithms remains the top priority for the future, and how to improve the performance of solid-state drives (SSDs) in the era of big data is one of the research hotspots.

Research limitations/implications

This paper makes a comprehensive analysis of FTL using bibliometric methods, which is valuable for researchers who want to quickly grasp the hotspots in this area.

Originality/value

This article outlines the structural characteristics of the publications in this field and summarizes the research hotspots and trends of recent years, aiming to inspire new ideas for researchers.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 3
Type: Research Article
ISSN: 1756-378X

Book part
Publication date: 19 July 2005

Devi R. Gnyawali and Beverly B. Tyler

Abstract

Our primary objective is to provide method-related broad guidelines to researchers on the entire spectrum of issues involved in cause mapping and to encourage researchers to use causal mapping techniques in strategy research. We challenge strategists to open the black box and investigate the mental models that depict the cause and effect beliefs of managers, “walk” readers through the causal mapping process by discussing the “nuts and bolts” of cause mapping, provide an illustration, and outline “key issues to consider.” We conclude with a discussion of some promising research directions.

Details

Research Methodology in Strategy and Management
Type: Book
ISBN: 978-0-76231-208-5

Article
Publication date: 19 October 2015

Eugene Ch'ng

Abstract

Purpose

The purpose of this paper is to present a Big Data solution as a methodological approach to the automated collection, cleaning, collation, and mapping of multimodal, longitudinal data sets from social media. The paper constructs social information landscapes (SIL).

Design/methodology/approach

The research presented here adopts a Big Data methodological approach for mapping user-generated content in social media. The methodology and algorithms presented are generic and can be applied to diverse types of social media or user-generated content involving user interactions, such as blogs, comments on product pages and other forms of media, so long as the formal data structure proposed here can be constructed.
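The exact data structure is defined in the paper; as a hedged illustration of the minimum such a structure must capture, the sketch below shows how author-to-author interaction edges fall out of content records that store who produced what in reply to what. Field names are hypothetical.

```python
# Illustrative sketch: a minimal record for user-generated content from
# which an interaction network can be derived.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ContentNode:
    content_id: str
    author: str
    timestamp: float
    parent_id: Optional[str]  # comment/reply target; None for root posts

def build_interaction_edges(nodes: List[ContentNode]) -> List[Tuple[str, str]]:
    """Derive author-to-author interaction edges from reply relations."""
    by_id = {n.content_id: n for n in nodes}
    return [(n.author, by_id[n.parent_id].author)
            for n in nodes
            if n.parent_id is not None and n.parent_id in by_id]
```

Edges of this kind are what make the hidden network described in the findings amenable to social network analysis.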

Findings

The limited presentation of the sequential nature of content listings within social media and Web 2.0 pages, as viewed in web browsers or on mobile devices, does not necessarily reveal an unknown nature of the medium: every participant, from content producers to consumers, followers and subscribers, together with the content they produce or subscribe to, is intrinsically connected in a hidden but massive network. Such networks, when mapped, can be quantitatively analysed using social network analysis (e.g. centrality measures), and with appropriate analytics the semantics and sentiments can reveal equally valuable information. What remains difficult is collecting, cleaning, collating and mapping such data sets into a sample large enough to yield important insights into the community structure and the direction and polarity of interaction on diverse topics. This research solves this particular strand of the problem.

Research limitations/implications

The automated mapping of extremely large networks involving hundreds of thousands to millions of nodes, encapsulating high-resolution and contextual information over a long period of time, could assist in proving or even disproving theories. The goal of this paper is to demonstrate the feasibility of using automated approaches for acquiring massive, connected data sets for academic inquiry in the social sciences.

Practical implications

The methods presented in this paper, together with the Big Data architecture, can assist individuals and institutions with a limited budget by offering practical approaches to constructing SIL. The software-hardware integrated architecture uses open source software; furthermore, the SIL mapping algorithms are easy to implement.

Originality/value

The majority of research in the literature uses traditional approaches for collecting social network data. Traditional approaches can be slow and tedious, and they do not yield an adequate sample size to be of significant value for research. Whilst traditional approaches collect only a small percentage of the data, the original methods presented here are able to collect and collate entire data sets in social media thanks to automated and scalable mapping techniques.

Details

Industrial Management & Data Systems, vol. 115 no. 9
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 4 October 2019

Laurent Remy, Dragan Ivanović, Maria Theodoridou, Athina Kritsotaki, Paul Martin, Daniele Bailo, Manuela Sbarra, Zhiming Zhao and Keith Jeffery

Abstract

Purpose

The purpose of this paper is to boost multidisciplinary research through the building of an integrated catalogue of research assets metadata. Such an integrated catalogue should enable researchers to solve problems or analyse phenomena that require a view across several scientific domains.

Design/methodology/approach

There are two main approaches for integrating metadata catalogues provided by different e-science research infrastructures (e-RIs): centralised and distributed. The authors decided to implement a central metadata catalogue that describes, provides access to and records actions on the assets of a number of e-RIs participating in the system. The authors chose the CERIF data model for description of assets available via the integrated catalogue. Analysis of popular metadata formats used in e-RIs has been conducted, and mappings between popular formats and the CERIF data model have been defined using an XML-based tool for description and automatic execution of mappings.
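The sketch below illustrates the flavour of such declarative mappings with a toy Dublin Core example in Python. The real system uses the 3M/X3ML tooling and the CERIF target model, so the rule table and output fields here are illustrative stand-ins, not the project's actual mappings.

```python
# Illustrative sketch: declarative per-format rules that lift source XML
# metadata fields into one common target record.
from xml.etree import ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"
DC_RULES = {                       # source element -> target field (invented)
    f"{DC}title": "title",
    f"{DC}creator": "creator",
    f"{DC}date": "issued",
}

def map_dublin_core(xml_text: str) -> dict:
    """Apply the declarative rules to one Dublin Core XML record."""
    record = {}
    for element in ET.fromstring(xml_text):
        field = DC_RULES.get(element.tag)
        if field:
            record[field] = element.text
    return record

record = map_dublin_core(
    '<record xmlns:dc="http://purl.org/dc/elements/1.1/">'
    '<dc:title>Seismic data set</dc:title>'
    '<dc:creator>EPOS</dc:creator></record>'
)   # -> {'title': 'Seismic data set', 'creator': 'EPOS'}
```

Keeping the rules as data, rather than code, is what makes such mappings shareable and automatically executable, as the findings below describe.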

Findings

An integrated catalogue of research assets metadata has been created. Metadata from e-RIs supporting Dublin Core, ISO 19139, DCAT-AP, EPOS-DCAT-AP, OIL-E and CKAN formats can be integrated into the catalogue. Metadata are stored in CERIF RDF in the integrated catalogue. A web portal for searching this catalogue has been implemented.

Research limitations/implications

Only six formats are supported at the moment. However, descriptions of mappings between other source formats and the target CERIF format can be defined in the future using the 3M tool, an XML-based tool for describing X3ML mappings that can then be automatically executed on XML metadata records. The approach and best practices described in this paper can thus be applied in future mappings between other metadata formats.

Practical implications

The integrated catalogue is a part of the eVRE prototype, which is a result of the VRE4EIC H2020 project.

Social implications

The integrated catalogue should boost the performance of multi-disciplinary research; thus it has the potential to enhance the practice of data science and so contribute to an increasingly knowledge-based society.

Originality/value

A novel approach for creation of the integrated catalogue has been defined and implemented. The approach includes definition of mappings between various formats. Defined mappings are effective and shareable.

Details

The Electronic Library, vol. 37 no. 6
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 4 November 2014

Ningning Kong, Tao Zhang and Ilana Stonebraker

Abstract

Purpose

The purpose of this paper is to establish common metrics for web-based mapping applications to facilitate user decision making and enhance information providers’ product design.

Design/methodology/approach

The metrics were developed from a combination of literature review and case studies. From the literature review, the authors identified three major areas of assessment for web-based mapping applications. The authors then studied six online applications to refine the metrics.

Findings

The results suggest that web-based mapping applications can be evaluated from three major aspects: data content, geographic information systems (GIS) functionality and usability. The authors have developed detailed measures for each factor through the evaluation of the six applications.

Practical implications

The metrics developed from this study could be used as a standard for online spatial information users to choose appropriate products according to their needs. It can also provide valuable information for data providers to improve their products.

Originality/value

To the best of the authors' knowledge, this is the first study that has systematically examined web-based mapping applications in academic libraries. Results from this study could be a valuable tool for librarians, as well as for general information users without a background in GIS or usability, to evaluate online mapping resources.

Details

Online Information Review, vol. 38 no. 7
Type: Research Article
ISSN: 1468-4527
