Search results

1 – 10 of over 213,000
Article
Publication date: 1 August 1993

Vincent-Wayne Mitchell and Yan E. Volking

Observes that information is becoming the most powerful of modern business tools and, as companies internationalize, managers are going to be faced with more to handle. Discusses…

Abstract

Observes that information is becoming the most powerful of modern business tools and, as companies internationalize, managers are going to be faced with more to handle. Discusses Senn's properties of information and presents an analytical tool for managers to use when presented with new, or old untested, data sources. The simple framework is designed to allow managers to highlight problems with data sources quickly and consistently, to take corrective action or to make decisions with more awareness of the limitations of the data.
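
A minimal sketch of what such a checklist-style assessment tool could look like in code, assuming a simple set of information-quality properties and a 1-5 rating scale (the properties and threshold below are illustrative assumptions, not the criteria from the article):

    # Hypothetical sketch of a checklist-style data-source assessment tool.
    # The properties here are generic information-quality attributes; the
    # article's actual framework, built on Senn's properties, may differ.
    PROPERTIES = ["accuracy", "timeliness", "completeness", "relevance", "cost"]

    def assess_source(ratings):
        """Average a manager's 1-5 ratings across the checklist properties."""
        missing = [p for p in PROPERTIES if p not in ratings]
        if missing:
            raise ValueError("unrated properties: %s" % missing)
        return sum(ratings[p] for p in PROPERTIES) / len(PROPERTIES)

    # Example: flag a data source scoring below a threshold for review.
    score = assess_source({"accuracy": 4, "timeliness": 2, "completeness": 3,
                           "relevance": 5, "cost": 3})
    print("needs review" if score < 3.5 else "acceptable", round(score, 2))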

Details

Management Decision, vol. 31 no. 8
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 3 April 2017

Adrian Burton, Hylke Koers, Paolo Manghi, Sandro La Bruzzo, Amir Aryani, Michael Diepenbroek and Uwe Schindler

Research data publishing is today widely regarded as crucial for reproducibility, proper assessment of scientific results, and as a way for researchers to get proper credit for…

Abstract

Purpose

Research data publishing is today widely regarded as crucial for reproducibility, proper assessment of scientific results, and as a way for researchers to get proper credit for sharing their data. However, several challenges need to be solved to fully realize its potential, one of them being the development of a global standard for links between research data and literature. Current linking solutions are mostly based on bilateral, ad hoc agreements between publishers and data centers. These operate in silos so that content cannot be readily combined to deliver a network graph connecting research data and literature in a comprehensive and reliable way. The Research Data Alliance (RDA) Publishing Data Services Working Group (PDS-WG) aims to address this issue of fragmentation by bringing together different stakeholders to agree on a common infrastructure for sharing links between datasets and literature. The paper aims to discuss these issues.

Design/methodology/approach

This paper presents the synergic effort of the RDA PDS-WG and the OpenAIRE infrastructure toward enabling a common infrastructure for exchanging data-literature links by realizing and operating the Data-Literature Interlinking (DLI) Service. The DLI Service populates and provides access to a graph of data set-literature links (at the time of writing close to five million, and growing) collected from a variety of major data centers, publishers, and research organizations.

Findings

To achieve its objectives, the Service proposes an interoperable exchange data model and format, based on which it collects and publishes links, thereby offering the opportunity to validate such a common approach in real-case scenarios, with real providers and consumers. Feedback from these actors will drive continuous refinement of both the data model and the exchange format, supporting the further development of the Service to become an essential part of a universal, open, cross-platform, cross-discipline solution for collecting and sharing data set-literature links.
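
The abstract does not reproduce the exchange format itself; the sketch below shows, under assumed field names, the general shape of a dataset-literature link record and how records from multiple providers could be merged into one link graph (the real DLI/Scholix schema differs in detail):

    # Hypothetical sketch of a dataset-literature link record; the field
    # names are illustrative assumptions, not the actual DLI/Scholix schema.
    link = {
        "source": {"identifier": "10.5061/dryad.example", "type": "dataset"},
        "target": {"identifier": "10.1000/journal.example", "type": "literature"},
        "relationship": "isReferencedBy",
        "provider": "ExampleDataCenter",  # who asserted the link
        "publication_date": "2017-04-03",
    }

    def merge_links(feeds):
        """De-duplicate link records from many providers into one graph."""
        graph = {}
        for record in feeds:
            key = (record["source"]["identifier"],
                   record["target"]["identifier"],
                   record["relationship"])
            graph.setdefault(key, []).append(record["provider"])
        return graph

    print(merge_links([link]))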

Originality/value

This realization of the DLI Service is the first technical, cross-community, and collaborative effort in the direction of establishing a common infrastructure for facilitating the exchange of data set-literature links. As a result of its operation and underlying community effort, a new activity, named Scholix, has been initiated, involving technology-level stakeholders such as DataCite and CrossRef.

Details

Program, vol. 51 no. 1
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 30 October 2018

Daniel Kaltenthaler, Johannes-Y. Lohrer, Florian Richter and Peer Kröger

Interdisciplinary linkage of information is an emerging topic to create knowledge by collaboration of experts in diverse domains. New insights can be found by using the combined…

Abstract

Purpose

Interdisciplinary linkage of information is an emerging topic to create knowledge by collaboration of experts in diverse domains. New insights can be found by using the combined techniques and information when people have the chance to discuss and communicate on a common basis.

Design/methodology/approach

This paper describes RMS Cloud, an information management system which allows distributed data sources to be searched using dynamic joins of results from heterogeneous data formats. It is based on the well-known Mediator architecture, but reverses the connection of the data sources to grant data owners full control over the data.
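
As a rough illustration of the reversed connection, a sketch in which sources register themselves with the mediator (rather than the mediator holding connections to the sources) might look as follows; the class and source names are invented for the example:

    # Hypothetical sketch of a reversed mediator: each source connects
    # outward and registers a query callback, so the data owner keeps
    # full control over what the source exposes.
    class Mediator:
        def __init__(self):
            self.sources = {}

        def register(self, name, query_fn):
            # A source joins the mediator; the mediator never reaches in.
            self.sources[name] = query_fn

        def search(self, term):
            # Fan the query out to all registered sources, join the results.
            return {name: fn(term) for name, fn in self.sources.items()}

    mediator = Mediator()
    mediator.register("finds_db", lambda t: [r for r in ["femur", "tibia"] if t in r])
    mediator.register("sites_db", lambda t: [r for r in ["site with femur"] if t in r])
    print(mediator.search("femur"))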

Findings

Data owners and learners are enabled to retrieve information and to cross-connect domain-extrinsic knowledge; the system enhances collaborative learning with a search interface that is intuitive and easy to operate.

Originality/value

This novel architecture is able to connect to differently shaped data sources from interdisciplinary domains into one common retrieval interface.

Details

Journal of Information, Communication and Ethics in Society, vol. 16 no. 4
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 8 June 2015

Lihua Lu, Hengzhen Zhang and Xiao-Zhi Gao

Data integration combines data residing at different sources and provides users with a unified interface to these data. An important issue in data integration is the…

Abstract

Purpose

Data integration combines data residing at different sources and provides users with a unified interface to these data. An important issue in data integration is the existence of conflicts among the different data sources. Data sources may conflict with each other at the data level, which is defined as data inconsistency. The purpose of this paper is to address this problem and propose a solution for data inconsistency in data integration.

Design/methodology/approach

A relational data model extended with data source quality criteria is first defined. Then, based on the proposed data model, a data inconsistency solution strategy is provided. To accomplish the strategy, a fuzzy multi-attribute decision-making (MADM) approach based on data source quality criteria is applied to obtain the results. Finally, user feedback strategies are proposed to optimize the result of the fuzzy MADM approach into the final solution for data inconsistency.
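
As a crisp stand-in for the fuzzy MADM step (the paper's actual method uses fuzzy scores, and the criteria and weights here are assumptions), conflicting values can be ranked by the quality criteria of the sources that supplied them:

    # Hypothetical sketch: choose among conflicting values by weighting
    # each candidate with its source's quality scores. A crisp stand-in
    # for the paper's fuzzy multi-attribute decision-making step.
    CRITERIA_WEIGHTS = {"accuracy": 0.5, "freshness": 0.3, "reputation": 0.2}

    def resolve(candidates):
        """candidates: list of (value, {criterion: score in [0, 1]})."""
        def utility(quality):
            return sum(w * quality[c] for c, w in CRITERIA_WEIGHTS.items())
        return max(candidates, key=lambda cand: utility(cand[1]))[0]

    # Two sources disagree on a sensor reading; the better-rated source
    # wins. User feedback could then adjust the weights over time.
    print(resolve([
        (21.5, {"accuracy": 0.9, "freshness": 0.8, "reputation": 0.7}),
        (25.0, {"accuracy": 0.4, "freshness": 0.9, "reputation": 0.5}),
    ]))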

Findings

To evaluate the proposed method, data obtained from sensors are extracted. Some experiments are designed and performed to demonstrate the effectiveness of the proposed strategy. The results substantiate that the solution performs better than the other methods on correctness, time cost and stability indicators.

Practical implications

Since inconsistent data collected from sensors are pervasive, the proposed method can solve this problem and correct wrong choices to some extent.

Originality/value

In this paper, for the first time, the authors study the effect of user feedback on integration results for inconsistent data.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 8 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 28 October 2014

Kyle Dillon Feuz and Diane J. Cook

The purpose of this paper is to study heterogeneous transfer learning for activity recognition using heuristic search techniques. Many pervasive computing applications require…

Abstract

Purpose

The purpose of this paper is to study heterogeneous transfer learning for activity recognition using heuristic search techniques. Many pervasive computing applications require information about the activities currently being performed, but activity recognition algorithms typically require substantial amounts of labeled training data for each setting. One solution to this problem is to leverage transfer learning techniques to reuse available labeled data in new situations.

Design/methodology/approach

This paper introduces three novel heterogeneous transfer learning techniques that reverse the typical transfer model by mapping the target feature space to the source feature space, and applies them to activity recognition in a smart apartment. The paper evaluates the techniques on data from 18 different smart apartments located in an assisted-care facility and compares the results against several baselines.
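
The distinctive point is the direction of the mapping: target features are projected onto the source feature space so that a model trained on the source can be reused unchanged. A toy sketch under assumed inputs (the paper learns the mapping with heuristic search; simple per-feature statistics stand in for that here):

    # Hypothetical sketch: map each target feature to the most similar
    # source feature by comparing simple per-feature statistics, so a
    # source-trained classifier can score target instances unchanged.
    def feature_stats(column):
        mean = sum(column) / len(column)
        var = sum((x - mean) ** 2 for x in column) / len(column)
        return mean, var

    def map_features(target_cols, source_cols):
        mapping = {}
        for t_name, t_col in target_cols.items():
            t_mean, t_var = feature_stats(t_col)
            mapping[t_name] = min(
                source_cols,
                key=lambda s: abs(feature_stats(source_cols[s])[0] - t_mean)
                            + abs(feature_stats(source_cols[s])[1] - t_var),
            )
        return mapping

    source = {"kitchen_motion": [0, 1, 1, 0], "bed_pressure": [1, 0, 0, 0]}
    target = {"stove_sensor": [1, 1, 1, 0]}
    print(map_features(target, source))  # {'stove_sensor': 'kitchen_motion'}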

Findings

The three transfer learning techniques are all able to outperform the baseline comparisons in several situations. Furthermore, the techniques are successfully used in an ensemble approach to achieve even higher levels of accuracy.

Originality/value

The techniques in this paper represent a considerable step forward in heterogeneous transfer learning by removing the need to rely on instance-to-instance or feature-to-feature co-occurrence data.

Details

International Journal of Pervasive Computing and Communications, vol. 10 no. 4
Type: Research Article
ISSN: 1742-7371

Book part
Publication date: 18 July 2022

Shivani Vaid

Introduction: With the proliferation and amalgamation of technology and the emergence of artificial intelligence and the internet of things, society is now facing a rapid…

Abstract

Introduction: With the proliferation and amalgamation of technology and the emergence of artificial intelligence and the internet of things, society is now facing a rapid explosion in big data. However, this explosion needs to be handled with care. Ethically managing big data is of great importance. If left unmanaged, it can create a bubble of data waste and fail to help society achieve human well-being, sustainable economic growth and development.

Purpose: This chapter aims to understand different perspectives on big data. One philosophy of big data is defined by its volume and versatility, with volume increasing by 40% per annum. The other view represents its capability to deal with multiple global issues and fuel innovation. This chapter will also offer insight into various ways to deal with societal problems, provide solutions to achieve economic growth, and aid vulnerable sections of society via the sustainable development goals (SDGs).

Methodology: This chapter lays out a review of the literature related to big data. It examines how the big data pool potentially influences ideas and policies for achieving the SDGs. Also, different techniques for collecting big data and an assortment of significant data sources are analysed in the context of achieving sustainable economic development and growth.

Findings: This chapter presents a list of challenges linked with big data analytics in governance and the achievement of the SDGs. Different ways to deal with the challenges of using big data are also addressed.

Details

Big Data Analytics in the Insurance Market
Type: Book
ISBN: 978-1-80262-638-4

Article
Publication date: 25 October 2022

Samir Sellami and Nacer Eddine Zarour

Massive amounts of data, manifesting in various forms, are being produced on the Web every minute and becoming the new standard. Exploring these information sources distributed in…

Abstract

Purpose

Massive amounts of data, manifesting in various forms, are being produced on the Web every minute and becoming the new standard. Exploring these information sources distributed in different Web segments in a unified way is becoming a core task for a variety of users’ and companies’ scenarios. However, knowledge creation and exploration from distributed Web data sources is a challenging task. Several data integration conflicts need to be resolved and the knowledge needs to be visualized in an intuitive manner. The purpose of this paper is to extend the authors’ previous integration works to address semantic knowledge exploration of enterprise data combined with heterogeneous social and linked Web data sources.

Design/methodology/approach

The authors synthesize information in the form of a knowledge graph to resolve interoperability conflicts at integration time. They begin by describing KGMap, a mapping model for leveraging knowledge graphs to bridge heterogeneous relational, social and linked web data sources. The mapping model relies on semantic similarity measures to connect the knowledge graph schema with the sources' metadata elements. Then, based on KGMap, this paper proposes KeyFSI, a keyword-based semantic search engine. KeyFSI provides a responsive faceted navigating Web user interface designed to facilitate the exploration and visualization of embedded data behind the knowledge graph. The authors implemented their approach for a business enterprise data exploration scenario where inputs are retrieved on the fly from a local customer relationship management database combined with the DBpedia endpoint and the Facebook Web application programming interface (API).
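
As a rough sketch of the mapping step, each metadata element of a source can be linked to the knowledge-graph schema term it most resembles under a similarity measure; token-overlap (Jaccard) below is a simple placeholder for the semantic similarity measures the paper evaluates, and all names are invented:

    # Hypothetical sketch of KGMap-style matching: connect source metadata
    # elements to knowledge-graph schema terms whenever a similarity
    # measure clears a threshold. Jaccard token overlap stands in for the
    # semantic measures compared in the paper.
    def jaccard(a, b):
        ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
        return len(ta & tb) / len(ta | tb)

    def match_schema(kg_terms, metadata_elements, threshold=0.3):
        links = []
        for element in metadata_elements:
            best = max(kg_terms, key=lambda term: jaccard(term, element))
            if jaccard(best, element) >= threshold:
                links.append((element, best))
        return links

    kg = ["customer_name", "company_location", "product_category"]
    crm = ["name", "location", "category_of_product"]
    print(match_schema(kg, crm))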

Findings

The authors conducted an empirical study to test the effectiveness of their approach using different similarity measures. The observed results showed better efficiency when using a semantic similarity measure. In addition, a usability evaluation was conducted to compare KeyFSI features with recent knowledge exploration systems. The obtained results demonstrate the added value and usability of the contributed approach.

Originality/value

Most state-of-the-art interfaces allow users to browse one Web segment at a time. The originality of this paper lies in proposing a cost-effective virtual on-demand knowledge creation approach, a method that enables organizations to explore valuable knowledge across multiple Web segments simultaneously. In addition, the responsive components implemented in KeyFSI allow the interface to adequately handle the uncertainty imposed by the nature of Web information, thereby providing a better user experience.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 14 August 2017

Xiu Susie Fang, Quan Z. Sheng, Xianzhi Wang, Anne H.H. Ngu and Yihong Zhang

This paper aims to propose a system for generating actionable knowledge from Big Data and use this system to construct a comprehensive knowledge base (KB), called GrandBase.

Abstract

Purpose

This paper aims to propose a system for generating actionable knowledge from Big Data and use this system to construct a comprehensive knowledge base (KB), called GrandBase.

Design/methodology/approach

In particular, this study extracts new predicates from four types of data sources, namely, Web texts, Document Object Model (DOM) trees, existing KBs and query streams, to augment the ontology of the existing KB (i.e. Freebase). In addition, a graph-based approach to conduct better truth discovery for multi-valued predicates is also proposed.
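
The departure from the usual single-truth assumption can be illustrated with a toy truth-discovery routine that accepts every value whose reliability-weighted support clears a threshold, instead of keeping only the top-voted one (source weights and the acceptance rule are assumptions; the paper's graph-based algorithm differs):

    # Hypothetical sketch of truth discovery for multi-valued predicates:
    # accept every claimed value whose reliability-weighted support passes
    # a threshold, rather than a single "winner" per predicate.
    def discover_truths(claims, reliability, threshold=0.5):
        """claims: {source: claimed values}; reliability: {source: weight}."""
        support = {}
        for source, values in claims.items():
            for value in values:
                support[value] = support.get(value, 0.0) + reliability[source]
        total = sum(reliability.values())
        return {v for v, s in support.items() if s / total >= threshold}

    # "Children of X" is naturally multi-valued: two reliable sources agree
    # on two values; a weak source adds an unsupported third.
    claims = {"web": {"Ann", "Bob"}, "kb": {"Ann", "Bob"}, "forum": {"Eve"}}
    reliability = {"web": 0.8, "kb": 0.9, "forum": 0.2}
    print(discover_truths(claims, reliability))  # {'Ann', 'Bob'}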

Findings

Empirical studies demonstrate the effectiveness of the approaches presented in this study and the potential of GrandBase. Future research directions regarding GrandBase construction and extension are also discussed.

Originality/value

To revolutionize our modern society by using the wisdom of Big Data, numerous KBs have been constructed to feed massive knowledge-driven applications with Resource Description Framework triples. The important challenges for KB construction include extracting information from large-scale, possibly conflicting and differently structured data sources (i.e. the knowledge extraction problem) and reconciling the conflicts that reside in the sources (i.e. the truth discovery problem). Tremendous research efforts have been devoted to both problems. However, the existing KBs are far from being comprehensive and accurate: first, existing knowledge extraction systems retrieve data from limited types of Web sources; second, existing truth discovery approaches commonly assume each predicate has only one true value. In this paper, the focus is on the problem of generating actionable knowledge from Big Data. A system is proposed, which consists of two phases, namely, knowledge extraction and truth discovery, to construct a broader KB, called GrandBase.

Details

PSU Research Review, vol. 1 no. 2
Type: Research Article
ISSN: 2399-1747

Open Access
Article
Publication date: 19 November 2021

Cass Shum, Jaimi Garlington, Ankita Ghosh and Seyhmus Baloglu

This study aims to describe the development of hospitality research in terms of research methods and data sources used in the 2010s.

Abstract

Purpose

This study aims to describe the development of hospitality research in terms of research methods and data sources used in the 2010s.

Design/methodology/approach

Content analyses of the research methods and data sources used in original hospitality research published in the 2010s in the Cornell Hospitality Quarterly (CQ), International Journal of Hospitality Management (IJHM), International Journal of Contemporary Hospitality Management (IJCHM), Journal of Hospitality and Tourism Research (JHTR) and International Hospitality Review (IHR) were conducted. The study examines whether the time span, functional areas and geographic regions of the data sources were related to the research methods and data sources used.

Findings

Results from 2,759 original hospitality empirical articles showed that marketing research used various research methods and data sources. Most finance articles used archival data, while most human resources articles used survey designs with organizational data. In addition, only a small amount of research used data from Oceania, Africa and Latin America.

Research limitations/implications

This study sheds some light on the development of hospitality research in terms of research method and data source usage. However, it focused only on five English-language journals from 2010–2019. Therefore, future studies may seek to understand the impact of the COVID-19 pandemic on research methods and data source usage in hospitality research.

Originality/value

This is the first study to examine five hospitality journals' research methods and data sources used in the last decade. It sheds light on the development of hospitality research in the previous decade and identifies new hospitality research avenues.

Details

International Hospitality Review, vol. 37 no. 2
Type: Research Article
ISSN: 2516-8142

Article
Publication date: 21 August 2017

Kwasi Gyau Baffour Awuah, Frank Gyamfi-Yeboah, David Proverbs and Jessica Elizabeth Lamond

Adequate reliable property market data are critical to the production of professional and ethical valuations as well as better real estate transaction decision-making. However…

Abstract

Purpose

Adequate reliable property market data are critical to the production of professional and ethical valuations as well as better real estate transaction decision-making. However, the availability of reliable property market information represents a major barrier to improving valuation practices in Ghana and it is regarded as a key challenge. The purpose of this paper is to investigate the sources and reliability of property market information for valuation practice in Ghana. The aim is to provide input into initiatives to address the availability of reliable property market data challenges.

Design/methodology/approach

A mixed methods research approach is used. The study thus relies on a combination of a systematic identification and review of literature, a stakeholder workshop and a questionnaire survey of real estate valuers in Accra, Ghana's capital city, to obtain the requisite data to address the aim.

Findings

The study identifies seven property market data sources used by valuers to obtain market data for valuation practice: valuers' own databases; public institutions; professional colleagues; property owners; estate developers; estate agents; and the media. However, access to property market information for valuations remains a challenge, although valuers would like to use reliable market data. This is due to the incomplete and scattered nature of data, often borne of administrative lapses; non-disclosure of details of property transactions because of confidentiality arrangements and the quest to evade taxes; data integrity concerns; and a lack of requisite training and experience, especially among estate agents, for collecting and managing market data. Although professional colleagues are the most used market data source, valuers' own databases were regarded as the most reliable source, while the media was considered the least reliable.

Research limitations/implications

Findings from the study imply a need for the development of a systematic approach to property market data collection and management. This will require practitioners to demonstrate care, consciousness and a set of data collection skills suggesting a need for valuers and estate agents to undergo regular relevant training to develop and enhance their knowledge, skills and capabilities. The establishment of a property market databank to help in the provision of reliable market data along with a suitable market data collection template to ensure effective and efficient data collection are considered essential steps.

Originality/value

The study makes a significant contribution to the extant knowledge by providing empirical evidence on the frequency of use and the reliability of the various sources of market data. It also provides useful insights for regulators such as the Ghana Institution of Surveyors (GhIS), the Royal Institution of Chartered Surveyors (RICS) and other stakeholders such as the Commonwealth Association of Surveying and Land Economy (CASLE) and the Government to improve the provision of reliable property market information towards developing valuation practice not only in Ghana, but across the Sub-Saharan Africa Region. Also, based on these findings, the study proposes a new property market data collection template and guidelines towards improving the collection of effective property market data. Upon refinement, these could aid valuation practitioners to collect reliable property market data to improve valuation practice.

Details

Property Management, vol. 35 no. 4
Type: Research Article
ISSN: 0263-7472
