Search results

1 – 10 of 287
Content available
Article
Publication date: 1 April 2006

Details

Assembly Automation, vol. 26 no. 2
Type: Research Article
ISSN: 0144-5154

Open Access
Article
Publication date: 31 October 2022

Sunday Adewale Olaleye, Emmanuel Mogaji, Friday Joseph Agbo, Dandison Ukpabi and Akwasi Gyamerah Adusei

Abstract

Purpose

The data economy mainly relies on the surveillance capitalism business model, enabling companies to monetize their data. The surveillance allows for transforming private human experiences into behavioral data that can be harnessed in the marketing sphere. This study aims to focus on investigating the domain of data economy with the methodological lens of quantitative bibliometric analysis of published literature.

Design/methodology/approach

The bibliometric analysis seeks to unravel trends and timelines for the emergence of the data economy, its conceptualization, scientific progression and thematic synergy that could predict the future of the field. A total of 591 records published between 2008 and June 2021 were analyzed with the web-based Biblioshiny app and VOSviewer version 1.6.16, using data from Web of Science and Scopus.
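The thematic-synergy mapping that tools such as VOSviewer produce rests on counting keyword co-occurrence across records. A minimal sketch of that counting step follows; the records and keywords are invented for illustration and are not drawn from the study's dataset.

```python
from itertools import combinations
from collections import Counter

# Invented records standing in for keyword lists harvested from
# Web of Science / Scopus exports.
records = [
    {"keywords": ["data economy", "surveillance capitalism", "big data"]},
    {"keywords": ["data economy", "big data"]},
    {"keywords": ["FAIR data", "data economy"]},
]

# Count how often each keyword pair appears in the same record; strongly
# co-occurring pairs form the thematic clusters a bibliometric map shows.
cooccurrence = Counter()
for rec in records:
    for a, b in combinations(sorted(rec["keywords"]), 2):
        cooccurrence[(a, b)] += 1

print(cooccurrence.most_common(1))  # [(('big data', 'data economy'), 2)]
```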

Findings

This study combined findable, accessible, interoperable and reusable (FAIR) data and data economy and contributed to the literature on big data, information discovery and delivery by shedding light on the conceptual, intellectual and social structure of data economy and demonstrating data relevance as a key strategic asset for companies and academia now and in the future.

Research limitations/implications

Findings from this study provide a steppingstone for researchers who may engage in further empirical and longitudinal studies by employing, for example, a quantitative and systematic review approach. In addition, future research could expand the scope of this study beyond FAIR data and the data economy to examine aspects such as theories and offer plausible explanations of several phenomena in the emerging field.

Practical implications

The researchers can use the results of this study as a steppingstone for further empirical and longitudinal studies.

Originality/value

This study confirmed the relevance of data to society and revealed some gaps to be addressed in future work.

Details

Information Discovery and Delivery, vol. 51 no. 2
Type: Research Article
ISSN: 2398-6247

Open Access
Article
Publication date: 13 June 2023

Mikael Laakso

Abstract

Purpose

Science policy and practice for open access (OA) books is a rapidly evolving area in the scholarly domain. However, there is much that remains unknown, including how many OA books there are and to what degree they are included in preservation coverage. The purpose of this study is to contribute towards filling this knowledge gap in order to advance both research and practice in the domain of OA books.

Design/methodology/approach

This study utilized open bibliometric data sources to aggregate a harmonized dataset of metadata records for OA books (data sources: the Directory of Open Access Books, OpenAIRE, OpenAlex, Scielo Books, The Lens, and WorldCat). This dataset was then cross-matched based on unique identifiers and book titles to openly available content listings of trusted preservation services (data sources: Cariniana Network, CLOCKSS, Global LOCKSS Network, and Portico). The web domains of the OA books were determined by querying the web addresses or digital object identifiers provided in the metadata of the bibliometric database entries.
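The cross-matching step described above can be sketched as follows: book records are matched against a preservation listing first on a unique identifier, then on a normalized title as a fallback. All field names, identifiers and titles here are illustrative assumptions, not the study's actual data model.

```python
# Invented record structures; the study's real sources are DOAB, OpenAIRE,
# OpenAlex, Scielo Books, The Lens and WorldCat, matched against Cariniana,
# CLOCKSS, Global LOCKSS Network and Portico listings.

def normalize_title(title):
    """Lowercase and drop non-alphanumerics so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def match_records(oa_books, preserved):
    """Return OA book records found in a preservation content listing."""
    preserved_ids = {r["id"] for r in preserved if r.get("id")}
    preserved_titles = {normalize_title(r["title"]) for r in preserved}
    matched = []
    for book in oa_books:
        # Prefer the unique identifier; fall back to the normalized title.
        if book.get("id") in preserved_ids or \
           normalize_title(book["title"]) in preserved_titles:
            matched.append(book)
    return matched

oa_books = [
    {"id": "10.5555/alpha", "title": "Open Monographs in Europe"},
    {"id": None, "title": "A History of Scholarly Presses"},
    {"id": "10.5555/gamma", "title": "Untracked Volume"},
]
preserved = [
    {"id": "10.5555/alpha", "title": "Open Monographs in Europe"},
    {"id": "10.5555/beta", "title": "A history of scholarly presses!"},
]
print(len(match_records(oa_books, preserved)))  # 2
```

The identifier pass catches the first record; the title fallback catches the second despite punctuation and casing differences, mirroring why the study matched on both identifiers and titles.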

Findings

In total, 396,995 unique records were identified from the OA book bibliometric sources, of which 19% were found to be included in at least one of the preservation services. The results suggest reason for concern for the long tail of OA books distributed across thousands of different web domains, as these include volatile cloud storage or, in some cases, no longer contain the files at all.

Research limitations/implications

Data quality issues, varying definitions of OA across services and inconsistent implementation of unique identifiers were discovered as key challenges. The study includes recommendations for publishers, libraries, data providers and preservation services for improving monitoring and practices for OA book preservation.

Originality/value

This study provides methodological and empirical findings for advancing the practices of OA book publishing, preservation and research.

Details

Journal of Documentation, vol. 79 no. 7
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 29 June 2020

Paolo Manghi, Claudio Atzori, Michele De Bonis and Alessia Bardi

Abstract

Purpose

Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate scholarly/scientific communication entities such as publications, authors, datasets, organizations, projects, funders, etc. Depending on the target users, access can vary from searching and browsing content to consuming statistics for monitoring and the provision of feedback. Such graphs are populated over time as aggregations of multiple sources and therefore suffer from major entity-duplication problems. Although deduplication of graphs is a well-known and ongoing problem, existing solutions are dedicated to specific scenarios, operate on flat collections or address local topology-driven challenges, and therefore cannot be re-used in other contexts.

Design/methodology/approach

This work presents GDup, an integrated, scalable, general-purpose system that can be customized to address deduplication over arbitrarily large information graphs. The paper presents its high-level architecture, describes its implementation as a service used within the OpenAIRE infrastructure system and reports figures from real-case experiments.

Findings

GDup provides the functionalities required to deliver a fully-fledged entity deduplication workflow over a generic input graph. The system offers out-of-the-box Ground Truth management, acquisition of feedback from data curators and algorithms for identifying and merging duplicates, to obtain an output disambiguated graph.
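The workflow the findings describe — candidate identification, duplicate matching and merging into a disambiguated output — can be sketched as a toy pipeline. This is an illustrative approximation, not GDup's implementation or API; the blocking key and matcher below are invented.

```python
from itertools import combinations
from collections import defaultdict

def blocking_key(record):
    # Cheap candidate identification: group records sharing a surname.
    return record["name"].split()[-1].lower()

def similar(a, b):
    # Toy matcher: names identical once dots and spaces are stripped.
    clean = lambda s: s.lower().replace(".", "").replace(" ", "")
    return clean(a["name"]) == clean(b["name"])

def deduplicate(records):
    blocks = defaultdict(list)
    for r in records:
        blocks[blocking_key(r)].append(r)
    # Union-find over matched pairs inside each block.
    parent = {r["id"]: r["id"] for r in records}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for block in blocks.values():
        for a, b in combinations(block, 2):
            if similar(a, b):
                parent[find(b["id"])] = find(a["id"])
    groups = defaultdict(list)
    for r in records:
        groups[find(r["id"])].append(r)
    # Merge each duplicate group into one representative entity.
    return [{"id": root, "names": sorted(r["name"] for r in grp)}
            for root, grp in groups.items()]

records = [
    {"id": 1, "name": "J. Smith"},
    {"id": 2, "name": "J Smith"},
    {"id": 3, "name": "A. Jones"},
]
merged = deduplicate(records)
print(len(merged))  # 2: the two "J Smith" variants collapse into one entity
```

Blocking keeps the pairwise comparison affordable at scale, which is the same design pressure a system targeting "big data scalability issues" faces.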

Originality/value

To our knowledge, GDup is the only system in the literature that offers an integrated and general-purpose solution for the deduplication of graphs while targeting big data scalability issues. GDup is today one of the key modules of the OpenAIRE infrastructure production system, which monitors Open Science trends on behalf of the European Commission, national funders and institutions.

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288

Open Access
Book part
Publication date: 6 May 2019

Michael Rigby, Grit Kühne and Shalmali Deshpande

Abstract

Information and communication technologies can transform how services can be and are delivered, as has already happened in other arenas such as civil aviation, financial services and retailing. Most modern health care is heavily dependent on e-health, including record keeping, targeted information sharing and digital diagnostic and imaging techniques. However, there remains little scientific knowledge base for optimal system content and function in primary health care, particularly for children. Models of Child Health Appraised (MOCHA) aimed to establish the current e-health situation in children’s primary care services. Electronic health records (EHRs) are in regular use in much of northern and western Europe and in some newer European Union Member States, but other countries lag behind. MOCHA investigated the use of unique identifiers, the use of case-based public health EHRs, the capability of record linkage, the linkage of information with school health data and the monitoring of social media influences such as health websites and health apps. A widespread lack of standards underlined the absence of research enquiry into this issue in terms of children’s health data and health knowledge. Health websites and apps are a growing area of healthcare delivery, but there is a worrying lack of safeguards in place. The challenge for policy-makers and practitioners is to be aware and to lead on the innovative harnessing of new technologies, while protecting child users against new harms.

Details

Issues and Opportunities in Primary Health Care for Children in Europe
Type: Book
ISBN: 978-1-78973-354-9

Content available
Article
Publication date: 1 March 2013

Details

Library Hi Tech News, vol. 30 no. 1
Type: Research Article
ISSN: 0741-9058

Content available
Article
Publication date: 5 July 2021

Pedro Lafargue, Michael Rogerson, Glenn C. Parry and Joel Allainguillaume

Abstract

Purpose

This paper examines the potential of “biomarkers” to provide immutable identification for food products (chocolate), providing traceability and visibility in the supply chain from retail product back to farm.

Design/methodology/approach

This research uses qualitative data collection, including fieldwork at cocoa farms and chocolate manufacturers in Ecuador and the Netherlands and semi-structured interviews with industry professionals to identify challenges and create a supply chain map from cocoa plant to retailer, validated by area experts. A library of biomarkers is created using DNA collected from fieldwork and the International Cocoa Quarantine Centre, holders of cocoa varieties from known locations around the world. Matching sample biomarkers with those in the library enables identification of origins of cocoa used in a product, even when it comes from multiple different sources and has been processed.
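The library-matching idea can be reduced to a minimal sketch: known origins map to the marker sets observed there, and the markers recovered from a sample are intersected with the library to infer which origins contributed cocoa. The marker identifiers and origin names below are invented; the actual method relies on DNA meta-barcoding, not simple set intersection.

```python
# Invented marker identifiers and origins; the real study builds its library
# from fieldwork DNA and International Cocoa Quarantine Centre holdings.
marker_library = {
    "farm_ecuador_A": {"mk101", "mk204", "mk307"},
    "farm_ecuador_B": {"mk102", "mk205"},
    "quarantine_ref": {"mk900"},
}

def infer_origins(sample_markers, library, min_hits=2):
    """Return origins whose known markers overlap the sample sufficiently."""
    return sorted(
        origin for origin, markers in library.items()
        if len(markers & sample_markers) >= min_hits
    )

# A processed product blending beans from two sources still carries
# markers from both, so both origins are recovered.
sample = {"mk101", "mk204", "mk102", "mk205"}
print(infer_origins(sample, marker_library))
# ['farm_ecuador_A', 'farm_ecuador_B']
```

The `min_hits` threshold stands in for the evidential bar a real assay would set before attributing a blended, processed product to a specific origin.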

Findings

Supply chain mapping and interviews identify areas of the cocoa supply chain that lack the visibility required for management to guarantee sustainability and quality. A decoupling point, where smaller farms/traders’ goods are combined to create larger economic units, obscures product origins and limits visibility. These factors underpin a potential boundary condition to institutional theory in the industry’s fatalism to environmental and human abuses in the face of rising institutional pressures. Biomarkers reliably identify product origin, including specific farms and (fermentation) processing locations, providing visibility and facilitating control and trust when purchasing cocoa.

Research limitations/implications

The biomarker “meta-barcoding” of cocoa beans used in chocolate manufacturing accurately identifies the farm, production facility or cooperative where a cocoa product came from. A controlled data set of biomarkers of registered locations is required for audit to link chocolate products to origin.

Practical implications

Where biomarkers can be produced from organic products, they offer a method for closing visibility gaps, enabling responsible sourcing. Labels (QR codes, barcodes, etc.) can be swapped and products tampered with, but biological markers reduce reliance on physical tags, diminishing the potential for fraud. Biomarkers identify product composition, pinpointing specific farm(s) of origin for cocoa in chocolate, allowing targeted audits of suppliers and identifying if cocoa of unknown origin is present. Labour and environmental abuses exist in many supply chains and enabling upstream visibility may help firms address these challenges.

Social implications

By describing a method for firms in cocoa supply chains to scientifically track their cocoa back to the farm level, the research shows that organizations can conduct social audits for child labour and environmental abuses at specific farms proven to be in their supply chains. This provides a method for delivering supply chain visibility (SCV) for firms serious about tackling such problems.

Originality/value

This paper provides one of the very first examples of biomarkers for agricultural SCV. An in-depth study of stakeholders from the cocoa and chocolate industry elucidates problematic areas in cocoa supply chains. Biomarkers provide a unique biological product identifier. Biomarkers can support efforts to address environmental and social sustainability issues such as child labour, modern slavery and deforestation by providing visibility into previously hidden areas of the supply chain.

Details

Supply Chain Management: An International Journal, vol. 27 no. 6
Type: Research Article
ISSN: 1359-8546

Open Access
Article
Publication date: 20 July 2020

Abdelghani Bakhtouchi

Abstract

With the progress of new information and communication technologies, more and more producers of data exist, and the web forms a huge repository for all these kinds of data. Unfortunately, existing data is often unreliable because the same information appears across different sources alongside erroneous and incomplete data. The aim of data integration systems is to offer the user a single interface for querying a number of sources. A key challenge for such systems is dealing with conflicting information from the same source or from different sources. In this paper, we present the resolution of conflicts at the instance level in two stages: reference reconciliation and data fusion. Reference reconciliation methods seek to decide whether two data descriptions refer to the same real-world entity. We define the principles of reconciliation methods, then distinguish reference reconciliation methods first by how they use the descriptions of references and then by how they acquire knowledge, and close the section by discussing some current data reconciliation issues that are the subject of ongoing research. Data fusion, in turn, aims to merge duplicates into a single representation while resolving conflicts between the data. We first define the classification of conflicts, the strategies for dealing with conflicts and the implementation of conflict management strategies; we then present the relational operators and data fusion techniques, and likewise close by discussing some current data fusion issues that are the subject of ongoing research.
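The two-stage process can be sketched as follows: reconciliation decides that two descriptions denote the same entity, and fusion resolves attribute conflicts with an explicit strategy (here, prefer non-empty values from the most trusted source). The reconciliation rule, field names and trust scores are all assumptions for illustration.

```python
def same_entity(a, b):
    # Toy reconciliation rule: a shared email implies the same person.
    return bool(a.get("email")) and a["email"] == b.get("email")

def fuse(records, trust):
    """Merge duplicate records attribute by attribute, most trusted first."""
    ranked = sorted(records, key=lambda r: trust[r["source"]], reverse=True)
    fused = {}
    for rec in ranked:
        for key, value in rec.items():
            if key == "source":
                continue
            # Conflict strategy: keep the first non-empty value seen,
            # i.e. the value supplied by the most trusted source.
            if fused.get(key) in (None, "") and value not in (None, ""):
                fused[key] = value
    return fused

r1 = {"source": "web", "email": "a@x.org", "name": "A. Dupont", "city": ""}
r2 = {"source": "registry", "email": "a@x.org", "name": "Alice Dupont",
      "city": "Algiers"}
trust = {"registry": 0.9, "web": 0.4}

if same_entity(r1, r2):
    print(fuse([r1, r2], trust))
# {'email': 'a@x.org', 'name': 'Alice Dupont', 'city': 'Algiers'}
```

Swapping the trust-ordered strategy for another (most recent, majority vote, concatenation) changes only the conflict-resolution rule, which is exactly the "strategies for dealing with conflicts" axis the paper classifies.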

Details

Applied Computing and Informatics, vol. 18 no. 3/4
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 1 July 2020

Jun Lin, Wen Long, Anting Zhang and Yueting Chai

Abstract

Purpose

Blockchain technology provides a way to record transactions that is designed to be highly secure, transparent, trustable, traceable, auditable and tamper-proof. Internet of things (IoT) technology, in turn, provides the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction, linking computing devices and digitized machines, things, objects, animals and people that are provided with digital unique identifiers (UIDs). This paper aims to explore the combined application of blockchain- and IoT-based technologies, especially in the area of intellectual property protection.

Design/methodology/approach

In this paper, the authors propose a high-level architecture design of blockchain and IoT-based intellectual property protection system, which can help to process three types of intellectual property: (1) patents, copyrights, trademarks etc.; (2) industrial design, trade dress, craft works, trade secrets etc.; and (3) plant variety rights, geographical indications, etc.
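The registration idea in such an architecture can be sketched as a toy append-only hash chain: an IoT device identified by a UID records a digest of an IP asset, so later claims can be checked against a tamper-evident log. This is a conceptual illustration under invented names, not the authors' actual system design.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic digest of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class IPLedger:
    """Append-only hash chain recording digests of IP assets per device UID."""

    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "payload": "genesis"}]

    def register(self, device_uid, asset_bytes):
        block = {
            "index": len(self.chain),
            "prev": block_hash(self.chain[-1]),
            "payload": {
                "device_uid": device_uid,
                "asset_digest": hashlib.sha256(asset_bytes).hexdigest(),
            },
        }
        self.chain.append(block)
        return block["index"]

    def verify(self):
        """Altering any non-tail block breaks a successor's prev link."""
        return all(
            self.chain[i]["prev"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = IPLedger()
ledger.register("sensor-42", b"design drawing v1")
ledger.register("sensor-42", b"design drawing v2")
print(ledger.verify())   # True
ledger.chain[1]["payload"]["asset_digest"] = "forged"
print(ledger.verify())   # False
```

A real blockchain adds consensus across peers; the hash-linking above is only the tamper-evidence ingredient that makes a registered IP claim auditable.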

Findings

Using blockchain peer-to-peer network and IoT devices, the proposed method can help people to establish a trusted, self-organized, open and ecological intellectual property protection system.

Originality/value

To the best of the authors’ knowledge, this is the first work to apply blockchain and IoT technologies to the traditional intellectual property protection and trade ecosystem.

Details

International Journal of Crowd Science, vol. 4 no. 3
Type: Research Article
ISSN: 2398-7294

Content available

Details

Aslib Journal of Information Management, vol. 75 no. 6
Type: Research Article
ISSN: 2050-3806
