Search results

1 – 10 of over 2000
Article
Publication date: 25 June 2021

Tran Khanh Dang and Thu Anh Duong

Abstract

Purpose

In the open data context, shared data may pass through many transformation processes and originate from many sources, which creates a risk of non-authentic data. Moreover, each data set has different properties and is shared under various licenses, so updates can change a data set's characteristics and the policies that apply to it. This paper aims to introduce an effective and elastic solution for keeping track of data changes and managing their characteristics within an open data platform. These changes must be immutable, to prevent unauthorized modification, and can serve as certified provenance to improve the quality of the data.

Design/methodology/approach

This paper proposes a pragmatic solution that combines the Comprehensive Knowledge Archive Network (CKAN), the most widely used open data platform, with the Hyperledger Fabric blockchain to ensure that all changes are immutable and transparent. Because smart contracts and a standard provenance data format are used, all processes run automatically and can be extended to integrate with other provenance systems, so the introduced solution is flexible enough for different open data ecosystems and real-world application domains.
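The core idea — every change to a shared data set is appended as an immutable, linked provenance record — can be sketched as a simple hash chain. This is an illustrative sketch only, assuming generic PROV-style field names and a list-based chain; it is not the authors' CKAN/Hyperledger Fabric implementation:

```python
import hashlib
import json

def record_change(chain, dataset_id, activity, agent):
    """Append an immutable, hash-linked provenance record (PROV-style fields)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "entity": dataset_id,   # prov:Entity - the data set version
        "activity": activity,   # prov:Activity - e.g. "transform"
        "agent": agent,         # prov:Agent - who made the change
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    """Recompute every link; any tampering breaks the chain."""
    prev = "0" * 64
    for e in chain:
        body = {k: v for k, v in e.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev_hash"] != prev or e["hash"] != recomputed:
            return False
        prev = e["hash"]
    return True
```

Because each record embeds the hash of its predecessor, any retroactive modification invalidates every later link, which is what makes the recorded provenance certifiable.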

Findings

The research reviews related studies of provenance systems and finds that most focus on the commercial sector or a specific domain and are not relevant to the open data sector. To show that the proposed solution is a logical and feasible direction, this paper conducts an experimental sample to validate the result. The test model runs successfully, with an elastic system architecture and promising overall performance.

Originality/value

Open data is the future of many businesses but still does not receive enough attention from the research community. The paper contributes a novel approach to protecting the provenance of open data.

Details

International Journal of Web Information Systems, vol. 17 no. 5
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 12 March 2024

Elena Isabel Vazquez Melendez, Paul Bergey and Brett Smith

Abstract

Purpose

This study aims to examine the blockchain landscape in supply chain management by drawing insights from academic and industry literature. It identifies the key drivers, categorizes the products involved and highlights the business values achieved by early adopters of blockchain technology within the supply chain domain. Additionally, it explores fingerprinting techniques to establish a robust connection between physical products and the blockchain ledger.

Design/methodology/approach

The authors combined an interpretive sensemaking systematic literature review, which offered insights into how organizations interpreted their business challenges and adopted blockchain technology in their specific supply chain contexts; content analysis (using Leximancer automated text-mining software) for concept-map visualization, facilitating the identification of key themes, trends and relationships; and qualitative thematic analysis (NVivo) for data organization and coding, enhancing the depth and efficiency of the analysis.

Findings

The findings highlight the transformative potential of blockchain technology and offer valuable insights into its implementation in optimizing supply chain operations. Furthermore, the findings emphasize the importance of product provenance information to consumers, with blockchain technology offering certainty and increasing customer loyalty toward brands that prioritize transparency.

Research limitations/implications

This research has several limitations that should be acknowledged. First, there is a possibility that some relevant investigations may have been missed or omitted, which could impact the findings. In addition, the limited availability of literature on blockchain adoption in supply chains may restrict the scope of the conclusions. The evolving nature of blockchain adoption in supply chains also poses a limitation. As the technology is in its infancy, the authors expect that a rapidly emerging body of literature will provide more extensive evidence-based general conclusions in the future. Another limitation is the lack of information contrasting academic and industry research, which could have provided more balanced insights into the technology’s advancement. The authors attributed this limitation to the narrow collaborations between academia and industry in the field of blockchain for supply chain management.

Practical implications

Practitioners recognize the potential of blockchain in addressing industry-specific challenges, such as ensuring transparency and data provenance. Understanding the benefits achieved by early adopters can serve as a starting point for companies considering blockchain adoption. Blockchain technology can verify product origin, enable truthful certifications and comply with established standards, reinforcing trust among stakeholders and customers. Thus, implementing blockchain solutions can enhance brand reputation and consumer confidence by ensuring product authenticity and quality. Based on the results, companies can align their strategies and initiatives with their needs and expectations.

Social implications

In essence, the integration of blockchain technology within supply chain provenance initiatives not only influences economic aspects but also brings substantial social impacts by reinforcing consumer trust, encouraging sustainable and ethical practices, combating product counterfeiting, empowering stakeholders and contributing to a more responsible, transparent and progressive socioeconomic environment.

Originality/value

This study consolidates current knowledge on blockchain’s capacity and identifies the specific drivers and business values associated with early blockchain adoption in supply chain provenance. Furthermore, it underscores the critical role of product fingerprinting techniques in supporting blockchain for supply chain provenance, facilitating more robust and efficient supply chain operations.

Details

Supply Chain Management: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1359-8546

Open Access
Article
Publication date: 17 November 2023

Peiman Tavakoli, Ibrahim Yitmen, Habib Sadri and Afshin Taheri

Abstract

Purpose

The purpose of this study is to focus on structured data provision and asset information model maintenance, and to develop a data provenance model on a blockchain-based digital twin (DT) of a smart and sustainable built environment for predictive asset management (PAM) in building facilities.

Design/methodology/approach

Qualitative research data were collected through a comprehensive scoping review of secondary sources. Additionally, primary data were gathered through interviews with industry specialists. The analysis of the data served as the basis for developing blockchain-based DT data provenance models and scenarios. A case study involving a conference room in an office building in Stockholm was conducted to assess the proposed data provenance model. The implementation utilized the Remix Ethereum platform and Sepolia testnet.

Findings

Based on the analysis of the results, a data provenance model on a blockchain-based DT was developed that ensures the reliability and trustworthiness of data used in PAM processes. This is achieved by providing a transparent and immutable record of data origin, ownership and lineage.

Practical implications

The proposed model enables decentralized applications (DApps) to publish real-time data obtained from dynamic operations and maintenance processes, enhancing the reliability and effectiveness of data for PAM.

Originality/value

The research presents a data provenance model on a blockchain-based DT, specifically tailored to PAM in building facilities. The proposed model enhances decision-making processes related to PAM by ensuring data reliability and trustworthiness and providing valuable insights for specialists and stakeholders interested in the application of blockchain technology in asset management and data provenance.

Details

Smart and Sustainable Built Environment, vol. 13 no. 1
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 8 January 2018

Chunqiu Li and Shigeo Sugimoto

Abstract

Purpose

Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.

Design/methodology/approach

The DSP-PROV model is developed by applying PROV, the general provenance description standard of the World Wide Web Consortium, to the Dublin Core Application Profile. The Metadata Application Profile of the Digital Public Library of America is selected as a case study for applying the DSP-PROV model. Finally, this paper evaluates the proposed model by comparing the formal provenance description in DSP-PROV with a semi-formal change-log description in English.
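As a rough illustration of what such a description looks like, a schema revision can be expressed as PROV-style triples. This is a hypothetical sketch: the `dsp:`/`ex:` names are invented for illustration, and only the PROV namespace and its `wasRevisionOf`, `wasGeneratedBy` and `wasAssociatedWith` terms come from the W3C standard:

```python
# Hypothetical sketch: one structural revision of a metadata schema,
# described with W3C PROV terms as subject-predicate-object triples.
PROV = "http://www.w3.org/ns/prov#"

triples = [
    # the new schema version is a revision of the old one (prov:Entity)
    ("dsp:profile_v2", PROV + "wasRevisionOf", "dsp:profile_v1"),
    # it was produced by a revision activity (prov:Activity)
    ("dsp:profile_v2", PROV + "wasGeneratedBy", "ex:revision2018"),
    # the activity was carried out by a schema editor (prov:Agent)
    ("ex:revision2018", PROV + "wasAssociatedWith", "ex:schemaEditor"),
]
```

Unlike a free-text change log, each triple is machine-readable, so the full revision lineage of a schema can be queried automatically.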

Findings

Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English for keeping metadata schemas consistent over time.

Research limitations/implications

The DSP-PROV model is applicable to keeping track of the structural changes of metadata schemas over time. Provenance description of other features of metadata schemas, such as vocabulary and encoding syntax, is not covered.

Originality/value

This study proposes a simple model for provenance description of the structural features of metadata schemas, based on a few standards widely accepted on the Web, and shows the advantage of the proposed model over conventional semi-formal provenance description.

Article
Publication date: 5 August 2014

Kamran Munir, Saad Liaquat Kiani, Khawar Hasham, Richard McClatchey, Andrew Branson and Jetendr Shamdasani

Abstract

Purpose

The purpose of this paper is to provide an integrated analysis base to facilitate computational neuroscience experiments, following a user-led approach to provide access to the integrated neuroscience data and to enable the analyses demanded by the biomedical research community.

Design/methodology/approach

The design and development of the N4U analysis base and related information services addresses the existing research and practical challenges by offering an integrated medical data analysis environment with the necessary building blocks for neuroscientists to optimally exploit neuroscience workflows, large image data sets and algorithms to conduct analyses.

Findings

The provision of an integrated e-science environment of computational neuroimaging can enhance the prospects, speed and utility of the data analysis process for neurodegenerative diseases.

Originality/value

The N4U analysis base enables conducting biomedical data analyses by indexing and interlinking the neuroimaging and clinical study data sets stored on the grid infrastructure, algorithms and scientific workflow definitions along with their associated provenance information.

Details

Journal of Systems and Information Technology, vol. 16 no. 3
Type: Research Article
ISSN: 1328-7265

Article
Publication date: 15 March 2024

Florian Rupp, Benjamin Schnabel and Kai Eckert

Abstract

Purpose

The purpose of this work is to explore the new possibilities enabled by the recent introduction of RDF-star, an extension that allows for statements about statements within the Resource Description Framework (RDF). Alongside Named Graphs, this approach offers opportunities to leverage a meta-level for data modeling and data applications.

Design/methodology/approach

In this extended paper, the authors build on three modeling use cases published in a previous paper: (1) providing provenance information, (2) maintaining backwards compatibility for existing models and (3) reducing the complexity of a data model. They present two scenarios in which they use the meta-level to extend a data model with meta-information.

Findings

The authors present three abstract patterns for actively using the meta-level in data modeling and showcase its implementation through two scenarios from their research project: (1) a workflow for triple annotation that uses the meta-level to enable users to comment on individual statements, such as for reporting errors or adding supplementary information; and (2) a demonstration of how adding meta-information to a data model can accommodate highly specialized data while maintaining the simplicity of the underlying model.
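The triple-annotation workflow rests on RDF-star's quoted-triple syntax, in which `<< s p o >>` turns a whole statement into the subject of another statement. The sketch below builds such an annotation as a Turtle-star string; the `ex:hasComment` property and the resource names are hypothetical, not taken from the authors' project:

```python
def annotate_statement(s, p, o, note):
    """Attach a comment to an individual statement using RDF-star:
    the quoted triple << s p o >> becomes the subject of the annotation."""
    return f'<< {s} {p} {o} >> ex:hasComment "{note}" .'

# A user reports a possible error on one specific statement:
annotation = annotate_statement(
    "ex:painting42", "ex:createdBy", "ex:artist7",
    "Attribution disputed; see curator note",
)
```

The annotation lives at the meta-level: the base triple itself is untouched, which is what keeps the underlying model simple.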

Practical implications

Through the formulation of data modeling patterns with RDF-star and the demonstration of their application in two scenarios, the authors advocate for data modelers to embrace the meta-level.

Originality/value

With RDF-star being a very new extension to RDF, to the best of the authors’ knowledge, they are among the first to relate it to other meta-level approaches and demonstrate its application in real-world scenarios.

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 2 September 2013

Ujjal Marjit, Kumar Sharma, Arup Sarkar and Madaiah Krishnamurthy

Abstract

Purpose

This article aims to discuss how the emergence of advanced semantic web technology has transformed the conventional web into a machine-processable and machine-understandable form.

Design/methodology/approach

In this paper, the authors survey current research works, tools and applications for publishing legacy data as linked data, with the aim of providing a better understanding of the linked data domain.

Findings

Today, a vast amount of data is stored in file formats other than RDF; such data are called legacy data. To publish them as linked data, they need to be extracted and converted into RDF without altering the original data schema or losing information.

Originality/value

Several key issues remain to be addressed. Linked data offers a more sophisticated approach to this technology, making the transformation from a web of documents into a web of connected data possible.

Details

Library Hi Tech, vol. 31 no. 3
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 29 July 2019

Ixchel M. Faniel, Rebecca D. Frank and Elizabeth Yakel

Abstract

Purpose

Taking the researchers’ perspective, the purpose of this paper is to examine the types of context information needed to preserve data’s meaning in ways that support data reuse.

Design/methodology/approach

This paper is based on a qualitative study of 105 researchers from three disciplinary communities: quantitative social science, archaeology and zoology. The study focused on researchers’ most recent data reuse experience, particularly what they needed when deciding whether to reuse data.

Findings

Findings show that researchers mentioned 12 types of context information across three broad categories: data production information (data collection, specimen and artifact, data producer, data analysis, missing data, and research objectives); repository information (provenance, reputation and history, curation and digitization); and data reuse information (prior reuse, advice on reuse and terms of use).

Originality/value

This paper extends digital curation conversations to include the preservation of context as well as content to facilitate data reuse. When compared to prior research, findings show that there is some generalizability with respect to the types of context needed across different disciplines and data sharing and reuse environments. It also introduces several new context types. Relying on the perspective of researchers offers a more nuanced view that shows the importance of the different context types for each discipline and the ways disciplinary members thought about them. Both data producers and curators can benefit from knowing what to capture and manage during data collection and deposit into a repository.

Details

Journal of Documentation, vol. 75 no. 6
Type: Research Article
ISSN: 0022-0418

Book part
Publication date: 12 July 2023

Fiona Rose Greenland and Michelle D. Fabiani

Abstract

Satellite images can be a powerful source of data for analyses of conflict dynamics and social movements, but sociology has been slow to develop methods and metadata standards for transforming those images into data. We ask: How can satellite images become useful data? What are the key methodological and ethical considerations for incorporating high-resolution satellite images into conflict research? Why are metadata important in this work? We begin with a review of recent developments in satellite-based social scientific work on conflict, then discuss the technical and epistemological issues raised by machine processing of satellite information into user-ready images. We argue that high-resolution images can be useful analytical tools provided they are used with full awareness of their ethical and technical parameters. To support our analysis, we draw on two novel studies of satellite data research practices during the Syrian war. We conclude with a discussion of specific methodological procedures tried and tested in our ongoing work.

Details

Methodological Advances in Research on Social Movements, Conflict, and Change
Type: Book
ISBN: 978-1-80117-887-7

Article
Publication date: 17 August 2018

Miguel-Angel Sicilia and Anna Visvizi

Abstract

Purpose

The purpose of this paper is to employ the case of Organization for Economic Cooperation and Development (OECD) data repositories to examine the potential of blockchain technology for addressing basic contemporary societal concerns, such as transparency, accountability and trust in the policymaking process. Current approaches to sharing data employ standardized metadata, in which the provider of the service is assumed to be a trusted party. However, derived data, analytic processes and links from policies are in many cases not shared in the same form, which breaks the provenance trace and makes it difficult to repeat analyses conducted in the past. Similarly, it becomes difficult to test whether the conditions that justified implemented policies still apply. A higher level of reuse would require a decentralized approach to sharing both data and analytic scripts and software. This could be supported by a combination of blockchain and decentralized file system technology.

Design/methodology/approach

The findings presented in this paper are derived from the analysis of a case study, i.e. analytics using data made available by the OECD. The set of data the OECD provides is vast and broadly used. The argument is structured as follows. First, the current issues and topics shaping the debate on blockchain are outlined. Then, the main artifacts on which simple or convoluted analytic results are based are redefined for some concrete purposes. The requirements on provenance, trust and repeatability are discussed with regard to the proposed architecture, and a proof of concept using smart contracts is used to reason about relevant scenarios.

Findings

A combination of decentralized file systems and an open blockchain such as Ethereum supporting smart contracts can ascertain that the set of artifacts used for the analytics is shared. This enables the sequence underlying the successive stages of research and/or policymaking to be preserved. In turn, it becomes possible, ex post, to test whether the evidence supporting certain findings and/or policy decisions still holds. Moreover, unlike traditional databases, blockchain technology makes it possible to store immutable records. This means that the artifacts can be used for further exploitation or repetition of results. In practical terms, the use of blockchain technology creates the opportunity to enhance the evidence-based approach to policy design and policy recommendations that the OECD fosters. That is, it might enable stakeholders not only to use the data available in the OECD repositories but also to assess corrections to a given policy strategy or modify its scope.
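The sharing mechanism described here can be reduced to content addressing: an artifact stored in a decentralized file system is retrieved by the digest of its bytes, and anchoring that digest on a blockchain makes it immutable. A minimal stdlib sketch, assuming SHA-256 as the digest and an invented example script (the actual systems involved use their own addressing schemes):

```python
import hashlib

def content_address(artifact: bytes) -> str:
    """Digest used to retrieve an artifact from a decentralized file system;
    recording it in a blockchain transaction pins the artifact immutably."""
    return hashlib.sha256(artifact).hexdigest()

# An analytic script is an artifact just like the data it consumes:
script = b"SELECT gdp FROM oecd_stats WHERE year >= 2000"
digest = content_address(script)
# Any later edit to the script yields a different digest, so it is possible
# to check ex post that published conclusions used exactly these artifacts.
```

This is why persisting analytic artifacts, not only data, is what makes results repeatable: the digest ties a conclusion to the precise inputs and code that produced it.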

Research limitations/implications

Blockchains and related technologies are still maturing, and several questions related to their use and potential remain underexplored. Several issues require particular consideration in future research, including anonymity, scalability and stability of the data repository. This research took OECD data repositories as an example precisely to make the point that more research and more dialogue between the research and policymaking communities are needed to embrace the challenges and opportunities that blockchain technology generates. Several questions that this research prompts have not been addressed; for instance, how the sharing economy concept could be employed in the context of blockchain for the specifics of this case.

Practical implications

The practical implications of the research presented here can be summarized in two ways. On the one hand, by suggesting how a combination of decentralized file systems and an open blockchain, such as Ethereum supporting smart contracts, can ascertain that artifacts are shared, this paper paves the way toward a discussion on how to make this approach and solution reality. The approach and architecture proposed in this paper would provide a way to increase the scope of the reuse of statistical data and results and thus would improve the effectiveness of decision making as well as the transparency of the evidence supporting policy.

Social implications

Decentralizing analytic artifacts adds to existing open data practices an additional layer of benefits for different actors, including but not limited to policymakers, journalists, analysts and researchers, without the need to establish centrally managed institutions. Moreover, owing to the degree of decentralization and the absence of a single entry point, the vulnerability of data repositories to cyberthreats might be reduced. Simultaneously, by ensuring that artifacts derived from data in those distributed repositories are made immutable therein, full reproducibility of conclusions concerning the data becomes possible. In data-driven policymaking, this might allow policymakers to devise more accurate ways of addressing pressing issues and challenges.

Originality/value

This paper offers the first blueprint of a form of sharing that complements open data practices with the decentralized approach of blockchain and decentralized file systems. The case of OECD data repositories is used to highlight that while data storage is important, the real added value of blockchain technology rests in the possible change in how we use the data and data sets in the repositories. It would eventually enable a more transparent and actionable approach to linking policy up with the supporting evidence. From a different angle, throughout the paper the case is made that, rather than simply the data, the artifacts from conducted analyses should be made persistent in a blockchain. What is at stake is the full reproducibility of conclusions based on a given set of data, coupled with the possibility of ex post testing of the validity of the assumptions and evidence underlying those conclusions.

Details

Library Hi Tech, vol. 37 no. 1
Type: Research Article
ISSN: 0737-8831
