Search results

1–10 of over 19,000
Open Access
Article
Publication date: 22 November 2022

Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv

Abstract

Purpose

The purposes of this research are to study the theory and methods of multi-attribute index system design and to establish a set of systematic, standardized, scientific index systems for the design, optimization and inspection process. The research may form the basis for rational, comprehensive evaluation and provide an effective way of improving the quality of management decision-making. It is of practical significance for improving the rationality and reliability of the index system and for providing standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.

Design/methodology/approach

Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.

Findings

Based on ideas from statistics, system theory, machine learning and data mining, the present research focuses on “data quality diagnosis” and “index classification and stratification”, clarifying the classification standards and data quality characteristics of index data; a data-quality diagnosis system of “data review – data cleaning – data conversion – data inspection” is established. Using decision trees, interpretive structural modelling, cluster analysis, K-means clustering and other methods, a classification and stratification method system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, a scientific and standardized classification and hierarchical design of the index system can be realized.
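
To make the classification-and-stratification idea concrete, here is a minimal sketch of how K-means clustering (one of the methods the abstract names) might group indicators into candidate layers. The synthetic data, variable names and number of clusters are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: cluster indicators by the similarity of their
# standardised data series to propose layers of an index system.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows = indicators, columns = observations of each indicator over time.
indicator_data = rng.normal(size=(30, 50))

# Standardise so clustering reflects co-movement, not scale.
scaled = StandardScaler().fit_transform(indicator_data)

# Group the 30 indicators into 4 candidate layers of the index system.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)
for layer in range(4):
    print(f"layer {layer}: indicators {np.where(labels == layer)[0].tolist()}")
```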

Originality/value

The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality-inspection process for missing data is designed, based on systematic thinking about the whole and the individual. To address the accuracy, reliability and feasibility of the patched data, a quality-inspection method for patched data based on inversion thinking and a unified representation method for data fusion based on a tensor model are proposed. Third, unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of the design of the index system and enhances its objectivity and rationality.
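
The tensor-based unified representation mentioned above can be pictured as stacking per-source indicator matrices along a third axis. A hedged sketch, assuming equal-shaped sources after preprocessing; shapes and names are illustrative, not the paper's model:

```python
# Hedged sketch of the "tensor model" idea for unified data fusion: stack
# per-source indicator matrices so heterogeneous sources share one
# representation (indicator x time x source).
import numpy as np

rng = np.random.default_rng(0)
n_indicators, n_periods, n_sources = 10, 24, 3
sources = [rng.random((n_indicators, n_periods)) for _ in range(n_sources)]

tensor = np.stack(sources, axis=2)
print(tensor.shape)  # (10, 24, 3)

# Simple fusion example: combine sources by averaging along the source axis.
fused = tensor.mean(axis=2)
print(fused.shape)   # (10, 24)
```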

Details

Marine Economics and Management, vol. 5 no. 2
Type: Research Article
ISSN: 2516-158X

Open Access
Article
Publication date: 8 July 2021

Johann Eder and Vladimir A. Shekhovtsov

Abstract

Purpose

Medical research requires biological material and data collected through biobanks in reliable processes with quality assurance. Medical studies based on data with unknown or questionable quality are useless or even dangerous, as evidenced by recent examples of withdrawn studies. Medical data sets consist of highly sensitive personal data, which has to be protected carefully and is available for research only after the approval of ethics committees. The purpose of this research is to propose an architecture that supports researchers in efficiently and effectively identifying relevant collections of material and data with documented quality for their research projects, while observing strict privacy rules.

Design/methodology/approach

Following a design science approach, this paper develops a conceptual model for capturing and relating metadata of medical data in biobanks to support medical research.

Findings

This study describes the landscape of biobanks as federated medical data lakes, such as the collections of samples and their annotations in the European federation of biobanks (Biobanking and Biomolecular Resources Research Infrastructure – European Research Infrastructure Consortium, BBMRI-ERIC), and develops a conceptual model capturing schema information with quality annotations. This paper discusses the quality dimensions of data sets for medical research in depth and proposes representations of both the metadata and the data quality documentation, with the aim of supporting researchers in effectively and efficiently identifying suitable data sets for medical studies.
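
As a rough illustration of what such a conceptual model might look like in code, the sketch below relates collection-level schema metadata to per-attribute quality annotations so that researchers can filter collections without touching the sensitive data itself. All class and field names are hypothetical assumptions; they are not BBMRI-ERIC's actual schema.

```python
# Minimal sketch, assuming nothing beyond the abstract: schema metadata of a
# biobank collection linked to documented quality annotations.
from dataclasses import dataclass, field

@dataclass
class QualityAnnotation:
    dimension: str        # e.g. "completeness", "accuracy", "timeliness"
    score: float          # 0.0 - 1.0, as documented by the biobank
    method: str           # how the score was assessed

@dataclass
class AttributeMetadata:
    name: str             # e.g. "diagnosis_icd10"
    datatype: str
    quality: list[QualityAnnotation] = field(default_factory=list)

@dataclass
class CollectionMetadata:
    biobank_id: str       # identifier within the federated data lake
    attributes: list[AttributeMetadata] = field(default_factory=list)

    def meets(self, dimension: str, threshold: float) -> bool:
        """True if every attribute documents the dimension above threshold."""
        return all(
            any(q.dimension == dimension and q.score >= threshold
                for q in attr.quality)
            for attr in self.attributes
        )

collection = CollectionMetadata(
    biobank_id="bb:AT-001",
    attributes=[AttributeMetadata(
        name="diagnosis_icd10", datatype="string",
        quality=[QualityAnnotation("completeness", 0.97, "automated profiling")])])
print(collection.meets("completeness", 0.9))  # True
```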

Originality/value

This novel conceptual model for metadata for medical data lakes has a unique focus on the high privacy requirements of the data sets contained in medical data lakes and also stands out in the detailed representation of data quality and metadata quality of medical data sets.

Details

International Journal of Web Information Systems, vol. 17 no. 5
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 3 August 2021

Rose Clancy, Dominic O'Sullivan and Ken Bruton

Abstract

Purpose

Data-driven quality management systems, brought about by the implementation of digitisation and digital technologies, are an integral part of improving supply chain management performance. The purpose of this study is to determine a methodology to aid the implementation of digital technologies and the digitisation of the supply chain to enable data-driven quality management and the reduction of waste from manufacturing processes.

Design/methodology/approach

Methodologies from both the quality management and data science disciplines were implemented together to test their effectiveness in digitalising a manufacturing process to improve supply chain management performance. The hybrid digitisation approach to process improvement (HyDAPI) methodology was developed using findings from the industrial use case.

Findings

Upon assessment of the existing methodologies, Six Sigma and CRISP-DM were found to be the most suitable process improvement and data mining methodologies, respectively. The case study revealed gaps in the implementation of both the Six Sigma and CRISP-DM methodologies in relation to digitisation of the manufacturing process.

Practical implications

Valuable practical learnings borne out of the implementation of these methodologies were used to develop the HyDAPI methodology. This methodology offers a pragmatic step-by-step approach for industrial practitioners to digitally transform their traditional manufacturing processes to enable data-driven quality management and improved supply chain management performance.

Originality/value

This study proposes the HyDAPI methodology, which utilises key elements of the Six Sigma DMAIC and CRISP-DM methodologies along with additions proposed by the authors, to aid the digitisation of manufacturing processes leading to data-driven quality management of operations within the supply chain.

Details

The TQM Journal, vol. 35 no. 1
Type: Research Article
ISSN: 1754-2731

Open Access
Article
Publication date: 6 September 2022

Rose Clancy, Ken Bruton, Dominic T.J. O’Sullivan and Aidan J. Cloonan

Abstract

Purpose

Quality management practitioners have yet to seize the potential of digitalisation. Furthermore, there is a lack of tools, such as frameworks, guiding practitioners in the digital transformation of their organisations. The purpose of this study is to provide a framework to guide quality practitioners in implementing digitalisation in their existing practices.

Design/methodology/approach

A review of the literature assessed how quality management and digitalisation have been integrated. Findings from the literature review highlighted the success of integrating Lean manufacturing with digitalisation. A comprehensive list of Lean Six Sigma tools was then reviewed in terms of their effectiveness and relevance for the hybrid digitisation approach to process improvement (HyDAPI) framework.

Findings

The implementation of the proposed HyDAPI framework in an industrial case study led to increased efficiency, reduced waste, standardised work, mistake-proofing and the ability to root-cause non-conforming products.

Research limitations/implications

The activities and tools in the HyDAPI framework are not inclusive of all techniques from Lean Six Sigma.

Practical implications

The HyDAPI framework is a flexible guide for quality practitioners to digitalise key information from manufacturing processes. The framework allows organisations to select the appropriate tools as needed. This is required because of the varying and complex nature of organisation processes and the challenge of adapting to the continually evolving Industry 4.0.

Originality/value

This research proposes the HyDAPI framework as a flexible and adaptable approach for quality management practitioners to implement digitalisation. This was developed because of the gap in research regarding the lack of procedures guiding organisations in their digital transition to Industry 4.0.

Details

International Journal of Lean Six Sigma, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2040-4166

Open Access
Article
Publication date: 8 February 2023

Edoardo Ramalli and Barbara Pernici

Abstract

Purpose

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts the model performance. Uncertainty inherently affects experiment measurements and is often missing in the available data sets due to its estimation cost. For similar reasons, experiments are very few compared to other data sources. Discarding experiments based on the missing uncertainty values would preclude the development of predictive models. Data profiling techniques are fundamental to assess data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of the experiments.

Design/methodology/approach

This work presents a methodology to forecast the experiments’ missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiments database. The validity of the methodology is first tested in multiple conditions using synthetic data and then applied to a large data set of experiments in the chemical kinetic domain as a case study.
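
A hedged sketch of the underlying idea: experiments, their metadata and uncertainty levels become nodes of a knowledge graph, TransE-style embeddings are trained, and the missing uncertainty is imputed by ranking candidate links. The entities, relations and toy training loop below are invented for illustration and are not the authors' implementation.

```python
# Toy link prediction over a knowledge graph of experiments: exp3's
# uncertainty is unknown, but it shares an instrument with exp1, so the
# embedding should rank (exp3, hasUncertainty, unc_low) as more plausible.
import numpy as np

rng = np.random.default_rng(0)
dim = 16
entities = ["exp1", "exp2", "exp3", "instA", "instB", "unc_low", "unc_high"]
relations = ["usesInstrument", "hasUncertainty"]
E = {e: rng.normal(scale=0.1, size=dim) for e in entities}
R = {r: rng.normal(scale=0.1, size=dim) for r in relations}

triples = [
    ("exp1", "usesInstrument", "instA"),
    ("exp3", "usesInstrument", "instA"),
    ("exp2", "usesInstrument", "instB"),
    ("exp1", "hasUncertainty", "unc_low"),
    ("exp2", "hasUncertainty", "unc_high"),
]

# Naive TransE training (no negative sampling): pull h + r towards t.
lr = 0.05
for _ in range(500):
    for h, r, t in triples:
        grad = (E[h] + R[r]) - E[t]  # gradient of 0.5 * ||h + r - t||^2
        E[h] -= lr * grad
        R[r] -= lr * grad
        E[t] += lr * grad

def score(h, r, t):
    """TransE distance; lower means the link is more plausible."""
    return float(np.linalg.norm(E[h] + R[r] - E[t]))

# Impute exp3's missing uncertainty by ranking candidate links.
for cand in ["unc_low", "unc_high"]:
    print(cand, round(score("exp3", "hasUncertainty", cand), 3))
```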

Findings

The analysis results of different test case scenarios suggest that knowledge graph embedding can be used to predict the missing uncertainty of the experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in the relationship. The knowledge graph embedding outperforms the baseline results if the uncertainty depends upon multiple metadata.

Originality/value

The use of knowledge graph embedding to predict missing experimental uncertainty is a novel alternative to the current, more costly techniques in the literature. This contribution permits better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.

Open Access
Article
Publication date: 3 January 2022

Juliana Elisa Raffaghelli and Stefania Manca

Abstract

Purpose

Although current research has investigated how open research data (ORD) are published, researchers' ORD-sharing behaviour on academic social networks (ASNs) remains insufficiently explored. The purpose of this study is to investigate the connections between ORD publication and social activity to uncover data literacy gaps.

Design/methodology/approach

This work investigates whether ORD publication leads to social activity around the ORDs and their linked published articles, to uncover data literacy needs. Social activity was characterised as reads and citations, on the basis of a non-invasive approach suited to this preliminary study. Possible associations between social activity and the researchers' profiles (scientific domain, gender, region, professional position, reputation) and the quality of the published ORD were investigated to complete the picture. A random sample of ORD items extracted from ResearchGate (752 ORDs) was analysed using quantitative techniques, including descriptive statistics, logistic regression and K-means cluster analysis.
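
For readers unfamiliar with the techniques named above, the sketch below shows the general shape of such an analysis: a logistic regression predicting whether an ORD item attracts citations from coded profile features. The synthetic data and feature coding are assumptions; only the sample size (752) comes from the abstract.

```python
# Illustrative sketch only: model whether a self-archived ORD item attracts
# social activity from researcher-profile features (all data synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 752  # size of the random ResearchGate sample reported in the abstract
X = np.column_stack([
    rng.integers(0, 2, n),   # gender (integer-coded for brevity)
    rng.integers(0, 5, n),   # professional position (coded)
    rng.normal(20, 5, n),    # reputation score
    rng.integers(0, 4, n),   # scientific domain (coded)
])
y = rng.integers(0, 2, n)    # 1 = ORD item was cited at least once

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("coefficients:", model.coef_.round(2))
```

A real analysis would one-hot encode the categorical features rather than feed integer codes directly; the shortcut here keeps the sketch compact.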

Findings

The results highlight three main phenomena: (1) globally, social activity around self-archived ORDs in ResearchGate, in terms of reads and citations, is still underdeveloped, regardless of the quality of the published ORDs; (2) disentangling the moderating effects on social activity around ORD reveals traditional dynamics within the “innovative” practice of engaging with data; (3) in terms of social activity around ORD, the situation of ResearchGate as an ASN is somewhat similar to that of other data platforms and repositories.

Research limitations/implications

Although the data were collected within a narrow period, the random data collection ensures a representative picture of researchers' practices.

Practical implications

As per the implications, the study sheds light on data literacy requirements to promote social activity around ORD in the context of open science as a desirable frontier of practice.

Originality/value

Researchers' data literacy across digital systems is still little understood. Although many policies and technological infrastructures provide support, researchers do not make in-depth use of them.

Peer review

The peer-review history for this article is available at: https://publons.com/publon/10.1108/OIR-05-2021-0255.

Details

Online Information Review, vol. 47 no. 1
Type: Research Article
ISSN: 1468-4527

Open Access
Article
Publication date: 19 August 2021

Linh Truong-Hong, Roderik Lindenbergh and Thu Anh Nguyen

Abstract

Purpose

Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimates strongly depend on the quality of each step of the workflow, which has not been fully addressed. This study aims to give insight into the errors of these steps, and the results of the study are intended as guidelines for the practical community, whether developing a new workflow or refining an existing one for deformation estimation based on TLS point clouds. Thus, the main contributions of the paper are: investigating how point cloud registration errors affect the resulting deformation estimates; identifying an appropriate segmentation method for extracting data points of a deformed surface; investigating a methodology for determining an un-deformed or reference surface for estimating deformation; and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Design/methodology/approach

In practice, the quality of the data point clouds and of the surface extraction strongly impacts the resulting deformation estimates based on laser scanning point clouds, and unaccounted-for uncertainty can cause an incorrect decision on the state of the structure. To gain more comprehensive insight into those impacts, this study addresses four issues: data errors due to registration of data from multiple scanning stations (Issue 1), methods used to extract point clouds of structure surfaces (Issue 2), selection of the reference surface Sref used to measure deformation (Issue 3), and the presence of outliers and/or mixed pixels (Issue 4). The investigation is demonstrated by estimating the deformation of a bridge abutment, a building and an oil storage tank.

Findings

The study shows that both random sample consensus (RANSAC) and region growing-based methods [cell-based/voxel-based region growing (CRG/VRG)] can extract data points of surfaces, but RANSAC is only applicable to a primary primitive surface (e.g. a plane in this study) subjected to a small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. CRG and VRG, on the other hand, are suitable methods for deformed, free-form surfaces. In addition, in practice, a reference surface of a structure is mostly not available. Using a plane fitted to the point cloud of the current surface would give unrealistic and inaccurate deformation estimates, because outlier data points and data points from damaged areas affect the accuracy of the fitted plane. This study therefore recommends using a reference surface determined from a design concept/specification. Finally, a smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.
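
As an illustration of the RANSAC surface extraction discussed above, the following self-contained sketch fits a plane to a noisy synthetic point cloud and separates inliers from mixed-pixel-like outliers. The thresholds, iteration counts and synthetic data are assumptions for demonstration, not the study's parameters or data.

```python
# Minimal RANSAC plane-fit sketch (numpy only), in the spirit of the surface
# extraction methods the study evaluates.
import numpy as np

def ransac_plane(points, n_iter=200, dist_thresh=0.01, rng=None):
    """Return (normal, d, inlier_mask) for the best plane n.x + d = 0."""
    rng = rng or np.random.default_rng(0)
    best_mask, best_model = None, None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p1
        dist = np.abs(points @ normal + d)
        mask = dist < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask

# Synthetic "scan": a flat z=0 wall plus noise and mixed-pixel-like outliers.
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(0, 1, (500, 2)),
                         rng.normal(0, 0.002, 500)])
outliers = rng.uniform(-0.5, 0.5, (50, 3))
cloud = np.vstack([plane, outliers])

normal, d, inliers = ransac_plane(cloud)
print("plane normal:", normal.round(3), "| inliers:", int(inliers.sum()))
```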

Research limitations/implications

Due to logistical difficulties, an independent measurement could not be established to assess the accuracy of the TLS-based deformation estimates in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with accuracy in the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.

Practical implications

This study gives insight into the errors of these steps, and its results serve as guidelines for the practical community, whether developing a new workflow or refining an existing one for deformation estimation based on TLS point clouds.

Social implications

The results of this study provide guidelines for the practical community, whether developing a new workflow or refining an existing one for deformation estimation based on TLS point clouds. A low-cost method can thus be applied for deformation analysis of structures.

Originality/value

Although a large number of studies have used laser scanning to measure structural deformation over the last two decades, the methods applied mainly measured change between two states (or epochs) of the structure surface and focused on quantifying deformation from TLS point clouds. Those studies proved that a laser scanner can be an alternative instrument for acquiring spatial information for deformation monitoring. However, there are still challenges in establishing an appropriate procedure for collecting high-quality point clouds and in developing methods for interpreting them to obtain reliable and accurate deformation estimates in the presence of uncertainty in data quality and reference information. Therefore, this study demonstrates the impact on deformation estimation of data quality in terms of point cloud registration error, of the methods selected for extracting point clouds of surfaces, of the identification of reference information, and of outliers, noisy data and/or mixed pixels.

Details

International Journal of Building Pathology and Adaptation, vol. 40 no. 3
Type: Research Article
ISSN: 2398-4708

Open Access
Article
Publication date: 30 August 2022

Sven Markus and Paul Buijs

Abstract

Purpose

This paper aims to contribute to the debate about the value of blockchain for supply chain management by assessing empirical evidence on the relationship between blockchain and supply chain performance.

Design/methodology/approach

The authors conducted a structured review of the academic literature to identify and assess papers providing empirical insight on operational blockchain applications. The authors complement the findings from this review with primary empirical data from 11 interviews with blockchain providers, users and experts involved in four recent projects.

Findings

The paper presents an integrated research framework that illustrates the impact of blockchain on supply chain performance. The findings highlight that blockchain can affect supply chain performance directly – via one of its core technological features – and indirectly via the broader business project through which blockchain technology is implemented.

Practical implications

Insights from this paper should provide managers with a more nuanced understanding of how blockchain technology can be leveraged to address important supply chain management challenges.

Originality/value

Prior research addressing the relationship between blockchain and supply chain performance mostly discusses potential performance effects of blockchain, presents individual blockchain applications and/or provides little explanation for how the core technological features of blockchain affect supply chain performance. This paper systematically assesses the ways in which blockchain can affect supply chain performance. In doing so, it goes beyond the initial hype around blockchain technology while countering some of the more recent critiques.

Details

Supply Chain Management: An International Journal, vol. 27 no. 7
Type: Research Article
ISSN: 1359-8546

Open Access
Article
Publication date: 12 October 2023

Jiju Antony, Arshia Kaul, Shreeranga Bhat, Michael Sony, Vasundhara Kaul, Maryam Zulfiqar and Olivia McDermott

Abstract

Purpose

This study aims to investigate the adoption of Quality 4.0 (Q4.0) and assess the critical failure factors (CFFs) for its implementation and how its failure is measured.

Design/methodology/approach

A qualitative study based on in-depth interviews with quality managers and executives was conducted to establish the CFFs for Q4.0.

Findings

The significant CFFs highlighted were resistance to change and a lack of understanding of the concept of Q4.0. There was also a complete lack of access to or availability of training around Q4.0.

Research limitations/implications

The study enhances the body of literature on Q4.0 and is one of the first research studies to provide insight into the CFFs of Q4.0.

Practical implications

Based on the discussions with experts in the area of quality in various large and small organizations, one can understand the types of Q4.0 initiatives and the CFFs of Q4.0. By identifying the CFFs, one can establish the steps for improvement for organizations worldwide that want to implement Q4.0 on the competitive global stage.

Originality/value

The concept of Q4.0 is at the very nascent stage, and thus, the CFFs have not been found in the extant literature. As a result, the article aids businesses in understanding possible problems that might derail their Q4.0 activities.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 4
Type: Research Article
ISSN: 0265-671X

Open Access
Article
Publication date: 23 June 2022

Tshepo Arnold Chauke and Mpho Ngoepe

Abstract

Purpose

Many organisations, including professional councils, rely on manual processes for document flow to clients and stakeholders. This results in the loss of valuable documentation, such as certificates, and in costs incurred when post is returned to the sender. The purpose of this study was to explore the digital transformation of document flow at the South African Council for Social Science Professionals.

Design/methodology/approach

The methodological approach involved qualitative data collected through interviews, observation and document analysis in response to the research questions. The study was a participatory action research project that involved collaboration between researchers and study participants in defining and solving the problem through a needs assessment exercise. All three phases of participatory action research were followed: the “look phase”, getting to know stakeholders so that the problem is defined on their terms and the problem definition reflects the community context; the “think phase”, interpreting and analysing what was learned in the look phase; and the “act phase”, planning, implementing and evaluating based on the information collected and interpreted in the first two phases.

Findings

The study identified various issues, including poor data quality, a high rate of returned registered post, non-delivered electronic messages that fail to reach all intended recipients, and data accumulated over decades. In this regard, the study proposes a framework that the SACSSP can use for professionals to update and verify their details on the portal, as well as for digital membership certificates.

Research limitations/implications

Although the proposed framework is tailor-made for the professional council, it does not depend on prescribed technologies, as it uses open standards available to industry and researchers. Therefore, it can be applied in other contexts where institutions, such as universities, communicate with many clients via postal or courier services.

Originality/value

The study used participatory action research, involving the researchers and the organisation in solving the problem. The study presented a workflow that the council can use to ensure that documents reach the intended recipients. Furthermore, digital transformation of the process will ensure that registered professionals can access their certificates online and print them when necessary.
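
To illustrate the digital-certificate element of the proposed workflow, here is a deliberately simplified sketch in which the council signs a membership record so that anyone holding the verification key can detect tampering. The abstract only commits to open standards; the HMAC scheme, key handling and all names below are hypothetical simplifications (a real deployment would use PKI-based signatures and proper key management).

```python
# Hedged illustration only: sign a membership record so a verifier can
# check authenticity of an online/printed certificate.
import hashlib
import hmac
import json

COUNCIL_KEY = b"council-secret-key"  # placeholder; real deployments would
                                     # use managed keys or PKI certificates

def issue_certificate(member_id: str, designation: str, year: int) -> dict:
    record = {"member_id": member_id, "designation": designation, "year": year}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(COUNCIL_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify_certificate(cert: dict) -> bool:
    cert = dict(cert)
    signature = cert.pop("signature")
    payload = json.dumps(cert, sort_keys=True).encode()
    expected = hmac.new(COUNCIL_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

cert = issue_certificate("SW-12345", "Registered Social Worker", 2024)
print(verify_certificate(cert))   # True
cert["designation"] = "Forged Title"
print(verify_certificate(cert))   # False: tampering detected
```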

Details

Global Knowledge, Memory and Communication, vol. 73 no. 1/2
Type: Research Article
ISSN: 2514-9342
