Search results

1 – 10 of over 10,000
Open Access
Article
Publication date: 6 September 2022

Rose Clancy, Ken Bruton, Dominic T.J. O’Sullivan and Aidan J. Cloonan

Abstract

Purpose

Quality management practitioners have yet to seize the potential of digitalisation, and there is a lack of tools, such as frameworks, to guide practitioners in the digital transformation of their organisations. The purpose of this study is to provide a framework to guide quality practitioners in implementing digitalisation within their existing practices.

Design/methodology/approach

A review of the literature assessed how quality management and digitalisation have been integrated. The findings highlighted the success of integrating Lean manufacturing with digitalisation. A comprehensive list of Lean Six Sigma tools was then reviewed for effectiveness and relevance to the hybrid digitisation approach to process improvement (HyDAPI) framework.

Findings

The implementation of the proposed HyDAPI framework in an industrial case study led to increased efficiency, reduced waste, standardised work, mistake proofing and the ability to root-cause non-conforming products.

Research limitations/implications

The activities and tools in the HyDAPI framework are not inclusive of all techniques from Lean Six Sigma.

Practical implications

The HyDAPI framework is a flexible guide for quality practitioners to digitalise key information from manufacturing processes. It allows organisations to select the appropriate tools as needed, which is necessary given the varying and complex nature of organisational processes and the challenge of adapting to the continually evolving Industry 4.0 landscape.

Originality/value

This research proposes the HyDAPI framework as a flexible and adaptable approach for quality management practitioners to implement digitalisation. It was developed in response to the lack of procedures guiding organisations in their digital transition to Industry 4.0.

Details

International Journal of Lean Six Sigma, vol. 15 no. 5
Type: Research Article
ISSN: 2040-4166

Open Access
Article
Publication date: 3 August 2021

Rose Clancy, Dominic O'Sullivan and Ken Bruton

Abstract

Purpose

Data-driven quality management systems, brought about by the implementation of digitisation and digital technologies, are an integral part of improving supply chain management performance. The purpose of this study is to determine a methodology to aid the implementation of digital technologies and the digitisation of the supply chain, enabling data-driven quality management and the reduction of waste from manufacturing processes.

Design/methodology/approach

Methodologies from both the quality management and data science disciplines were implemented together to test their effectiveness in digitalising a manufacturing process to improve supply chain management performance. The hybrid digitisation approach to process improvement (HyDAPI) methodology was developed using findings from the industrial use case.

Findings

Upon assessment of the existing methodologies, Six Sigma and CRISP-DM were found to be the most suitable process improvement and data mining methodologies, respectively. The case study revealed gaps in the implementation of both the Six Sigma and CRISP-DM methodologies in relation to digitisation of the manufacturing process.

Practical implications

Valuable practical learnings borne out of the implementation of these methodologies were used to develop the HyDAPI methodology. It offers a pragmatic, step-by-step approach for industrial practitioners to digitally transform their traditional manufacturing processes, enabling data-driven quality management and improved supply chain management performance.

Originality/value

This study proposes the HyDAPI methodology, which utilises key elements of the Six Sigma DMAIC and CRISP-DM methodologies along with additions proposed by the authors, to aid the digitisation of manufacturing processes, leading to data-driven quality management of operations within the supply chain.
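
As an aside, the pairing of Six Sigma DMAIC with CRISP-DM that the abstract describes can be made concrete with a small sketch. The phase mapping below is an illustrative assumption, not the published HyDAPI framework; the actual activities and tools are defined in the paper.

```python
# Illustrative sketch only: one possible alignment of Six Sigma DMAIC
# phases with CRISP-DM stages, in the spirit of a hybrid methodology.
# The exact pairings here are assumptions for demonstration purposes.

DMAIC_TO_CRISP_DM = {
    "Define":  ["Business Understanding"],
    "Measure": ["Data Understanding", "Data Preparation"],
    "Analyse": ["Modelling"],
    "Improve": ["Evaluation"],
    "Control": ["Deployment"],
}

def hybrid_roadmap():
    """Yield (dmaic_phase, crisp_dm_stage) pairs in execution order."""
    for dmaic_phase, crisp_stages in DMAIC_TO_CRISP_DM.items():
        for stage in crisp_stages:
            yield dmaic_phase, stage

for phase, stage in hybrid_roadmap():
    print(f"{phase:8s} -> {stage}")
```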

Details

The TQM Journal, vol. 35 no. 1
Type: Research Article
ISSN: 1754-2731

Open Access
Article
Publication date: 29 November 2017

Chiehyeon Lim, Min-Jun Kim, Ki-Hun Kim, Kwang-Jae Kim and Paul P. Maglio

Abstract

Purpose

The proliferation of (big) data provides numerous opportunities for service advances in practice, yet research on using data to advance service is at a nascent stage in the literature. Many studies have discussed the phenomenological benefits of data for service. However, limited research describes the managerial issues behind such benefits, although a holistic understanding of these issues is essential for using data to advance service in practice and provides a basis for future research. The purpose of this paper is to address this research gap.

Design/methodology/approach

“Using data to advance service” is about change in organizations. Thus, this study uses action research methods to create real change in organizations together with practitioners, thereby adding to scientific knowledge about practice. The authors participated in five service design projects with industry and government that used different data sets to design new services.

Findings

Drawing on lessons learned from the five projects, this study empirically identifies 11 managerial issues that should be considered in data-use for advancing service. In addition, by integrating the issues and relevant literature, this study offers theoretical implications for future research.

Originality/value

“Using data to advance service” is a research topic that emerged originally from practice. Action research or case studies on this topic are valuable in understanding practice and in identifying research priorities by discovering the gap between theory and practice. This study used action research over many years to observe real-world challenges and to make academic research relevant to the challenges. The authors believe that the empirical findings will help improve service practices of data-use and stimulate future research.

Details

Journal of Service Theory and Practice, vol. 28 no. 1
Type: Research Article
ISSN: 2055-6225

Open Access
Article
Publication date: 12 October 2023

Jiju Antony, Arshia Kaul, Shreeranga Bhat, Michael Sony, Vasundhara Kaul, Maryam Zulfiqar and Olivia McDermott

Abstract

Purpose

This study aims to investigate the adoption of Quality 4.0 (Q4.0) and assess the critical failure factors (CFFs) for its implementation and how its failure is measured.

Design/methodology/approach

A qualitative study based on in-depth interviews with quality managers and executives was conducted to establish the CFFs for Q4.0.

Findings

The significant CFFs highlighted were resistance to change and a lack of understanding of the concept of Q4.0. There was also a complete lack of access to or availability of training around Q4.0.

Research limitations/implications

The study enhances the body of literature on Q4.0 and is one of the first research studies to provide insight into the CFFs of Q4.0.

Practical implications

Based on the discussions with quality experts in various large and small organizations, one can understand the types of Q4.0 initiatives and their CFFs. Identifying the CFFs allows organizations worldwide to establish improvement steps if they want to implement Q4.0 and compete on the global stage.

Originality/value

The concept of Q4.0 is at a very nascent stage, and thus its CFFs have not yet been identified in the extant literature. The article therefore aids businesses in understanding possible problems that might derail their Q4.0 activities.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 4
Type: Research Article
ISSN: 0265-671X

Open Access
Article
Publication date: 22 November 2022

Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv

Abstract

Purpose

The purposes of this research are to study the theory and methods of multi-attribute index system design and to establish a systematic, standardized, scientific index system for the design, optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and to provide standardized, scientific reference standards and theoretical guidance for the design and construction of index systems.

Design/methodology/approach

Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.

Findings

Based on ideas from statistics, system theory, machine learning and data mining, the present research focuses on “data quality diagnosis” and “index classification and stratification”, clarifying the classification standards and data quality characteristics of index data. A data-quality diagnosis system of “data review – data cleaning – data conversion – data inspection” is established. Using decision trees, interpretive structural modelling, cluster analysis, K-means clustering and other methods, a classification and stratification system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, a scientific, standardized classification and stratification of the index system can be realized.
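
To make the clustering step concrete, here is a minimal sketch of grouping candidate indicators with K-means, one of the unsupervised methods the abstract names. The synthetic data, the feature layout (indicators as rows, observations as columns) and the choice of k are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch: cluster candidate indicators by the similarity of their
# observed data series using K-means, as one unsupervised step toward
# index classification and stratification. Data and k are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows = indicators, columns = observations of each indicator over time.
indicator_matrix = rng.normal(size=(12, 50))

# Standardise so scale differences between indicators do not dominate.
scaled = StandardScaler().fit_transform(indicator_matrix)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)
for idx, label in enumerate(kmeans.labels_):
    print(f"indicator_{idx:02d} -> cluster {label}")
```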

Originality/value

The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for diagnosing index data quality is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality-inspection process for missing data is designed, based on systems thinking about the whole and the individual. For the accuracy, reliability and feasibility of the patched data, a quality-inspection method for patched data based on inversion thinking and a unified representation of data fusion based on a tensor model are proposed. Third, unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of its design and enhances its objectivity and rationality.

Details

Marine Economics and Management, vol. 5 no. 2
Type: Research Article
ISSN: 2516-158X

Open Access
Article
Publication date: 10 August 2018

Paul Brous, Marijn Janssen and Paulien Herder

Abstract

Purpose

Managers are increasingly looking to adopt the Internet of Things (IoT) so as to include the vast amounts of data it generates in their decision-making processes. The use of IoT might yield many benefits for organizations engaged in civil infrastructure management, but these benefits might be difficult to realize, as organizations are not equipped to handle and interpret this data. The purpose of this paper is to understand how IoT adoption affects decision-making processes.

Design/methodology/approach

In this paper, the changes in the business processes for managing civil infrastructure assets brought about by IoT adoption are analyzed by investigating two case studies within the water management domain. Propositions for effective IoT adoption in decision-making processes are derived.

Findings

The results show that decision processes in civil infrastructure asset management have been transformed to deal with the real-time nature of the data. The authors found a need for organizational and business process changes, the development of new capabilities, data provenance and governance, and standardization. IoT can have a transformative effect on business processes.

Research limitations/implications

Because of the chosen research approach, the research results may lack generalizability. Therefore, researchers are encouraged to test the propositions further.

Practical implications

The paper shows that data provenance is necessary to understand the value and quality of data that is often generated by various organizations. Managers need to develop new capabilities to interpret the data.
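
As a minimal sketch of what such provenance might look like in practice: the record below attaches origin and calibration metadata to a sensor reading so its quality can be judged downstream. The field names and the water-level example are illustrative assumptions; the paper does not prescribe a schema.

```python
# Sketch of a provenance record attached to an IoT sensor reading,
# illustrating the kind of metadata needed to judge data value and
# quality. All field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProvenanceRecord:
    source_org: str             # organisation that generated the reading
    sensor_id: str              # physical device identifier
    captured_at: datetime       # time of measurement
    calibration_date: datetime  # last calibration, a proxy for reliability
    processing_steps: tuple = ()  # transformations applied downstream

@dataclass(frozen=True)
class SensorReading:
    value: float
    unit: str
    provenance: ProvenanceRecord

reading = SensorReading(
    value=2.31,
    unit="m",  # e.g. a water level, for the water-management domain
    provenance=ProvenanceRecord(
        source_org="Waterboard A",
        sensor_id="WL-042",
        captured_at=datetime.now(timezone.utc),
        calibration_date=datetime(2018, 1, 15, tzinfo=timezone.utc),
    ),
)
print(reading.provenance.source_org, reading.value, reading.unit)
```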

Originality/value

This paper fulfills an identified need to understand how IoT adoption affects decision-making processes in asset management in order to be able to achieve expected benefits and mitigate risk.

Details

Business Process Management Journal, vol. 25 no. 3
Type: Research Article
ISSN: 1463-7154

Open Access
Article
Publication date: 8 July 2021

Johann Eder and Vladimir A. Shekhovtsov

Abstract

Purpose

Medical research requires biological material and data collected through biobanks in reliable processes with quality assurance. Medical studies based on data with unknown or questionable quality are useless or even dangerous, as evidenced by recent examples of withdrawn studies. Medical data sets consist of highly sensitive personal data, which has to be protected carefully and is available for research only after the approval of ethics committees. The purpose of this research is to propose an architecture to support researchers to efficiently and effectively identify relevant collections of material and data with documented quality for their research projects while observing strict privacy rules.

Design/methodology/approach

Following a design science approach, this paper develops a conceptual model for capturing and relating metadata of medical data in biobanks to support medical research.

Findings

This study describes the landscape of biobanks as federated medical data lakes, such as the collections of samples and their annotations in the European federation of biobanks (Biobanking and Biomolecular Resources Research Infrastructure – European Research Infrastructure Consortium, BBMRI-ERIC), and develops a conceptual model capturing schema information with quality annotations. The paper discusses the quality dimensions of data sets for medical research in depth and proposes representations of both the metadata and the data quality documentation, with the aim of supporting researchers in effectively and efficiently identifying suitable data sets for medical studies.
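
The core idea, that researchers filter collections on metadata and quality annotations without touching the sensitive data itself, can be sketched briefly. The field names below are assumptions for illustration, not the paper's actual conceptual model.

```python
# Minimal sketch: metadata-only description of a biobank collection with
# per-attribute quality annotations, so suitability for a study can be
# assessed without accessing the sensitive data. Fields are assumptions.
from dataclasses import dataclass

@dataclass
class AttributeMetadata:
    name: str
    completeness: float  # fraction of non-missing values, 0..1
    accuracy_note: str   # free-text quality documentation

@dataclass
class CollectionMetadata:
    biobank: str
    sample_count: int
    attributes: list

def suitable(collection, required_attrs, min_completeness=0.9):
    """Check a collection against a study's metadata requirements."""
    by_name = {a.name: a for a in collection.attributes}
    return all(
        name in by_name and by_name[name].completeness >= min_completeness
        for name in required_attrs
    )

c = CollectionMetadata(
    biobank="Biobank X (BBMRI-ERIC member)",
    sample_count=1200,
    attributes=[
        AttributeMetadata("diagnosis_icd10", 0.97, "clinically validated"),
        AttributeMetadata("smoking_status", 0.62, "self-reported"),
    ],
)
print(suitable(c, ["diagnosis_icd10"]))  # True
print(suitable(c, ["smoking_status"]))   # False (completeness below 0.9)
```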

Originality/value

This novel conceptual model for metadata for medical data lakes has a unique focus on the high privacy requirements of the data sets contained in medical data lakes and also stands out in the detailed representation of data quality and metadata quality of medical data sets.

Details

International Journal of Web Information Systems, vol. 17 no. 5
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 11 July 2023

Hanlie Baudin and Patrick Mapulanga

Abstract

Purpose

This paper aims to assess whether the current eResearch Knowledge Centre’s (eRKC) research support practices align with researchers’ requirements for achieving their research objectives. The study’s objectives were to assess the current eRKC research support services and to determine which are adequate, and which are not, in supporting Human Sciences Research Council (HSRC) researchers.

Design/methodology/approach

This study uses interviews as part of a qualitative approach; the researcher chose interviews because some aspects warranted further explanation during the conversation. The interviews, scheduled using Zoom’s scheduling assistant, were semi-structured, guided by a flexible interview procedure, supplemented by follow-up questions, probes and comments, and framed around the research life cycle questions. The data obtained were transcribed and coded using MS Excel, then analysed using NVivo according to the themes identified in the research questions and aligned with the theory behind the study. Pre-determined codes were created in line with the six stages of the research life cycle and applied to group the data and extract meaning from each category; interviewee responses were assigned to groups in line with these stages.

Findings

The findings show that the current eRKC research support services are aligned with the needs of HSRC researchers, and they highlight services that could be expanded or promoted more effectively to HSRC researchers. The study proposes a new service, data analysis, and suggests that the eRKC could play a more prominent role in research impact, research data management and fostering collaboration with HSRC research divisions.

Research limitations/implications

This study is limited to assessing the eRKC’s support practices at the HSRC in Pretoria, South Africa. A more comprehensive study is needed for HSRC research services, capabilities and capacity.

Practical implications

The assessment of the eRKC followed a comprehensive interview schedule based on Raju and Schoombee’s research life cycle model.

Social implications

Conducting the interviews via Zoom may have generated Zoom fatigue and reduced productivity. Technical issues, lost time, communication gaps and distant time zones may have limited face-to-face interaction.

Originality/value

eRKC research support practices are rare in South Africa and most parts of the world. This study bridges the gap between theory and practice in assessing eRKC research support practices.

Details

Digital Library Perspectives, vol. 39 no. 4
Type: Research Article
ISSN: 2059-5816

Open Access
Article
Publication date: 23 June 2022

Tshepo Arnold Chauke and Mpho Ngoepe

Abstract

Purpose

Many organisations, including professional councils, still operate manually to manage document flow to clients and stakeholders. This results in the loss of valuable documentation, such as certificates, and in costs incurred when post is returned to the sender. The purpose of this study was to explore the digital transformation of document flow at the South African Council for Social Science Professionals (SACSSP).

Design/methodology/approach

The methodological approach involved qualitative data collected through interviews, observation and document analysis in response to the research questions. The study was a participatory action research project involving collaboration between the researchers and the study participants in defining and solving the problem through a needs assessment exercise. All three phases of participatory action research were followed: the “look phase”, getting to know the stakeholders so that the problem is defined on their terms and the problem definition reflects the community context; the “think phase”, interpreting and analysing what was learned in the “look phase”; and the “act phase”, planning, implementing and evaluating based on the information collected and interpreted in the first two phases.

Findings

The study identified various issues, including poor data quality, a high rate of returned registered post, undelivered electronic messages that fail to reach all the intended recipients, and data that has accumulated over decades. In this regard, the study proposes a framework that the SACSSP can use to have members update and verify their details on the portal, together with digital certificates for membership.

Research limitations/implications

Although the proposed framework is tailor-made for the professional council, it does not depend on prescribed technologies, as it uses open standards that can be adopted by industry and researchers. It can therefore be applied in other contexts where institutions, such as universities, communicate with many clients via postal or courier services.

Originality/value

The study used participatory action research involving the researchers and the organisation to solve the problem. It presents a workflow that the council can use to ensure that documents reach the intended recipients. Furthermore, digital transformation of the process will ensure that registered professionals can access their certificates online and print them when necessary.
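
To illustrate the digital-certificate idea, here is a minimal sketch of issuing and verifying a signed membership certificate using only standard-library tools. The HMAC scheme, key handling and payload fields are illustrative assumptions; the abstract does not specify the council's actual design.

```python
# Sketch: issue and verify a digitally signed membership certificate,
# the kind of verifiable document the proposed framework envisages.
# Scheme, secret handling and fields are illustrative assumptions.
import hashlib
import hmac
import json

SECRET_KEY = b"council-signing-key"  # in practice: managed key material

def issue_certificate(member_id, name, valid_until):
    payload = json.dumps(
        {"member_id": member_id, "name": name, "valid_until": valid_until},
        sort_keys=True,
    ).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return payload, signature

def verify_certificate(payload, signature):
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

cert, sig = issue_certificate("SACSSP-12345", "J. Doe", "2025-12-31")
print(verify_certificate(cert, sig))         # True
print(verify_certificate(cert + b"x", sig))  # False (tampered payload)
```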

Details

Global Knowledge, Memory and Communication, vol. 73 no. 1/2
Type: Research Article
ISSN: 2514-9342

Open Access
Article
Publication date: 8 February 2023

Edoardo Ramalli and Barbara Pernici

Abstract

Purpose

Experiments are the backbone of the development process of data-driven predictive models for scientific applications, and the quality of the experiments directly impacts model performance. Uncertainty inherently affects experimental measurements and is often missing from the available data sets because of its estimation cost. For similar reasons, experiments are few compared to other data sources. Discarding experiments because of missing uncertainty values would preclude the development of predictive models. Data profiling techniques are fundamental to assessing data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of the experiments.

Design/methodology/approach

This work presents a methodology to forecast the experiments’ missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiments database. The validity of the methodology is first tested in multiple conditions using synthetic data and then applied to a large data set of experiments in the chemical kinetic domain as a case study.

Findings

The results of the different test-case scenarios suggest that knowledge graph embeddings can be used to predict the missing uncertainty of experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in that relationship, and the knowledge graph embedding outperforms the baseline results when the uncertainty depends on multiple metadata attributes.
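
A toy sketch of the underlying idea: TransE-style link prediction over (experiment, hasUncertainty, value) triples, where a triple is plausible when head + relation lies close to tail in embedding space. The entity names, the bucketing of uncertainty into discrete levels and the untrained random embeddings are all assumptions for illustration; in practice the embeddings are learned from the observed triples.

```python
# Toy sketch of scoring candidate uncertainty values for an experiment
# via TransE-style knowledge graph embeddings. Embeddings here are
# random placeholders; real ones would be trained on observed triples.
import numpy as np

rng = np.random.default_rng(42)
dim = 8
entities = ["exp_001", "unc_low", "unc_medium", "unc_high"]
emb = {e: rng.normal(size=dim) for e in entities}
rel_has_uncertainty = rng.normal(size=dim)

def transe_score(head, relation, tail):
    """Lower score = more plausible triple under TransE."""
    return np.linalg.norm(emb[head] + relation - emb[tail])

# Rank candidate uncertainty buckets for the experiment (link prediction).
candidates = ["unc_low", "unc_medium", "unc_high"]
ranked = sorted(candidates,
                key=lambda t: transe_score("exp_001", rel_has_uncertainty, t))
print("predicted uncertainty bucket:", ranked[0])
```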

Originality/value

The employment of knowledge graph embedding to predict the missing experimental uncertainty is a novel alternative to the current and more costly techniques in the literature. Such contribution permits a better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.
