Search results

1 – 10 of over 8000
Article
Publication date: 9 September 2024

Aws Al-Okaily, Manaf Al-Okaily and Ai Ping Teoh

Abstract

Purpose

Even though the end-user satisfaction construct has gained prominence as a surrogate measure of information systems performance assessment, it has received scant formal treatment and empirical examination in the data analytics systems field. In this respect, this study aims to examine the vital role of user satisfaction as a proxy measure of data analytics system performance in the financial engineering context.

Design/methodology/approach

This study empirically validated the proposed model using primary quantitative data obtained from financial managers, engineers and analysts who are working at Jordanian financial institutions. The quantitative data were tested using partial least squares-based structural equation modeling.

Findings

The quantitative data analysis results identified that technology quality, information quality, knowledge quality and decision quality are key factors that enhance user satisfaction in a data analytics environment with an explained variance of around 69%.

Originality/value

This empirical research has contributed to the discourse regarding the pivotal role of user satisfaction in data analytics performance in the financial engineering context of developing countries such as Jordan, which lays a firm foundation for future research.

Details

VINE Journal of Information and Knowledge Management Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5891

Keywords

Article
Publication date: 22 July 2024

Manaf Al-Okaily and Aws Al-Okaily

Abstract

Purpose

Financial firms are looking for better ways to harness the power of data analytics to improve their decision quality in the financial modeling era. This study aims to explore key factors influencing big data analytics-driven financial decision quality, which has been given scant attention in the relevant literature.

Design/methodology/approach

The authors empirically examined the interrelations among five factors (technology capability, data capability, information quality, data-driven insights and financial decision quality), drawing on quantitative data collected from Jordanian financial firms through a cross-sectional questionnaire survey.

Findings

The SmartPLS analysis outcomes revealed that both technology capability and data capability have a positive and direct influence on information quality and data-driven insights without any direct influence on financial decision quality. The findings also point to the importance and influence of information quality and data-driven insights on high-quality financial decisions.

Originality/value

For the first time, this study enriches the knowledge and relevant literature by exploring the critical factors affecting big data-driven financial decision quality in the financial modeling context.

Details

Journal of Modelling in Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-5664

Keywords

Open Access
Article
Publication date: 19 June 2024

Armindo Lobo, Paulo Sampaio and Paulo Novais

Abstract

Purpose

This study proposes a machine learning framework to predict customer complaints from production line tests in an automotive company's lot-release process, enhancing Quality 4.0. It aims to design and implement the framework, compare different machine learning (ML) models and evaluate a non-sampling threshold-moving approach for adjusting prediction capabilities based on product requirements.

Design/methodology/approach

This study applies the Cross-Industry Standard Process for Data Mining (CRISP-DM) and four ML models to predict customer complaints from automotive production tests. It employs cost-sensitive and threshold-moving techniques to address data imbalance, with the F1-Score and Matthews correlation coefficient assessing model performance.

Findings

The framework effectively predicts customer complaint-related tests. XGBoost outperformed the other models with an F1-Score of 72.4% and a Matthews correlation coefficient of 75%. It improves the lot-release process and cost efficiency over heuristic methods.

Practical implications

The framework has been tested on real-world data and shows promising results in improving lot-release decisions and reducing complaints and costs. It enables companies to adjust predictive models by changing only the threshold, eliminating the need for retraining.

Originality/value

To the best of our knowledge, there is limited literature on using ML to predict customer complaints for the lot-release process in an automotive company. Our proposed framework integrates ML with a non-sampling approach, demonstrating its effectiveness in predicting complaints and reducing costs, fostering Quality 4.0.
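The non-sampling threshold-moving approach this abstract describes can be illustrated with a short sketch. This is not the authors' pipeline: the synthetic data, logistic model and threshold values below are illustrative assumptions standing in for the paper's production-test records and XGBoost model.

```python
# Illustrative sketch of non-sampling threshold-moving for an imbalanced
# binary classification task (hypothetical data, not the study's dataset).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data standing in for production-test records
# (about 90% "no complaint" vs 10% "complaint").
X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]  # estimated P(complaint) per test lot

# Instead of resampling the training data, move the decision threshold:
# a lower threshold flags more lots as risky (higher recall) with no
# retraining, only a changed cut-off.
for threshold in (0.5, 0.3, 0.1):
    pred = (proba >= threshold).astype(int)
    print(threshold, f1_score(y_te, pred), matthews_corrcoef(y_te, pred))
```

Lowering the threshold trades precision for recall without retraining the model, which is the property the abstract highlights for adjusting prediction capability to product requirements.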

Details

The TQM Journal, vol. 36 no. 9
Type: Research Article
ISSN: 1754-2731

Keywords

Open Access
Article
Publication date: 5 June 2024

Anabela Costa Silva, José Machado and Paulo Sampaio

Abstract

Purpose

In the context of the journey toward digital transformation and the realization of a fully connected factory, concepts such as data science, artificial intelligence (AI), machine learning (ML) and even predictive models emerge as indispensable pillars. Given the relevance of these topics, the present study focused on the analysis of customer complaint data, employing ML techniques to anticipate complaint accountability. The primary objective was to enhance data accessibility, harnessing the potential of ML models to optimize the complaint handling process and thereby positively contribute to data-driven decision-making. This approach aimed not only to reduce the number of units to be analyzed and customer response time but also to underscore the pressing need for a paradigm shift in quality management. The application of AI techniques sought not only to enhance the efficiency of the complaint handling process and data accessibility but also to demonstrate how the integration of these innovative approaches could profoundly transform the way quality is conceived and managed within organizations.

Design/methodology/approach

To conduct this study, real customer complaint data from an automotive company was utilized. Our main objective was to highlight the importance of artificial intelligence (AI) techniques in the context of quality. To achieve this, we adopted a methodology consisting of 10 distinct phases: business analysis and understanding; project plan definition; sample definition; data exploration; data processing and pre-processing; feature selection; acquisition of predictive models; evaluation of the models; presentation of the results; and implementation. This methodology was adapted from data mining methodologies referenced in the literature, taking into account the specific reality of the company under study. This ensured that the obtained results were applicable and replicable across different fields, thereby strengthening the relevance and generalizability of our research findings.

Findings

The achieved results not only demonstrated the ability of ML models to predict complaint accountability with an accuracy of 64%, but also underscored the significance of the adopted approach within the context of Quality 4.0 (Q4.0). This study served as a proof of concept in complaint analysis, enabling process automation and the development of a guide applicable across various areas of the company. The successful integration of AI techniques and Q4.0 principles highlighted the pressing need to apply concepts of digitization and artificial intelligence in quality management. Furthermore, it emphasized the critical importance of data, its organization, analysis and availability in driving digital transformation and enhancing operational efficiency across all company domains. In summary, this work not only showcased the advancements achieved through ML application but also emphasized the pivotal role of data and digitization in the ongoing evolution of Quality 4.0.

Originality/value

This study presents a significant contribution by exploring complaint data within the organization, an area lacking investigation in real-world contexts, particularly focusing on practical applications. The development of standardized processes for data handling and the application of predictions for classification models not only demonstrated the viability of this approach but also provided a valuable proof of concept for the company. Most importantly, this work was designed to be replicable in other areas of the factory, serving as a fundamental basis for the company’s data scientists. Until then, limited data access and lack of automation in its treatment and analysis represented significant challenges. In the context of Quality 4.0, this study highlights not only the immediate advantages for decision-making and predicting complaint outcomes but also the long-term benefits, including clearer and standardized processes, data-driven decision-making and improved analysis time. Thus, this study not only underscores the importance of data and the application of AI techniques in the era of quality but also fills a knowledge gap by providing an innovative and replicable approach to complaint analysis within the organization. In terms of originality, this article stands out for addressing an underexplored area and providing a tangible and applicable solution for the company, highlighting the intrinsic value of aligning quality with AI and digitization.

Details

The TQM Journal, vol. 36 no. 9
Type: Research Article
ISSN: 1754-2731

Keywords

Open Access
Article
Publication date: 31 May 2024

Prashanth Madhala, Hongxiu Li and Nina Helander

Abstract

Purpose

The information systems (IS) literature has indicated the importance of data analytics capabilities (DAC) in improving business performance in organizations. The literature has also highlighted the roles of organizations’ data-related resources in developing their DAC and enhancing their business performance. However, little research has taken resource quality into account when studying DAC for business performance enhancement. Therefore, the purpose of this paper is to understand the impact of resource quality on DAC development for business performance enhancement.

Design/methodology/approach

We studied DAC development using the resource-based view and the IS success model based on empirical data collected via 19 semi-structured interviews.

Findings

Our findings show that the quality of data-related resources (including data, data systems and data services) is vital to the development of DAC and the enhancement of organizations' business performance. The study uncovers the factors that make up each quality dimension, all of which are required for developing DAC for business performance enhancement.

Originality/value

Using the resource quality view, this study contributes to the literature by exploring the role of data-related resource quality in DAC development and business performance enhancement.

Details

Industrial Management & Data Systems, vol. 124 no. 7
Type: Research Article
ISSN: 0263-5577

Keywords

Article
Publication date: 25 January 2024

Besiki Stvilia and Dong Joon Lee

Abstract

Purpose

This study addresses the need for a theory-guided, rich, descriptive account of research data repositories' (RDRs) understanding of data quality and the structures of their data quality assurance (DQA) activities. Its findings can help develop operational DQA models and best practice guides and identify opportunities for innovation in the DQA activities.

Design/methodology/approach

The study analyzed 122 data repositories' applications for the Core Trustworthy Data Repositories certification, interview transcripts of 32 curators and repository managers, and data curation-related webpages of their repository websites. The combined dataset represented 146 unique RDRs. The study was guided by a theoretical framework comprising activity theory and an information quality evaluation framework.

Findings

The study provided a theory-based examination of the DQA practices of RDRs, summarized as a conceptual model. The authors identified three DQA activities (evaluation, intervention and communication) and their structures, including activity motivations, roles played, and mediating tools, rules and standards. When defining data quality, study participants went beyond the traditional definition of data quality and referenced seven facets of ethical and effective information systems in addition to data quality. Furthermore, the participants and RDRs referenced 13 dimensions in their DQA models. The study revealed that DQA activities were prioritized by data value, level of quality, available expertise, cost and funding incentives.

Practical implications

The study's findings can inform the design and construction of digital research data curation infrastructure components on university campuses that aim to provide access not just to big data but also to trustworthy data. Communities of practice focused on repositories and archives could consider adding FAIR operationalizations, extensions and metrics focused on data quality. The availability of such metrics and associated measurements can help reusers determine whether they can trust and reuse a particular dataset. The findings of this study can help to develop such data quality assessment metrics and intervention strategies in a sound and systematic way.

Originality/value

To the best of the authors' knowledge, this paper is the first data quality theory guided examination of DQA practices in RDRs.

Details

Journal of Documentation, vol. 80 no. 4
Type: Research Article
ISSN: 0022-0418

Keywords

Open Access
Article
Publication date: 6 September 2022

Rose Clancy, Ken Bruton, Dominic T.J. O’Sullivan and Aidan J. Cloonan

Abstract

Purpose

Quality management practitioners have yet to seize the potential of digitalisation. Furthermore, there is a lack of tools, such as frameworks, guiding practitioners in the digital transformation of their organisations. The purpose of this study is to provide a framework to guide quality practitioners in implementing digitalisation in their existing practices.

Design/methodology/approach

A review of the literature assessed how quality management and digitalisation have been integrated. Findings from the review highlighted the success of integrating Lean manufacturing with digitalisation. A comprehensive set of Lean Six Sigma tools was then reviewed in terms of effectiveness and relevance for the hybrid digitisation approach to process improvement (HyDAPI) framework.

Findings

The implementation of the proposed HyDAPI framework in an industrial case study led to increased efficiency, reduced waste, standardised work, mistake proofing and the ability to root-cause non-conforming products.

Research limitations/implications

The activities and tools in the HyDAPI framework are not inclusive of all techniques from Lean Six Sigma.

Practical implications

The HyDAPI framework is a flexible guide for quality practitioners to digitalise key information from manufacturing processes. The framework allows organisations to select the appropriate tools as needed. This is required because of the varying and complex nature of organisation processes and the challenge of adapting to the continually evolving Industry 4.0.

Originality/value

This research proposes the HyDAPI framework as a flexible and adaptable approach for quality management practitioners to implement digitalisation. This was developed because of the gap in research regarding the lack of procedures guiding organisations in their digital transition to Industry 4.0.

Details

International Journal of Lean Six Sigma, vol. 15 no. 5
Type: Research Article
ISSN: 2040-4166

Keywords

Article
Publication date: 19 September 2024

Philipp Loacker, Siegfried Pöchtrager, Christian Fikar and Wolfgang Grenzfurtner

Abstract

Purpose

The purpose of this study is to present a methodical procedure on how to prepare event logs and analyse them through process mining, statistics and visualisations. The aim is to derive roots and patterns of quality deviations and non-conforming finished products as well as best practice facilitating employee training in the food processing industry. Thereby, a key focus is on recognising tacit knowledge hidden in event logs to improve quality processes.

Design/methodology/approach

This study applied process mining to detect root causes of quality deviations in the operational processes of food production. In addition, a data ecosystem was developed that illustrates a continuous improvement feedback loop and serves as a role model for other applications in the food processing industry. The approach was applied to a real case study in the processed cheese industry.

Findings

The findings revealed practical and conceptual contributions that can be used to continuously improve quality management (QM) in food processing. The developed data ecosystem supports production and QM in decision-making processes. The findings of the analysis provide a valuable basis for enhancing operational processes, aiming to prevent quality deviations and non-conforming finished products.

Originality/value

Process mining is still rarely used in the food industry. The proposed method helps to identify tacit knowledge in the food processing industry, as demonstrated by the framework for the preparation of event logs and the data ecosystem.
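The event-log preparation step this abstract describes can be sketched in a few lines. The column names and toy events below are hypothetical, not the paper's cheese-production log, and a real study would typically use a dedicated process-mining library rather than plain pandas.

```python
# Minimal sketch of event-log preparation for process mining, assuming a
# flat log with hypothetical columns: case_id, activity, timestamp.
import pandas as pd

events = pd.DataFrame({
    "case_id":  [1, 1, 1, 2, 2, 2, 3, 3],
    "activity": ["mix", "heat", "pack", "mix", "pack", "heat", "mix", "heat"],
    "timestamp": pd.to_datetime([
        "2024-01-01 08:00", "2024-01-01 08:30", "2024-01-01 09:00",
        "2024-01-01 10:00", "2024-01-01 10:20", "2024-01-01 10:40",
        "2024-01-02 08:00", "2024-01-02 08:15",
    ]),
})

# Order events within each case, then collapse each case to its activity
# sequence (its "variant"); rare or deviating variants are the candidates
# for root-cause analysis of quality deviations.
events = events.sort_values(["case_id", "timestamp"])
variants = events.groupby("case_id")["activity"].agg(tuple)
print(variants.value_counts())
```

Here case 2 packs before heating, so its variant differs from case 1; counting variants like this is the simplest way to surface the process deviations that the study's analysis then investigates further.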

Details

International Journal of Productivity and Performance Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-0401

Keywords

Article
Publication date: 17 September 2024

Saeed Rouhani, Saba Alsadat Bozorgi, Hannan Amoozad Mahdiraji and Demetris Vrontis

Abstract

Purpose

This study addresses the gap in understanding text analytics within the service domain, focusing on new service development to provide insights into key research themes and trends in text analytics approaches to service development. It explores the benefits and challenges of implementing these approaches and identifies potential research opportunities for future service development. Importantly, this study offers insights to assist service providers in making data-driven decisions for developing new services and optimising existing ones.

Design/methodology/approach

This research introduces the hybrid thematic analysis with a systematic literature review (SLR-TA). It delves into the various aspects of text analytics in service development by analysing 124 research papers published from 2012 to 2023. This approach not only identifies key practical applications but also evaluates the benefits and difficulties of applying text analytics in this domain, thereby ensuring the reliability and validity of the findings.

Findings

The study highlights an increasing focus on text analytics within the service industry over the examined period. Using the SLR-TA approach, it identifies eight themes in previous studies and finds that “Service Quality” had the most research interest, comprising 42% of studies, while there was less emphasis on designing new services. The study categorises research into four types: Case, Concept, Tools and Implementation, with case studies comprising 68% of the total.

Originality/value

This study is groundbreaking in conducting a thorough and systematic analysis of a broad collection of articles. It provides a comprehensive view of text analytics approaches in the service sector, particularly in developing new services and service innovation. This study lays out distinct guidelines for future research and offers valuable insights to foster research recommendations.

Details

EuroMed Journal of Business, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1450-2194

Keywords

Article
Publication date: 30 August 2024

Janet Chang, Xiang Xie and Ajith Kumar Parlikad

Abstract

Purpose

This research investigates the capabilities of Cloud-based Building Information Modelling (CBIM) in managing quality asset information, drawing upon software engineers' perspectives. Compelling statistics highlight the relationship between building information and environmental sustainability. However, despite the growing utilisation of CBIM in the Architecture, Engineering and Construction (AEC) industry, a significant knowledge gap remains concerning its effectiveness in maintaining quality asset information.

Design/methodology/approach

This study employed an exploratory qualitative approach, utilising semi-structured interviews with thirteen software engineers actively developing technological solutions for the AEC industry. Following thematic analysis, the findings are categorised into four dimensions: strengths, weaknesses, opportunities and technological limitations. Subsequently, these findings are analysed in relation to previously identified information quality problems.

Findings

This research reveals that while CBIM improves project coordination and information accessibility, its effectiveness is challenged by the need for manual updates, vulnerability to human errors and dependency on network services. Technological limitations, notably the absence of automated updates for as-built drawings and the risk of data loss during file conversions in the design phase, coupled with its reduced capability to validate context-specific information from the user's viewpoint, emphasise the urgent need for managerial strategies to maximise CBIM's capabilities in addressing information quality problems.

Originality/value

This study augments the understanding of CBIM, highlighting the managerial implications of a robust information management process to safeguard information integrity. This approach fosters sustainable practices anchored in reliable information essential for achieving desired outcomes. The findings also have broader managerial implications, especially for sectors that employ CBIM as an instrumental tool.

Details

Built Environment Project and Asset Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2044-124X

Keywords
