Search results

1 – 10 of over 26,000
Article
Publication date: 9 September 2024

Aws Al-Okaily, Manaf Al-Okaily and Ai Ping Teoh

Abstract

Purpose

Even though the end-user satisfaction construct has gained prominence as a surrogate measure of information systems performance, it has received scant formal treatment and empirical examination in the data analytics systems field. In this respect, this study aims to examine the vital role of user satisfaction as a proxy measure of data analytics system performance in the financial engineering context.

Design/methodology/approach

This study empirically validated the proposed model using primary quantitative data obtained from financial managers, engineers and analysts working at Jordanian financial institutions. The quantitative data were tested using partial least squares-based structural equation modeling.

Findings

The quantitative data analysis identified technology quality, information quality, knowledge quality and decision quality as key factors enhancing user satisfaction in a data analytics environment, with an explained variance of around 69%.
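
The abstract reports only the headline figure. As a loose, synthetic-data illustration of how such an explained-variance number can be obtained with a partial least squares estimator, the sketch below uses scikit-learn's PLSRegression as a stand-in for the SmartPLS-style PLS-SEM used in studies like this one; the factor names, sample size and coefficients are all hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 300  # hypothetical number of survey respondents

# Synthetic composite scores for the four quality constructs:
# technology, information, knowledge and decision quality.
X = rng.normal(size=(n, 4))
# Satisfaction driven by all four factors plus noise.
y = X @ np.array([0.4, 0.3, 0.3, 0.2]) + rng.normal(scale=0.5, size=n)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
r2 = r2_score(y, pls.predict(X).ravel())
print(f"explained variance in satisfaction (R^2): {r2:.2f}")
```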

Originality/value

This empirical research has contributed to the discourse regarding the pivotal role of user satisfaction in data analytics performance in the financial engineering context of developing countries such as Jordan, which lays a firm foundation for future research.

Details

VINE Journal of Information and Knowledge Management Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5891

Article
Publication date: 22 July 2024

Manaf Al-Okaily and Aws Al-Okaily

Abstract

Purpose

Financial firms are looking for better ways to harness the power of data analytics to improve their decision quality in the financial modeling era. This study aims to explore key factors influencing big data analytics-driven financial decision quality, a topic that has received scant attention in the relevant literature.

Design/methodology/approach

The authors empirically examined the interrelations among five factors (technology capability, data capability, information quality, data-driven insights and financial decision quality), drawing on quantitative data collected from Jordanian financial firms through a cross-sectional questionnaire survey.

Findings

The SmartPLS analysis outcomes revealed that both technology capability and data capability have a positive and direct influence on information quality and data-driven insights, but no direct influence on financial decision quality. The findings also underscore the influence of information quality and data-driven insights on high-quality financial decisions.

Originality/value

This study is the first to enrich the relevant literature by exploring the critical factors affecting big data-driven financial decision quality in the financial modeling context.

Details

Journal of Modelling in Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-5664

Open Access
Article
Publication date: 19 June 2024

Armindo Lobo, Paulo Sampaio and Paulo Novais

Abstract

Purpose

This study proposes a machine learning framework to predict customer complaints from production line tests in an automotive company's lot-release process, enhancing Quality 4.0. It aims to design and implement the framework, compare different machine learning (ML) models and evaluate a non-sampling threshold-moving approach for adjusting prediction capabilities based on product requirements.

Design/methodology/approach

This study applies the Cross-Industry Standard Process for Data Mining (CRISP-DM) and four ML models to predict customer complaints from automotive production tests. It employs cost-sensitive and threshold-moving techniques to address data imbalance, with the F1-Score and Matthews correlation coefficient assessing model performance.
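
As a loose illustration of these two ingredients (not the paper's implementation), the sketch below trains a cost-sensitive XGBoost classifier on synthetic imbalanced data, then moves the decision threshold and scores each candidate threshold with the F1-Score and Matthews correlation coefficient; all data and parameters are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, matthews_corrcoef
from xgboost import XGBClassifier

# Synthetic, heavily imbalanced stand-in for complaint-related test data.
X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Cost-sensitive learning: weight the rare (complaint) class more heavily.
model = XGBClassifier(scale_pos_weight=(y_tr == 0).sum() / (y_tr == 1).sum(),
                      eval_metric="logloss")
model.fit(X_tr, y_tr)

# Threshold-moving: adjust the decision threshold instead of retraining.
proba = model.predict_proba(X_te)[:, 1]
for thr in (0.3, 0.5, 0.7):
    pred = (proba >= thr).astype(int)
    print(f"thr={thr}: F1={f1_score(y_te, pred):.3f} "
          f"MCC={matthews_corrcoef(y_te, pred):.3f}")
```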

Findings

The framework effectively predicts customer complaint-related tests. XGBoost outperformed the other models with an F1-Score of 72.4% and a Matthews correlation coefficient of 75%. The framework improves the lot-release process and cost efficiency over heuristic methods.

Practical implications

The framework has been tested on real-world data and shows promising results in improving lot-release decisions and reducing complaints and costs. It enables companies to adjust predictive models by changing only the threshold, eliminating the need for retraining.

Originality/value

To the best of our knowledge, there is limited literature on using ML to predict customer complaints for the lot-release process in an automotive company. Our proposed framework integrates ML with a non-sampling approach, demonstrating its effectiveness in predicting complaints and reducing costs, fostering Quality 4.0.

Details

The TQM Journal, vol. 36 no. 9
Type: Research Article
ISSN: 1754-2731

Open Access
Article
Publication date: 5 June 2024

Anabela Costa Silva, José Machado and Paulo Sampaio

Abstract

Purpose

In the context of the journey toward digital transformation and the realization of a fully connected factory, concepts such as data science, artificial intelligence (AI), machine learning (ML) and even predictive models emerge as indispensable pillars. Given the relevance of these topics, the present study focused on the analysis of customer complaint data, employing ML techniques to anticipate complaint accountability. The primary objective was to enhance data accessibility, harnessing the potential of ML models to optimize the complaint handling process and thereby positively contribute to data-driven decision-making. This approach aimed not only to reduce the number of units to be analyzed and the customer response time but also to underscore the pressing need for a paradigm shift in quality management. The application of AI techniques sought not only to enhance the efficiency of the complaint handling process and data accessibility but also to demonstrate how the integration of these innovative approaches can profoundly transform the way quality is conceived and managed within organizations.

Design/methodology/approach

To conduct this study, real customer complaint data from an automotive company were utilized. Our main objective was to highlight the importance of AI techniques in the context of quality. To achieve this, we adopted a methodology consisting of 10 distinct phases: business analysis and understanding; project plan definition; sample definition; data exploration; data processing and pre-processing; feature selection; acquisition of predictive models; evaluation of the models; presentation of the results; and implementation. This methodology was adapted from data mining methodologies referenced in the literature, taking into account the specific reality of the company under study, which ensured that the obtained results were applicable and replicable across different fields and strengthened the relevance and generalizability of our research findings.
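
To make the modeling phases concrete, here is a minimal, hypothetical scikit-learn sketch of phases 6 to 8 (feature selection, acquisition of predictive models and evaluation of the models) on synthetic data; it illustrates the workflow only and is not the company's actual pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for preprocessed complaint records (phases 1-5 assumed done).
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),           # phase 6: feature selection
    ("model", RandomForestClassifier(random_state=0)),  # phase 7: predictive model
])
# Phase 8: evaluate the model, here with 5-fold cross-validated accuracy.
print(cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean())
```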

Findings

The achieved results not only demonstrated the ability of ML models to predict complaint accountability with an accuracy of 64%, but also underscored the significance of the adopted approach within the context of Quality 4.0 (Q4.0). This study served as a proof of concept in complaint analysis, enabling process automation and the development of a guide applicable across various areas of the company. The successful integration of AI techniques and Q4.0 principles highlighted the pressing need to apply concepts of digitization and artificial intelligence in quality management. Furthermore, it emphasized the critical importance of data, its organization, analysis and availability in driving digital transformation and enhancing operational efficiency across all company domains. In summary, this work not only showcased the advancements achieved through ML application but also emphasized the pivotal role of data and digitization in the ongoing evolution of Quality 4.0.

Originality/value

This study makes a significant contribution by exploring complaint data within the organization, an area that has seen little investigation in real-world contexts, particularly with respect to practical applications. The development of standardized processes for data handling and the application of classification models demonstrated the viability of this approach and provided a valuable proof of concept for the company. Importantly, the work was designed to be replicable in other areas of the factory, serving as a foundation for the company's data scientists; until then, limited data access and the lack of automation in data treatment and analysis had posed significant challenges. In the context of Quality 4.0, the study highlights both the immediate advantages for decision-making and complaint-outcome prediction and the longer-term benefits, including clearer and standardized processes, data-driven decision-making and reduced analysis time. The article thus fills a knowledge gap by providing an innovative, replicable approach to complaint analysis and offers a tangible, applicable solution that underscores the value of aligning quality management with AI and digitization.

Details

The TQM Journal, vol. 36 no. 9
Type: Research Article
ISSN: 1754-2731

Open Access
Article
Publication date: 31 May 2024

Prashanth Madhala, Hongxiu Li and Nina Helander

Abstract

Purpose

The information systems (IS) literature has indicated the importance of data analytics capabilities (DAC) in improving business performance in organizations. The literature has also highlighted the roles of organizations’ data-related resources in developing their DAC and enhancing their business performance. However, little research has taken resource quality into account when studying DAC for business performance enhancement. Therefore, the purpose of this paper is to understand the impact of resource quality on DAC development for business performance enhancement.

Design/methodology/approach

We studied DAC development through the lens of the resource-based view and the IS success model, drawing on empirical data collected via 19 semi-structured interviews.

Findings

Our findings show that the quality of data-related resources (including data, data systems and data services) is vital to the development of DAC and the enhancement of organizations' business performance. The study uncovers the factors that make up each of the quality dimensions required for developing DAC for business performance enhancement.

Originality/value

Using the resource quality view, this study contributes to the literature by exploring the role of data-related resource quality in DAC development and business performance enhancement.

Details

Industrial Management & Data Systems, vol. 124 no. 7
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 26 September 2023

Alex Koohang, Carol Springer Sargent, Justin Zuopeng Zhang and Angelica Marotta

Abstract

Purpose

This paper aims to propose a research model with eight constructs: big data analytics (BDA) leadership, BDA talent quality, BDA security quality, BDA privacy quality, innovation, financial performance, market performance and customer satisfaction.

Design/methodology/approach

The research model focuses on whether (1) BDA leadership influences BDA talent quality, (2) BDA talent quality influences BDA security quality, (3) BDA talent quality influences BDA privacy quality, (4) BDA talent quality influences innovation and (5) innovation influences a firm's performance (financial, market and customer satisfaction). An instrument was designed and administered electronically to a diverse set of employees (N = 188) in various organizations in the USA. Collected data were analyzed through partial least squares structural equation modeling.
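
As a small, generic illustration of the instrument-validation step that typically precedes such path analysis (not the authors' reported procedure), the sketch below computes Cronbach's alpha, a standard internal-consistency check, for five synthetic survey items from 188 simulated respondents; all item data are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_respondents, n_items = 188, 5  # 188 matches the reported sample size

# Simulate items that share one latent construct plus measurement noise.
latent = rng.normal(size=(n_respondents, 1))
items = latent + rng.normal(scale=0.6, size=(n_respondents, n_items))

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
k = n_items
item_vars = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")  # >= 0.7 is conventionally acceptable
```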

Findings

Results showed that leadership significantly and positively affects BDA talent quality, which, in turn, significantly and positively impacts security quality, privacy quality and innovation. Moreover, innovation significantly and positively impacts firm performance. The theoretical and practical implications of the findings are discussed. Recommendations for future research are provided.

Originality/value

The study provides empirical evidence that leadership significantly and positively impacts BDA talent quality. BDA talent quality, in turn, positively impacts security quality, privacy quality and innovation. This is important, as these are all critical factors for organizations that collect and use big data. Finally, the study demonstrates that innovation significantly and positively impacts financial performance, market performance and customer satisfaction. The originality of the research results makes them a valuable addition to the literature on big data analytics. They provide new insights into the factors that drive organizational success in this rapidly evolving field.

Details

Industrial Management & Data Systems, vol. 123 no. 12
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 25 January 2024

Besiki Stvilia and Dong Joon Lee

Abstract

Purpose

This study addresses the need for a theory-guided, rich, descriptive account of research data repositories' (RDRs) understanding of data quality and the structures of their data quality assurance (DQA) activities. Its findings can help develop operational DQA models and best practice guides and identify opportunities for innovation in the DQA activities.

Design/methodology/approach

The study analyzed 122 data repositories' applications for the Core Trustworthy Data Repositories, interview transcripts of 32 curators and repository managers, and data curation-related webpages of their repository websites. The combined dataset represented 146 unique RDRs. The study was guided by a theoretical framework comprising activity theory and an information quality evaluation framework.

Findings

The study provided a theory-based examination of the DQA practices of RDRs, summarized as a conceptual model. The authors identified three DQA activities (evaluation, intervention and communication) and their structures, including activity motivations, roles played, mediating tools, and rules and standards. When defining data quality, study participants went beyond the traditional definition and referenced seven facets of ethical and effective information systems in addition to data quality. Furthermore, the participants and RDRs referenced 13 dimensions in their DQA models. The study revealed that DQA activities were prioritized by data value, level of quality, available expertise, cost and funding incentives.

Practical implications

The study's findings can inform the design and construction of digital research data curation infrastructure components on university campuses that aim to provide access not just to big data but also to trustworthy data. Communities of practice focused on repositories and archives could consider adding FAIR operationalizations, extensions and metrics focused on data quality. The availability of such metrics and associated measurements can help reusers determine whether they can trust and reuse a particular dataset. The findings of this study can help to develop such data quality assessment metrics and intervention strategies in a sound and systematic way.

Originality/value

To the best of the authors' knowledge, this paper is the first data quality theory guided examination of DQA practices in RDRs.

Details

Journal of Documentation, vol. 80 no. 4
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 3 February 2023

Huyen Nguyen, Haihua Chen, Jiangping Chen, Kate Kargozari and Junhua Ding

Abstract

Purpose

This study aims to evaluate a method of building a biomedical knowledge graph (KG).

Design/methodology/approach

This research first constructs a COVID-19 KG from the COVID-19 Open Research Dataset (CORD-19), covering information over six categories (i.e. disease, drug, gene, species, therapy and symptom). The construction used open-source tools to extract entities, relations and triples. Then, the COVID-19 KG is evaluated on three data quality dimensions: correctness, relatedness and comprehensiveness, using a semiautomatic approach. Finally, this study assesses the application of the KG by building a question answering (Q&A) system. Five queries regarding COVID-19 genomes, symptoms, transmissions and therapeutics were submitted to the system and the results were analyzed.
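
As a toy illustration of the triple store and Q&A layer, the sketch below builds a few hand-written triples with the open-source rdflib package and answers one query via SPARQL; the namespace, entities and relations are invented for illustration, not extracted from CORD-19.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/covid/")
g = Graph()
g.bind("ex", EX)

# A few hand-written triples spanning the drug/disease/symptom categories.
g.add((EX.remdesivir, EX.treats, EX.covid19))
g.add((EX.dexamethasone, EX.treats, EX.covid19))
g.add((EX.covid19, EX.hasSymptom, EX.fever))

# A Q&A-style query: "Which drugs treat COVID-19?"
q = """
PREFIX ex: <http://example.org/covid/>
SELECT ?drug WHERE { ?drug ex:treats ex:covid19 . }
"""
for row in g.query(q):
    print(row.drug)
```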

Findings

With current extraction tools, the quality of the KG is moderate and difficult to improve unless the tools for entity extraction, relation extraction and related tasks are themselves improved. This study finds that comprehensiveness and relatedness positively correlate with the data size. Furthermore, the results indicate that the performance of Q&A systems built on larger-scale KGs is better than that of smaller ones for most queries, demonstrating the importance of relatedness and comprehensiveness in ensuring the usefulness of the KG.

Originality/value

The KG construction process and the data-quality-based and application-based evaluations discussed in this paper provide valuable references for KG researchers and practitioners to build high-quality domain-specific knowledge discovery systems.

Details

Information Discovery and Delivery, vol. 51 no. 4
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 3 November 2022

Reza Edris Abadi, Mohammad Javad Ershadi and Seyed Taghi Akhavan Niaki

Abstract

Purpose

The overall goal of the data mining process is to extract information from an extensive data set and make it understandable for further use. When working with large volumes of unstructured data in research information systems, it is necessary to divide the information into logical groupings and examine its quality before attempting to analyze it. Moreover, data quality results are valuable resources for defining quality excellence programs of any information system. Hence, the purpose of this study is to discover and extract knowledge to evaluate and improve data quality in research information systems.

Design/methodology/approach

Clustering in data analysis and exploiting the outputs allows practitioners to gain an in-depth and extensive look at their information and to form logical structures based on what they have found. In this study, data extracted from an information system are used in the first stage. Then, the data quality results are classified into an organized structure based on data quality dimension standards. Next, partitioning clustering (K-Means), density-based clustering (density-based spatial clustering of applications with noise [DBSCAN]) and hierarchical clustering (balanced iterative reducing and clustering using hierarchies [BIRCH]) are applied and compared to find the most appropriate clustering algorithm for the research information system.
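
A minimal sketch of such a three-way comparison, scored with the silhouette coefficient, might look as follows in scikit-learn; it runs on synthetic blobs rather than the study's data quality records, and all parameters are illustrative.

```python
from sklearn.cluster import KMeans, DBSCAN, Birch
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for vectors of data quality scores.
X, _ = make_blobs(n_samples=600, centers=4, cluster_std=1.0, random_state=0)

for name, algo in [("K-Means", KMeans(n_clusters=4, n_init=10, random_state=0)),
                   ("DBSCAN", DBSCAN(eps=0.9, min_samples=5)),
                   ("BIRCH", Birch(n_clusters=4))]:
    labels = algo.fit_predict(X)
    if len(set(labels)) > 1:  # silhouette needs at least two clusters
        print(f"{name}: silhouette = {silhouette_score(X, labels):.3f}")
```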

Findings

This paper showed that quality control results of an information system could be categorized through well-known data quality dimensions, including precision, accuracy, completeness, consistency, reputation and timeliness. Furthermore, among the well-known clustering approaches compared, the hierarchical BIRCH algorithm performs best in clustering the data and gives the highest silhouette coefficient value, followed by DBSCAN, which in turn outperforms K-Means.

Research limitations/implications

In the data quality assessment process, the discrepancies identified and the lack of proper classification for inconsistent data have led to unstructured reports, making statistical analysis of qualitative metadata problems difficult and the observed errors impossible to root out. Therefore, in this study, the evaluation results of data quality have been categorized into various data quality dimensions, on which multiple analyses have then been performed using data mining methods.

Originality/value

Although several pieces of research have been conducted to assess data quality results of research information systems, knowledge extraction from obtained data quality scores is a crucial work that has rarely been studied in the literature. Besides, clustering in data quality analysis and exploiting the outputs allows practitioners to gain an in-depth and extensive look at their information to form some logical structures based on what they have found.

Details

Information Discovery and Delivery, vol. 51 no. 4
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 31 October 2023

Yangze Liang and Zhao Xu

Abstract

Purpose

Monitoring of the quality of precast concrete (PC) components is crucial for the success of prefabricated construction projects. Currently, quality monitoring of PC components during the construction phase is predominantly done manually, resulting in low efficiency and hindering the progress of intelligent construction. This paper presents an intelligent inspection method for assessing the appearance quality of PC components, utilizing an enhanced you only look once (YOLO) model and multi-source data. The aim of this research is to achieve automated management of the appearance quality of precast components in the prefabricated construction process through digital means.

Design/methodology/approach

The paper begins by establishing an improved YOLO model and an image dataset for evaluating appearance quality. Through object detection in the images, a preliminary and efficient assessment of the precast components' appearance quality is achieved. Moreover, the detection results are mapped onto the point cloud for high-precision quality inspection. In the case of precast components with quality defects, precise quality inspection is conducted by combining the three-dimensional model data obtained from forward design conversion with the captured point cloud data through registration. Additionally, the paper proposes a framework for an automated inspection platform dedicated to assessing appearance quality in prefabricated buildings, encompassing the platform's hardware network.
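
As a rough sketch of the image-detection stage only (the paper's enhanced YOLO model and its point cloud pipeline are custom), the code below runs a hypothetical trained detector with the open-source ultralytics package and filters confident defect detections before they would be mapped onto the point cloud; the weight file, image path, class names and threshold are placeholders.

```python
from ultralytics import YOLO

model = YOLO("pc_defects.pt")        # hypothetical weights trained on defect images
results = model("pc_component.jpg")  # run inference on one component photo

for box in results[0].boxes:
    cls_name = results[0].names[int(box.cls)]
    conf = float(box.conf)
    if conf >= 0.5:                  # keep confident detections only
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        # These 2D boxes would next be mapped onto the registered point cloud
        # for millimetre-level inspection, as described above.
        print(f"{cls_name} ({conf:.2f}) at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")
```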

Findings

The improved YOLO model achieved a best mean average precision of 85.02% on the VOC2007 dataset, surpassing the performance of most similar models. After targeted training, the model exhibits excellent recognition capabilities for the four common appearance quality defects. When mapped onto the point cloud, the accuracy of quality inspection based on point cloud data and forward design is within 0.1 mm. The appearance quality inspection platform enables feedback and optimization of quality issues.

Originality/value

The proposed method in this study enables high-precision, visualized and automated detection of the appearance quality of PC components. It effectively meets the demand for quality inspection of precast components on construction sites of prefabricated buildings, providing technological support for the development of intelligent construction. The design of the appearance quality inspection platform's logic and framework facilitates the integration of the method, laying the foundation for efficient quality management in the future.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

1 – 10 of over 26,000