Search results

1 – 10 of over 14,000
Article
Publication date: 25 January 2024

Besiki Stvilia and Dong Joon Lee

This study addresses the need for a theory-guided, rich, descriptive account of research data repositories' (RDRs) understanding of data quality and the structures of their data…

Abstract

Purpose

This study addresses the need for a theory-guided, rich, descriptive account of research data repositories' (RDRs) understanding of data quality and the structures of their data quality assurance (DQA) activities. Its findings can help develop operational DQA models and best-practice guides, and identify opportunities for innovation in DQA activities.

Design/methodology/approach

The study analyzed 122 data repositories' applications for Core Trustworthy Data Repositories certification, interview transcripts of 32 curators and repository managers, and data curation-related webpages of their repository websites. The combined dataset represented 146 unique RDRs. The study was guided by a theoretical framework comprising activity theory and an information quality evaluation framework.

Findings

The study provided a theory-based examination of the DQA practices of RDRs, summarized as a conceptual model. The authors identified three DQA activities (evaluation, intervention and communication) and their structures, including activity motivations, roles played, and mediating tools, rules and standards. When defining data quality, study participants went beyond the traditional definition of data quality and referenced seven facets of ethical and effective information systems in addition to data quality. Furthermore, the participants and RDRs referenced 13 data quality dimensions in their DQA models. The study revealed that DQA activities were prioritized by data value, level of quality, available expertise, cost and funding incentives.

Practical implications

The study's findings can inform the design and construction of digital research data curation infrastructure components on university campuses that aim to provide access not just to big data but to trustworthy data. Communities of practice focused on repositories and archives could consider adding FAIR operationalizations, extensions and metrics focused on data quality. The availability of such metrics and associated measurements can help reusers determine whether they can trust and reuse a particular dataset. The findings of this study can help to develop such data quality assessment metrics and intervention strategies in a sound and systematic way.

Originality/value

To the best of the authors' knowledge, this paper is the first examination of DQA practices in RDRs guided by data quality theory.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418


Article
Publication date: 26 September 2023

Alex Koohang, Carol Springer Sargent, Justin Zuopeng Zhang and Angelica Marotta

This paper aims to propose a research model with eight constructs, i.e. BDA leadership, BDA talent quality, BDA security quality, BDA privacy quality, innovation, financial…

Abstract

Purpose

This paper aims to propose a research model with eight constructs: big data analytics (BDA) leadership, BDA talent quality, BDA security quality, BDA privacy quality, innovation, financial performance, market performance and customer satisfaction.

Design/methodology/approach

The research model focuses on whether (1) BDA leadership influences BDA talent quality, (2) BDA talent quality influences BDA security quality, (3) BDA talent quality influences BDA privacy quality, (4) BDA talent quality influences innovation and (5) innovation influences a firm's performance (financial, market and customer satisfaction). An instrument was designed and administered electronically to a diverse set of employees (N = 188) in various organizations in the USA. Collected data were analyzed using partial least squares structural equation modeling (PLS-SEM).
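
For readers who want to trace the hypothesized paths concretely, the sketch below approximates each structural path with an ordinary least squares regression on composite (mean-item) scores. This is not the authors' PLS-SEM analysis; the file name, indicator items and column names are hypothetical placeholders.

```python
# Sketch: approximate the hypothesized BDA path model with OLS on composite scores.
# This is NOT PLS-SEM (which the study used); it only mirrors the path structure.
# The survey file and item column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical Likert-scale item responses

# Composite score per construct = mean of its (hypothetical) indicator items
constructs = {
    "leadership":   ["bda_lead_1", "bda_lead_2", "bda_lead_3"],
    "talent":       ["bda_talent_1", "bda_talent_2", "bda_talent_3"],
    "security":     ["bda_sec_1", "bda_sec_2"],
    "privacy":      ["bda_priv_1", "bda_priv_2"],
    "innovation":   ["innov_1", "innov_2", "innov_3"],
    "financial":    ["fin_perf_1", "fin_perf_2"],
    "market":       ["mkt_perf_1", "mkt_perf_2"],
    "satisfaction": ["cust_sat_1", "cust_sat_2"],
}
scores = pd.DataFrame({name: df[items].mean(axis=1) for name, items in constructs.items()})

# Structural paths from the research model: leadership -> talent -> {security, privacy,
# innovation}; innovation -> {financial, market, satisfaction}
paths = [("leadership", "talent"), ("talent", "security"), ("talent", "privacy"),
         ("talent", "innovation"), ("innovation", "financial"),
         ("innovation", "market"), ("innovation", "satisfaction")]

for predictor, outcome in paths:
    model = sm.OLS(scores[outcome], sm.add_constant(scores[predictor])).fit()
    print(f"{predictor} -> {outcome}: beta={model.params[predictor]:.3f}, "
          f"p={model.pvalues[predictor]:.4f}")
```

A full PLS-SEM estimation would additionally model the measurement (outer) model and bootstrap the path coefficients; the OLS sketch only reflects the direction of the hypothesized paths.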

Findings

Results showed that leadership significantly and positively affects BDA talent quality, which, in turn, significantly and positively impacts security quality, privacy quality and innovation. Moreover, innovation significantly and positively impacts firm performance. The theoretical and practical implications of the findings are discussed. Recommendations for future research are provided.

Originality/value

The study provides empirical evidence that leadership significantly and positively impacts BDA talent quality. BDA talent quality, in turn, positively impacts security quality, privacy quality and innovation. This is important, as these are all critical factors for organizations that collect and use big data. Finally, the study demonstrates that innovation significantly and positively impacts financial performance, market performance and customer satisfaction. The originality of the research results makes them a valuable addition to the literature on big data analytics. They provide new insights into the factors that drive organizational success in this rapidly evolving field.

Details

Industrial Management & Data Systems, vol. 123 no. 12
Type: Research Article
ISSN: 0263-5577


Article
Publication date: 3 February 2023

Huyen Nguyen, Haihua Chen, Jiangping Chen, Kate Kargozari and Junhua Ding

This study aims to evaluate a method of building a biomedical knowledge graph (KG).

Abstract

Purpose

This study aims to evaluate a method of building a biomedical knowledge graph (KG).

Design/methodology/approach

This research first constructs a COVID-19 KG from the COVID-19 Open Research Dataset, covering six categories of information (i.e. disease, drug, gene, species, therapy and symptom). The construction used open-source tools to extract entities, relations and triples. Then, the COVID-19 KG is evaluated on three data quality dimensions (correctness, relatedness and comprehensiveness) using a semiautomatic approach. Finally, this study assesses the application of the KG by building a question answering (Q&A) system. Five queries regarding COVID-19 genomes, symptoms, transmissions and therapeutics were submitted to the system and the results were analyzed.
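
As a rough illustration of the pipeline described above (triples extracted from text, then a Q&A layer over the resulting KG), the following sketch stores a handful of hand-written triples and answers simple entity–relation queries. It is not the authors' toolchain, and the triples are invented examples rather than CORD-19 extractions.

```python
# Minimal sketch of a triple store and a lookup-style Q&A layer over it.
# The triples below are illustrative placeholders, not extracted from the dataset.
from collections import defaultdict

# (head entity, relation, tail entity) triples, as produced by entity/relation extraction
triples = [
    ("COVID-19", "caused_by", "SARS-CoV-2"),
    ("COVID-19", "has_symptom", "fever"),
    ("COVID-19", "has_symptom", "cough"),
    ("COVID-19", "treated_with", "remdesivir"),
    ("SARS-CoV-2", "transmitted_via", "respiratory droplets"),
]

# Index the KG by (head, relation) for fast answering
index = defaultdict(list)
for head, relation, tail in triples:
    index[(head, relation)].append(tail)

def answer(entity: str, relation: str) -> list[str]:
    """Return all tail entities linked to `entity` by `relation`."""
    return index[(entity, relation)]

# Example queries in the spirit of the five submitted to the paper's Q&A system
print(answer("COVID-19", "has_symptom"))        # ['fever', 'cough']
print(answer("SARS-CoV-2", "transmitted_via"))  # ['respiratory droplets']
```

In a real system the triples would come from named entity recognition and relation extraction tools, and the lookup would be replaced by graph queries or embedding-based retrieval.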

Findings

With current extraction tools, the quality of the KG is moderate and difficult to improve unless more effort is made to improve the tools for entity extraction, relation extraction and related tasks. This study finds that comprehensiveness and relatedness correlate positively with data size. Furthermore, the results indicate that Q&A systems built on larger-scale KGs perform better than those built on smaller ones for most queries, demonstrating the importance of relatedness and comprehensiveness to the usefulness of the KG.

Originality/value

The KG construction process and the data-quality-based and application-based evaluations discussed in this paper provide valuable references for KG researchers and practitioners building high-quality domain-specific knowledge discovery systems.

Details

Information Discovery and Delivery, vol. 51 no. 4
Type: Research Article
ISSN: 2398-6247


Article
Publication date: 3 November 2022

Reza Edris Abadi, Mohammad Javad Ershadi and Seyed Taghi Akhavan Niaki

The overall goal of the data mining process is to extract information from an extensive data set and make it understandable for further use. When working with large volumes of…

Abstract

Purpose

The overall goal of the data mining process is to extract information from an extensive data set and make it understandable for further use. When working with large volumes of unstructured data in research information systems, it is necessary to examine the data's quality and divide the information into logical groupings before attempting to analyze it. Data quality results are, moreover, valuable resources for defining quality excellence programs for any information system. Hence, the purpose of this study is to discover and extract knowledge to evaluate and improve data quality in research information systems.

Design/methodology/approach

Clustering the data and exploiting the outputs allow practitioners to take an in-depth, extensive look at their information and form logical structures based on what they find. In this study, data extracted from an information system are used in the first stage. Then, the data quality results are classified into an organized structure based on data quality dimension standards. Next, partitioning clustering (K-Means), density-based clustering (density-based spatial clustering of applications with noise [DBSCAN]) and hierarchical clustering (balanced iterative reducing and clustering using hierarchies [BIRCH]) are applied to compare and find the most appropriate clustering algorithm for the research information system.
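
A minimal sketch of this comparison step, using scikit-learn's K-Means, DBSCAN and BIRCH implementations with the silhouette coefficient as the quality measure, is given below. Synthetic data stands in for the categorized data quality records, and all parameter values are illustrative assumptions rather than those used in the study.

```python
# Sketch: compare K-Means, DBSCAN and BIRCH on standardized data via silhouette scores.
# Synthetic blobs stand in for the data quality records; parameters are illustrative.
from sklearn.cluster import KMeans, DBSCAN, Birch
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

X, _ = make_blobs(n_samples=500, centers=4, cluster_std=1.0, random_state=42)
X = StandardScaler().fit_transform(X)

algorithms = {
    "K-Means": KMeans(n_clusters=4, n_init=10, random_state=42),
    "DBSCAN": DBSCAN(eps=0.3, min_samples=5),
    "BIRCH": Birch(n_clusters=4),
}

for name, algo in algorithms.items():
    labels = algo.fit_predict(X)
    # Silhouette is undefined when an algorithm finds fewer than two clusters
    n_clusters = len(set(labels) - {-1})
    if n_clusters >= 2:
        print(f"{name}: {n_clusters} clusters, silhouette = {silhouette_score(X, labels):.3f}")
    else:
        print(f"{name}: too few clusters for a silhouette score")
```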

Findings

This paper showed that the quality control results of an information system can be categorized through well-known data quality dimensions, including precision, accuracy, completeness, consistency, reputation and timeliness. Furthermore, among the well-known clustering approaches compared, the hierarchical BIRCH algorithm performs best and gives the highest silhouette coefficient value, followed by DBSCAN, which in turn performs better than K-Means.

Research limitations/implications

In the data quality assessment process, the discrepancies identified and the lack of proper classification of inconsistent data have led to unstructured reports, making statistical analysis of qualitative metadata problems difficult and the observed errors impossible to root out. Therefore, in this study, the data quality evaluation results were categorized into various data quality dimensions, on which multiple analyses were then performed using data mining methods.

Originality/value

Although several studies have assessed the data quality results of research information systems, extracting knowledge from the resulting data quality scores is a crucial task that has rarely been studied in the literature. Moreover, clustering data quality results and exploiting the outputs allow practitioners to take an in-depth, extensive look at their information and form logical structures based on what they find.

Details

Information Discovery and Delivery, vol. 51 no. 4
Type: Research Article
ISSN: 2398-6247


Article
Publication date: 22 December 2022

Reihaneh Alsadat Tabaeeian, Behzad Hajrahimi and Atefeh Khoshfetrat

The purpose of this review paper was to identify individual-level barriers to the use of telemedicine systems among primary health-care professionals.

Abstract

Purpose

The purpose of this review paper was to identify individual-level barriers to the use of telemedicine systems among primary health-care professionals.

Design/methodology/approach

This study used the Scopus and PubMed databases to identify scientific records. A systematic review of the literature, structured by the PRISMA guidelines, was conducted on 37 included papers published between 2009 and 2019. A qualitative approach was used to synthesize insights into the use of telemedicine by primary care professionals.

Findings

Three barriers were identified and classified: system quality, data quality and service quality barriers. System complexity in terms of usability, system unreliability, security and privacy concerns, lack of integration and inflexibility of systems-in-use are related to system quality. Data quality barriers are data inaccuracy, data timeliness issues, data conciseness concerns and lack of data uniqueness. Finally, service reliability concerns, lack of technical support and lack of user training have been categorized as service quality barriers.

Originality/value

This review identified and mapped emerging themes in the barriers to the use of telemedicine systems. Through a new conceptualization of telemedicine use from the perspective of primary care professionals, this paper also contributes to the informatics literature and to system usage practices.

Details

Journal of Science and Technology Policy Management, vol. 15 no. 3
Type: Research Article
ISSN: 2053-4620


Article
Publication date: 12 May 2022

Aws Al-Okaily, Manaf Al-Okaily, Ai Ping Teoh and Mutaz M. Al-Debei

Despite the increasing role of the data warehouse as a supportive decision-making tool in today's business world, academic research for measuring its effectiveness has been…


Abstract

Purpose

Despite the increasing role of the data warehouse as a supportive decision-making tool in today's business world, academic research for measuring its effectiveness has been lacking. This paucity of academic interest stimulated us to evaluate data warehousing effectiveness in the organizational context of Jordanian banks.

Design/methodology/approach

This paper develops a theoretical model specific to the data warehouse system domain that builds on the DeLone and McLean model. The model is empirically tested by means of structural equation modelling, applying the partial least squares approach to data collected through a survey questionnaire from 127 respondents at Jordanian banks.

Findings

Empirical data analysis supported that data quality, system quality, user satisfaction, individual benefits and organizational benefits make strong contributions to data warehousing effectiveness in the organizational context studied.

Practical implications

The results provide a better understanding of data warehouse effectiveness and its importance in enabling Jordanian banks to remain competitive.

Originality/value

This study is indeed one of the first empirical attempts to measure data warehouse system effectiveness and the first of its kind in an emerging country such as Jordan.

Details

EuroMed Journal of Business, vol. 18 no. 4
Type: Research Article
ISSN: 1450-2194


Article
Publication date: 2 January 2024

Matti Juhani Haverila and Kai Christian Haverila

Big data marketing analytics (BDMA) has been discovered to be a key contributing factor to developing necessary marketing capabilities. This research aims to investigate the…

Abstract

Purpose

Big data marketing analytics (BDMA) has been found to be a key contributing factor in developing necessary marketing capabilities. This research aims to investigate the impact of the technology and information quality of BDMA on critical marketing capabilities, differentiating between firms with low and high perceived market performance.

Design/methodology/approach

The responses were collected from marketing professionals familiar with BDMA in North America (N = 236). The analysis was done with partial least squares-structural equation modelling (PLS-SEM).

Findings

The results indicated positive and significant relationships, with mainly moderate effect sizes, between information and technology quality as exogenous constructs and the endogenous marketing capability constructs of marketing planning, implementation and customer relationship management (CRM). Differences in the path coefficients of the structural model were detected between firms with low and high perceived market performance.

Originality/value

This research indicates the critical role of technology and information quality in developing marketing capabilities. The study discovered heterogeneity in the sample population when low and high perceived market performance were used as the source of potential heterogeneity; left unaccounted for, such heterogeneity would likely threaten the validity of the results. This research therefore builds on previous work by explicitly considering the issue.

Article
Publication date: 27 April 2023

Aws Al-Okaily, Ai Ping Teoh, Manaf Al-Okaily, Mohammad Iranmanesh and Mohammed Azmi Al-Betar

There is a growing importance of business intelligence systems (BIS) adoption in today’s digital economy age which is characterized by uncertainty and ambiguity considering the…

Abstract

Purpose

Business intelligence systems (BIS) adoption is of growing importance in today's digital economy, which is characterized by uncertainty and ambiguity given the magnitude and influence of the data-related issues that contemporary businesses must solve. This study aims to investigate the critical success factors that affect business intelligence efficiency, based on the DeLone and McLean model, in the Jordanian banking industry.

Design/methodology/approach

A quantitative research method using a questionnaire was employed to collect data from actual users who depend on business intelligence tools to make operational and strategic decisions in Jordanian banks. The data obtained were analyzed using the partial least squares structural equation modeling approach.

Findings

The survey findings attest that system quality, information quality, user quality, user satisfaction and user performance are important factors and contribute to business intelligence efficiency in the Jordanian banking industry.

Practical implications

The findings of this work can help policymakers in Jordanian banks to improve business intelligence success and organizational performance.

Originality/value

To the best of the authors’ knowledge, this study is the first of its kind to propose a theoretical model to assess drivers of BIS efficiency from the Jordanian banks’ perspective.

Details

Information Discovery and Delivery, vol. 51 no. 4
Type: Research Article
ISSN: 2398-6247


Open Access
Article
Publication date: 8 February 2023

Edoardo Ramalli and Barbara Pernici

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts the model…

Abstract

Purpose

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts the model performance. Uncertainty inherently affects experiment measurements and is often missing in the available data sets due to its estimation cost. For similar reasons, experiments are very few compared to other data sources. Discarding experiments based on the missing uncertainty values would preclude the development of predictive models. Data profiling techniques are fundamental to assess data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of the experiments.

Design/methodology/approach

This work presents a methodology to forecast the experiments' missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiments database. The methodology is first validated under multiple conditions using synthetic data and then applied to a large data set of experiments in the chemical kinetics domain as a case study.
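
The link-prediction idea can be illustrated with a toy translational embedding (TransE-style) scorer over a tiny, invented experiments graph. This is a sketch under assumed entity and relation names, not the authors' implementation or data.

```python
# Toy TransE-style link prediction: plausibility of (h, r, t) is -||E[h] + R[r] - E[t]||^2.
# Entity/relation names, triples and hyperparameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
entities = ["exp_1", "exp_2", "exp_3", "low_uncertainty", "high_uncertainty", "shock_tube"]
relations = ["has_uncertainty", "uses_apparatus"]
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

# Observed triples of a tiny, made-up experiments graph; exp_3's uncertainty is missing.
triples = [("exp_1", "has_uncertainty", "low_uncertainty"),
           ("exp_1", "uses_apparatus", "shock_tube"),
           ("exp_2", "has_uncertainty", "low_uncertainty"),
           ("exp_2", "uses_apparatus", "shock_tube"),
           ("exp_3", "uses_apparatus", "shock_tube")]

dim, margin, lr = 16, 1.0, 0.05
E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def dist(h, r, t):
    """Squared translational distance; smaller means the triple is more plausible."""
    diff = E[h] + R[r] - E[t]
    return float(diff @ diff)

# Margin-based SGD: pull observed triples together, push randomly corrupted tails away.
for _ in range(500):
    for head, rel, tail in triples:
        h, r, t = e_idx[head], r_idx[rel], e_idx[tail]
        t_neg = int(rng.integers(len(entities)))        # corrupted tail
        if dist(h, r, t) + margin > dist(h, r, t_neg):  # margin violated -> update
            g_pos = 2 * (E[h] + R[r] - E[t])
            g_neg = 2 * (E[h] + R[r] - E[t_neg])
            E[h] -= lr * (g_pos - g_neg)
            R[r] -= lr * (g_pos - g_neg)
            E[t] += lr * g_pos
            E[t_neg] -= lr * g_neg

# Link prediction: rank candidate tails for the missing (exp_3, has_uncertainty, ?) triple.
candidates = ["low_uncertainty", "high_uncertainty"]
best = min(candidates, key=lambda c: dist(e_idx["exp_3"], r_idx["has_uncertainty"], e_idx[c]))
print("Predicted uncertainty class for exp_3:", best)
```

In practice the same ranking step would run over a much larger KG built from the experiments database and its ontological description, typically with an established embedding library rather than a hand-rolled training loop.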

Findings

The analysis of different test case scenarios suggests that knowledge graph embedding can be used to predict the missing uncertainty of the experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in that relationship. The knowledge graph embedding outperforms the baseline results if the uncertainty depends on multiple metadata attributes.

Originality/value

Employing knowledge graph embedding to predict missing experimental uncertainty is a novel alternative to the current, more costly techniques in the literature. This contribution permits better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.

Article
Publication date: 23 January 2024

Ranjit Roy Ghatak and Jose Arturo Garza-Reyes

The research explores the shift to Quality 4.0, examining the move towards a data-focussed transformation within organizational frameworks. This transition is characterized by…

Abstract

Purpose

The research explores the shift to Quality 4.0, examining the move towards a data-focussed transformation within organizational frameworks. This transition is characterized by incorporating Industry 4.0 technological innovations into existing quality management frameworks, marking a significant evolution in quality control systems. Despite the evident advantages, practical deployment in the Indian manufacturing sector encounters various obstacles, and this research is dedicated to a thorough examination of those impediments. It is structured around a set of pivotal research questions: First, it seeks to identify the key barriers that impede the adoption of Quality 4.0. Second, it aims to elucidate these barriers' interrelations and mutual dependencies. Third, the research prioritizes these barriers in terms of their significance to the adoption process. Finally, it considers the ramifications of these priorities for the strategic advancement of manufacturing practices and the development of informed policies. By answering these questions, the research provides a detailed understanding of the challenges faced and offers actionable insights for practitioners and policymakers implementing Quality 4.0 in the Indian manufacturing sector.

Design/methodology/approach

Employing Interpretive Structural Modelling (ISM) and Matrix Impact of Cross Multiplication Applied to Classification (MICMAC), the authors probe the interdependencies amongst fourteen identified barriers inhibiting Quality 4.0 adoption. These barriers were categorized according to their driving power and dependence, providing a richer understanding of the dynamic obstacles within the Technology–Organization–Environment (TOE) framework.
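
The ISM/MICMAC arithmetic behind such an analysis is mechanical: from a binary direct-influence matrix, a transitive closure yields the final reachability matrix, whose row sums give each barrier's driving power and column sums its dependence. The sketch below uses four made-up barriers and an illustrative influence matrix, not the study's fourteen-barrier data.

```python
# Sketch of the ISM/MICMAC arithmetic on a toy 4-barrier example (the paper used 14 barriers).
# adjacency[i][j] = True means barrier i directly influences barrier j; values are illustrative.
import numpy as np

barriers = ["Lack of Q4.0 standards", "Lack of BDA tools",
            "Skill shortage", "Low top-management support"]
adjacency = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 1, 1],
], dtype=bool)

# Final reachability matrix = transitive closure of the initial reachability matrix
reach = adjacency.copy()
for k in range(len(barriers)):            # Boolean Warshall algorithm
    reach |= np.outer(reach[:, k], reach[k, :])

driving_power = reach.sum(axis=1)          # how many barriers each one reaches
dependence = reach.sum(axis=0)             # how many barriers reach it

# MICMAC classification relative to the midpoint of the scale
mid = len(barriers) / 2
for name, drv, dep in zip(barriers, driving_power, dependence):
    if drv > mid and dep > mid:
        quadrant = "linkage"
    elif drv > mid:
        quadrant = "independent (driver)"
    elif dep > mid:
        quadrant = "dependent"
    else:
        quadrant = "autonomous"
    print(f"{name}: driving power={drv}, dependence={dep} -> {quadrant}")
```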

Findings

The study results highlight the lack of Quality 4.0 standards and Big Data Analytics (BDA) tools as fundamental obstacles to integrating Quality 4.0 within the Indian manufacturing sector. Additionally, the study results contravene dominant academic narratives, suggesting that the cumulative impact of organizational barriers is marginal, contrary to theoretical postulations emphasizing their central significance in Quality 4.0 assimilation.

Practical implications

This research provides concrete strategies, such as developing a collaborative platform for sharing best practices in Quality 4.0 standards, which fosters a synergistic relationship between organizations and policymakers, for instance by creating a joint task force, composed of industry leaders and regulatory bodies, dedicated to formulating and disseminating comprehensive guidelines for Quality 4.0 adoption. This initiative could lead to establishing industry-wide standards, benefiting from the pooled expertise of diverse stakeholders. Additionally, the study underscores the necessity for robust, standardized Big Data Analytics tools designed specifically to meet Quality 4.0 criteria, which can be developed through public-private partnerships. These tools would facilitate the seamless integration of Quality 4.0 processes, demonstrating a direct route to overcoming the barrier of inadequate standards.

Originality/value

This research delineates specific obstacles to Quality 4.0 adoption by applying the TOE framework, detailing how these barriers interact with and influence each other, particularly highlighting the previously overlooked environmental factors. The analysis reveals a critical interdependence between “lack of standards for Quality 4.0” and “lack of standardized BDA tools and solutions,” providing nuanced insights into their conjoined effect on stalling progress in this field. Moreover, the study contributes to the theoretical body of knowledge by mapping out these novel impediments, offering a more comprehensive understanding of the challenges faced in adopting Quality 4.0.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 6
Type: Research Article
ISSN: 0265-671X

