Search results
1 – 10 of over 186,000

Mahdi Zahedi Nooghabi and Akram Fathian Dastgerdi
Abstract
Purpose
One of the most important categories in linked open data (LOD) quality models is “data accessibility.” The purpose of this paper is to propose some metrics and indicators for assessing data accessibility in LOD and the semantic web context.
Design/methodology/approach
In this paper, the authors first review data quality and LOD quality models to survey the subcategories proposed for the data accessibility dimension in the related literature. Then, based on the goal question metric (GQM) approach, the authors specify the project goals, main issues and questions. Finally, the authors propose metrics for assessing data accessibility in the context of the semantic web.
Findings
Based on the GQM approach, the authors identified three main issues for data accessibility: data availability, data performance and data security policy. They then created four main questions related to these issues and, in conclusion, proposed 27 metrics for answering them.
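The GQM chain described above (goal, then questions, then metrics) can be sketched as a small data structure. The goal text, question and the `availability_ratio` metric below are illustrative inventions, not any of the paper's actual 27 metrics.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    value: float = 0.0

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

# Hypothetical availability metric: the share of sampled LOD URIs
# that dereference successfully (2xx HTTP status).
def availability_ratio(statuses):
    """statuses: HTTP status codes returned for sampled LOD URIs."""
    ok = sum(1 for s in statuses if 200 <= s < 300)
    return ok / len(statuses) if statuses else 0.0

goal = Goal("Assess data accessibility of a LOD dataset")
q = Question("What fraction of published URIs is dereferenceable?")
q.metrics.append(Metric("availability_ratio",
                        availability_ratio([200, 200, 404, 200])))
goal.questions.append(q)
print(round(goal.questions[0].metrics[0].value, 2))  # → 0.75
```

The point of the hierarchy is traceability: every metric answers a question, and every question serves the stated goal.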
Originality/value
Nowadays, one of the main challenges regarding data quality is the lack of agreement on widespread quality metrics and practical instruments for evaluating quality. Accessibility is an important aspect of data quality; however, little research has been done to provide metrics and indicators for assessing data accessibility in the context of the semantic web. In this research, therefore, the authors consider the data accessibility dimension and propose a comparatively comprehensive set of metrics.
Alex Koohang, Carol Springer Sargent, Justin Zuopeng Zhang and Angelica Marotta
Abstract
Purpose
This paper aims to propose a research model with eight constructs, i.e. BDA leadership, BDA talent quality, BDA security quality, BDA privacy quality, innovation, financial performance, market performance and customer satisfaction.
Design/methodology/approach
The research model focuses on whether (1) Big Data Analytics (BDA) leadership influences BDA talent quality, (2) BDA talent quality influences BDA security quality, (3) BDA talent quality influences BDA privacy quality, (4) BDA talent quality influences Innovation and (5) innovation influences a firm's performance (financial, market and customer satisfaction). An instrument was designed and administered electronically to a diverse set of employees (N = 188) in various organizations in the USA. Collected data were analyzed through a partial least square structural equation modeling.
Findings
Results showed that leadership significantly and positively affects BDA talent quality, which, in turn, significantly and positively impacts security quality, privacy quality and innovation. Moreover, innovation significantly and positively impacts firm performance. The theoretical and practical implications of the findings are discussed. Recommendations for future research are provided.
Originality/value
The study provides empirical evidence that leadership significantly and positively impacts BDA talent quality. BDA talent quality, in turn, positively impacts security quality, privacy quality and innovation. This is important, as these are all critical factors for organizations that collect and use big data. Finally, the study demonstrates that innovation significantly and positively impacts financial performance, market performance and customer satisfaction. The originality of the research results makes them a valuable addition to the literature on big data analytics. They provide new insights into the factors that drive organizational success in this rapidly evolving field.
Franziska Franke and Martin R.W. Hiebl
Abstract
Purpose
Existing research on the relationship between big data and organizational decision quality remains scarce, and what does exist often assumes direct effects of big data on decision quality. More recent research indicates that such direct effects may be too simplistic; in particular, an organization's overall human skills are often not considered sufficiently. Inspired by the knowledge-based view, the authors therefore propose that interactions between three aspects of big data usage and management accountants' data analytics skills may be key to reaching high-quality decisions. The purpose of this study is to test these predictions based on a survey of US firms.
Design/methodology/approach
The authors draw on survey data from 140 US firms. The survey was conducted via MTurk in 2020.
Findings
The results of the study show that the quality of big data sources is associated with higher perceived levels of decision quality. However, according to the results, the breadth of big data sources and a data-driven culture only improve decision quality if management accountants’ data analytics skills are highly developed. These results point to the important, but so far unexamined role of an organization’s management accountants and their skills for translating big data into high-quality decisions.
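The moderation claim above (breadth of big data sources only improves decision quality when analytics skills are high) amounts to an interaction term in a linear model. The toy function below, with invented coefficients, illustrates the pattern rather than the paper's actual estimates.

```python
# Toy moderated-effect model: decision quality as a linear function of
# source breadth, analytics skills and their interaction. All coefficient
# values are invented for illustration.
def decision_quality(breadth, skills,
                     b_breadth=0.0, b_skills=0.3, b_inter=0.5):
    return b_breadth * breadth + b_skills * skills + b_inter * breadth * skills

# Effect of adding breadth at low vs. high skill levels:
low_skill_gain = decision_quality(1, 0) - decision_quality(0, 0)
high_skill_gain = decision_quality(1, 1) - decision_quality(0, 1)
print(round(low_skill_gain, 2), round(high_skill_gain, 2))  # → 0.0 0.5
```

With `b_breadth` at zero, breadth alone changes nothing; its payoff appears only through the interaction with skills, which is exactly the moderated pattern the findings describe.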
Practical implications
The present study highlights the importance of an organization’s human skills in creating value out of big data. In particular, the findings imply that management accountants may need to increasingly draw on data analytics skills to make the most out of big data for their employers.
Originality/value
This study is among the first, to the best of the authors' knowledge, to provide empirical evidence of the relevance of an organization's management accountants and their data analytics skills for reaching desirable firm-level outcomes. In addition, this study advances the knowledge-based view by providing evidence that, in contemporary big data environments, interactions between tacit and explicit knowledge seem crucial for driving desirable firm-level outcomes.
Huyen Nguyen, Haihua Chen, Jiangping Chen, Kate Kargozari and Junhua Ding
Abstract
Purpose
This study aims to evaluate a method of building a biomedical knowledge graph (KG).
Design/methodology/approach
This research first constructs a COVID-19 KG on the COVID-19 Open Research Data Set, covering information over six categories (i.e. disease, drug, gene, species, therapy and symptom). The construction used open-source tools to extract entities, relations and triples. Then, the COVID-19 KG is evaluated on three data-quality dimensions: correctness, relatedness and comprehensiveness, using a semiautomatic approach. Finally, this study assesses the application of the KG by building a question answering (Q&A) system. Five queries regarding COVID-19 genomes, symptoms, transmissions and therapeutics were submitted to the system and the results were analyzed.
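A KG of this kind reduces to a set of (subject, relation, object) triples over the six categories, and a lookup-style Q&A query is a pattern match against them. The minimal sketch below uses entity and relation names invented for illustration, not output of the paper's extraction pipeline.

```python
# Minimal triple-store sketch; entity and relation names are illustrative.
triples = [
    ("remdesivir", "treats", "COVID-19"),
    ("COVID-19", "has_symptom", "fever"),
    ("COVID-19", "has_symptom", "cough"),
    ("SARS-CoV-2", "causes", "COVID-19"),
]

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the bound positions (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]

# "What are the symptoms of COVID-19?"
print([o for _, _, o in query("COVID-19", "has_symptom")])  # → ['fever', 'cough']
```

The quality dimensions the paper evaluates map directly onto this structure: correctness asks whether each stored triple is true, relatedness whether the entities connect to the domain, and comprehensiveness whether enough triples exist to answer queries like the one above.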
Findings
With current extraction tools, the quality of the KG is moderate and difficult to improve unless more effort is put into the tools for entity extraction, relation extraction and others. This study finds that comprehensiveness and relatedness positively correlate with the data size. Furthermore, the results indicate that Q&A systems built on larger-scale KGs perform better than those built on smaller ones for most queries, underscoring the importance of relatedness and comprehensiveness to the usefulness of the KG.
Originality/value
The KG construction process, data-quality-based and application-based evaluations discussed in this paper provide valuable references for KG researchers and practitioners to build high-quality domain-specific knowledge discovery systems.
Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv
Abstract
Purpose
The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems for the design optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and provide standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.
Design/methodology/approach
Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.
Findings
Based on the ideas of statistics, system theory, machine learning and data mining, the focus in the present research is on “data quality diagnosis” and “index classification and stratification” and clarifying the classification standards and data quality characteristics of index data; a data-quality diagnosis system of “data review – data cleaning – data conversion – data inspection” is established. Using a decision tree, explanatory structural model, cluster analysis, K-means clustering and other methods, classification and hierarchical method system of indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, the scientific and standardized classification and hierarchical design of the index system can be realized.
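The "data review – data cleaning – data conversion – data inspection" chain described above can be sketched as composable stages over indicator records. The record fields, range checks and unit conversion below are assumptions for illustration, not the paper's actual rules.

```python
# Hedged sketch of the review → clean → convert → inspect pipeline;
# field names and checks are invented for illustration.
def review(records):
    # review: flag and drop records missing the indicator value
    return [r for r in records if "value" in r]

def clean(records):
    # cleaning: drop obviously out-of-range values
    return [r for r in records if r["value"] >= 0]

def convert(records):
    # conversion: normalize units (here, percentages to fractions)
    return [{**r, "value": r["value"] / 100} for r in records]

def inspect(records):
    # inspection: final consistency check on the converted data
    assert all(0 <= r["value"] <= 1 for r in records)
    return records

raw = [{"value": 85}, {"value": -3}, {"id": 7}, {"value": 40}]
result = inspect(convert(clean(review(raw))))
print([r["value"] for r in result])  # → [0.85, 0.4]
```

Staging the checks this way keeps each quality rule independent, which mirrors the paper's aim of reducing redundancy before indicators enter classification and stratification.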
Originality/value
The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality-inspection process for missing data is designed based on systematic thinking about the whole and the individual. Aiming at the accuracy, reliability and feasibility of the patched data, a quality-inspection method for patched data based on inversion thinking and a unified representation method for data fusion based on a tensor model are proposed. Third, the modern method of unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of the design of the index system and enhances its objectivity and rationality.
Aws Al-Okaily, Ai Ping Teoh and Manaf Al-Okaily
Abstract
Purpose
A crucial question remains unanswered: whether data analytics-oriented business intelligence (hereafter, BI) technologies can bring organizational value and benefits. Several researchers have therefore called for further empirical research to extend the limited knowledge in this critical area. To address this issue, the authors present and test a theoretical model to assess BI effectiveness at the level of organizational benefits.
Design/methodology/approach
The suggested research model expands the application of the DeLone and McLean model in BI technology success or effectiveness research from individual level to organizational level. A cross-sectional survey is developed to obtain primary quantitative data from business and technology managers who are depending on BI technologies to make operational, technical and strategic decisions in Jordanian-listed firms.
Findings
Empirical findings show that system quality, information quality and training quality are significant predictors of user satisfaction, but not of perceived benefit. Data quality was found to be a strong predictor of both perceived benefit and user satisfaction. The influence of perceived benefit on user satisfaction was significant; in turn, both factors positively affect organizational benefits.
Originality/value
This research paper is a pioneering effort to assess BI technology effectiveness at an organizational level outside the context of developed countries. To the best of the authors’ knowledge, no prior research has combined all dimensions used in this research in one single model.
Reza Edris Abadi, Mohammad Javad Ershadi and Seyed Taghi Akhavan Niaki
Abstract
Purpose
The overall goal of the data mining process is to extract information from an extensive data set and make it understandable for further use. When working with large volumes of unstructured data in research information systems, it is necessary to divide the information into logical groupings after examining their quality before attempting to analyze it. On the other hand, data quality results are valuable resources for defining quality excellence programs of any information system. Hence, the purpose of this study is to discover and extract knowledge to evaluate and improve data quality in research information systems.
Design/methodology/approach
Clustering in data analysis and exploiting the outputs allows practitioners to gain an in-depth and extensive look at their information to form some logical structures based on what they have found. In this study, data extracted from an information system are used in the first stage. Then, the data quality results are classified into an organized structure based on data quality dimension standards. Next, clustering algorithms (K-Means), density-based clustering (density-based spatial clustering of applications with noise [DBSCAN]) and hierarchical clustering (balanced iterative reducing and clustering using hierarchies [BIRCH]) are applied to compare and find the most appropriate clustering algorithms in the research information system.
Findings
This paper showed that quality control results of an information system could be categorized through well-known data quality dimensions, including precision, accuracy, completeness, consistency, reputation and timeliness. Furthermore, among different well-known clustering approaches, the BIRCH algorithm of hierarchical clustering methods performs better in data clustering and gives the highest silhouette coefficient value. Next in line is the DBSCAN method, which performs better than the K-Means method.
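The silhouette coefficient used here to compare K-Means, DBSCAN and BIRCH can be computed directly from pairwise distances: for each point, a is the mean distance to its own cluster and b the mean distance to the nearest other cluster, scored as (b - a) / max(a, b). The sketch below, with a tiny hand-labeled data set standing in for real algorithm output, shows why well-separated clusters score higher.

```python
# Silhouette coefficient from scratch; the four 2-D points and their
# cluster labels are invented to contrast a good and a bad clustering.
def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def silhouette(points, labels):
    """Mean of (b - a) / max(a, b) over all points."""
    scores = []
    for i, p in enumerate(points):
        own = [dist(p, q) for j, q in enumerate(points)
               if labels[j] == labels[i] and j != i]
        a = sum(own) / len(own)  # mean intra-cluster distance
        b = min(                 # mean distance to the nearest other cluster
            sum(dist(p, q) for j, q in enumerate(points) if labels[j] == lab)
            / sum(1 for j in range(len(points)) if labels[j] == lab)
            for lab in set(labels) if lab != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

points = [(0, 0), (0, 1), (5, 5), (5, 6)]
good = silhouette(points, [0, 0, 1, 1])  # well-separated clusters
bad = silhouette(points, [0, 1, 0, 1])   # clusters mixed up
print(good > bad)  # → True
```

A comparison like the paper's then reduces to running each algorithm on the same data and keeping the labeling with the highest mean silhouette, which is the criterion under which BIRCH came out ahead.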
Research limitations/implications
In the data quality assessment process, the identified discrepancies and the lack of a proper classification for inconsistent data lead to unstructured reports, making statistical analysis of qualitative metadata problems difficult and the observed errors impossible to root out. Therefore, in this study, the data quality evaluation results have been categorized into various data quality dimensions, on the basis of which multiple analyses have been performed using data mining methods.
Originality/value
Although several pieces of research have been conducted to assess data quality results of research information systems, knowledge extraction from obtained data quality scores is a crucial work that has rarely been studied in the literature. Besides, clustering in data quality analysis and exploiting the outputs allows practitioners to gain an in-depth and extensive look at their information to form some logical structures based on what they have found.
Pratima Verma, Vimal Kumar, Ankesh Mittal, Bhawana Rathore, Ajay Jha and Muhammad Sabbir Rahman
Abstract
Purpose
This study aims to provide insight into the operational factors of big data. The operational indicators/factors are categorized into three functional parts, namely synthesis, speed and significance. Based on these factors, the organization enhances its big data analytics (BDA) performance followed by the selection of data quality dimensions to any organization's success.
Design/methodology/approach
A fuzzy analytic hierarchy process (AHP) based research methodology has been proposed and utilized to assign the criterion weights and to prioritize the identified speed, synthesis and significance (3S) indicators. Further, the PROMETHEE (Preference Ranking Organization METHod for Enrichment of Evaluations) technique has been used to measure the data quality dimensions considering 3S as criteria.
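As a rough sketch of the AHP step behind this methodology, the crisp geometric-mean weighting of a pairwise comparison matrix is shown below. The 3x3 comparison values for synthesis, speed and significance are invented, and the fuzzy extension (triangular membership numbers) is omitted for brevity, as is the PROMETHEE ranking stage.

```python
# Crisp AHP weighting via the geometric-mean method; the pairwise
# judgments below are assumed values, not the study's expert inputs.
import math

criteria = ["synthesis", "speed", "significance"]
# pairwise[i][j]: how much more important criterion i is than criterion j
pairwise = [
    [1.0, 2.0, 3.0],
    [1 / 2, 1.0, 2.0],
    [1 / 3, 1 / 2, 1.0],
]

def ahp_weights(matrix):
    """Normalize the geometric mean of each row into priority weights."""
    gm = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

w = ahp_weights(pairwise)
ranked = sorted(zip(criteria, w), key=lambda cw: -cw[1])
print(ranked[0][0])  # → synthesis
```

In the full fuzzy AHP, each judgment is a triangular fuzzy number rather than a single value, but the normalization idea is the same; the resulting weights then serve as criteria for the PROMETHEE comparison of data quality dimensions.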
Findings
The effective indicators were identified from the past literature, and the model was confirmed with industry experts. The results of the fuzzy AHP model show that synthesis is the top-ranked and most significant indicator, followed by speed and then significance. These operational indicators contribute toward BDA and are explored together with the priorities of their subcategories.
Research limitations/implications
The outcomes of this study will help businesses that regard this technology as a breakthrough, though it poses both a challenge and an opportunity for developers and experts. Big data carries many risks and challenges related to economic, social, operational and political performance. An understanding of data quality dimensions provides insightful guidance for forecasting demand accurately, solving complex problems and fostering collaboration in supply chain management performance.
Originality/value
Big data is one of the most popular technology concepts in the market today. People live in a world where every facet of life increasingly depends on big data and data science. This study creates awareness about the role of 3S encountered during big data quality by prioritizing using fuzzy AHP and PROMETHEE.
Daniel P. Lorence and Robert Jameson
Abstract
The growing acceptance of evidence‐based decision support systems in healthcare organizations has resulted in recognition of data quality improvement as a key area of both strategic and operational management. Information managers are faced with their emerging role in establishing quality management standards for information collection and application in the day‐to‐day delivery of health care. In the USA, rigid data‐based practice and performance standards and regulations related to information management have met with some resistance from providers. In the emerging information‐intensive healthcare environment, managers are beginning to understand the importance of formal, continuous data quality assessment in health services delivery and quality management. Variation in data quality management practice poses quality problems in such an environment, since it precludes comparative assessments across larger markets or areas, a critical component of evidence‐based quality assessments. In this study a national survey of health information managers was employed to provide a benchmark of the degree of such variation, examining how quality management practices vary across area indicators. Findings here suggest that managers continue to employ paper‐based quality assessment audits, despite nationwide mandates to adopt system‐based measures using aggregate data analysis and automated quality intervention. The level of adoption of automated quality management methods in this study varied significantly across practice characteristics and areas, suggesting the existence of data quality barriers to cross‐market comparative assessment. Implications for healthcare service delivery in an evidence‐based environment are further examined and discussed.
Abstract
Purpose
Quality 4.0 (Q4.0) relates to quality management in the era of Industry 4.0 (I4.0). In particular, it concentrates on the digital techniques used to improve organizational capabilities and ensure the delivery of the best-quality products and services to customers. The aim of this research is to examine the vital elements for Q4.0 implementation.
Design/methodology/approach
A review of the literature was carried out to analyze past studies in this emerging research field.
Findings
This research identified ten factors that contribute to the successful implementation of Q4.0. The key factors are (1) data, (2) analytics, (3) connectivity, (4) collaboration, (5) development of APP, (6) scalability, (7) compliance, (8) organization culture, (9) leadership and (10) training for Q4.0.
Originality/value
As a result of this research, a new understanding of the factors behind successful Q4.0 implementation in the digital transformation era can assist firms in developing new ways to implement Q4.0.