Search results
1 – 10 of 62

JOHN P. VAN GIGCH and L.L. PIPINO
Abstract
General Systems Theory postulates the existence of many general theories that serve to describe isomorphisms across systems. The theory of Fuzzy Sets can be considered as one particular general theory which describes the phenomenon of ambiguity across all systems displaying this property and its consequences. Fuzzy Set Theory is a mathematical development that holds great promise in becoming the metalanguage of ambiguity, in a way parallel to Statistics and Probability Theory which represent the metalanguage of uncertainty. Fuzzy Sets appear particularly well suited to model ambiguity in the context of the systems paradigm which has been offered as a counterpart to the traditional science paradigm. A decision model is used to discuss the differences between these two paradigms and to show the role which Fuzzy Sets can play in resolving some of the epistemological problems in the domain of the social sciences.
Di Wang, Deborah Richards, Ayse Aysin Bilgin and Chuanfu Chen
Riccardo Albertoni, Monica De Martino and Paola Podestà
Abstract
Purpose
The purpose of this paper is to focus on the quality of the connections (linkset) among thesauri published as Linked Data on the Web. It extends the cross-walking measures with two new measures able to evaluate the enrichment brought by the information reached through the linkset (lexical enrichment, browsing space enrichment). It fosters the adoption of cross-walking linkset quality measures besides the well-known and deployed cardinality-based measures (linkset cardinality and linkset coverage).
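The cardinality-based measures named above can be sketched in a few lines. This is an illustrative simplification, not the paper's formal definitions: a linkset is treated as a set of (source, target) skos:exactMatch pairs, and the function and variable names are assumptions.

```python
# Sketch of the two cardinality-based linkset measures, under simplified
# assumptions: a linkset is a set of (source, target) skos:exactMatch pairs.

def linkset_cardinality(linkset):
    """Number of links in the linkset."""
    return len(linkset)

def linkset_coverage(linkset, source_concepts):
    """Fraction of source-thesaurus concepts with at least one link."""
    linked = {src for src, _ in linkset}
    return len(linked & source_concepts) / len(source_concepts)

# Toy example: three concepts, two of them linked.
concepts = {"soil", "water", "air"}
links = {("soil", "ext:Soil"), ("water", "ext:Water")}
print(linkset_cardinality(links))         # 2
print(linkset_coverage(links, concepts))  # ≈0.667
```

The cross-walking measures proposed in the paper go beyond these counts by evaluating what the linked concepts contribute (labels in new languages, additional browsing space).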
Design/methodology/approach
The paper applies the linkset measures to the Linked Thesaurus fRamework for Environment (LusTRE). LusTRE is selected as testbed as it is encoded using a Simple Knowledge Organisation System (SKOS) published as Linked Data, and it explicitly exploits the cross-walking measures on its validated linksets.
Findings
The application on LusTRE offers insight into the complementarity of the considered linkset measures. In particular, it shows that the cross-walking measures deepen the cardinality-based measures by analysing quality facets that were not previously considered. The actual value of LusTRE's linksets regarding the improvement of multilingualism and concept spaces is assessed.
Research limitations/implications
The paper considers skos:exactMatch linksets, a rather specific but quite common kind of linkset. The cross-walking measures explicitly assume the correctness and completeness of linksets. Third-party approaches and tools can help to meet these assumptions.
Originality/value
This paper fulfils an identified need to study the quality of linksets. Several approaches formalise and evaluate Linked Data quality focusing on data set quality but disregarding the other essential component: the connection among data.
Abstract
Purpose
Quality, an abstract concept, requires concrete definition in order to be actionable. This chapter moves the quality discussion from the theoretical to the workplace, building steps needed to manage quality issues.
Methodology
The chapter reviews general data studies, web quality studies, and metadata quality studies to identify and define dimensions of data quality and quantitative measures for each concept. The chapter reviews preferred communication methods which make findings meaningful to administrators.
Practical implications
The chapter describes how quality dimensions are practically applied. It suggests criteria necessary to identify high-priority populations and resources in core subject areas or formats, as quality does not have to be completely uniform. The author emphasizes examining the information environment, documenting practice, and developing measurement standards, and stresses that quality procedures must evolve rapidly to reflect local expectations, the local information environment, technology capabilities, and national standards.
Originality/value
This chapter combines theory with practical application. It stresses the importance of metadata and recognizes quality as a cyclical process which balances the necessity of national standards, the needs of the user, and the work realities of the metadata staff. This chapter identifies decision points, outlines future action, and explains communication options.
Mustafa Aljumaili, Ramin Karim and Phillip Tretten
Abstract
Purpose
The purpose of this paper is to develop a data quality (DQ) assessment model based on content analysis and metadata analysis.
Design/methodology/approach
A literature review of DQ assessment models has been conducted, and a study of DQ key performance indicators (KPIs) has been carried out. Finally, the proposed model has been developed and applied in a case study.
Findings
The results of this study show that metadata contain important information about DQ in a database and can be used to assess DQ, providing decision support for decision makers.
Originality/value
There are many DQ assessment models in the literature; however, metadata are not considered in these models. The model developed in this study is based on metadata, in addition to content analysis, to produce a quantitative DQ assessment.
Robert Kleyle, Andre de Korvin and Khondkar Karim
Abstract
In this paper we propose a strategy for investing in new companies for which there is relatively little hard data available. We use fuzzy set theory to represent these new companies as finite fuzzy subsets of established companies for which there is a history of investment data. A fuzzy set is also used to represent the economic environment in which the proposed new investments will be made. From this fuzzy information we construct a fuzzy expected return for each new investment under consideration. These expected returns are then defuzzified, and those proposed investments whose defuzzified expected returns fail to meet some specified criteria are discarded. An investment strategy is then proposed for investing available capital in those new companies that meet the criteria.
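The strategy described above can be illustrated with a minimal sketch. The names, membership values, and the membership-weighted defuzzification used here are assumptions for illustration, not the paper's exact formulation.

```python
# Toy sketch: a new company is a fuzzy subset of established companies,
# and its expected return is a membership-weighted average of their
# historical returns (a simple centroid-style defuzzification).

historical_return = {"A": 0.08, "B": 0.12, "C": 0.03}

def fuzzy_expected_return(membership):
    """Defuzzified expected return from membership grades in {A, B, C}."""
    total = sum(membership.values())
    return sum(mu * historical_return[c] for c, mu in membership.items()) / total

# Hypothetical new companies with membership grades in the established ones.
new_companies = {
    "NewCo1": {"A": 0.9, "B": 0.4, "C": 0.1},  # strongly resembles A
    "NewCo2": {"A": 0.1, "B": 0.2, "C": 0.9},  # strongly resembles C
}

threshold = 0.07  # discard investments whose defuzzified return falls below this
accepted = {name: fuzzy_expected_return(mu)
            for name, mu in new_companies.items()
            if fuzzy_expected_return(mu) >= threshold}
print(accepted)  # NewCo1 retained, NewCo2 discarded
```

Capital would then be allocated among the companies that survive the threshold, which is the final step the abstract describes.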
Hamid Hassani, Azadeh Mohebi, M.J. Ershadi and Ammar Jalalimanesh
Abstract
Purpose
The purpose of this research is to provide a framework in which new data quality dimensions are defined. The new dimensions provide new metrics for the assessment of lecture video indexing. As lecture video indexing involves various steps, the proposed framework, containing new dimensions, introduces a new integrated approach for evaluating an indexing method or algorithm from beginning to end.
Design/methodology/approach
The emphasis in this study is on the fifth step of the design science research methodology (DSRM), known as evaluation. That is, the methods developed in the field of lecture video indexing, as an artifact, should be evaluated from different aspects. In this research, nine dimensions of data quality, including accuracy, value-added, relevancy, completeness, appropriate amount of data, conciseness, consistency, interpretability and accessibility, have been redefined based on previous studies and the nominal group technique (NGT).
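Two of the listed dimensions lend themselves to a simple numeric sketch. The formulas below are illustrative assumptions, not the paper's definitions (LVTIA's actual metrics are not reproduced here).

```python
# Illustrative scoring of two data quality dimensions for a lecture-video
# index; formulas are assumed for this sketch, not taken from the paper.

def completeness(indexed_segments, total_segments):
    """Share of lecture segments that received at least one index entry."""
    return len(indexed_segments) / total_segments

def accuracy(index_terms, ground_truth_terms):
    """Share of generated index terms confirmed by a ground-truth annotation."""
    return len(set(index_terms) & set(ground_truth_terms)) / len(index_terms)

print(completeness({0, 1, 2, 4}, 5))                             # 0.8
print(accuracy(["fuzzy", "sets", "logic"], ["fuzzy", "logic"]))  # ≈0.667
```

Each dimension would be scored in this spirit and the resulting values compared across indexing methods, as the Findings section describes.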
Findings
The proposed dimensions are implemented as new metrics to evaluate a newly developed lecture video indexing algorithm, LVTIA, and numerical values have been obtained based on the proposed definition of each dimension. In addition, the new dimensions are compared with each other in various respects. The comparison shows that each dimension used for assessing lecture video indexing is able to reflect a different weakness or strength of an indexing method or algorithm.
Originality/value
Despite the development of different methods for indexing lecture videos, the issue of data quality and its various dimensions has not been studied. Since data of low quality can affect the process of scientific lecture video indexing, the issue of data quality in this process requires special attention.
Andre de Korvin, Jerry Strawser and Philip H. Siegel
Abstract
Accounting, particularly in the area of cost variance analysis, contains a great deal of ambiguity due to imprecise or ill‐defined control terms. Cost accountants must continually incorporate good sense and professional judgment in the accounting process to overcome that ambiguity. Because of the way accounting expert systems are constructed, no ambiguity is present in the facts or rules, thereby excluding human reasoning and the analysis of feedback within those systems. The use of fuzzy sets to build fuzzy control systems provides a method of incorporating ambiguity into expert systems, allowing them to more closely emulate the complex human decision-making process.
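The contrast between crisp and fuzzy rules can be shown with a toy membership function. This is not the article's system; the threshold values and the linear ramp are assumptions chosen for illustration.

```python
# Toy illustration of softening a crisp expert-system rule. Instead of
# "investigate if cost variance > $1,000", the variance belongs to the fuzzy
# set "significant" to a degree between 0 and 1.

def mu_significant(variance, low=500.0, high=1500.0):
    """Linear ramp membership: 0 below `low`, 1 above `high`."""
    if variance <= low:
        return 0.0
    if variance >= high:
        return 1.0
    return (variance - low) / (high - low)

for v in (400, 1000, 2000):
    print(v, round(mu_significant(v), 2))  # 0.0, 0.5, 1.0
```

A fuzzy control system would combine several such graded memberships through rules, rather than firing an all-or-nothing rule at a single cutoff.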
Charlie Silva Lopes, Denis Silva da Silveira and João Araujo
Abstract
Purpose
The primary concern of quality improvement in processes is not the input–output conversion but the information that enables and controls the process. This paper presents process fragments for dimensions of information quality (IQ).
Design/methodology/approach
The research is based on the design science paradigm to create four reusable process fragments that address the following dimensions of IQ: accessibility, completeness, accuracy and consistency.
Findings
There is a theoretical discussion of the concept of IQ in process models, in which the presented fragments offer designers a reduction in modeling time through reuse. The designer thus has the flexibility to improve IQ according to the context of each process.
Practical implications
The discussion is relevant for both researchers and business designers because it shows that IQ is essential to guarantee the efficient execution of processes.
Social implications
Process modeling can be a challenge for inexperienced designers, as they always try to solve a problem from scratch, without worrying about the IQ dimensions in process models. The fragments presented here can be (re)used to guide these designers in modeling processes with greater IQ.
Originality/value
Process modeling approaches provide expressive techniques but do not guarantee IQ in the models. The process fragments presented here can be easily used to address IQ in process models. In this context, the reuse of process fragments stands out as an innovative solution to mitigate the shortcomings of process models related to IQ.
Mehwish Waheed, Kiran Kaur and Atika Qazi
Abstract
Purpose
The purpose of this paper is to identify the unique dimensions associated with knowledge quality (KQ) based on students' perception in an educational institution.
Design/methodology/approach
Purposive sampling was used to select students who were active users of the electronic-Learning (eLearning) system at two faculties in a single university. The qualitative data gathering employed an unstructured open-ended questionnaire distributed to the 52 selected participants.
Findings
The qualitative findings unearth the students’ perspective about quality of knowledge gained from content used in online courses. In total, 34 underlying sub-dimensions of KQ emerged, which were categorized into five KQ dimensions: intrinsic KQ, contextual KQ, representational KQ, accessible KQ, and actionable KQ.
Research limitations/implications
The findings provide an insight to educators to consider KQ dimensions in providing quality knowledge to students in an eLearning environment.
Originality/value
Previous studies have used information quality dimensions to measure KQ because of a lack of conceptualization of KQ that leads to difficulties in operationalizing this construct. In this study, a conceptual and operational definition of KQ, in the context of eLearning, is proposed based on grounded data from students participating in an online learning environment.