Search results

1 – 10 of over 34000
Article
Publication date: 4 April 2016

Mahdi Zahedi Nooghabi and Akram Fathian Dastgerdi


Abstract

Purpose

One of the most important categories in linked open data (LOD) quality models is “data accessibility.” The purpose of this paper is to propose some metrics and indicators for assessing data accessibility in LOD and the semantic web context.

Design/methodology/approach

In this paper, the authors first review several data quality and LOD quality models to survey the subcategories proposed for the data accessibility dimension in the literature. Then, based on the goal question metric (GQM) approach, they specify the project goals, main issues and questions. Finally, they propose metrics for assessing data accessibility in the context of the semantic web.

Findings

Based on the GQM approach, the authors identified three main issues for data accessibility: data availability, data performance, and data security policy. They then formulated four main questions related to these issues and, finally, proposed 27 metrics for answering them.
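The GQM breakdown above can be represented as a simple goal → issue → question → metric hierarchy. The three issues come from the abstract; the questions and metrics below are hypothetical placeholders, not the paper's actual 27 metrics:

```python
# Illustrative GQM hierarchy; only the three issue names come from the
# abstract — the questions and metrics are placeholder examples.
gqm = {
    "goal": "Assess data accessibility of LOD datasets",
    "issues": {
        "data availability": {
            "Is the dataset dereferenceable?": ["URI dereferencing rate"],
        },
        "data performance": {
            "How quickly does the endpoint respond?": ["SPARQL endpoint latency"],
        },
        "data security policy": {
            "Is access control documented?": ["licence presence"],
        },
    },
}

# Walk the hierarchy to count the leaf-level metrics.
metric_count = sum(
    len(metrics)
    for questions in gqm["issues"].values()
    for metrics in questions.values()
)
```

In the paper itself this tree bottoms out in 27 metrics spread across four questions.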

Originality/value

Nowadays, one of the main challenges regarding data quality is the lack of agreement on widely accepted quality metrics and practical instruments for evaluating quality. Accessibility is an important aspect of data quality, yet little research has been done to provide metrics and indicators for assessing data accessibility in the context of the semantic web. In this research, therefore, the authors consider the data accessibility dimension and propose a comparatively comprehensive set of metrics.

Details

Program, vol. 50 no. 2
Type: Research Article
ISSN: 0033-0337


Book part
Publication date: 15 December 2016

Debbie H. Kim, Jeannette A. Colyvas and Allen K. Kim


Abstract

Despite a legacy of research that emphasizes contradictions and their role in explaining change, less is understood about their character or the mechanisms that support them. This gap is especially problematic when making causal claims about the sources of institutional change and our overall conceptions of how institutions matter in social meanings and organizational practices. If we treat contradictions as a persistent societal feature, then a primary analytic task is to distinguish their prevalence from their effects. We address this gap in the context of US electoral discourse and education through an analysis of presidential platforms. We ask how contradictions take hold, persist, and might be observed prior to, or independently of, their strategic use. Through a novel combination of content analysis and computational linguistics, we observe contradictions in qualitative differences in form and quantitative differences in degree. Whereas much work predicts that ideologies produce contradictions between groups, our analysis demonstrates that they actually support convergence in meaning between groups while promoting contradiction within groups.

Article
Publication date: 22 November 2010

Eisenhower C. Etienne


Abstract

Purpose

This paper aims to show that the extent to which a company's quality policies and practices converge towards, or diverge away from, those of Six Sigma benchmark policies and practices mirrors and anticipates the divergence of its sigma metrics (SMs) from quantitative Six Sigma benchmarks. Further, the paper proposes to evaluate the robustness of the quality processes of three companies and to compare them to that of the Six Sigma benchmark by subjecting these processes to the twin performance shocks of the benchmark Six Sigma 1.5σ allowance for process drift and a 25 percent tightening of customer requirements.

Design/methodology/approach

Using a novel methodology better suited to the critical quality characteristics of typical service industry companies, the paper computes a set of SMs for each company that is richer and broader than the metrics found in standard Six Sigma tables. This new methodology is based on the defect rates empirically observed in a service process. Further, based on the available empirical data, the paper compares these metrics to the Six Sigma benchmarks.
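The abstract does not give the authors' exact computation, but the conventional conversion from an empirically observed defect rate to a sigma metric, including the 1.5σ allowance for process drift mentioned in the purpose above, can be sketched as follows (a standard-convention sketch, not the authors' method):

```python
from statistics import NormalDist

def sigma_metric(defects: int, opportunities: int, shift: float = 1.5) -> float:
    """Convert an observed defect rate into a long-term sigma metric.

    Follows the standard convention of adding a 1.5-sigma allowance for
    process drift to the short-term z-value implied by the defect yield.
    """
    dpmo = defects / opportunities * 1_000_000
    yield_fraction = 1 - dpmo / 1_000_000
    z_short_term = NormalDist().inv_cdf(yield_fraction)
    return z_short_term + shift
```

For example, the canonical 66,807 defects per million opportunities corresponds to a 3-sigma process under this convention, and 3.4 DPMO to roughly 6 sigma.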

Findings

First, the paper shows that it is possible to compute a broad array of Six Sigma metrics for service businesses based on defect rate data. Second, the results confirm the central proposition of the research: the divergence/convergence of the qualitative characteristics of a company's quality system from benchmark Six Sigma policies and practices mirrors and anticipates the convergence/divergence of the company's quality metrics from the Six Sigma benchmark. Third, the research produced the unanticipated result that the quantitative quality performance of high-performing service businesses on the Six Sigma metrics is much lower than anticipated and below what is normally achieved by their manufacturing counterparts. The results were also used to evaluate the Taguchi robustness of service processes.

Originality/value

First, the paper demonstrates that the traditional Six Sigma computational methodology for generating Six Sigma metrics, prevalent in manufacturing, applies equally to service businesses. Second, the parallel convergence of the qualitative characteristics of a company's quality system towards Six Sigma practices and of its quantitative metrics towards the Six Sigma benchmark means that primacy must be given to quality practices as the drivers of quality improvement. Third, the fact that high-performing service businesses achieve Six Sigma measures so low compared to their manufacturing counterparts points either to key measurement challenges in deploying Six Sigma in service industries or to the need to further adapt the Six Sigma methodology to make it more applicable to these businesses.

Details

International Journal of Lean Six Sigma, vol. 1 no. 4
Type: Research Article
ISSN: 2040-4166


Article
Publication date: 4 April 2016

Maria Torres Vega, Vittorio Sguazzo, Decebal Constantin Mocanu and Antonio Liotta


Abstract

Purpose

The Video Quality Metric (VQM) is one of the most used objective methods to assess video quality, because of its high correlation with the human visual system (HVS). VQM is, however, not viable in real-time deployments such as mobile streaming, not only due to its high computational demands but also because, as a Full Reference (FR) metric, it requires both the original video and its impaired counterpart. In contrast, No Reference (NR) objective algorithms operate directly on the impaired video and are considerably faster but lose out in accuracy. The purpose of this paper is to study how differently NR metrics perform in the presence of network impairments.

Design/methodology/approach

The authors assess eight NR metrics, alongside a lightweight FR metric, using VQM as benchmark in a self-developed network-impaired video data set. This paper covers a range of methods, a diverse set of video types and encoding conditions and a variety of network impairment test-cases.
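The abstract does not name the lightweight FR metric used alongside the NR metrics; peak signal-to-noise ratio (PSNR) is the canonical example of such a metric, and its computation can be sketched as follows (an illustrative sketch, not necessarily the authors' choice):

```python
from math import log10

def psnr(reference, impaired, max_val=255):
    """Peak signal-to-noise ratio: a lightweight full-reference metric that
    compares an impaired frame against the original, pixel by pixel."""
    mse = sum((r - i) ** 2 for r, i in zip(reference, impaired)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10 * log10(max_val ** 2 / mse)
```

A full-reference metric like this needs both signals; the NR metrics studied in the paper must estimate quality from the impaired stream alone.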

Findings

The authors show the extent to which packet loss affects different video types, correlating the accuracy of NR metrics to the FR benchmark. This paper helps identify the conditions under which simple metrics may be used effectively and indicates an avenue for controlling the quality of streaming systems.

Originality/value

Most studies in the literature have focused on assessing streams that are either unaffected by the network (e.g. looking at the effects of video compression algorithms) or are affected by synthetic network impairments (i.e. via simulated network conditions). The authors show that when streams are affected by real network conditions, assessing Quality of Experience becomes even harder, as the existing metrics perform poorly.

Details

International Journal of Pervasive Computing and Communications, vol. 12 no. 1
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 3 February 2017

Wiem Khlif, Hanêne Ben-Abdallah and Nourchène Elleuch Ben Ayed


Abstract

Purpose

Restructuring a business process (BP) model may enhance the BP performance and improve its understandability. Restructuring methods proposed so far use either refactoring, which focuses on structural aspects; social network discovery, which uses semantic information to guide the affiliation process during its analysis; or social network rediscovery, which uses structural information to identify clusters of actors according to their relationships. The purpose of this paper is to propose a hybrid method that exploits both the semantic and structural aspects of a BP model.

Design/methodology/approach

The proposed method first generates a social network from the BP model. Second, it applies hierarchical clustering to determine the performers’ partitions; this step uses the social context which specifies features related to performers, and two new distances that account for semantic and structural information. Finally, it applies a set of behavioral and organizational restructuring rules adapted from the graph optimization domain; each rule uses the identified performers’ partitions and the business context to reduce particular quality metrics.
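The paper's two distances and clustering procedure are not specified in the abstract; a minimal single-linkage agglomerative clustering sketch, with a hypothetical distance that blends a semantic and a structural component, illustrates the general shape of this step:

```python
def blended_distance(semantic_d, structural_d, alpha=0.5):
    """Hypothetical weighted blend of a semantic and a structural distance;
    the paper defines its own two distances."""
    return lambda a, b: alpha * semantic_d(a, b) + (1 - alpha) * structural_d(a, b)

def single_linkage(items, dist, threshold):
    """Minimal agglomerative (single-linkage) clustering: repeatedly merge
    the two closest clusters until no pair is closer than `threshold`."""
    clusters = [[it] for it in items]
    while len(clusters) > 1:
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        i, j = min(pairs, key=lambda ij: min(
            dist(x, y) for x in clusters[ij[0]] for y in clusters[ij[1]]))
        if min(dist(x, y) for x in clusters[i] for y in clusters[j]) > threshold:
            break  # no remaining pair is close enough to merge
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters
```

With performers reduced to feature vectors, such a procedure yields the performers' partitions that the restructuring rules then consume.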

Findings

The efficiency of the proposed method is illustrated through well-established complexity metrics, via a tool that fully supports the method and proposes a strategy for applying the restructuring rules.

Originality/value

The proposed method has the merit of combining the semantic and structural aspects of a Business Process Modeling Notation model to identify restructuring operations whose ordered application reduces the complexity of the initial model.

Details

Business Process Management Journal, vol. 23 no. 1
Type: Research Article
ISSN: 1463-7154


Article
Publication date: 6 May 2014

Kevin M. Taaffe, Robert William Allen and Lindsey Grigg


Abstract

Purpose

Performance measurements, or metrics, measure a company's performance and behavior and are used to help an organization achieve and maintain success. Without performance metrics, it is difficult to know whether or not the firm is meeting requirements or making desired improvements. During the course of this study with Lockheed Martin, the research team was tasked with determining the effectiveness of the site's existing performance metrics. The paper aims to discuss these issues.

Design/methodology/approach

Research indicates that there are five key elements that influence the success of a performance metric. A standardized method of determining whether or not a metric has the right mix of these elements was created in the form of a metrics scorecard.

Findings

The scorecard survey was successful in revealing good metric use, as well as problematic metrics. In the quality department, the Document Rejects metric has been reworked and is no longer within the executive's metric deck. It was also recommended to add root cause analysis, and to quantify and track the cost of non-conformance and the overall cost of quality. In total, the number of site wide metrics has decreased from 75 to 50 metrics. The 50 remaining metrics are undergoing a continuous improvement process in conjunction with the use of the metric scorecard tool developed in this research.

Research limitations/implications

The metrics scorecard should be used site-wide for an assessment of all metrics. The focus of this paper is on the metrics within the quality department.

Practical implications

Putting a quick and efficient metrics assessment technique in place was critical. With the leadership and participation of Lockheed Martin, this goal was accomplished.

Originality/value

This paper presents the process of metrics evaluation and the issues that were encountered during the process, including insights that would not have been easily documented without this mechanism. Lockheed Martin Company has used results from this research. Other industries could also apply the methods proposed here.

Details

Journal of Quality in Maintenance Engineering, vol. 20 no. 2
Type: Research Article
ISSN: 1355-2511


Article
Publication date: 4 October 2011

Lewlyn L.R. Rodrigues, Gopalakrishna Barkur, K.V.M. Varambally and Farahnaz Golrooy Motlagh


Abstract

Purpose

The choice between the SERVQUAL and SERVPERF metrics for service quality measurement is subjective, and the research literature lacks evidence on whether these instruments differ significantly in their outcomes or concur with each other. Hence, the purpose of this paper is to empirically investigate the concurrence or difference of the two instruments.

Design/methodology/approach

The research is qualitative (meta-analysis of service quality literature) and quantitative (application of standard statistical procedures to test hypotheses). A pilot test with 35 students was conducted, followed by a stratified random sample of 84 students each for SERVQUAL and SERVPERF. Data collection was through a self-administered questionnaire.
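The two instruments differ in how item scores are formed: SERVQUAL scores each item as the gap between perception and expectation ratings, while SERVPERF uses the perception ratings alone. A minimal sketch (the item ratings below are illustrative, not the study's questionnaire data):

```python
def servqual_scores(expectations, perceptions):
    """SERVQUAL item score = perception rating minus expectation rating."""
    return [p - e for e, p in zip(expectations, perceptions)]

def servperf_scores(perceptions):
    """SERVPERF uses the perception ratings directly."""
    return list(perceptions)

# Hypothetical 7-point Likert ratings for three items of one dimension.
expect = [6, 7, 5]
perceive = [5, 6, 6]
gaps = servqual_scores(expect, perceive)  # negative gap = unmet expectation
```

Averaging these item scores per dimension (tangibles, reliability, responsiveness, assurance, empathy) gives the dimension-level results the paper compares.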

Findings

The empirical study shows that there is a significant difference in the outcomes of the two metrics. The implications of the study are based on the combined use of the two instruments. The research identified that tangibles and reliability are the two dimensions of highest service quality satisfaction, whereas empathy and assurance are the dimensions of least satisfaction in the higher education sector.

Research limitations/implications

Even though the sample size is adequate, the study outcome cannot be generalized completely as it is based on a research focused on a specific service.

Practical implications

The paper gives a methodical approach to apply both SERVQUAL and SERVPERF metrics and draw implications on the combined basis. The strengths and weaknesses thus identified would facilitate the service providers in implementing total quality management.

Social implications

Social responsibility is a key issue to be addressed by higher educational institutes, and the implications of this research contribute to strengthening it.

Originality/value

Research inferences are based on the primary data obtained from service receivers of higher education and the inferences would add value to the body of knowledge of service quality literature, as the two most prominent instruments of service quality are empirically investigated for concurrence.

Article
Publication date: 10 August 2021

Dan Wu, Hao Xu, Wang Yongyi and Huining Zhu


Abstract

Purpose

Currently, countries worldwide are struggling with the COVID-19 virus and the severe outbreak it brings. To better benefit from open government health data in the fight against this pandemic, this study developed a framework for assessing open government health data at the dataset level, providing a tool to evaluate the quality and usability of current open government health data on COVID-19.

Design/methodology/approach

Based on the review of the existing quality evaluation methods of open government data, the evaluation metrics and their weights were determined by 15 experts in health through the Delphi method and analytic hierarchy process. The authors tested the framework's applicability using open government health data related to COVID-19 in the US, EU and China.
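The abstract does not give the experts' pairwise judgments; the following sketches the standard analytic hierarchy process (AHP) step of deriving metric weights from a pairwise comparison matrix via power iteration (the matrix below is hypothetical):

```python
def ahp_weights(pairwise, iters=100):
    """Approximate AHP priority weights as the principal eigenvector of a
    pairwise comparison matrix, computed by power iteration."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [x / total for x in w_new]  # renormalize so weights sum to 1
    return w

# Hypothetical judgments for three metrics: completeness is moderately
# preferred to timeliness (3) and strongly preferred to metadata (5).
matrix = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
weights = ahp_weights(matrix)
```

In the study, weights like these (elicited from 15 health experts via the Delphi method) score each dataset on the evaluation metrics.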

Findings

The results of the test capture the quality differences in current open government health data. At present, open government health data in the US, EU and China lack the necessary metadata. In addition, the number, richness of content and timeliness of open datasets need to be improved.

Originality/value

Unlike the existing open government data quality measurement, this study proposes a more targeted open government data quality evaluation framework that measures open government health data quality on a range of data quality dimensions with a fine-grained measurement approach. This provides a tool for accurate assessment of public health data for correct decision-making and assessment during a pandemic.

Article
Publication date: 13 April 2021

Vincent Barre, David Orlando Ramos, Charles Medovich, Gabriela Lovera and Matthew Hoch


Abstract

Purpose

The paper provides insight on the customer experience through product performance (CxPP) initiative, which was developed by Johnson & Johnson Vision to monitor and conduct product performance improvements to enhance the customer experience and protect sales. The piece explains the basic tenets of CxPP execution and upkeep, as well as the methods used to create, evaluate and monitor the CxPP initiative, while illustrating the ways in which the initiative functions and adds value to any firm implementing it.

Design/methodology/approach

The paper takes a descriptive approach, using the define, measure, analyze, improve and control (DMAIC) framework to explain the basic tenets of the CxPP initiative. Each section describes internal processes and research to further explain and justify implementation of the CxPP initiative across firms.

Findings

According to JJV Quality Assurance experts, the CxPP initiative is a long-term approach that supports synergy across departments to enhance product quality, improve customer satisfaction and protect sales. By implementing the CxPP approach, JJV was able to uncover and solve four distinct defect categories that affect product quality and customer experience, thus demonstrating the importance and benefits of the CxPP initiative for any organization.

Research limitations/implications

Due to the chosen research approach, the study lacks specificity. As a result, it is recommended that future implementation of the proposed initiative opts for more testable propositions.

Practical implications

Due to competitive considerations, note that no empirical data will be shared in the findings. The principles of this approach should scale universally, but the execution (types of projects, types of customer needs and feedback) should be specific to each environment.

Originality/value

This paper fulfills an identified need to study the relationship between product quality, customer satisfaction and sales.

Details

The TQM Journal, vol. 33 no. 8
Type: Research Article
ISSN: 1754-2731


Article
Publication date: 29 November 2018

Jose Luis Ortega


Abstract

Purpose

The purpose of this paper is to analyse the metrics provided by Publons about the scoring of publications and their relationship with impact measurements (bibliometric and altmetric indicators).

Design/methodology/approach

In January 2018, 45,819 research articles were extracted from Publons, including all their metrics (scores, numbers of pre- and post-publication reviews, reviewers, etc.). Using the DOI identifier, metrics were gathered from other altmetric providers to compare the scores of those publications in Publons with their bibliometric and altmetric impact in PlumX, Altmetric.com and Crossref Event Data.

Findings

The results show that: there are important biases in the coverage of Publons according to disciplines and publishers; metrics from Publons present several problems as research evaluation indicators; and correlations between bibliometric and altmetric counts and the Publons metrics are very weak (r<0.2) and not significant.
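The weak correlations reported (r < 0.2) are Pearson-style coefficients between two metric series; the computation can be sketched as follows (the data would be per-article Publons scores paired with bibliometric or altmetric counts; the example values in the test are illustrative):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired metric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Values near 0, as found in the study, indicate that Publons scores carry little of the signal in citation or altmetric counts.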

Originality/value

This is the first study about the Publons metrics at article level and their relationship with other quantitative measures such as bibliometric and altmetric indicators.

Details

Aslib Journal of Information Management, vol. 71 no. 1
Type: Research Article
ISSN: 2050-3806

