Search results
Results 1–10 of 319

Thomas Salzberger, Marko Sarstedt and Adamantios Diamantopoulos
Abstract
Purpose
This paper aims to critically comment on Rossiter’s “How to use C-OAR-SE to design optimal standard measures” in the current issue of EJM and to provide a broader perspective on Rossiter’s C-OAR-SE framework and on measurement practice in marketing in general.
Design/methodology/approach
The paper is conceptual, based on interpretation of measurement theory.
Findings
The paper shows that, at best, Rossiter’s mathematical dismissal of convergent validity applies to the completely hypothetical (and highly unlikely) situation in which a perfect measure without any error is available. Further considerations cast serious doubt on the appropriateness of Rossiter’s concrete object, dual subattribute-based single-item measures. Because it is immunized against any piece of empirical evidence, C-OAR-SE cannot be considered a scientific theory and is bound to perpetuate, if not aggravate, the fundamental flaws in current measurement practice. While C-OAR-SE indeed helps generate more content-valid instruments, the procedure offers no insight into whether these instruments work properly in research and practice.
Practical implications
This paper concludes that great caution needs to be exercised before adapting measurement instruments based on the C-OAR-SE procedure, and statistical evidence remains essential for validity assessment.
Originality/value
This paper identifies several serious conceptual and operational problems in Rossiter’s C-OAR-SE procedure and discusses how to align measurement in the social sciences to be compatible with the definition of measurement in the physical sciences.
Abstract
Purpose
This paper aims to respond to claims by Collier and Bienstock and Rossiter that reflective measurement is wrong for internet retailing service quality (IRSQ). The research empirically assesses Rossiter's proposal that the C‐OAR‐SE procedure for index development will generate a more valid way to measure IRSQ than is otherwise available.
Design/methodology/approach
C‐OAR‐SE is used to develop a formative IRSQ index. The index is administered to internet shoppers in an online survey. The index is compared with an existing IRSQ scale in terms of content, parsimony, measurement scores and criterion validity.
Findings
The scale and index display parity in content, parsimony and measurement scores, while the scale shows higher criterion validity. The results contradict Rossiter's claims and foster doubt regarding the usefulness of C‐OAR‐SE's formative measurement procedures.
Research limitations/implications
IRSQ can be conceptualised as reflective or formative, but C‐OAR‐SE does not necessarily generate a better way to measure the construct. Furthermore, implementing C‐OAR‐SE unearths problems with the procedure.
Practical implications
Multiple variations of IRSQ exist, as well as multiple views on how to measure the variations and differing degrees to which the variations are actually measured. Crucially, the situation is not as bleak as Collier and Bienstock or Rossiter suggest: the literature does offer sound, valid IRSQ measurement scales.
Originality/value
The paper resolves unwarranted criticisms of IRSQ scales, highlights the limitations of some scales, offers the first complete example of using C‐OAR‐SE to develop a new index and lends applied support to theoretical criticisms of C‐OAR‐SE.
Abstract
Purpose
New measures in marketing are invariably created by using a psychometric approach based on Churchill's “scale development” procedure. This paper aims to compare and contrast Churchill's procedure with Rossiter's content‐validity approach to measurement, called C‐OAR‐SE.
Design/methodology/approach
The comparison of the two procedures is by rational argument and forms the theoretical first half of the paper. In the applied second half of the paper, three recent articles from the Journal of Marketing (JM) that introduce new constructs and measures are criticized and corrected from the C‐OAR‐SE perspective.
Findings
The C‐OAR‐SE method differs from Churchill's method by arguing for: total emphasis on achieving high content validity of the item(s) and answer scale – without which nothing else matters; use of single‐item measures for “basic” constructs and for the first‐order components of “abstract” constructs; abandonment of the “reflective” measurement model, along with its associated statistical techniques of factor analysis and coefficient alpha, arguing that all abstract constructs must be measured as “formative”; and abandonment of external validation methods, notably multitrait‐multimethod analysis (MTMM) and structural equation modeling (SEM), to be replaced by internal content‐validation of the measure itself. The C‐OAR‐SE method can be applied – as demonstrated in the last part of the article – by any verbally intelligent researcher. However, less confident researchers may need to seek the assistance of one or two colleagues who fully understand the new method.
Practical implications
If a measure is not highly content‐valid to begin with – and none of the new measures in the JM articles criticized is highly content‐valid – then no subsequent psychometric properties can save it. Highly content‐valid measures are absolutely necessary for proper tests of theories and hypotheses, and for obtaining trustworthy findings in marketing.
Originality/value
C‐OAR‐SE is completely original and Rossiter's updated version should be followed. C‐OAR‐SE is leading the necessary marketing measurement revolution.
Abstract
Purpose
This paper aims to offer a comment on Rossiter’s C-OAR-SE article and suggests future developments for the practical application of C-OAR-SE that would promote its usage and acceptance among marketing scholars.
Design/methodology/approach
This is a theoretical paper.
Findings
The paper identifies challenging steps in the practical application of C-OAR-SE and suggests that these can be overcome by developing detailed guidelines.
Research limitations/implications
Improving marketing measurement practice is of importance to marketing scholars.
Originality/value
The paper suggests how measure development could be structured in a manner that would reduce the subjective element of content validity assessment.
Nick Lee and John Cadogan
Abstract
Purpose
This paper provides a balanced commentary on Rossiter’s paper “How to use C-OAR-SE to design optimal standard measures” in this issue of the “European Journal of Marketing”. It also relates the comments in general to Rossiter’s other C-OAR-SE work and throws light on a number of key measurement issues that seem under-appreciated at present in marketing and business research.
Design/methodology/approach
The authors use conceptual argument based on measurement theory and philosophy of science.
Findings
The authors find that Rossiter’s work makes a number of important points that are necessary in the current stage of development of marketing and social science. However, the authors also find that many of these points are also well made by fundamental measurement theories. When measurement theory is correctly interpreted, the idea of multiple measures of the same thing is not problematic. However, they show that existing social science measurement practice rarely takes account of the important issues at play here.
Practical implications
The authors show that marketing, management and social science researchers need to get better in terms of their appreciation of measurement theory and in their practices of measurement.
Originality/value
The authors identify a number of areas where marketing and social science measurement can be improved, taking account of the important aspects of C-OAR-SE and incorporating them in good practice, without needlessly avoiding existing good practices.
Abstract
Purpose
This paper aims to extend Rossiter’s C-OAR-SE method of measure design (IJRM, 2002, Vol. 19, No. 4, pp. 305-335; EJM, 2011, Vol. 45, No. 11/12, pp. 1561-1588) by proposing five distinct construct models for designing optimally content-valid multiple-item and single-item measures.
Design/methodology/approach
The paper begins by dismissing convergent validation, the core procedure in Nunnally’s (1978) and Churchill’s (1979) psychometric method of measure design, which allows alternative measures of the same construct. The method of dismissal is a mathematical demonstration that an alternative measure, no matter how highly its scores converge with those from the original measure, will inevitably produce different findings. The only solution to this knowledge-threatening problem is to agree on an optimal measure of each of our major constructs and to use only that measure in all future research, as is standard practice in the physical sciences. The paper concludes by proposing an extension of Rossiter’s C-OAR-SE method to design optimal standard measures of judgment constructs, the most prevalent type of construct in marketing.
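The convergence argument summarized above can be illustrated with a small, self-contained simulation. This sketch is ours, not Rossiter's own demonstration; the variable names, sample size and error variances are all assumptions chosen for illustration. Two imperfect measures of one construct correlate strongly (the "convergent validity" evidence), yet substituting one for the other in the same regression produces numerically different findings:

```python
import random
import statistics

random.seed(42)
n = 300

# Latent construct scores (unobservable in practice; assumed here for the simulation)
true_score = [random.gauss(0, 1) for _ in range(n)]

# Two "convergent" measures of the same construct, each with independent error
measure_a = [t + random.gauss(0, 0.5) for t in true_score]
measure_b = [t + random.gauss(0, 0.5) for t in true_score]

# An outcome driven by the latent construct
outcome = [0.6 * t + random.gauss(0, 1) for t in true_score]

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (statistics.stdev(x) * statistics.stdev(y))

def slope(x, y):
    """OLS slope from regressing y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# High convergence between the two measures...
print(f"convergent r(A, B)    = {corr(measure_a, measure_b):.3f}")
# ...but the same analysis run with each measure gives different estimates
print(f"slope of outcome on A = {slope(measure_a, outcome):.3f}")
print(f"slope of outcome on B = {slope(measure_b, outcome):.3f}")
```

Only the standard library is used. The smaller the sample, the further the two slope estimates drift apart despite the high correlation between measures, which is one way to read the replication concern raised under Practical implications.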
Findings
The findings are, first, the mathematical dismissal of the accepted practice of convergent validation of alternative measures of the same construct, which paves the way for, second, the proposal of five new C-OAR-SE-based construct models for designing optimal standard measures of judgment constructs, three of which require a multiple-item measure and two of which require a single-item measure.
Practical implications
The common practice of accepting alternative measures of the same construct causes major problems for the social sciences: when different measures are used, it becomes impossible, except by remote chance, to replicate findings; meta-analyses become meaningless because the findings are averaged over different measures; and empirical generalizations cannot be trusted when measures are changed. These problems mean that we cannot continue to accept alternative measures of the constructs and that, for each construct, an optimal standard measure must be found.
Originality/value
The ideas in this paper, which have untold value for the future of marketing as a legitimate science, are unique to Rossiter’s C-OAR-SE method of measure design.
Edward E. Rigdon, Kristopher J. Preacher, Nick Lee, Roy D. Howell, George R. Franke and Denny Borsboom
Abstract
Purpose
This paper aims to respond to John Rossiter's call for a “Marketing measurement revolution” in the current issue of EJM, as well as providing broader comment on Rossiter's C‐OAR‐SE framework, and measurement practice in marketing in general.
Design/methodology/approach
The paper is purely theoretical, based on interpretation of measurement theory.
Findings
The authors find that much of Rossiter's diagnosis of the problems facing measurement practice in marketing and social science is highly relevant. However, the authors find themselves opposed to the revolution advocated by Rossiter.
Research limitations/implications
The paper presents a comment based on interpretation of measurement theory and observation of practices in marketing and social science. As such, the interpretation is itself open to disagreement.
Practical implications
There are implications for those outside academia who wish to use measures derived from academic work as well as to derive their own measures of key marketing and other social variables.
Originality/value
This paper is one of the few to explicitly respond to the C‐OAR‐SE framework proposed by Rossiter, and presents a number of points critical to good measurement theory and practice, which appear to remain underdeveloped in marketing and social science.
Karsten Hadwich, Dominik Georgi, Sven Tuzovic, Julia Büttner and Manfred Bruhn
Abstract
Purpose
Health service quality is an important determinant for health service satisfaction and behavioral intentions. The purpose of this paper is to investigate requirements of e‐health services and to develop a measurement model to analyze the construct of “perceived e‐health service quality.”
Design/methodology/approach
The paper adapts Rossiter's C‐OAR‐SE procedure for scale development. The focal aspect is the "physician‐patient relationship", which forms the core dyad in healthcare service provision. Several in‐depth interviews were conducted in Switzerland, first with six patients (as raters) and then with two experts of the healthcare system (as judges). Based on the results and an extensive literature review, the classification of object and attributes is developed for this model.
Findings
The construct e‐health service quality can be described as an abstract formative object and is operationalized with 13 items: accessibility, competence, information, usability/user friendliness, security, system integration, trust, individualization, empathy, ethical conduct, degree of performance, reliability, and ability to respond.
Research limitations/implications
Limitations include the number of interviews with patients and experts as well as critical issues associated with C‐OAR‐SE. More empirical research is needed to confirm the quality indicators of e‐health services.
Practical implications
Health care providers can utilize the results for the evaluation of their service quality. Practitioners can use the hierarchical structure to measure service quality at different levels. The model provides a diagnostic tool to identify poor and/or excellent performance with regard to the e‐service delivery.
Originality/value
The paper contributes to knowledge with regard to the measurement of e‐health quality and improves the understanding of how customers evaluate the quality of e‐health services.
Abstract
Purpose
This paper aims to present a meta‐analysis to test the competing hypotheses that arise from structural contingency theory (SCT) and transaction cost analysis (TCA) about how environmental uncertainty affects the strategic marketing policy of forward vertical integration (VI).
Design/methodology/approach
The meta‐analysis focuses on directional hypothesis tests rather than effect sizes in a set of 84 tests drawn from 29 published studies. It employs the C‐OAR elements of Rossiter's C‐OAR‐SE framework to validate construct definitions and their measures within these studies.
Findings
Although the results confirm the TCA hypothesis that asset specificity favours forward VI, the results for the effect of environmental uncertainty on forward VI do not favour either of the two competing hypotheses, but instead show that methodological orientation, international context, and historical time period moderate effects across studies and tests.
Research limitations/implications
The findings imply that the relative ability of SCT and TCA to predict the effect of environmental uncertainty on forward VI in marketing may reflect real historical patterns of integration activity as a response to uncertain environments. In particular, in a new economy characterised by globalisation and rapid technological change, an increasing number of industries may face levels of uncertainty so high that they are increasingly less manageable through forward VI, a situation that broadly though tentatively favours the SCT prediction.
Originality/value
Since no single empirical study has explicitly tested the implied competing uncertainty hypotheses from SCT and TCA, and since published meta‐analyses of VI have only considered the TCA hypothesis and never the competing SCT hypothesis and have in every instance pooled studies of backward and forward VI, this study develops a meta‐analysis for this purpose.
David A. Gilliam and Kevin Voss
Abstract
Purpose
Latent constructs represent the building blocks of marketing theory. The purpose of this paper is to provide marketing researchers with a practical procedure for writing construct definitions.
Design/methodology/approach
The paper reviews important contributions to construct definition in the literature from marketing, management, psychology and the philosophy of science. The authors expound construct definition in both practical and theoretical spheres to motivate the proposed procedure.
Findings
A six‐step procedure for construct definition and redefinition in marketing is developed. The proposed procedure addresses important aspects of definitions including the level of abstraction, scope, nomological relationships, explanatory and predictive power, ambiguity, vagueness, and preventing construct proliferation.
Research limitations/implications
While techniques for developing measures have received a great deal of attention, those for the earlier step of construct definition have not. Researchers will benefit from more precise definitions through improved model specification, better measures, and more reliable determination of the direction of causality. The role of the individual researcher's linguistic skill in construct definition must still be determined.
Practical implications
Marketing practitioners can also use the procedure to define latent constructs for which they must develop measures.
Originality/value
The literature on construct definition is fragmentary, scattered across disciplines and occasionally even arcane. It is further often descriptive of what a good definition looks like rather than prescriptive of how a good definition can be developed. The six steps are simple, broadly applicable, based on both theory and practical experience, consist of relatively few discrete steps, and feed directly into the modern measure development paradigm in marketing.