Measurement in Marketing: Volume 19

Table of contents (10 chapters)
Abstract

Concepts equip the mind with thought, provide our theories with ideas, and assign variables for testing our hypotheses. Much of contemporary research deals with narrowly circumscribed concepts, termed simple concepts herein, which are the grist for much empirical inquiry in the field. In contrast to simple concepts, which exhibit a kind of unity, complex concepts are structures of simple concepts and, in certain instances, unveil meaning that goes beyond the simple concepts or their aggregation. When expressed in hylomorphic structures, complex concepts achieve unique ontological status and afford particular explanatory capabilities. We develop the philosophical foundation for hylomorphic structures and show how they are rooted in dispositions, dispositional causality, and various mind–body trade-offs. Examples are provided for this emerging perspective on "big concepts" or "big ideas."

Abstract

The assumption that a set of observed variables is a function of an underlying common factor plus some error has dominated measurement in marketing, and in the social sciences in general, for decades. This view of measurement comes with assumptions that are rarely discussed in research. In this article, we question the legitimacy of several of these assumptions, arguing that (1) the common factor model is rarely correct in the population, (2) the common factor does not correspond to the quantity the researcher intends to measure, and (3) the measurement error does not fully capture the uncertainty associated with measurement. Our discussion calls for a fundamental rethinking of measurement in the social sciences. Adopting an uncertainty-centric approach to measurement, which has become the norm in the physical sciences, offers a means to address the limitations of current measurement practice in marketing.
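
For readers unfamiliar with the model under discussion, the common factor model can be written in its standard form as follows (a generic textbook formulation, not notation taken from the chapter):

\[
x_i = \lambda_i \xi + \delta_i, \qquad i = 1, \ldots, p,
\]

where \(\xi\) is the common factor, \(\lambda_i\) the loading of item \(i\), and \(\delta_i\) the item-specific error, with \(E(\delta_i) = 0\) and \(\mathrm{Cov}(\xi, \delta_i) = 0\). Points (1)-(3) above question, respectively, whether this structure holds exactly in the population, whether \(\xi\) is the quantity the researcher intends to measure, and whether \(\delta_i\) exhausts the uncertainty of measurement.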

Abstract

This chapter draws from an understanding of measurement error to address practical issues that arise in measurement and research design in the day-to-day conduct of research. The topics include constructs and measurement error, the measure development process, and the indicators of measurement error. The discussion covers types of measurement error, types of measures, and common scenarios in conducting research, linking measurement to research design.
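
As background for the topics listed, the classical test theory decomposition that underlies most treatments of measurement error can be stated as follows (a textbook formulation rather than anything specific to this chapter):

\[
X = T + E, \qquad \rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E},
\]

where \(X\) is the observed score, \(T\) the true score, \(E\) random measurement error, and \(\rho_{XX'}\) the reliability of the measure; systematic error (e.g., method bias) is not captured by \(E\) and must instead be addressed through research design.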

Abstract

In business and management, cross-national and cross-cultural comparisons between countries have been a topic of interest for many decades. Not only do firms engage in business in different countries around the world, but populations within countries have also become more diversified over time, making cross-cultural comparisons within country boundaries increasingly relevant. In comparisons across cultural groups, measurement invariance (MI) is a prerequisite; in practice, however, MI is not always attained or even tested. Our study consists of three parts. First, we provide a bibliometric analysis of articles on cross-cultural and cross-national topics in marketing to provide insight into the connections between the articles and their main themes. Second, we code articles to assess whether researchers follow the recommended steps of the multigroup confirmatory factor analysis (MGCFA) approach. The results indicate that MI testing is part of the toolbox of many empirical researchers in marketing and that articles often report the level of invariance attained. Yet most studies find only partial invariance, meaning that some items are not comparable across the cultural groups studied. Researchers understand that MI is required, but they often ignore noninvariant items, which may decrease the validity of the cross-cultural comparisons made. Third, we analyze the dissemination of MI in the broader literature based on co-citations with Steenkamp and Baumgartner (1998), a widely cited article on MI in the field of marketing. We conclude by noting methodological developments in cross-cultural research that make it possible to address noninvariance and by offering suggestions to further advance our insight into cross-cultural differences and similarities.
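
For reference, the nested invariance hypotheses tested in the MGCFA approach can be sketched as follows (standard formulations; the notation is illustrative, not the chapter's own):

\[
x^{(g)} = \tau^{(g)} + \Lambda^{(g)} \xi^{(g)} + \delta^{(g)}, \qquad g = 1, \ldots, G,
\]

where configural invariance requires only that the pattern of fixed and free loadings in \(\Lambda^{(g)}\) be the same across groups, metric invariance additionally requires \(\Lambda^{(1)} = \cdots = \Lambda^{(G)}\), and scalar invariance further requires \(\tau^{(1)} = \cdots = \tau^{(G)}\). Partial invariance, the outcome most studies report, relaxes these equality constraints for individual noninvariant items.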

Abstract

Careless responders are respondents who lack the motivation to answer survey questions accurately. Empirical findings can be significantly distorted when some respondents devote insufficient effort to the survey task, and researchers therefore attempt to identify such respondents. Many measures of careless responding have been suggested in the literature, but researchers frequently struggle with the selection and appropriate use of the available methods. This chapter offers a classification of existing measures of careless responding along two dimensions and presents a conceptual discussion of their relative strengths and weaknesses. An empirical study demonstrates how the various measures can be used to identify careless responders and how these measures are related to each other.
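
As an illustration of one widely used index of careless responding, the sketch below computes a "longstring" score, the longest run of identical consecutive answers per respondent; the function name, data, and cutoff logic are hypothetical and not taken from the chapter.

import numpy as np

def longstring(responses):
    # Longest run of identical consecutive answers in one response vector.
    longest = run = 1
    for prev, curr in zip(responses, responses[1:]):
        run = run + 1 if curr == prev else 1
        longest = max(longest, run)
    return longest

# Hypothetical 5-point Likert data: rows = respondents, columns = items.
data = np.array([
    [3, 3, 3, 3, 3, 3, 3, 3],  # straight-lining: suspicious
    [4, 2, 5, 3, 1, 4, 2, 5],  # varied responding
])
print([longstring(row) for row in data])  # [8, 1]; flag values above a chosen cutoff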

Abstract

Identifying the dimensionality of a construct and selecting appropriate items for measuring the dimensions are important elements of marketing scale development. Scales for measuring marketing constructs such as service quality, brand equity, and market orientation have typically been developed using the influential classical test theory paradigm (Churchill, 1979) or some variant thereof. Users of the paradigm typically assume, albeit implicitly, that items and respondents are the only sources of variance and that respondents are the objects of measurement. Yet marketers need scales for other important managerial purposes, such as benchmarking, tracking, and perceptual mapping, each of which requires a scaling of objects other than respondents, such as products, brands, retail stores, websites, firms, advertisements, or social media content. Scales that are developed without such objects in mind might not perform as expected. Finn and Kayande (2005) proposed that a multivariate multiple-objective random effects methodology (referred to here as M-MORE) could be used to identify construct dimensionality and select appropriate items for multiple objects of measurement. This chapter applies M-MORE to multivariate generalizability theory data collected in the early 2000s to assess online retailer websites, identifying the dimensionality of website quality and selecting appropriate items for scaling it. The results are compared with those produced by traditional methods.
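
As background for the generalizability theory machinery involved, the random-effects decomposition for a fully crossed persons (p) × items (i) × objects (o) design can be written as follows (a textbook formulation, not the chapter's own notation):

\[
X_{pio} = \mu + \nu_p + \nu_i + \nu_o + \nu_{pi} + \nu_{po} + \nu_{io} + \nu_{pio,e},
\]

so that the total variance decomposes into \(\sigma^2_p + \sigma^2_i + \sigma^2_o + \sigma^2_{pi} + \sigma^2_{po} + \sigma^2_{io} + \sigma^2_{pio,e}\). Which components count as signal and which as noise depends on whether respondents or other objects (e.g., websites) are treated as the objects of measurement, which is the motivation for moving beyond the items-and-respondents-only assumption.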

Abstract

The idea that a significant portion of what consumers do, feel, and think is driven by automatic (or “implicit”) cognitive processes has sparked a wave of interest in the development of assessment tools that (attempt to) capture cognitive processes under automaticity conditions (also known as “implicit measures”). However, as more and more implicit measures are developed, it is becoming increasingly difficult for consumer scientists and marketing professionals to select the most appropriate tool for a specific research question. We therefore present a systematic overview of the criteria that can be used to evaluate and compare different implicit measures, including their structural characteristics, the extent to which (and the way in which) they qualify as “implicit,” as well as more practical considerations such as ease of implementation and the user experience of the respondents. As an example, we apply these criteria to four implicit measures that are (or have the potential to become) popular in marketing research (i.e., the implicit association test, the evaluative priming task, the affect misattribution procedure, and the propositional evaluation paradigm).
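
To give a concrete sense of how one such measure is scored, the sketch below computes a simplified D-score for the implicit association test (the latency difference between incompatible and compatible blocks, scaled by the pooled standard deviation); the data are hypothetical, and refinements from the published scoring algorithms, such as error penalties and latency trimming, are omitted.

import numpy as np

def iat_d_score(compatible_rt, incompatible_rt):
    # Simplified IAT D-score: mean latency difference scaled by the pooled SD.
    comp = np.asarray(compatible_rt, dtype=float)
    incomp = np.asarray(incompatible_rt, dtype=float)
    pooled_sd = np.concatenate([comp, incomp]).std(ddof=1)
    return (incomp.mean() - comp.mean()) / pooled_sd

# Hypothetical reaction times (ms) for one respondent.
compatible = [612, 588, 655, 601, 640, 597]
incompatible = [745, 802, 761, 790, 778, 755]
# A positive score indicates faster responding in the compatible pairing.
print(round(iat_d_score(compatible, incompatible), 2))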

DOI: 10.1108/S1548-6435202219
Publication date: 2022-09-12
Book series: Review of Marketing Research
Editors:
Series copyright holder: Emerald Publishing Limited
ISBN: 978-1-80043-631-2
eISBN: 978-1-80043-630-5
Book series ISSN: 1548-6435