Guest editorial

Victoria Crittenden (Babson College, Wellesley, Massachusetts, USA)
Marko Sarstedt (FWW, Fakultät für Wirtschaftswissenschaft, Otto-von-Guericke-Universität Magdeburg, Magdeburg, Germany)
Claudia Astrachan (Institute of Business and Regional Economics, Lucerne University of Applied Sciences and Arts, Lucerne, Switzerland)
Joe Hair (University of South Alabama, Mobile, Alabama, USA)
Carlos Eduardo Lourenço (Department of Marketing, Fundação Getulio Vargas Escola de Administração de Empresas de São Paulo, São Paulo, Brazil)

Journal of Product & Brand Management

ISSN: 1061-0421

Article publication date: 23 June 2020

Issue publication date: 23 June 2020


Citation

Crittenden, V., Sarstedt, M., Astrachan, C., Hair, J. and Lourenco, C.E. (2020), "Guest editorial", Journal of Product & Brand Management, Vol. 29 No. 4, pp. 409-414. https://doi.org/10.1108/JPBM-07-2020-009

Publisher

Emerald Publishing Limited

Copyright © 2020, Emerald Publishing Limited


Measurement and scaling methodologies

Introduction

Branding has been a core topic in marketing for decades (Böger et al., 2018), with the belief that a strong brand can lead to a competitive advantage (Keller, 2001). There are three key dimensions of a company’s brand portfolio strategy that impact competitive advantage:

  1. scope (i.e. number of brands and number of market segments);

  2. competition (i.e. extent of similar positioning of brands within the company’s portfolio); and

  3. positioning (i.e. quality and price perceptions among consumers) (Morgan and Rego, 2009).

In their review of research on branding and competitive advantage, Madden et al. (2006) found considerable published evidence suggesting a link between branding and the financial performance of the firm.

Brand assets, however, are difficult and expensive to develop and maintain (Aaker, 2004). Varadarajan et al. (2006) suggested that only a small number of brands in a firm’s brand portfolio have a large positive impact on the company’s image and reputation. Shah (2015) went so far as to suggest that marketing’s traditional 80–20 rule applies to brand portfolios: 20% of the brands in the portfolio contribute 80% of the profits. Thus, a brand portfolio manager will often be tasked with reallocating resources toward the brands with the greatest opportunity to prevent a loss of competitive advantage (Hill et al., 2005).

With at least half of a company’s advertising dollars spent on brand-related efforts (VisionEdge Marketing, 2020), and with the emergence of social media platforms and the growing sophistication and availability of digital data, it is imperative that marketers assess whether their traditional understanding of branding practices still fits the reality of the 21st-century marketplace. There is no doubt about the importance of developing and maintaining a superior brand; however, it is the measurement of the brand that ties brand investment to the financial performance of the firm.

Whitler and Regan (2019) offer four areas requiring brand measurement, namely:

  1. consumer knowledge;

  2. consumer perception;

  3. consumer behavior; and

  4. financial valuation.

Suggesting that chief marketing officers must master multiple brand measurement and evaluation methodologies, Chatterjee et al. (2018) added a sense of urgency to brand measurement, reminding marketers that branding never sleeps. In their review of brand management thinking, Veloutsou and Guzmán (2017) validated this evolution toward a measurement focus by noting the trend toward more novel data collection processes and rigorous methodological approaches [e.g. structural equation modeling (SEM)].

Aspiring to be at the forefront of conversations regarding brand management, this special issue responds to a call by Veloutsou and Guzmán (2017) to address brand-changing phenomena. In particular, this special issue contributes to the evolution of brand management by attempting to capture the current state of reality in terms of brand measurement. The next section provides an overview of measurement issues faced by marketers. We then introduce the current state of brand measurement and scaling represented by the articles in this special issue.

Brand management and scale development

The process for developing measurement scales has evolved over the past 50 years, largely as a result of the application of SEM. In particular, methods for developing, validating and adapting scales have improved considerably. Reliability assessment now emphasizes weighting the scale’s indicators using composite reliability, instead of assuming equally weighted indicators as with Cronbach’s alpha. In addition, scale validity, which traditionally was evaluated qualitatively based on face validity, now routinely requires quantitative metrics such as convergent and discriminant validity. These metrics not only improve upon traditional approaches such as the Fornell–Larcker criterion but have more recently been extended to the heterotrait–monotrait (HTMT) method with confidence intervals (Henseler et al., 2015; Franke and Sarstedt, 2019). At the same time, to increase variability in responses, many measurement approaches apply seven- or ten-point scales, and some even 100-point scales (Hair et al., 2019a; Sarstedt and Mooi, 2019).
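
A simple numerical contrast may help here. The sketch below computes both reliability statistics with plain NumPy; the three-item scale, the loadings and the simulated responses are illustrative assumptions, not values taken from any study cited in this editorial.

```python
import numpy as np

def cronbachs_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: treats all indicators as equally weighted.
    `items` is an (n_respondents, k_indicators) matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings) -> float:
    """Composite reliability (rho_c): weights indicators by their
    standardized loadings instead of treating them as interchangeable."""
    lam = np.asarray(loadings)
    squared_sum = lam.sum() ** 2
    error_var = (1 - lam ** 2).sum()
    return squared_sum / (squared_sum + error_var)

# Hypothetical five-point responses to a three-item brand image scale.
rng = np.random.default_rng(42)
trait = rng.normal(size=(200, 1))
items = np.clip(np.round(3 + trait + rng.normal(scale=0.8, size=(200, 3))), 1, 5)
print(f"Cronbach's alpha: {cronbachs_alpha(items):.3f}")
print(f"Composite reliability: {composite_reliability([0.82, 0.76, 0.88]):.3f}")
```

Because composite reliability weights each indicator by its loading, it diverges from alpha whenever loadings are heterogeneous, which is precisely the situation in which alpha’s equal-weighting assumption is least defensible.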

Scholars continue to develop and refine scale development approaches, and the following are some of the issues increasingly confronting measurement in marketing, particularly in branding research. Our comments are directly relevant for branding scale development, but also for scale development in general. We focus on three areas, namely:

  1. choosing a measurement model type;

  2. measurement models in the era of archival data; and

  3. confirming measurement models in branding research.

Choosing a measurement model type

To measure constructs such as brand equity, brand image or brand value, researchers have typically relied on common factor models. Common factor models assume that an indicator’s observed score stems from two sources: the construct itself and measurement error. When this assumption is met, the common factor model is able to separate these two sources of variance (Rhemtulla et al., 2020). A popular alternative to the common factor model is the composite model, which represents a construct as a linear combination of its indicators – either by using the means or sums of the indicators (Rigdon et al., 2019a) or by applying methods such as partial least squares structural equation modeling (PLS-SEM; Lohmöller, 1989) or generalized structured component analysis (GSCA; Hwang and Takane, 2004), which weight the indicators depending on their measurement error and their contribution in forming the measurement model (Hwang et al., 2019).
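
Schematically, the two measurement philosophies can be contrasted as follows; the notation (standardized loadings, indicator weights and error terms) is a generic rendering, not drawn from the works cited above.

```latex
% Common factor model: each observed indicator x_i reflects the
% latent construct \xi plus indicator-specific measurement error.
x_i = \lambda_i \xi + \varepsilon_i, \qquad i = 1, \dots, k
% Composite model: the construct proxy is instead formed as a
% weighted linear combination of its indicators.
\hat{\xi} = \sum_{i=1}^{k} w_i x_i
```

The direction of explanation is reversed: the common factor model explains the indicators from the construct, whereas the composite model builds the construct proxy from the indicators.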

The controversy as to whether common factors or composites are better suited for approximating theoretical concepts has occupied generations of researchers (Hair et al., 2020; Velicer and Jackson, 1990; Widaman, 1993). Despite the typically marginal differences between the two approaches (Sarstedt et al., 2016), the existing methodological literature almost uniformly assumes the common factor model is the “true” one, whereas the composite model is not (Henseler et al., 2015).

Recent research casts doubt, however, on this dichotomy of opinions. For example, Rhemtulla et al. (2020) observe that:

[…] there is a growing appreciation within some areas of psychology that the latent variable model may not be the right model to capture relations between many psychological constructs and their observed indicators.

Similarly, in their discussion of common factor- and composite-based structural equation modeling methods, Rigdon et al. (2017, pp. 6-7) note, “the universal rejection of one method over the other is shortsighted as such a step necessarily rests on assumptions about unknown entities in a model and the parameter estimation.” In short, “researchers’ functional background and adherence to a specific position in the philosophy of science contribute to the confusion over which method is ‘right’ and which one is ‘wrong’.”

Extending these conceptual considerations, Rigdon et al. (2019b) analytically demonstrate that the use of common factors creates a band of uncertainty in the relationship between the construct and any variable outside the model – including the conceptual variable the construct seeks to represent (Steiger, 1979). In other words, this band of uncertainty creates a validity gap between the concepts and their measurement (i.e. the construct). This uncertainty is particularly pronounced when using only a few indicators per construct, as is commonly the case in applications of common factor-based methods. Rigdon et al. (2020) argue that the increase in uncertainty has adverse consequences for the replicability of research findings and contributes significantly to the replication crisis witnessed in social science research (Camerer et al., 2018). These results do not imply that composite-based methods are preferred over common factor-based methods per se, but they certainly cast doubt on the universal applicability of the common factor model (Hair and Sarstedt, 2019).

Measurement models in the era of archival data

The emergence of big data is revolutionizing the types of measurement models relevant for scholarly research, including branding. Big data typically come in the form of digital archival data and are, therefore, measured formatively. This contrasts with measurement models in most previous survey-based scaling research, which were almost exclusively reflective. Covariance-based SEM (CB-SEM) was the preferred method for developing and confirming measurement models, but it is limited in its ability to assess formative measurement models (Hair and Sarstedt, 2019), because archival research data typically do not meet the rigorous psychometric standards required to achieve fit in a CB-SEM analysis. In contrast, PLS-SEM is appropriate for developing and confirming both reflective and formative measurement models.

As digital archival data become more prevalent, the application of composite-based SEM techniques such as PLS-SEM and GSCA to confirm formatively measured brand constructs will become more widespread (Hair et al., 2019a; Avkiran and Ringle, 2018). As one example, consider how the concept of brand engagement could be measured. Possible formative indicators for a brand engagement construct include the number of unique times individuals have clicked on, liked, commented on or shared posts during a specified period, such as the last seven days. Because PLS-SEM and GSCA can readily estimate and confirm formatively measured constructs, scholars will need to become more familiar with developing constructs measured using this approach.
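
A minimal sketch of this logic follows, assuming hypothetical archival counts and fixed illustrative weights; in PLS-SEM or GSCA, the weights would be estimated from the data rather than set a priori as they are here.

```python
import numpy as np

# Hypothetical archival counts per user over the last seven days. Each
# column is a formative indicator: together they define engagement, so
# dropping one would change the meaning of the construct.
indicator_names = ["clicks", "likes", "comments", "shares"]
X = np.array([
    [12, 5, 1, 0],
    [ 3, 8, 4, 2],
    [25, 1, 0, 0],
    [ 7, 6, 3, 5],
], dtype=float)

# Standardize so weights are comparable across differently scaled counts.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Illustrative indicator weights (assumed here, estimated in practice).
w = np.array([0.20, 0.25, 0.30, 0.25])

engagement = Z @ w  # one composite score per user
for user, score in enumerate(engagement):
    print(f"user {user}: brand engagement = {score:+.2f}")
```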

In addition, analyses of archival data typically follow an exploratory or prediction paradigm rather than a theory confirmation paradigm (Hair et al., 2019c). That is, unlike survey-based research, which is often used to confirm a well-developed theory, archival data applications are primarily used in exploratory research to propose causal relationships or to predict relevant outcome variables such as business performance. With their focus on prediction and their ability to assess a model’s predictive power, PLS-SEM and GSCA meet this requirement perfectly (Cho et al., 2019; Jöreskog and Wold, 1982; Shmueli et al., 2019). We expect the application of these methods to accelerate further.
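
The core idea of such predictive assessment can be illustrated with a simple holdout analysis: estimate the model on a training partition, predict unseen cases and compare the resulting error against a naive benchmark, much as the PLSpredict procedure does via cross-validation (Shmueli et al., 2019). The sketch below is a bare-bones NumPy illustration using simulated composite scores; it mirrors the logic only and is not the PLSpredict algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical composite scores for two exogenous brand constructs
# predicting an outcome such as purchase intention.
n = 300
X = rng.normal(size=(n, 2))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.7, size=n)

# Holdout split: estimate on training cases, evaluate on unseen ones.
train, test = np.arange(n) < 240, np.arange(n) >= 240
coef, *_ = np.linalg.lstsq(
    np.column_stack([np.ones(train.sum()), X[train]]), y[train], rcond=None
)
pred = np.column_stack([np.ones(test.sum()), X[test]]) @ coef

rmse_model = np.sqrt(np.mean((y[test] - pred) ** 2))
rmse_naive = np.sqrt(np.mean((y[test] - y[train].mean()) ** 2))

# The model shows predictive relevance if it beats the naive benchmark.
print(f"RMSE model: {rmse_model:.3f} vs naive benchmark: {rmse_naive:.3f}")
```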

Confirming measurement models in branding research

Development and confirmation of measurement models have a long and rich history. The idea of focusing on the quality of measurement models emerged more than a century ago with what has been referred to as classical test theory (Spearman, 1904). This process was further developed by other social scientists, as described by Hair et al. (2020). For many years, the most popular process for improving the quality of measurement models was exploratory factor analysis (EFA). In the early 1980s, with the emergence of CB-SEM, a more rigorous, theory-driven approach known as confirmatory factor analysis (CFA) was adopted.

Until recently, CFA was the primary approach used to develop and improve reflectively measured constructs based on the domain sampling model (Hair et al., 2019a). A recently proposed alternative approach that offers several advantages compared to CFA is confirmatory composite analysis (CCA; Hair et al., 2019a, Chapter 13; Henseler et al., 2015; Schuberth et al., 2018). CCA is a series of steps that can be executed with composite-based SEM methods such as PLS-SEM or GSCA. It can be used to confirm both reflective and formative measurement models of established measures that are being updated or adapted to a different context (Hair et al., 2020; Schuberth et al., 2018).

CCA differs from CFA in that its statistical objective is to maximize the variance extracted from the exogenous variables and, in doing so, to facilitate prediction and confirmation of the endogenous constructs. That is, CCA enables researchers to develop exogenous and endogenous measures (scales) within a nomological network. The method produces composite scores that are weighted sums of the indicators and can be used in follow-up analyses. The resulting composites are correlated, however, as they would be in an oblique rotation with an EFA, and they include variance that maximizes prediction of the endogenous constructs. Note that the composite correlations from the oblique rotation seldom result in problems with multicollinearity (Cassel et al., 1999).

Researchers have proposed different approaches for running a CCA. Schuberth et al.’s (2018) approach relies exclusively on tests of overall model fit and fit indices, similar to those typically used in CFA. The purpose is to test “whether an artifact is useful” in a model in which it is linked to at least one composite or one other variable (Schuberth et al., 2018, p. 12). Hair et al. (2020) instead argue that a CCA should follow the classical model evaluation procedure documented in prior research – as proposed in the context of, for example, PLS-SEM (Hair et al., 2017). That is, researchers should first assess the standard PLS measurement model criteria for item reliability, internal consistency reliability, convergent validity and discriminant validity. If these metrics meet the recommended guidelines, the next step is to assess nomological validity by estimating the relationships of the newly generated or refined construct(s) with other constructs in the nomological net. The third and final CCA step is to assess the predictive validity of the structural relationships. Different from Schuberth et al. (2018), in Hair et al.’s (2020) approach, model fit indices play no role, in light of conceptual concerns related to their applicability in a composite-based SEM context (Hair et al., 2019b).
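
As a minimal illustration of two of the step-one metrics in Hair et al.’s (2020) procedure, the sketch below computes the average variance extracted (AVE) for convergent validity and the HTMT ratio for discriminant validity. The construct names, loadings and simulated data are hypothetical; in practice, these statistics are produced by dedicated composite-based SEM software.

```python
import numpy as np

def ave(loadings) -> float:
    """Average variance extracted; convergent validity is commonly
    supported when AVE exceeds 0.50."""
    lam = np.asarray(loadings)
    return float(np.mean(lam ** 2))

def htmt(items_a: np.ndarray, items_b: np.ndarray) -> float:
    """Heterotrait-monotrait ratio: mean between-block correlation over
    the geometric mean of the mean within-block correlations
    (cf. Henseler et al., 2015); common thresholds are 0.85 or 0.90."""
    R = np.abs(np.corrcoef(np.column_stack([items_a, items_b]), rowvar=False))
    ka, kb = items_a.shape[1], items_b.shape[1]
    hetero = R[:ka, ka:].mean()
    mono_a = R[:ka, :ka][np.triu_indices(ka, k=1)].mean()
    mono_b = R[ka:, ka:][np.triu_indices(kb, k=1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)

# Simulated standardized item data for two related brand constructs.
rng = np.random.default_rng(1)
f1, f2 = rng.normal(size=(2, 500))
brand_trust = 0.8 * f1[:, None] + rng.normal(scale=0.6, size=(500, 3))
brand_love = 0.8 * (0.4 * f1 + 0.6 * f2)[:, None] + rng.normal(scale=0.6, size=(500, 3))

print(f"AVE (brand trust): {ave([0.78, 0.81, 0.74]):.2f}")
print(f"HTMT(trust, love): {htmt(brand_trust, brand_love):.2f}")
```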

To achieve measurement objectives in developing or adapting multi-item measures, researchers can use either CFA or CCA. However, the results differ, and researchers need to understand the implications of the different outcomes to make informed decisions. Both CCA and CFA can be used to improve item and scale reliability, to identify items that need to be revised or, in some instances, eliminated to preserve content validity, to facilitate achieving convergent and discriminant validity, and to remove error variance.

There are several benefits of CCA compared to CFA. First, the number of items retained to measure constructs is higher with CCA, thus improving construct validity. Second, construct scores are available from CCA, whereas they are indeterminate in a CFA. Third, CCA can be applied to develop or revise both reflective and formative constructs, while CFA can only be used to develop or revise reflective measurement models. Finally, CCA uses total variance to develop composite-based proxies of conceptual variables, while CFA includes only common variance when developing proxies (Rigdon et al., 2017). Consideration of these issues is important for the future of developing and confirming brand-related measurement models, and for marketing measurement models in general.

Measuring and assessing brand-related issues

Despite increasing interest in online brand advocacy (OBA) and the importance of online brand conversations, OBA’s conceptualization, dimensionality and measurement remain unclear, resulting in confusion about the concept. The first paper in this special issue, “OBA: the development of a multiple-item scale” by Wilk, Soutar and Harrigan, summarizes the procedures undertaken to develop and validate a practical and parsimonious, 18-item, four-dimensional OBA scale. The four dimensions of OBA that emerged were brand defense, brand positivity, brand information sharing and virtual positive expression. The criterion-related validity of the OBA measure was demonstrated by examining the OBA construct’s relationship with conceptually related variables (brand love, brand loyalty and intent to purchase). The three constructs were positively related to OBA, and the four OBA dimensions provided more information than was obtained with a single OBA item. This article illustrates how the commonly used Churchill (1979) scale development process can be adapted to develop a scale for a new construct in an online context. The proposed OBA scale will be useful in many research contexts and has useful managerial and research implications.

The next paper, “corporate social responsibility and business ethics: conceptualization, scale development and validation,” by Harrison, Hair, Ferrell and Ferrell, develops and empirically validates scales to measure consumer expectations of business ethics and corporate social responsibility, previously measured as a single construct. A large pool of scale items was generated through qualitative research. Initial item reduction was performed using a panel of experts, and further reduction was achieved with a follow-up quantitative assessment using EFA. The refined scales exhibited reliability, convergent validity, discriminant validity and external validity. Separating these scales into two components will facilitate more precise examination of consumer perceptions of these two components of product and brand images and a better understanding of how they may impact brand attitudes and brand trust.

“Cognitive and emotional resistance to innovations: concept and measurement” by Castro, Zambaldi and Ponchio proposes and tests a scale to measure two dimensions of active innovation resistance (AIR): cognitive active resistance and emotional active resistance. To test the proposed scale, three empirical studies were conducted. The reliability and validity of the AIR scale were assessed, including discriminant, convergent, nomological and criterion validity. In addition, the explanatory and predictive powers of the scale were examined. The addition of emotion as a component of AIR provides a more comprehensive understanding of brand adoption and rejection behavior, thereby expanding current knowledge of innovation-related new product adoption and branding decisions.

The fourth paper in this issue, “discriminant validity of the customer-based corporate reputation scale: some causes for concern” by Radomir and Moisescu, reviews the importance of assessing discriminant validity when evaluating measurement scales. Data on the customer-based corporate reputation scale, collected in two countries and two service industries, were analyzed to demonstrate the limitations of the Fornell–Larcker criterion for assessing discriminant validity compared to the more recently proposed HTMT ratio of correlations inference test. The findings show that the customer-based corporate reputation scale, in both its original and short form, lacks discriminant validity when using the HTMT-based inference test. In contrast, the discriminant validity of the five corporate reputation dimensions is generally supported when using the more liberal Fornell–Larcker criterion. Thus, future studies employing the customer-based corporate reputation scale and similar branding scales should rely on the more stringent HTMT criterion to ensure discriminant validity.

“Brand love measurement scale development: an inter-cultural analysis” by Pontinha and Coelho do Vale proposes an integrative and updated framework for the analysis of brand love. A new brand love measurement scale is developed that extends the conceptual framework for brand love across cultures and brands. EFA, CFA and (multi-group) structural equation modeling techniques were applied to assess the proposed model. The findings confirm that brand love is the result of a dynamic interaction among five complex, integrated emotional dimensions, which jointly form the brand love experience. The findings are relevant for both scholars and practitioners working on global brand understanding and management.

The sixth paper appearing in this issue is “consumer engagement in social media: scale comparison analysis” by Ferreira, Zambaldi and de Sousa Guerra. In this paper, the authors propose a procedure for the selection, standardization and comparison of consumer engagement scales. The research considers classical test and item response theories and examines 233 previously published studies. Guidelines are then provided that demonstrate the advantages, limitations and recommended applications of various consumer engagement scales.

In their research for “evaluation of brand relationship quality using formative index: a novel measurement approach,” Adhikari and Panda develop a parsimonious and robust formative index for evaluating and measuring the brand relationship quality of automobile brands. The findings demonstrate that the six indicators of the automobile brand relationship quality (ABRQ) index capture the conceptual domain of brand relationship quality. The ABRQ index can assist brand managers and academicians in benchmarking studies and market strategy formulation, and it extends the limited literature on brand relationship quality.

Wrapping up this special issue, “a history of brand misdefinition – with corresponding implications for mismeasurement and incoherent brand theory” by Gaski focuses on the longstanding problems of definition and conceptualization associated with the word “brand.” Several concerns and their troublesome implications are discussed, and potential corrective actions are proposed. Conceptual and semantic issues surrounding the word “brand,” as well as theoretical and practical difficulties resulting from its use and sometimes outright misuse, are exposed, and alternatives for resolving the confusing and even dysfunctional brand nomenclature are summarized. Overall, Gaski’s focus in this article is on strengthening the conceptual underpinnings of branding, something that is critical to what we do in brand measurement. Thus, we thought it apropos for Gaski to wrap up this rather intense set of brand measurement articles selected for the special issue.

As is evident from this brief overview of the first seven articles in this special issue, measuring and assessing issues within brand management is a thriving and ongoing effort among marketing scholars. At the same time, however, Gaski in his wrap-up article reminds all of us of the perils of not understanding and capturing the true nature of the brand construct. We are confident that the articles in this special issue will trigger significant interest in brand measurement and inspire exciting follow-up research.

We would like to thank the Editors of Journal of Product and Brand Management, Francisco Guzmán and Cleopatra A. Veloutsou, for giving us the opportunity to edit this special issue. It was a long and arduous process for everyone, including the authors who stuck with us through numerous rounds of revisions. Importantly, we would like to thank the many reviewers, without whom this special issue would not have been possible. Many scholars had to work together to enable what we think is a powerful contribution to understanding brand measurement.

References

Aaker, D.A. (2004), “Leveraging the corporate brand”, California Management Review, Vol. 46 No. 3, pp. 6-18.

Avkiran, N.K. and Ringle, C.M. (Eds) (2018), Partial Least Squares Structural Equation Modeling: Recent Advances in Banking and Finance, Springer, Cham.

Böger, D., Kottemann, P. and Decker, R. (2018), “Parent brands’ influence on co-brand’s perception: a model-based approach”, Journal of Product & Brand Management, Vol. 27 No. 5, pp. 514-522.

Camerer, C.F., Dreber, A., Holzmeister, F., Ho, T.H., Huber, J., Johannesson, M., Kirchler, M., Nave, G., Nosek, B.A., Pfeiffer, T., Altmejd, A., Buttrick, N., Chan, T., Chen, Y., Forsell, E., Gampa, A., Heikensten, E., Hummer, L., Imai, T., Isaksson, S., Manfredi, D., Wagenmakers, E.J. and Wu, H. (2018), “Evaluating the replicability of social science experiments in nature and science between 2010 and 2015”, Nature Human Behaviour, Vol. 2 No. 9, pp. 637-644.

Cassel, C., Hackl, P. and Westlund, A.H. (1999), “Robustness of partial least-squares method for estimating latent variable quality structures”, Journal of Applied Statistics, Vol. 26 No. 4, pp. 435-446.

Chatterjee, D., Johnston, K., Green, D., Sobchuk, A. and Brrell, R. (2018), “Branding never sleeps: relentlessly measure, manage, and improve your brand”, available at: www.forrester.com/report/Branding+Never+Sleeps+Relentlessly+Measure+Manage+And+Improve+Your+Brand/-/E-RES77182 (accessed 5 February 2020).

Cho, G., Jung, K. and Hwang, H. (2019), “Out-of-bag prediction error: a cross validation index for generalized structured component analysis”, Multivariate Behavioral Research, Vol. 54 No. 4, pp. 505-513.

Churchill, G.A. Jr (1979), “A paradigm for developing better measures of marketing constructs”, Journal of Marketing Research, Vol. 16 No. 1, pp. 64-73.

Franke, G. and Sarstedt, M. (2019), “Heuristics versus statistics in discriminant validity testing: a comparison of four procedures”, Internet Research, Vol. 29 No. 3, pp. 430-447.

Hair, J.F. and Sarstedt, M. (2019), “Composites vs. factors: implications for choosing the right SEM method”, Project Management Journal, Vol. 50 No. 6, pp. 1-6.

Hair, J.F., Howard, M.C. and Nitzl, C. (2020), “Assessing measurement model quality in PLS-SEM using confirmatory composite analysis”, Journal of Business Research, Vol. 109, pp. 101-110.

Hair, J.F., Hult, G.T.M., Ringle, C.M. and Sarstedt, M. (2017), A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 2nd ed., Sage, Thousand Oaks, CA.

Hair, J.F., Sarstedt, M. and Ringle, C.M. (2019b), “Rethinking some of the rethinking of partial least squares”, European Journal of Marketing, Vol. 53 No. 4, pp. 566-584.

Hair, J.F., Black, W.C., Babin, B.J. and Anderson, R.E. (2019a), Multivariate Data Analysis, 8th ed., Cengage Learning, London.

Hair, J.F., Risher, J.J., Sarstedt, M. and Ringle, C.M. (2019c), “When to use and how to report the results of PLS-SEM”, European Business Review, Vol. 31 No. 1, pp. 2-24.

Henseler, J., Ringle, C.M. and Sarstedt, M. (2015), “A new criterion for assessing discriminant validity in variance-based structural equation modeling”, Journal of the Academy of Marketing Science, Vol. 43 No. 1, pp. 115-135.

Hill, S., Ettenson, R. and Tyson, D. (2005), “Achieving the ideal brand portfolio”, MIT Sloan Management Review, Vol. 46 No. 2, pp. 85-90.

Hwang, H. and Takane, Y. (2004), “Generalized structured component analysis”, Psychometrika, Vol. 69 No. 1, pp. 81-99.

Hwang, H., Sarstedt, M., Cheah, J.-H. and Ringle, C.M. (2019), “A concept analysis of methodological research on composite-based structural equation modeling: bridging PLSPM and GSCA”, Behaviormetrika.

Jöreskog, K.G. and Wold, H.O.A. (1982), “The ML and PLS techniques for modeling with latent variables: historical and comparative aspects”, in Wold, H.O.A. and Jöreskog, K.G. (Eds), Systems under Indirect Observation, Part I, North-Holland, Amsterdam, pp. 263-270.

Keller, K.L. (2001), “Building customer-based brand equity”, Marketing Management, Vol. 10 No. 2, pp. 14-19.

Lohmöller, J.-B. (1989), Latent Variable Path Modeling with Partial Least Squares, Springer, Berlin.

Madden, T.J., Fehle, F. and Fournier, S. (2006), “Brands matter: an empirical demonstration of the creation of shareholder value through branding”, Journal of the Academy of Marketing Science, Vol. 34 No. 2, pp. 224-235.

Morgan, N.A. and Rego, L.L. (2009), “Brand portfolio strategy and firm performance”, Journal of Marketing, Vol. 73 No. 1, pp. 59-74.

Rhemtulla, M., van Bork, R. and Borsboom, D. (2020), “Worse than measurement error: consequences of inappropriate latent variable measurement models”, Psychological Methods, Vol. 25 No. 1, pp. 30-45.

Rigdon, E.E., Becker, J.-M. and Sarstedt, M. (2019a), “Parceling cannot reduce factor indeterminacy in factor analysis: a research note”, Psychometrika, Vol. 84 No. 3, pp. 772-780.

Rigdon, E.E., Becker, J.-M. and Sarstedt, M. (2019b), “Factor indeterminacy as metrological uncertainty: implications for advancing psychological measurement”, Multivariate Behavioral Research, Vol. 54 No. 3, pp. 429-443.

Rigdon, E.E., Sarstedt, M. and Becker, J.-M. (2020), “Quantifying uncertainty in behavioral research”, Nature Human Behaviour, Vol. 4 No. 4, pp. 329-331.

Rigdon, E.E., Sarstedt, M. and Ringle, C.M. (2017), “On comparing results from CB-SEM and PLS-SEM. Five perspectives and five recommendations”, Marketing ZFP, Vol. 39 No. 3, pp. 4-17.

Sarstedt, M. and Mooi, E.A. (2019), A Concise Guide to Market Research: The Process, Data, and Methods Using IBM SPSS Statistics, 3rd ed., Springer, Heidelberg.

Sarstedt, M., Hair, J.F., Ringle, C.M., Thiele, K.O. and Gudergan, S.P. (2016), “Estimation issues with PLS and CBSEM: where the bias lies!”, Journal of Business Research, Vol. 69 No. 10, pp. 3998-4010.

Schuberth, F., Henseler, J. and Dijkstra, T.K. (2018), “Confirmatory composite analysis”, Frontiers in Psychology, Vol. 9, Article 2541.

Shah, P. (2015), “Kill it or keep it? The weak brand retain-or-discard decision in brand portfolio management”, Journal of Brand Management, Vol. 22 No. 2, pp. 154-172.

Shmueli, G., Sarstedt, M., Hair, J.F., Cheah, J.-H., Ting, H. and Ringle, C.M. (2019), “Predictive model assessment in PLS-SEM: guidelines for using PLSpredict”, European Journal of Marketing, Vol. 53 No. 11, pp. 2322-2347.

Spearman, C. (1904), “The proof and measurement of association between two things”, The American Journal of Psychology, Vol. 15 No. 1, pp. 72-101.

Steiger, J.H. (1979), “The relationship between external variables and common factors”, Psychometrika, Vol. 44 No. 1, pp. 93-97.

Varadarajan, R., DeFanti, M.P. and Busch, P.S. (2006), “Brand portfolio, corporate image, and reputation: managing brand deletions”, Journal of the Academy of Marketing Science, Vol. 34 No. 2, pp. 195-205.

Velicer, W.F. and Jackson, D.N. (1990), “Component analysis versus common factor analysis: some issues in selecting an appropriate procedure”, Multivariate Behavioral Research, Vol. 25 No. 1, pp. 1-28.

Veloutsou, C. and Guzmán, F. (2017), “The evolution of brand management thinking over the last 25 years as recorded in the Journal of Product and Brand Management”, Journal of Product & Brand Management, Vol. 26 No. 1, pp. 2-12.

VisionEdge Marketing (2020), “Metrics for measuring brand marketing effectiveness”, available at: https://visionedgemarketing.com/metrics-measuring-brand-marketing-effectiveness/ (accessed 5 February 2020).

Whitler, K.A. and Regan, E. (2019), Brand Measurement Methods, University of Virginia Darden Business Publishing, Charlottesville, VA.

Widaman, K.F. (1993), “Common factor analysis versus principal component analysis: differential bias in representing model parameters?”, Multivariate Behavioral Research, Vol. 28 No. 3, pp. 263-311.

Further reading

Hwang, H. and Takane, Y. (2014), Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling, Chapman and Hall/CRC, Boca Raton, FL.

Henseler, J., Dijkstra, T.K., Sarstedt, M., Ringle, C.M., Diamantopoulos, A., Straub, D.W., Ketchen, D.J., Hair, J.F., Hult, G.T.M. and Calantone, R.J. (2014), “Common beliefs and reality about PLS: comments on Rönkkö and Evermann (2013)”, Organizational Research Methods, Vol. 17 No. 2, pp. 182-209.
