An assessment of the use of partial least squares structural equation modeling (PLS-SEM) in hospitality research

Faizan Ali (College of Hospitality and Tourism Leadership, University of South Florida - Sarasota-Manatee, Sarasota, Florida, USA)
S. Mostafa Rasoolimanesh (Housing, Building and Planning School, Universiti Sains Malaysia, Penang, Malaysia)
Marko Sarstedt (Faculty of Economics and Management, Otto-von-Guericke University Magdeburg, Magdeburg, Germany, and University of Newcastle, Callaghan, Australia)
Christian M. Ringle (Faculty of Management Science and Technology, Hamburg University of Technology (TUHH), Hamburg, Germany)
Kisang Ryu (Department of Food Service Management, Sejong University, Seoul, Republic of Korea)

International Journal of Contemporary Hospitality Management

ISSN: 0959-6119

Publication date: 8 January 2018

Abstract

Purpose

Structural equation modeling (SEM) represents one of the most salient research methods across a variety of disciplines, including hospitality management. Although for many researchers SEM is equivalent to carrying out covariance-based SEM, recent research advocates the use of partial least squares structural equation modeling (PLS-SEM) as an attractive alternative. The purpose of this paper is to systematically examine how PLS-SEM has been applied in major hospitality research journals with the aim of providing important guidance and, if necessary, opportunities for realignment in future applications. Because PLS-SEM in hospitality research is still in an early stage of development, critically examining its use holds considerable promise for counteracting misapplications which might otherwise be reinforced over time.

Design/methodology/approach

All PLS-SEM studies published in the six SSCI-indexed hospitality management journals between 2001 and 2015 were reviewed. Tying in with the prior studies in the field, the review covers reasons for using PLS-SEM, data characteristics, model characteristics, the evaluation of the measurement models, the evaluation of the structural model, reporting and use of advanced analyses.

Findings

Compared to other fields, the results show that several reporting practices are clearly above standard but still leave room for improvement, particularly regarding the consideration of state-of-the-art metrics for measurement and structural model assessment. Furthermore, hospitality researchers seem to be unaware of recent extensions of the PLS-SEM method, which clearly extend the scope of the analyses and help gain more insights from the model and the data. Based on this review of PLS-SEM applications, this research presents guidelines on how to use the method accurately. These guidelines are important for hospitality management and other disciplines to disseminate and ensure the rigor of PLS-SEM analyses and reporting practices.

Research limitations/implications

Only articles published in SSCI-indexed hospitality journals were examined; journals indexed in other databases were not included. That is, while this research focused on the top-tier hospitality management journals, future research may widen the scope by considering hospitality management-related studies from other disciplines, such as tourism research or general management.

Originality/value

This study contributes to the literature by providing hospitality researchers with updated guidelines for PLS-SEM use. Based on a systematic review of current practices in the hospitality literature, critical methodological issues in choosing and using PLS-SEM were identified. The guidelines help improve future PLS-SEM studies and offer recommendations for using recent advances of the method.

Citation

Ali, F., Rasoolimanesh, S., Sarstedt, M., Ringle, C. and Ryu, K. (2018), "An assessment of the use of partial least squares structural equation modeling (PLS-SEM) in hospitality research", International Journal of Contemporary Hospitality Management, Vol. 30 No. 1, pp. 514-538. https://doi.org/10.1108/IJCHM-10-2016-0568

Publisher: Emerald Publishing Limited

Copyright © 2018, Emerald Publishing Limited


Introduction

Structural equation modeling (SEM; Rigdon, 1998) is one of the most salient research methods across a variety of disciplines. Its ability to simultaneously examine a series of interrelated dependence relationships between sets of constructs represented by multiple variables, while accounting for measurement error, has contributed to SEM’s widespread application. SEM is a general term encompassing a variety of statistical models, of which covariance-based SEM (CB-SEM) is the most prominent. As Chin (1998b, p. 295) points out, “to many social science researchers, the covariance-based procedure is tautologically synonymous with the term SEM”. This view of SEM is, however, very limited, given that variance-based SEM constitutes a second strand of SEM techniques, which is growing in popularity among business researchers. Several variance-based SEM techniques have been introduced, such as generalized structured component analysis (Hwang et al., 2010) and path analysis (Lastovicka and Thamodaran, 1991). By far the most prominent technique, however, is partial least squares structural equation modeling (PLS-SEM), which has experienced increasing prominence in a variety of fields, including, for example, international management (Richter et al., 2016b), marketing (Hair et al., 2012b), strategic management (Hair et al., 2012a) and tourism (do Valle and Assaker, 2016).

PLS-SEM applies to the same class of models as CB-SEM – structural equations with unobservable variables and measurement error – but has different characteristics and objectives (Richter et al., 2016a). Most notably, PLS-SEM focuses on the explanation of variances (rather than covariances), making it a prediction-oriented approach to SEM (Shmueli et al., 2016). Unlike CB-SEM, which considers constructs as common factors that explain the covariation between the associated indicators, PLS-SEM is a composite-based approach to SEM that uses linear combinations of indicator variables as proxies of the conceptual variables under investigation to explain the variance of the target constructs in the structural model (Rigdon et al., 2017). Owing to its statistical features, the application of the PLS-SEM method is particularly promising when the premises of CB-SEM are violated (e.g. in terms of distributional assumptions), when the assumed cause-and-effect relationships are not sufficiently explored, and when the model is very complex, comprising many (formatively or reflectively measured) constructs (Hair et al., 2017a; Rigdon, 2016; Sarstedt et al., 2016b).

Reaping the benefits of PLS-SEM, however, requires researchers to have a thorough understanding of the method’s properties. This is especially important, because methodological research in PLS-SEM is very dynamic in terms of evaluating the method’s characteristics (Henseler et al., 2014; Rigdon, 2016; Rönkkö and Evermann, 2013) and developing extensions that increase its usefulness for applied researchers (Becker and Ismail, 2016; Hair et al., 2018; Nitzl et al., 2016). Because PLS-SEM in hospitality research is still in an early stage of development, critically examining its use holds considerable promise for counteracting misapplications, which might otherwise be reinforced over time. Against this background, we systematically examine how PLS-SEM has been applied in major hospitality research journals with the aim of providing important guidance and, if necessary, opportunities for realignment in future applications.

The results of our review of PLS-SEM studies show that, in comparison to other fields, several reporting practices are clearly above standard, but still leave room for improvement, particularly with regard to considering state-of-the-art metrics for measurement and structural model assessment. Hospitality researchers, however, seem unaware of recent extensions of the PLS-SEM method, which clearly extend the scope of the analyses and help generate more insights from the model and the data. Consequently, this research provides guidelines and recommendations on how to use the PLS-SEM method properly to ensure the rigor of research and publication practices in the hospitality management discipline.

PLS-SEM studies in hospitality research: journals and themes

This research reviews the application of PLS-SEM published in the six journals covering the hospitality management discipline included in Web of Science’s Social Sciences Citation Index, i.e. Cornell Hospitality Quarterly, International Journal of Contemporary Hospitality Management, International Journal of Hospitality Management, Journal of Hospitality and Tourism Research, Journal of Hospitality, Leisure, Sport & Tourism Education and Scandinavian Journal of Hospitality and Tourism. All the issues of these six journals published between January 2001 and December 2015 were examined for empirical applications of PLS-SEM. A full-text search was conducted on each journal’s website by means of keywords such as partial least squares, path modeling, PLS and PLS-SEM. This search yielded a total of 29 empirical articles that applied PLS-SEM. The breakdown of studies by year (Figure 1) shows that PLS-SEM use has recently gained momentum in hospitality research. Whereas seven (24.14 per cent) of the papers were published between 2001 and 2012, 22 (75.86 per cent) were published during the three years from 2013 to 2015. This result is in line with related studies in strategic management (Hair et al., 2012a) and international management (Richter et al., 2016b), which also identified a growing use of PLS-SEM, albeit earlier and less rapidly.

Of the reviewed journals, the International Journal of Hospitality Management published the highest number of studies (n = 14; 48.28 per cent), followed by the International Journal of Contemporary Hospitality Management (n = 6; 20.69 per cent), and, with four studies (13.79 per cent) each, the Cornell Hospitality Quarterly and the Journal of Hospitality and Tourism Research. The Journal of Hospitality, Leisure, Sport & Tourism Education published only one study (3.45 per cent), while the Scandinavian Journal of Hospitality and Tourism did not publish any PLS-SEM studies (Table I).

Guidelines for PLS-SEM analysis and results reporting

Following prior reviews in related fields (do Valle and Assaker, 2016; Kaufmann and Gaeckler, 2015; Nitzl, 2016) and guidelines for PLS-SEM use (Chin, 2010; Garson, 2016; Hair et al., 2017b), we analyzed the 29 studies included in our review in terms of the following issues, which are all relevant for a valid application of the PLS-SEM method:

  • reasons for using PLS-SEM;

  • data characteristics;

  • model characteristics;

  • evaluation of the measurement models;

  • evaluation of the structural model; and

  • reporting and advanced analyses.

Critical issues in PLS-SEM application

Reasons for using PLS-SEM

Even though PLS-SEM is methodologically well-established and frequently applied, editors and reviewers often require researchers to justify their choice of the method (Chin, 2010). Table II highlights the main reasons stated for using PLS-SEM in hospitality research, which include the use of small sample sizes (nine studies, 31.03 per cent) or non-normal data (nine studies, 31.03 per cent), the research’s aim to develop theory (seven studies, 24.14 per cent) and the inclusion of formatively measured constructs and/or single-item constructs in the model (five studies, 17.24 per cent). Further reasons include the estimation of complex models (four studies, 13.79 per cent), running a moderation analysis (three studies, 10.34 per cent) and the study’s focus on prediction (two studies, 6.90 per cent). Finally, four of the 29 studies (13.79 per cent) did not provide any justification for using PLS-SEM.

In light of the increasing dissemination of PLS-SEM in the social sciences, several researchers have critically revisited commonly stated reasons for using the method (Henseler et al., 2014; Rigdon, 2016; Sarstedt et al., 2016b). Jointly, these studies suggest that prediction orientation, high model complexity and the use of formatively measured constructs clearly support the use of PLS-SEM. However, there is no justification for viewing PLS-SEM as a panacea for handling non-normal data and, especially, small sample sizes. For example, Hair et al. (2013, p. 2) note that “some researchers abuse this advantage by relying on extremely small samples relative to the underlying population” and that the method “has an erroneous reputation for offering special sampling capabilities that no other multivariate analysis tool has”. PLS-SEM converges with smaller samples in many instances when CB-SEM fails, but the legitimacy of such analyses depends on the size and the nature of the population (e.g. in terms of its heterogeneity). No statistical method, including PLS-SEM, can offset a badly designed sample (Sarstedt et al., 2017a). Similarly, when members of a population are highly homogeneous with regard to the evaluation of the constructs, there is no compelling need for high sample sizes. In fact, by using more elaborate sampling techniques, such as stratified sampling, researchers can reduce the minimum sample size required for making valid inferences from the data (Cochran, 1977). Nevertheless, such use must still consider the PLS-SEM method’s minimum sample size requirements for obtaining valid results.

Data characteristics

Given that several studies justify their use of PLS-SEM on the grounds of small sample sizes, it is not surprising that seven studies (24.14 per cent) draw on samples with fewer than 150 observations. Although such sample sizes are not problematic per se, researchers should carefully check the adequacy of their analyses by means of power analyses, which none of these studies did. Alternatively, researchers can draw on Kock and Hadaya’s (2018) inverse square root and gamma-exponential methods to determine the minimum sample size in PLS-SEM applications. Because the application of PLS-SEM in management information systems, marketing and strategic management often builds on very small sample sizes, which has frequently been criticized (Hair et al., 2012a; Hair et al., 2012b; Ringle et al., 2012), we positively note that none of the studies reviewed in hospitality research uses fewer than 100 observations. Relatedly, the average sample size of the studies included in our review is large (mean = 332, median = 382) compared to those in management information systems (mean = 238, median = 198), marketing (mean = 211, median = 159) and particularly strategic management (mean = 155, median = 83). These results suggest that hospitality researchers are more sensitive to sample size issues, which have been heatedly debated in the context of PLS-SEM (Goodhue et al., 2012; Hair et al., 2017d; Marcoulides et al., 2012). Given these comparably high sample sizes, hospitality researchers can (and should) engage in holdout sample validation to assess their models’ out-of-sample predictive power, which none of the studies did.
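
To illustrate the sample size heuristic referred to above, the following minimal Python sketch implements the inverse square root method as it is commonly summarized (Kock and Hadaya, 2018): the minimum sample size follows from the smallest path coefficient expected to be significant. The constant 2.486 (commonly reported for a 5 per cent significance level and 80 per cent power), the example coefficient of 0.20 and the function name are illustrative assumptions, not values taken from the reviewed studies.

```python
import math

def min_sample_size_inverse_sqrt(p_min: float, z: float = 2.486) -> int:
    """Inverse square root method (Kock and Hadaya, 2018), sketched.

    p_min : smallest standardized path coefficient expected to be significant.
    z     : constant combining significance level and power; 2.486 is the value
            commonly cited for a 5% significance level and 80% power (assumed
            here -- consult the original paper for other settings).
    """
    return math.ceil((z / abs(p_min)) ** 2)

# Example: the weakest relevant path is expected to be around 0.20.
print(min_sample_size_inverse_sqrt(0.20))  # -> 155 observations
```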

Although several studies explicitly referred to PLS-SEM’s suitability for handling skewed data, only seven studies (24.14 per cent) reported the extent to which the data were non-normal. Although prior studies have found PLS-SEM to be robust in instances of extremely non-normal data (Cassel et al., 1999; Hair et al., 2017d), researchers are advised to take the distribution of the data into consideration. In combination with a small sample size, highly skewed data can result in inflated bootstrap standard errors, which reduce the statistical power (Hair et al., 2017b). Hence, researchers should assess the degree to which data are non-normal, using standard normality tests (e.g. the Kolmogorov–Smirnov test and the Shapiro–Wilk test), as well as report skewness and kurtosis measures.
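
As a hedged illustration of such a distributional check, the Python sketch below runs the Shapiro–Wilk and Kolmogorov–Smirnov tests and reports skewness and kurtosis for a simulated, right-skewed indicator; the data and variable names are purely hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=0.8, size=300)  # simulated, right-skewed indicator

# Shapiro-Wilk test of normality
sw_stat, sw_p = stats.shapiro(x)
# Kolmogorov-Smirnov test against a standard normal (standardizing with
# estimated parameters makes this p-value only approximate; a Lilliefors-type
# correction would be stricter)
ks_stat, ks_p = stats.kstest((x - x.mean()) / x.std(ddof=1), "norm")

print(f"Shapiro-Wilk p = {sw_p:.4f}, Kolmogorov-Smirnov p = {ks_p:.4f}")
print(f"skewness = {stats.skew(x):.2f}, excess kurtosis = {stats.kurtosis(x):.2f}")
```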

Model characteristics

Table III provides an overview of the model characteristics of the PLS-SEM studies included in our review. The average number of latent variables per model is 7.03, which is similar to PLS-SEM applications in other disciplines (do Valle and Assaker, 2016; Hair et al., 2012b; Kaufmann and Gaeckler, 2015), but considerably larger than the average complexity of CB-SEM models used in, for example, marketing (Steenkamp and Baumgartner, 2000) and strategic management (Ketchen and Shook, 1996). The same holds for the average total number of indicators per model (24.69). These results suggest that the use of PLS-SEM in hospitality research involves relatively complex models.

Our review also discloses that hospitality researchers mainly use PLS-SEM to estimate models that only include reflectively measured constructs (18 studies, 62.07 per cent), or a combination of reflectively and formatively measured constructs (11 studies, 37.93 per cent). However, most of the studies (23 studies, 79.31 per cent) did not discuss or justify the mode of measurement model specification. Given the extensive debate on measurement model misspecification (Aguirre-Urreta and Marakas, 2012; Jarvis et al., 2003, 2012), this finding is particularly critical and should be an important improvement area in future studies. The same holds for single-item constructs, which researchers should use with great caution (Diamantopoulos et al., 2012). Sarstedt et al. (2016a, p. 3202) note that “researchers must accept the fact that single items have lower predictive power, which can easily trigger type II errors. Because null results face a higher barrier to publication than those that yield statistically significant differences, the use of single items can easily backfire on researchers”. This particularly holds for a prediction-oriented approach to SEM such as PLS. Six studies in our review (20.69 per cent) used at least one single-item construct in their PLS path model, suggesting that hospitality researchers are not fully aware of the critical issues regarding their use.

Results evaluation

The assessment of PLS-SEM results involves a two-step approach:

  1. the evaluation of the measurement models; and

  2. the assessment of the structural model (Chin, 2010; Hair et al., 2017b).

The measurement model assessment involves the evaluation of construct measures’ reliability and validity. This assessment draws on different measures, depending on whether a construct is measured reflectively or formatively.

Reflective measurement model evaluation.

The assessment of reflective measurement models involves evaluating the measures’ reliability (i.e. indicator reliability and internal consistency reliability) and the validity (i.e. convergent and discriminant validity). The indicator loadings should be larger than 0.7 to ensure indicator reliability. To establish internal consistency reliability, Cronbach’s alpha and composite reliability (CR) should be higher than the threshold of 0.7. The average variance extracted (AVE), which should be larger than 0.5, allows assessing convergent validity. Moreover, instead of using traditional methods for discriminant validity assessment, such as cross loadings and the Fornell–Larcker criterion (Fornell and Larcker, 1981), researchers should apply the heterotrait–monotrait (HTMT) criterion (Henseler et al., 2015).
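
The reliability and convergent validity statistics mentioned above follow directly from the standardized outer loadings and the raw indicator scores. The short Python sketch below computes composite reliability, AVE and Cronbach’s alpha using these standard formulas; the loading values and simulated item scores are hypothetical and only serve to illustrate the thresholds.

```python
import numpy as np

def composite_reliability_and_ave(loadings):
    """Composite reliability (CR) and average variance extracted (AVE)
    from the standardized outer loadings of one reflective construct."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam ** 2
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())
    ave = np.mean(lam ** 2)
    return cr, ave

def cronbach_alpha(item_scores):
    """Cronbach's alpha from an (n_observations x n_items) score matrix."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

# Hypothetical standardized loadings for a four-item reflective construct
cr, ave = composite_reliability_and_ave([0.82, 0.79, 0.88, 0.74])
print(f"CR = {cr:.3f} (threshold 0.70), AVE = {ave:.3f} (threshold 0.50)")

# Hypothetical item scores to illustrate Cronbach's alpha
rng = np.random.default_rng(1)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=0.6, size=(200, 4))
print(f"Cronbach's alpha = {cronbach_alpha(items):.3f} (threshold 0.70)")
```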

Table IV shows our findings in respect of the studies in our review. Almost all of the 29 studies that included reflectively measured constructs report the indicator loadings (27 studies, 93.10 per cent). Additionally, 23 studies (79.31 per cent) reported the CR, whereas 14 studies (48.28 per cent) reported Cronbach’s alpha. Of these studies, a total of 11 studies (37.93 per cent) reported both CR and Cronbach’s alpha, while three studies (10.34 per cent) did not report either of these measures. Also, 23 studies (79.31 per cent) used the AVE to establish convergent validity. The remaining studies either (incorrectly) assessed the indicator loadings’ significance as evidence of convergent validity, or ran an exploratory factor analysis prior to the PLS-SEM analysis. To assess discriminant validity, 23 studies (79.31 per cent) used the Fornell–Larcker criterion, while five studies (17.24 per cent) only interpreted the cross loadings; the remaining study did not assess discriminant validity. This result is disconcerting, as recent research shows that neither approach can reliably detect discriminant validity issues (Henseler et al., 2015). Cross loadings specifically fail to indicate a lack of discriminant validity, even when two constructs are perfectly correlated, rendering this criterion’s use ineffective for empirical research. Hospitality researchers should instead draw on the HTMT criterion, which is computed as the mean of all the correlations of the indicators measuring different constructs, relative to the geometric mean of the average correlations of the indicators measuring the same construct. An HTMT value significantly lower than one indicates discriminant validity.
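
The HTMT computation described above can be sketched in a few lines of Python. The function below takes the raw indicator matrices of two constructs and returns the ratio of the mean between-construct (heterotrait) correlation to the geometric mean of the average within-construct (monotrait) correlations; the 0.85/0.90 cut-offs in the comment are the thresholds commonly associated with Henseler et al. (2015), and the simulated data are hypothetical. The sketch is no replacement for the bootstrap-based inference recommended in the guidelines later in this paper.

```python
import numpy as np

def htmt(X, Y):
    """Heterotrait-monotrait ratio of correlations for two constructs.

    X, Y : (n_observations x n_items) indicator matrices of constructs 1 and 2.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    kx, ky = X.shape[1], Y.shape[1]
    R = np.abs(np.corrcoef(np.hstack([X, Y]), rowvar=False))
    heterotrait = R[:kx, kx:].mean()                            # between constructs
    monotrait_x = R[:kx, :kx][np.triu_indices(kx, k=1)].mean()  # within construct 1
    monotrait_y = R[kx:, kx:][np.triu_indices(ky, k=1)].mean()  # within construct 2
    return heterotrait / np.sqrt(monotrait_x * monotrait_y)

# Hypothetical indicators of two distinct constructs
rng = np.random.default_rng(3)
c1, c2 = rng.normal(size=(200, 1)), rng.normal(size=(200, 1))
X = c1 + rng.normal(scale=0.5, size=(200, 3))
Y = c2 + rng.normal(scale=0.5, size=(200, 3))
# Values well below 1 (e.g. under the 0.85 or 0.90 cut-offs) support discriminant
# validity; bootstrapping the statistic allows testing whether it is
# significantly lower than 1.
print(f"HTMT = {htmt(X, Y):.3f}")
```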

Formative measurement model evaluation.

Overall, 11 of the 29 studies (37.93 per cent) contained at least one formatively measured construct. Indicator weights, the most common criterion for assessing formative measures, were reported in only two of these 11 studies (18.18 per cent). These two studies also assessed the indicator weights’ significance by using bootstrapping. Multicollinearity between indicators is a significant issue in formative measurement models, because high levels of collinearity could lead to unstable indicator weights and inflated standard errors, thereby triggering type II errors (Cenfetelli and Bassellier, 2009). However, only four studies (36.36 per cent) assessed multicollinearity, using the variance inflation factor (VIF; three studies) or tolerance (one study).
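
Collinearity diagnostics for formative indicators can be obtained from a set of auxiliary regressions, as in the minimal Python sketch below: each indicator is regressed on the remaining indicators of the same construct, and VIF_j = 1/(1 − R_j²), with tolerance = 1 − R_j². The data are simulated and the thresholds in the comment follow the guidelines summarized in Table VII.

```python
import numpy as np

def formative_vifs(X):
    """Variance inflation factors for the indicators of one formatively
    measured construct; X is an (n_observations x n_indicators) matrix."""
    X = np.asarray(X, float)
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        residuals = y - Z @ beta
        r2 = 1.0 - residuals.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))        # tolerance is simply 1 - r2
    return np.array(vifs)

# Simulated indicators with moderate collinearity
rng = np.random.default_rng(5)
base = rng.normal(size=(250, 1))
X = np.hstack([base + rng.normal(scale=1.0, size=(250, 1)) for _ in range(4)])
print(formative_vifs(X).round(2))  # rule of thumb: VIF < 5 (tolerance > 0.20)
```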

These findings on the measurement model assessments are somewhat concerning. Although researchers routinely use standard convergent and internal consistency reliability measures, their discriminant validity assessment relies on measures that recent research has concluded are ineffective for empirical research. As a consequence, when it comes to discriminant validity, “researchers cannot be certain results confirming hypothesized structural paths are real or whether they are a result of statistical discrepancies” (Farrell, 2010, p. 324). Furthermore, hospitality researchers have largely ignored standard model evaluation criteria for formatively measured constructs. However, these criteria are important, as the quality of the structural model estimates depends on the validity of the (formative) measurement models. Furthermore, multicollinearity issues not addressed in the measurement model assessment render the interpretation of the indicators’ relative contribution to the construct largely meaningless.

In light of the increasing interest in formative measurement in general (Bollen and Diamantopoulos, 2017; Diamantopoulos, 2008), we expect it to play a greater role in future hospitality research. Researchers working in the field should therefore keep recent guidelines on formative measurement model evaluation in mind (Hair et al., 2017c). These guidelines comprise the assessment of:

  • convergent validity by using redundancy analysis;

  • indicator multicollinearity; and

  • the indicators’ relative and absolute contribution to the construct.

Structural model evaluation.

Whereas practically all studies report the path coefficients’ values (28 studies; 96.55 per cent), considerably fewer do so with regard to their significance (Table V). Because PLS-SEM does not assume a specific data distribution, significance testing requires researchers to derive the parameters’ standard errors via resampling techniques such as bootstrapping or blindfolding. The resulting test statistic follows a t distribution under the null hypothesis of no effect, except for the (admittedly very rare) case of a two-construct model with zero relationship (Rönkkö and Evermann, 2013). In this case, the distribution is bimodal, which, however, has no consequences for inference testing with regard to type I or type II errors (Henseler et al., 2014)[1]. Overall, only 16 studies (55.17 per cent) indicated the path coefficients’ significance by reporting p or t-values. Despite frequent calls in the literature (Streukens and Leroi-Werelds, 2016), no study reported bootstrap-based confidence intervals to further assess the parameter estimates’ stability. Future studies in the field should consider reporting bias-corrected and accelerated (BCa) bootstrap confidence intervals, which recent research has shown to perform well in terms of coverage (i.e. the proportion of times the population value of the parameter is included in the 1-α per cent confidence interval in repeated samples) and balance (i.e. how evenly the α per cent of cases falling outside the interval are split between its left and right tails; Aguirre-Urreta and Rönkkö, 2018; Streukens and Leroi-Werelds, 2016). In doing so, researchers should also improve their reporting of bootstrap settings. Only 12 studies (41.38 per cent) commented on the number of bootstrap samples, which is, however, crucial for assessing the soundness of results. The mean number of bootstrap samples was 1,366.67, which is far below the literature’s current recommendation of 10,000 (Streukens and Leroi-Werelds, 2016). Similarly, none of the studies commented on sign change options, despite recent warnings about the deteriorating effect of sign change corrections in bootstrapping (Rönkkö et al., 2015).
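
The following Python sketch illustrates BCa bootstrap confidence intervals for a single standardized path, using simulated construct scores and SciPy’s bootstrap routine. It is only a simplified stand-in for the full PLS-SEM bootstrap, which resamples observations and re-estimates the entire model within the PLS-SEM software; the variable names are hypothetical, and the 10,000 resamples follow the recommendation cited above (Streukens and Leroi-Werelds, 2016).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 300
exogenous = rng.normal(size=n)                           # simulated construct scores
endogenous = 0.3 * exogenous + rng.normal(scale=0.9, size=n)

def path_coefficient(x, y):
    # For a single-predictor structural relation, the standardized path
    # coefficient equals the Pearson correlation of the construct scores.
    return np.corrcoef(x, y)[0, 1]

result = stats.bootstrap(
    (exogenous, endogenous), path_coefficient,
    paired=True, vectorized=False,
    n_resamples=10_000, confidence_level=0.95, method="BCa",
)
ci = result.confidence_interval
print(f"95% BCa confidence interval: [{ci.low:.3f}, {ci.high:.3f}]")
```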

A total of 24 of the 29 studies (82.76 per cent) reported the coefficient of determination R2, five of which also assessed the relative impact of predictor constructs on the R2 by means of the f2 effect size. Although this result appears laudable, recent research emphasizes the limitations of the R2 metric for evaluating a model’s predictive performance (Shmueli et al., 2016). Specifically, although the R2 allows appreciation of a model’s in-sample prediction, it does not capture the out-of-sample predictive performance. Shmueli et al. (2016, p. 4553) point out that “fundamental to a proper predictive procedure is the ability to predict measurable information on new cases”. Seven studies (24.14 per cent) reported the Stone–Geisser index (Q2), which several studies have proposed as a way to determine out-of-sample prediction (Rigdon, 2012; Sarstedt et al., 2014). However, because the Q2 only omits and imputes single data points rather than entire observations, the measure can only partly be considered a measure of out-of-sample prediction. Shmueli et al. (2016) developed the PLSpredict procedure as a solution to generate holdout sample-based point predictions at the item or construct level in PLS path models, which hospitality researchers should consider in future studies.
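
For completeness, the f2 effect size mentioned above is a simple function of the R2 values obtained with and without the predictor construct in question; the R2 values in the sketch below are hypothetical.

```python
def f_squared(r2_included: float, r2_excluded: float) -> float:
    """Effect size f^2: the change in R^2 when a predictor construct is
    omitted, relative to the variance left unexplained by the full model."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# Cohen's (1988) rules of thumb: 0.02 weak, 0.15 moderate, 0.35 strong
print(f"f^2 = {f_squared(0.55, 0.48):.3f}")  # hypothetical R^2 values -> about 0.16
```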

Finally, none of the studies engages in any type of model fit assessment, perhaps because methodological researchers have only recently started proposing goodness-of-fit measures in a PLS-SEM context (Dijkstra and Henseler, 2015a; Henseler et al., 2014). Examples of such assessments include the standardized root mean square residual (SRMR), the root mean square residual covariance (RMStheta) and the exact model fit test (Dijkstra and Henseler, 2015a; Lohmöller, 1989). However, researchers have long questioned the universal applicability of such goodness-of-fit measures in a PLS-SEM context (Hair et al., 2017a; Lohmöller, 1989). Because parameter estimation in PLS-SEM does not aim to explicitly minimize the divergence between the empirical and model-implied covariance matrices, assessing the fit of the model on the grounds of this divergence seems inappropriate. At the same time, however, hospitality researchers typically not only use PLS-SEM for prediction, but seek to confirm or reject hypotheses that relate to the relationships between constructs; that is, in line with Jöreskog and Wold (1982, p. 270), they use PLS-SEM as a “causal-predictive” technique. Hence, validation using goodness-of-fit measures is also relevant in PLS-SEM, but current metrics do not fully align with the PLS-SEM algorithm’s statistical working principles.

Reporting and advanced analyses using PLS-SEM

With the increasing dissemination of PLS-SEM in the social sciences, methodological research has provided various extensions that allow for a more nuanced assessment of data and model relationships. However, our review shows that only a few studies in hospitality research made use of these advanced techniques. For example, 10 studies (34.48 per cent) ran moderation analyses, but without clearly documenting their implementation with regard to data standardization or the generation of the interaction term (Henseler and Chin, 2010). One study (Lai, 2015) used CB-SEM for the model assessment, but PLS-SEM for the moderator analysis. Owing to the methods’ different algorithmic handling of construct measures, such an approach confuses the measurement philosophies underlying factor-based and composite-based SEM (Rigdon et al., 2017), which can induce significant biases in parameter estimation (Sarstedt et al., 2016b). Six studies (20.69 per cent) considered a moderating effect using a multigroup analysis (Matthews, 2018), without, however, testing for measurement invariance. The latter is a fundamental requirement and needs to be (at least partially) established before comparing parameter estimates across different groups of data (Henseler et al., 2016b). Only nine studies (31.03 per cent) explicitly tested for mediating effects, even though most of the models included a series of indirect effects that would allow such an assessment. Understanding these effects is important, as they shed light on the mechanisms of how antecedent constructs unfold their influence on target constructs. Researchers wishing to run a mediation analysis should draw on Zhao et al.’s (2010) procedure and compare direct and indirect effects using bootstrapping, rather than using Baron and Kenny’s (1986) approach in conjunction with the Sobel test (Nitzl et al., 2016). Finally, nine studies (31.03 per cent) considered higher-order constructs, which can reduce the model complexity.
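
A hedged sketch of the bootstrap-based mediation test recommended above (Zhao et al., 2010; Nitzl et al., 2016) is given below, using simulated construct scores and a simple percentile bootstrap of the indirect effect. In an actual PLS-SEM application, the resampling would be carried out on the full model within the PLS-SEM software, preferably with BCa intervals; all data and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 400
x = rng.normal(size=n)                                   # antecedent construct scores
m = 0.5 * x + rng.normal(scale=0.8, size=n)              # mediator
y = 0.4 * m + 0.1 * x + rng.normal(scale=0.8, size=n)    # outcome

def indirect_and_direct(x, m, y):
    a = np.polyfit(x, m, 1)[0]                           # path x -> m
    coefs = np.linalg.lstsq(np.column_stack([np.ones_like(x), m, x]), y, rcond=None)[0]
    b, c_direct = coefs[1], coefs[2]                     # paths m -> y and x -> y
    return a * b, c_direct

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)                          # resample observations
    boot.append(indirect_and_direct(x[idx], m[idx], y[idx]))
boot = np.array(boot)
low, high = np.percentile(boot[:, 0], [2.5, 97.5])
print(f"indirect effect 95% CI: [{low:.3f}, {high:.3f}]")
# If the indirect effect is significant, inspect the direct effect to classify
# the mediation as indirect-only (full), complementary (same sign) or
# competitive (opposite sign).
```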

These results indicate that hospitality researchers have not yet fully exploited PLS-SEM’s potential regarding advanced analysis techniques, therefore missing opportunities to improve their analyses, as well as supplement and corroborate their findings. In particular, hospitality researchers should test for unobserved heterogeneity, which, if present, can seriously distort the results (Becker et al., 2013); see Sarstedt et al. (2016c, 2017b) for recent guidelines on how to run corresponding analyses in a PLS-SEM context. Researchers should also take measures to diagnose and control for common method variance. Chin et al. (2013) proposed the measured latent marker variable approach, which, however, has not been systematically evaluated, thus providing scope for research in this regard. Note, however, that there is still significant disagreement regarding common method variance’s relevance. For example, whereas Min et al. (2016) empirically demonstrate the deleterious effect of common method variance on structural model parameter estimates (also see Gould et al., 2011), Fuller et al.’s (2016) simulation study shows that common method variance only induces significant parameter biases when:

  • measures have atypically high or low levels of internal consistency reliability; and

  • the degree of common method variance is very high.

Following these results, common method variance should not be a serious issue in model constellations normally encountered in PLS-SEM applications. Similarly, methodological research on PLS-SEM has suggested initial approaches for handling endogeneity (Benitez et al., 2016), longitudinal data analysis (Roemer, 2016) and multilevel modeling (Hwang et al., 2007). Although these approaches are still in their early stages of development, future hospitality research studies should exploit and, if possible, further develop these techniques.

Finally, owing to the different default settings when running PLS-SEM analyses, researchers should also report the software used, which 21 (72.41 per cent) of the studies did. The most commonly used software is SmartPLS (13 studies), followed by PLSGraph (six studies), MATLAB and XLSTAT-PLSPM (one study each).

Conclusion and future research

Discussions on PLS-SEM have increased as rapidly as the contributions that use or advance the method. Researchers opposing the use of PLS-SEM argue that it is not a (factor-based) latent variable method, lacks goodness-of-fit measures and produces biased parameter estimates (Rönkkö et al., 2016; Rönkkö and Evermann, 2013; Rönkkö et al., 2015). Critics of this position note that such conclusions ignore the fundamental working principles of PLS-SEM, which rests on composites rather than common factors (Rigdon et al., 2017).

Hence, using factor-based methods, such as CB-SEM, as a standard of comparison is “misleading and misguided, capable of generating both false confidence and false concern” (Rigdon et al., 2017). Instead, PLS-SEM should distance itself from the factor-analytic tradition and focus on developing itself as a purely composite-based statistical methodology (McIntosh et al., 2014). Alternatively, researchers may use consistent PLS (Dijkstra, 2014; Dijkstra and Henseler, 2015b), which “corrects” PLS-SEM-based parameter estimates so that they resemble those of a factor model. However, such an analysis is based on the implicit assumption that factor-based SEM is the correct estimator and the appropriate SEM benchmark, which several researchers view critically (Henseler et al., 2014; Rigdon, 2012; Rigdon et al., 2017). For example, Hair et al. (2017a, p. 443) note, “Indeed, scholars could just as easily label differences in loadings and path coefficients as deficiencies of CB-SEM because the loadings are in general lower than for PLS-SEM and the coefficients are somewhat higher. Basically, the authors make the same mistake as many CB-SEM scholars when they assume that the common factor model is the benchmark against which PLS-SEM should be compared – a situation referred to as PLS bias or consistency at large, which in fact is not necessarily a bias”. Hospitality researchers should follow this recent debate when choosing an appropriate SEM method and should preempt potential criticism when using PLS-SEM or CB-SEM by highlighting the divergent positions.

When choosing PLS-SEM, our review of studies published over the 15-year period between 2001 and 2015 in the six leading hospitality management journals indicates an increasing dissemination of the method in the field. Our review of these studies and their comparison with corresponding research in other fields (Table VI) offer a range of important insights. Hospitality researchers seem to be aware of sample size issues in PLS-SEM, which have attracted considerable attention in recent years (Goodhue et al., 2012; Marcoulides et al., 2012; Rönkkö and Evermann, 2013). Although numerous studies in other disciplines draw on data with fewer than 100 observations, this is not the case in hospitality research, where the minimum (and average) sample sizes are considerably higher. In addition, the reporting practices regarding the assessment of reflective measurement models are clearly above standard, but can nevertheless be improved. This is particularly true regarding discriminant validity assessment, which draws on metrics that recent research has debunked as ineffective in a PLS-SEM context (Henseler et al., 2015). Similarly, structural model assessment practices compare well with those in other fields, but should consider more recent metrics that allow for assessing a model’s out-of-sample predictive power (Shmueli et al., 2016). However, other aspects, such as formative measurement model assessment, clearly require improvement. Hospitality researchers disregard fundamental validation steps such as convergent validity and multicollinearity assessment.

To foster broad improvements in the proper use of the method, Table VII presents guidelines for PLS-SEM use in hospitality management and other fields. These guidelines reflect the issues covered in our review that are relevant for the correct application of PLS-SEM and also include the advanced analysis techniques that hospitality researchers have used (i.e. moderation analysis, mediation analysis, multigroup analysis and higher-order constructs). We expect that these guidelines will improve the rigor of PLS-SEM research and publication practices in the hospitality management discipline. Moreover, we assume that these guidelines will provide hospitality researchers with the direction they need to exploit the advantages of the PLS-SEM method more extensively for the wide area of potential research applications in their field.

Apart from providing insights into reporting practices, our review has also shown that hospitality researchers seem unaware of recent advances in the field. These complementary analysis techniques clearly extend the scope of the analyses and help researchers gain more insights from the model and the data. Extensions include, but are not limited to, the weighted PLS algorithm (Becker and Ismail, 2016), consistent PLS (Bentler and Huang, 2014; Dijkstra and Henseler, 2015b), methods for uncovering unobserved heterogeneity (Becker et al., 2013; Ringle et al., 2014) and the importance-performance map analysis (Ringle and Sarstedt, 2016); see Hair et al. (2018) for an overview.

Although this review provides guidance for future PLS-SEM use in hospitality management research, our research is not without limitations. Our review of PLS-SEM use in top-tier hospitality management journals comprised 29 studies, which is a relatively small number compared to other disciplines (e.g. 204 in marketing; Hair et al., 2012b). Although our research focused on the top-tier hospitality management journals, future research may widen the scope by considering hospitality management-related studies in other disciplines, such as tourism research or general management. We also invite researchers to critically revisit the guidelines based on our review. Although these guidelines reflect the state of the art in PLS-SEM use, other recommendations might emerge when applying consistent PLS (Henseler et al., 2016a). Finally, given the high number of PLS-SEM review studies in various fields, future research could meta-analyze these studies to gain a full understanding of how social science researchers have used the method.

Figures

Number of PLS-SEM publications in hospitality journals over time

Figure 1.

Number of PLS-SEM publications in hospitality journals over time

Publications using PLS-SEM in hospitality journals

Journals (number of publications) Studies
International Journal of Hospitality Management (14) Cohen and Olsen (2013), Deng et al. (2013), Gallarza et al. (2015), Kang et al. (2015), King et al. (2013), Ku et al. (2011), Loureiro (2014), Loureiro et al. (2013), Loureiro and Kastenholz (2011), Prud’homme and Raymond (2013), Qiu et al. (2015), Ruizalba et al. (2014), Úbeda-García et al. (2014), Wendy Gao and Lai (2015)
International Journal of Contemporary Hospitality Management (6) Castellanos-Verdugo et al. (2009), Kim et al. (2013b), King (2010), Pavlatos (2015), Šeric et al. (2015), So and King (2010)
Cornell Hospitality Quarterly (4) Aslanzadeh and Keating (2014), Beldona et al. (2015), Frías-Jamilena et al. (2013), Lai (2015)
Journal of Hospitality and Tourism Research (4) Kang et al. (2014), Kim et al. (2013a), Sui and Baloglu (2003), Wu et al. (2014)
Journal of Hospitality, Leisure, Sport & Tourism Education (1) Eurico et al. (2015)
Scandinavian Journal of Hospitality and Tourism (0) No studies conducted

Reasons for using PLS-SEM in hospitality research

Reasons for using PLS-SEM Frequency (n = 29) (%)
Sample size 9 31.03
Non-normal data 9 31.03
Theory development 7 24.14
Formatively measured constructs and/or single-item constructs 5 17.24
Model complexity 4 13.79
Mediator analysis 9 31.03
Moderator analysis 3 10.34
Prediction 2 6.90
Not mentioned 4 13.79

Descriptive statistics of sample and model characteristics

Criterion Frequency (n = 29) (%)
Sample characteristics
Sample size
Mean 332
Median 382
Range 106-1,500
With fewer than 100 observations 0 0
With fewer than 150 observations 7 24.14
Model characteristics
Number of latent variables
Mean 7.03
Median 7.00
Range 3-20
Number of indicators
Mean 24.69
Median 22.00
Range 12-78
Mode of measurement
Only reflective 18 62.07
Only formative 0 0
Reflective and formative 11 37.93
Number of structural paths
Mean 8.00
Median 6.00
Range 3-24
Number of models with single-item constructs 6 20.69

Measurement model evaluation

Criterion Frequency (n = 29) (%)
Reflective measurement models
Indicator reliability
Indicator loadings 27 93.10
Construct reliability
Composite reliability 23 79.31
Cronbach’s alpha 14 48.28
Convergent validity
Average variance extracted 23 79.31
Discriminant Validity
Fornell–Larcker criterion 23 79.31
Cross loadings 5 17.24
HTMT criterion 0 0
Formative measurement models
Indicator’s contribution to the construct
Indicator weights 2 18.18
p or t values 2 18.18
Redundancy analysis 0 0
VIF 3 27.27
Tolerance 1 9.09

Structural model evaluation in hospitality research

Criterion Frequency (n = 29) (%)
Explained variance
Coefficient of determination R2 24 82.76
f2 effect size 5 17.24
Predictive relevance Q2 7 24.14
Path coefficient estimates
Beta 28 96.55
p or t values 16 55.17
Bootstrapping settings 17 58.62

Comparison of key criteria across disciplines

Criteria Management information systems (Ringle et al., 2012) Marketing (Hair et al., 2012b) Operations management (Peng and Lai, 2012) Strategic management (Hair et al., 2012a) Supply chain management (Kaufmann and Gaeckler, 2015) Tourism (do Valle and Assaker, 2016) Hospitality (this study)
Number of articles 65 204 42 37 75 44 29
Time-period 1992-2011 1981-2010 2000-2011 1981-2010 2002-2013 2000-2014 2001-2015
Reasons for using PLS (%) 70.77 n.r. 71.43 86.50 71 95.50 86.21
Number of latent variables
Mean 8.12 7.94 n.r. 7.5 7.05 6.02 7.03
Median 7 7 n.r. 6 6 6 7
Range 3; 36 2; 29 n.r. 2; 31 3; 21 1; 17 3; 20
Number of structural paths
Mean 11.38 10.56 n.r. 10.4 8.77 7.55 8
Median 8 8 n.r. 9 8 7 6
Range 2; 64 1; 38 n.r. 2; 39 3; 25 0; 16 3; 24
Mode of measurement model
Only reflective (%) 42.20 42.12 n.r. 10.70 65.33 61.40 62.07
Only formative (%) 1.83 6.43 n.r. 10.70 0.00 2.30 0
Reflective and formative (%) 37.93 39.55 n.r. 50.00 34.67 36.40 34.48
Studies with single-item constructs (%) 47.69 46.30 n.r. 67.90 29.33 13.60 20.68
Higher-order constructs (%) 23.08 n.r. n.r. n.r. 34.67 11.40 31.03
Sample size
Mean 238.12 211.29 246 154.9 274.42 487 332
Median 198 159 126 83 168 321 382
Range 17; 1449 18; n.r. 35; 3,926 n.r. 35; 2,465 n.r. 106; 1,500
Fewer than 100 observations (%) 22.94 24.44 33.33 51.79 17.33 n.r. 0
Software used reported (%) 58.46 66.18 52.38 54.10 71.00 81.82 72.41
Resampling method mentioned (%) 93.85 66.18 52.38 54.10 71.00 34.09 41.37
Formative measurement model
Reflective criteria used (%) 14.29 23.08 26.32 25.00 23.08 0.00 0.00
Indicator weight (%) 68.57 23.08 73.68 38.24 73.08 100.00 18.18
Significance of weight (%) 57.14 17.48 4.41 19.230 82.35 18.18
Variance inflation factor (%) 25.71 11.89 21.05 1.50 38.46 64.71 27.27
Reflective measurement model
Indicator loadings (%) 88.61 61.81 n.r. 77.94 84.00 79.60 93.10
Composite reliability (%) 56.96 55.91 n.r. 45.59 78.67 97.70 79.31
Cronbach’s alpha (%) 10.13 40.94 n.r. 30.88 41.33 n.r. 48.28
Average variance extracted (%) 88.61 57.48 n.r. 42.65 81.33 95.50 79.31
Fornell–Larcker criterion (%) 78.48 55.91 n.r. 19.12 80.00 93.18 79.31
Cross loadings (%) 50.63 16.93 n.r. 19.12 74.67 20.45 17.24
Structural model
Coefficients of determination R2 (%) 96.33 88.42 85.71 80.40 94.67 93.20 82.76
f2 effect size (%) 11.93 5.14 14.29 10.70 10.67 11.40 17.24
Predictive relevance Q2 (%) 0.00 16.40 9.52 2.70 13.33 31.80 24.14
q2 effect size (%) 0.00 0.00 n.r. 0.00 0.00 0.00 n.r.
Path coefficient values (%) 98.17 95.82 100.00 95.50 97.33 95.50 96.55
Path coefficient significance (%) 98.17 92.28 100.00 95.50 97.33 95.50 55.17
Note:

n.r. = not reported

Guidelines for applying PLS-SEM in hospitality research

Criterion Recommendations/rules of thumb References
Data characteristics
Sample size Use power analyses, the inverse square root or gamma-exponential method Aguirre-Urreta and Rönkkö (2015), Kock and Hadaya (2018)
Data distribution Robust when applied to highly skewed data; assess degree of non-normality using standard tests; report skewness and kurtosis Hair et al. (2017d), Reinartz et al. (2009)
Measurement scales used Generally avoid dichotomous variables in endogenous constructs; carefully use and interpret categorical and dichotomous variables in exogenous constructs Bodoff and Ho (2016)
Missing data Remove an observation or variable when the number of missing values exceeds 15%; report the number or percentage of missing data and the missing data treatment option used. Preferably use multiple imputation Hair et al. (2017b), Mooi et al. (2018)
Model characteristics
Description of the structural model Provide graphical representation illustrating all of the structural model relations Hair et al. (2012b)
Description of the measurement models Include a complete list of indicators in the appendix Hair et al. (2012b)
Measurement mode of latent variables Substantiate the measurement mode of the latent variables Sarstedt et al. (2016b)
PLS-SEM algorithm settings and software used
Algorithm type Select composite-based (i.e. traditional PLS) or factor-based algorithms (PLSc) based on the types of constructs used in the model Dijkstra and Henseler (2015b), Sarstedt et al. (2016b)
Weighting scheme Use the path-weighting scheme Henseler (2010)
Maximum number of iterations Select a high value (e.g. 300) Hair et al. (2012b)
Software used Report software, including version to indicate the default settings
Parameter settings of procedures used to evaluate results
Number of bootstrap samples Select a high value (e.g. 10,000); must be greater than the number of valid observations Streukens and Leroi-Werelds (2016)
Blindfolding Use cross-validated redundancy Chin (2010)
Omission distance d The number of valid observations divided by the omission distance d must not be an integer; choose 5 ≤ d ≤ 10 Chin (2010), Hair et al. (2011)
Measurement model evaluation: reflective
Indicator reliability Standardized indicator loadings ≥0.70 (0.80) in the early (more mature) stages of research Hair et al. (2017b), Henseler et al. (2009)
Internal consistency reliability Consider Cronbach’s alpha as the lower bound and the composite reliability as the upper bound of the reliability, whereas ρA approximates the “true” reliability; establish an internal consistency reliability ≥0.70 Hair et al. (2017b), Tenenhaus et al. (2005)
Convergent validity Average variance extracted (AVE) ≥ 0.50 Chin (2010), Hair et al. (2017b)
Discriminant validity Use the HTMT criterion; run bootstrapping to test whether the HTMT differs significantly from 1 Henseler et al. (2015)
Measurement model evaluation: formative
Redundancy analysis Check the relationship between each formatively measured construct and the same construct measured by an overall single item, or by reflective items. The correlation should be >0.70 Hair et al. (2017b), Henseler et al. (2016a)
Significance of indicator weights Use the 95% bias-corrected and accelerated (BCa) bootstrap confidence intervals to assess the significance of the indicator weights Aguirre-Urreta and Rönkkö (2018), Streukens and Leroi-Werelds (2016)
Collinearity VIF <5 / tolerance >0.20; condition index <30 Hair et al. (2017b), Hair et al. (2012b)
Structural model evaluation
R2 An acceptable level depends on the research context; should usually be >0.25 for key target constructs Hair et al. (2017b)
Effect size f2 Values of 0.02, 0.15 and 0.35 indicate weak, moderate and strong effects Chin (2010), Cohen (1988)
Path coefficient estimates Use the 95% bias-corrected and accelerated (BCa) bootstrap confidence intervals to assess the significance of path coefficients. Standardized paths should be around 0.20 or above to be considered meaningful Aguirre-Urreta and Rönkkö (2018), Chin (1998a), Streukens and Leroi-Werelds (2016)
Predictive relevance Q2 and q2 Use blindfolding; Q2 >0 is indicative of predictive relevance; 0.02, 0.15, 0.35 for a weak, moderate and strong degree of predictive relevance Chin (2010), Hair et al. (2017b)
Assess the model’s out-of-sample prediction Use the PLSpredict procedure Shmueli et al. (2016)
Higher-order constructs
Establish the higher-order construct Use the repeated indicators, or two-stage approach; in case of reflective-formative or formative-formative higher-order constructs where the higher-order component is predicted by other constructs in the model, use the total effects analysis of the collect-type HCM approach Becker et al. (2012), Hair et al. (2018), Wright et al. (2012)
Assessment of higher-order construct Remember to treat the relationship between the lower-order components and the higher-order component as measurement model relationships (i.e. interpret the path coefficients as loadings, or weights, depending on the type of higher-order construct) Hair et al. (2018)
Moderating effects
Interaction effect Create an interaction term when the moderator is continuous or binary, and the study aims to assess the moderating effect on a specific structural model relationship Dawson (2014)
Use the orthogonalizing approach, or two-stage approach, to create the interaction term when the independent construct and the moderator construct are measured reflectively Fassott et al. (2016), Hair et al. (2017b)
Use the two-stage approach to create the interaction term when the independent construct and/or the moderator construct are measured formatively Fassott et al. (2016), Hair et al. (2017b)
Multigroup analysis (MGA) Use MGA for a categorical moderator with two or more groups when the goal is to assess its influence on all model relationships Dawson (2014), Hair et al. (2017b)
Use the permutation test Sarstedt et al. (2011)
Prior to performing an MGA, assess the measurement invariance of the composite models by using the MICOM approach Henseler et al. (2016b)
Mediating effects
Assess the significance of the indirect effect, followed by the direct effect Use bootstrapping to assess the significance of the indirect and direct effects. In case of partial mediation, assess whether the mediation is complementary or competitive Hair et al. (2017b), Nitzl et al. (2016), Zhao et al. (2010)

Note

1.

The test statistic’s degrees of freedom correspond to those from standard OLS regression; that is, n – k – 1, where n is the number of observations and k is the number of independent variables (Wooldridge, 2009).

References

Aguirre-Urreta, M.I. and Marakas, G.M. (2012), “Revisiting bias due to construct misspecification: different results from considering coefficients in standardized form”, MIS Quarterly, Vol. 36, pp. 123-138.

Aguirre-Urreta, M.I. and Rönkkö, M. (2015), “Sample size determination and statistical power analysis in PLS using R: an annotated tutorial”, Communications of the Association for Information Systems, Vol. 36, pp. 33-51.

Aguirre-Urreta, M.I. and Rönkkö, M. (2018), “Statistical inference with PLSc using bootstrap confidence intervals”, MIS Quarterly, forthcoming, available at: https://misq.org/forthcoming/?SID=kjm9t850mj6ah2v00hgutnc6l4

Aslanzadeh, M. and Keating, B.W. (2014), “Inter-channel effects in multichannel travel services”, Cornell Hospitality Quarterly, Vol. 55 No. 3, pp. 265-276.

Baron, R.M. and Kenny, D.A. (1986), “The moderator-mediator variable distinction in social psychological research: conceptual, strategic and statistical considerations”, Journal of Personality and Social Psychology, Vol. 51 No. 6, pp. 1173-1182.

Becker, J.-M. and Ismail, I.R. (2016), “Accounting for sampling weights in PLS path modeling: simulations and empirical examples”, European Management Journal, Vol. 34 No. 6, pp. 606-617.

Becker, J.-M., Klein, K. and Wetzels, M. (2012), “Hierarchical latent variable models in PLS-SEM: guidelines for using reflective-formative type models”, Long Range Planning, Vol. 45 Nos 5/6, pp. 359-394.

Becker, J.-M., Rai, A., Ringle, C.M. and Völckner, F. (2013), “Discovering unobserved heterogeneity in structural equation models to avert validity threats”, MIS Quarterly, Vol. 37 No. 3, pp. 665-694.

Beldona, S., Miller, B., Francis, T. and Kher, H.V. (2015), “Commoditization in the U.S. Lodging industry”, Cornell Hospitality Quarterly, Vol. 56 No. 3, pp. 298-308.

Benitez, J., Henseler, J. and Roldán, J.L. (2016), “How to address endogeneity in partial least squares path modeling”, 22nd Americas Conference on Information Systems (ACIS), San Diego, CA.

Bentler, P.M. and Huang, W. (2014), “On components, latent variables, PLS and simple methods: reactions to Rigdon’s Rethinking of PLS”, Long Range Planning, Vol. 47 No. 3, pp. 138-145.

Bodoff, D. and Ho, S.Y. (2016), “Partial least squares structural equation modeling approach for analyzing a model with a binary indicator as an endogenous variable”, Communications of the Association for Information Systems, Vol. 38, available at: http://aisel.aisnet.org/cais/vol38/iss31/23

Bollen, K.A. and Diamantopoulos, A. (2017), “In defense of causal-formative indicators: a minority report”, Psychological Methods, Vol. 22 No. 3, pp. 581-596.

Cassel, C., Hackl, P. and Westlund, A.H. (1999), “Robustness of partial least-squares method for estimating latent variable quality structures”, Journal of Applied Statistics, Vol. 26 No. 4, pp. 435-446.

Castellanos-Verdugo, M., Oviedo-García, M.Á., Roldán, J.L. and Veerapermal, N. (2009), “The employee‐customer relationship quality: antecedents and consequences in the hotel industry”, International Journal of Contemporary Hospitality Management, Vol. 21 No. 3, pp. 251-274.

Cenfetelli, R.T. and Bassellier, G. (2009), “Interpretation of formative measurement in information systems research”, MIS Quarterly, Vol. 33, pp. 689-708.

Chin, W.W. (1998a), “Commentary: issues and opinion on structural equation modeling”, MIS Quarterly, Vol. 22, pp. 12-16.

Chin, W.W. (1998b), “The partial least squares approach to structural equation modeling”, in Marcoulides, G.A. (Ed.), Modern Methods for Business Research, Erlbaum, Mahwah, pp. 295-358.

Chin, W.W. (2010), “How to write up and report PLS analyses”, in Esposito Vinzi, V., Chin, W.W., Henseler, J. and Wang, H. (Eds), Handbook of Partial Least Squares: Concepts, Methods and Applications (Springer Handbooks of Computational Statistics Series, Vol. II), Springer, Heidelberg, Dordrecht, London, New York, pp. 655-690.

Chin, W.W., Thatcher, J.B., Wright, R.T. and Steel, D. (2013), “Controlling for common method variance in PLS analysis: the measured latent marker variable approach”, in Abdi, H., Chin, W.W., Esposito Vinzi, V., Russolillo, G. and Trinchera, L. (Eds), New Perspectives in Partial Least Squares and Related Methods, Springer, New York, NY, pp. 231-239.

Cochran, W.G. (1977), Sampling Techniques, 3rd ed., Wiley, New York, NY.

Cohen, J. (1988), Statistical Power Analysis for the Behavioral Sciences, 2nd ed., Lawrence Erlbaum Associates, Hillsdale, NJ.

Cohen, J.F. and Olsen, K. (2013), “The impacts of complementary information technology resources on the service-profit chain and competitive performance of South African hospitality firms”, International Journal of Hospitality Management, Vol. 34, pp. 245-254.

Dawson, J.F. (2014), “Moderation in management research: what, why, when, and how”, Journal of Business and Psychology, Vol. 29 No. 1, pp. 1-19.

Deng, W.J., Yeh, M.L. and Sung, M.L. (2013), “A customer satisfaction index model for international tourist hotels: integrating consumption emotions into the American customer satisfaction index”, International Journal of Hospitality Management, Vol. 35, pp. 133-140.

Diamantopoulos, A. (2008), “Formative indicators: introduction to the special issue”, Journal of Business Research, Vol. 61 No. 12, pp. 1201-1202.

Diamantopoulos, A., Sarstedt, M., Fuchs, C., Wilczynski, P. and Kaiser, S. (2012), “Guidelines for choosing between multi-item and single-item scales for construct measurement: a predictive validity perspective”, Journal of the Academy of Marketing Science, Vol. 40 No. 3, pp. 434-449.

Dijkstra, T.K. (2014), “PLS' Janus face – response to professor Rigdon's ‘rethinking partial least squares modeling: in praise of simple methods”, Long Range Planning, Vol. 47 No. 3, pp. 146-153.

Dijkstra, T.K. and Henseler, J. (2015a), “Consistent and asymptotically normal PLS estimators for linear structural equations”, Computational Statistics & Data Analysis, Vol. 81, pp. 10-23.

Dijkstra, T.K. and Henseler, J. (2015b), “Consistent partial least squares path modeling”, MIS Quarterly, Vol. 39, pp. 297-316.

do Valle, P.O. and Assaker, G. (2016), “Using partial least squares structural equation modeling in tourism research: a review of past research and recommendations for future applications”, Journal of Travel Research, Vol. 55 No. 6, pp. 695-708.

Eurico, S.T., da Silva, J.A.M. and do Valle, P.O. (2015), “A model of graduates satisfaction and loyalty in tourism higher education: the role of employability”, Journal of Hospitality, Leisure, Sport & Tourism Education, Vol. 16, pp. 30-42.

Farrell, A.M. (2010), “Insufficient discriminant validity: a comment on Bove, Pervan, Beatty, and Shiu (2009)”, Journal of Business Research, Vol. 63 No. 3, pp. 324-327.

Fassott, G., Henseler, J. and Coelho, P.S. (2016), “Testing moderating effects in PLS path models with composite variables”, Industrial Management & Data Systems, Vol. 116, pp. 1887-1900.

Fornell, C.G. and Larcker, D.F. (1981), “Evaluating structural equation models with unobservable variables and measurement error”, Journal of Marketing Research, Vol. 18 No. 1, pp. 39-50.

Frías-Jamilena, D.M., Del Barrio-García, S. and López-Moreno, L. (2013), “Determinants of satisfaction with holidays and hospitality in rural tourism in Spain”, Cornell Hospitality Quarterly, Vol. 54 No. 3, pp. 294-307.

Fuller, C.M., Simmering, M.J., Atinc, G., Atinc, Y. and Babin, B.J. (2016), “Common methods variance detection in business research”, Journal of Business Research, Vol. 69 No. 8, pp. 3192-3198.

Gallarza, M.G., Arteaga, F., Del Chiappa, G. and Gil-Saura, I. (2015), “Value dimensions in consumers’ experience: combining the intra- and inter-variable approaches in the hospitality sector”, International Journal of Hospitality Management, Vol. 47, pp. 140-150.

Garson, G.D. (2016), Partial Least Squares Regression and Structural Equation Models, Statistical Associates, Asheboro.

Goodhue, D.L., Lewis, W. and Thompson, R. (2012), “Does PLS have advantages for small sample size or non-normal data?”, MIS Quarterly, Vol. 36, pp. 981-1001.

Gould, J., Moore, D., Karlin, N.J., Gaede, D.B., Walker, J. and Dotterweich, A.R. (2011), “Measuring serious leisure in chess: model confirmation and method bias”, Leisure Sciences, Vol. 33 No. 4, pp. 332-340.

Hair, J.F., Hollingsworth, C.L., Randolph, A.B. and Chong, A.Y.L. (2017a), “An updated and expanded assessment of PLS-SEM in information systems research”, Industrial Management & Data Systems, Vol. 117, pp. 442-458.

Hair, J.F., Hult, G.T.M., Ringle, C.M. and Sarstedt, M. (2017b), A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 2nd ed., Sage, Thousand Oaks, CA.

Hair, J.F., Ringle, C.M. and Sarstedt, M. (2011), “PLS-SEM: indeed a silver bullet”, Journal of Marketing Theory and Practice, Vol. 19 No. 2, pp. 139-151.

Hair, J.F., Ringle, C.M. and Sarstedt, M. (2013), “Partial least squares structural equation modeling: rigorous applications, better results and higher acceptance”, Long Range Planning, Vol. 46 Nos 1/2, pp. 1-12.

Hair, J.F., Sarstedt, M., Pieper, T.M. and Ringle, C.M. (2012a), “The use of partial least squares structural equation modeling in strategic management research: a review of past practices and recommendations for future applications”, Long Range Planning, Vol. 45, pp. 320-340.

Hair, J.F., Sarstedt, M., Ringle, C.M. and Gudergan, S.P. (2018), Advanced Issues in Partial Least Squares Structural Equation Modeling (PLS-SEM), Sage, Thousand Oaks, CA.

Hair, J.F., Sarstedt, M., Ringle, C.M. and Mena, J.A. (2012b), “An assessment of the use of partial least squares structural equation modeling in marketing research”, Journal of the Academy of Marketing Science, Vol. 40, pp. 414-433.

Hair, J.F., Hult, G.T.M., Ringle, C.M., Sarstedt, M. and Thiele, K.O. (2017d), “Mirror, mirror on the wall: a comparative evaluation of composite-based structural equation modeling methods”, Journal of the Academy of Marketing Science.

Hair, J.F., Hult, G.T.M., Ringle, C.M., Sarstedt, M., Richter, N.F. and Hauff, S. (2017c), Partial Least Squares Strukturgleichungsmodellierung (PLS-SEM): Eine Anwendungsorientierte Einführung, Vahlen, München.

Henseler, J. (2010), “On the convergence of the partial least squares path modeling algorithm”, Computational Statistics, Vol. 25 No. 1, pp. 107-120.

Henseler, J. and Chin, W.W. (2010), “A comparison of approaches for the analysis of interaction effects between latent variables using partial least squares path modeling”, Structural Equation Modeling, Vol. 17 No. 1, pp. 82-109.

Henseler, J., Dijkstra, T.K., Sarstedt, M., Ringle, C.M., Diamantopoulos, A., Straub, D.W., Ketchen, D.J., Hair, J.F., Hult, G.T.M. and Calantone, R.J. (2014), “Common beliefs and reality about partial least squares: comments on Rönkkö & Evermann (2013)”, Organizational Research Methods, Vol. 17, pp. 182-209.

Henseler, J., Hubona, G.S. and Ray, P.A. (2016a), “Using PLS path modeling in new technology research: updated guidelines”, Industrial Management & Data Systems, Vol. 116, pp. 1-19.

Henseler, J., Ringle, C.M. and Sarstedt, M. (2015), “A new criterion for assessing discriminant validity in variance-based structural equation modeling”, Journal of the Academy of Marketing Science, Vol. 43 No. 1, pp. 115-135.

Henseler, J., Ringle, C.M. and Sarstedt, M. (2016b), “Testing measurement invariance of composites using partial least squares”, International Marketing Review, Vol. 33, pp. 405-431.

Henseler, J., Ringle, C.M. and Sinkovics, R.R. (2009), “The use of partial least squares path modeling in international marketing”, in Sinkovics, R.R. and Ghauri, P.N. (Eds), Advances in International Marketing, Emerald, Bingley, pp. 277-320.

Hwang, H., Malhotra, N.K., Kim, Y., Tomiuk, M.A. and Hong, S. (2010), “A comparative study on parameter recovery of three approaches to structural equation modeling”, Journal of Marketing Research, Vol. 47 No. 4, pp. 699-712.

Hwang, H., Takane, Y. and Malhotra, N.K. (2007), “Multilevel generalized structured component analysis”, Behaviormetrika, Vol. 34 No. 2, pp. 95-109.

Jarvis, C.B., MacKenzie, S.B. and Podsakoff, P.M. (2003), “A critical review of construct indicators and measurement model misspecification in marketing and consumer research”, Journal of Consumer Research, Vol. 30 No. 2, pp. 199-218.

Jarvis, C.B., MacKenzie, S.B. and Podsakoff, P.M. (2012), “The negative consequences of measurement model misspecification: a response to Aguirre-Urreta and Marakas”, MIS Quarterly, Vol. 36, pp. 139-146.

Jöreskog, K.G. and Wold, H.O.A. (1982), “The ML and PLS techniques for modeling with latent variables: historical and comparative aspects”, in Wold, H.O.A. and Jöreskog, K.G. (Eds), Systems under Indirect Observation, Part I, North-Holland, Amsterdam, pp. 263-270.

Kang, I., Shin, M.M. and Lee, J. (2014), “Service evaluation model for medical tour service”, Journal of Hospitality & Tourism Research, Vol. 38 No. 4, pp. 506-527.

Kang, J.-S., Chiang, C.-F., Huangthanapan, K. and Downing, S. (2015), “Corporate social responsibility and sustainability balanced scorecard: the case study of family-owned hotels”, International Journal of Hospitality Management, Vol. 48, pp. 124-134.

Kaufmann, L. and Gaeckler, J. (2015), “A structured review of partial least squares in supply chain management research”, Journal of Purchasing and Supply Management, Vol. 21 No. 4, pp. 259-272.

Ketchen, D.J. and Shook, C.L. (1996), “The application of cluster analysis in strategic management research: an analysis and critique”, Strategic Management Journal, Vol. 17 No. 6, pp. 441-458.

Kim, M.-J., Lee, C.-K. and Chung, N. (2013a), “Investigating the role of trust and gender in online tourism shopping in South Korea”, Journal of Hospitality & Tourism Research, Vol. 37, pp. 377-401.

Kim, M.J., Lee, C.K., Kim, W.G. and Kim, J.M. (2013b), “Relationships between lifestyle of health and sustainability and healthy food choices for seniors”, International Journal of Contemporary Hospitality Management, Vol. 25, pp. 558-576.

King, C. (2010), “‘One size doesn’t fit all’: tourism and hospitality employees’ response to internal brand management”, International Journal of Contemporary Hospitality Management, Vol. 22 No. 4, pp. 517-534.

King, C., So, K.K.F. and Grace, D. (2013), “The influence of service brand orientation on hotel employees’ attitude and behaviors in China”, International Journal of Hospitality Management, Vol. 34, pp. 172-180.

Kock, N. and Hadaya, P. (2018), “Minimum sample size estimation in PLS-SEM: the inverse square root and gamma-exponential methods”, Information Systems Journal, Vol. 28 No. 1, pp. 227-261.

Ku, E.C.S., Wu, W.-C. and Lin, A-r. (2011), “Strategic alignment leverage between hotels and companies: the buyer–supplier relationship perspective”, International Journal of Hospitality Management, Vol. 30 No. 3, pp. 735-745.

Lai, I.K.W. (2015), “The roles of value, satisfaction, and commitment in the effect of service quality on customer loyalty in Hong Kong–style tea restaurants”, Cornell Hospitality Quarterly, Vol. 56 No. 1, pp. 118-138.

Lastovicka, J.L. and Thamodaran, K. (1991), “Common factor score estimates in multiple regression problems”, Journal of Marketing Research, Vol. 28 No. 1, pp. 105-112.

Lohmöller, J.-B. (1989), Latent Variable Path Modeling with Partial Least Squares, Physica, Heidelberg.

Loureiro, S.M.C. (2014), “The role of the rural tourism experience economy in place attachment and behavioral intentions”, International Journal of Hospitality Management, Vol. 40, pp. 1-9.

Loureiro, S.M.C. and Kastenholz, E. (2011), “Corporate reputation, satisfaction, delight, and loyalty towards rural lodging units in Portugal”, International Journal of Hospitality Management, Vol. 30 No. 3, pp. 575-583.

Loureiro, S.M.C., Almeida, M. and Rita, P. (2013), “The effect of atmospheric cues and involvement on pleasure and relaxation: the spa hotel context”, International Journal of Hospitality Management, Vol. 35, pp. 35-43.

McIntosh, C.N., Edwards, J.R. and Antonakis, J. (2014), “Reflections on partial least squares path modeling”, Organizational Research Methods, Vol. 17 No. 2, pp. 210-251.

Marcoulides, G.A., Chin, W.W. and Saunders, C. (2012), “When imprecise statistical statements become problematic: a response to Goodhue, Lewis, and Thompson”, MIS Quarterly, Vol. 36, pp. 717-728.

Matthews, L. (2018), “Applying multi-group analysis in PLS-SEM: a step-by-step process”, in Latan, H. and Noonan, R. (Eds), Partial Least Squares Structural Equation Modeling: Basic Concepts, Methodological Issues and Applications, Springer, Heidelberg.

Min, H., Park, J. and Kim, H.J. (2016), “Common method bias in hospitality research: a critical review of literature and an empirical study”, International Journal of Hospitality Management, Vol. 56, pp. 126-135.

Mooi, E.A., Sarstedt, M. and Mooi-Reci, I. (2018), Market Research: The Process, Data, and Methods Using Stata, Springer, Heidelberg.

Nitzl, C. (2016), “The use of partial least squares structural equation modelling (PLS-SEM) in management accounting research: directions for future theory development”, Journal of Accounting Literature, Vol. 37, pp. 19-35.

Nitzl, C., Roldán, J.L. and Cepeda, C.G. (2016), “Mediation analysis in partial least squares path modeling: helping researchers discuss more sophisticated models”, Industrial Management & Data Systems, Vol. 116 No. 9, pp. 1849-1864.

Pavlatos, O. (2015), “An empirical investigation of strategic management accounting in hotels”, International Journal of Contemporary Hospitality Management, Vol. 27 No. 5, pp. 756-767.

Peng, D.X. and Lai, F. (2012), “Using partial least squares in operations management research: a practical guideline and summary of past research”, Journal of Operations Management, Vol. 30 No. 6, pp. 467-480.

Prud’homme, B. and Raymond, L. (2013), “Sustainable development practices in the hospitality industry: an empirical study of their impact on customer satisfaction and intentions”, International Journal of Hospitality Management, Vol. 34, pp. 116-126.

Qiu, H., Ye, B.H., Bai, B. and Wang, W.H. (2015), “Do the roles of switching barriers on customer loyalty vary for different types of hotels?”, International Journal of Hospitality Management, Vol. 46, pp. 89-98.

Reinartz, W.J., Haenlein, M. and Henseler, J. (2009), “An empirical comparison of the efficacy of covariance-based and variance-based SEM”, International Journal of Research in Marketing, Vol. 26 No. 4, pp. 332-344.

Richter, N.F., Cepeda Carrión, G., Roldán, J.L. and Ringle, C.M. (2016a), “European management research using partial least squares structural equation modeling (PLS-SEM): editorial”, European Management Journal, Vol. 34, pp. 589-597.

Richter, N.F., Sinkovics, R.R., Ringle, C.M. and Schlägel, C. (2016b), “A critical look at the use of SEM in international business research”, International Marketing Review, Vol. 33, pp. 376-404.

Rigdon, E.E. (1998), “Structural equation modeling”, in Marcoulides, G.A. (Ed.), Modern Methods for Business Research, Erlbaum, Mahwah, pp. 251-294.

Rigdon, E.E. (2012), “Rethinking partial least squares path modeling: in praise of simple methods”, Long Range Planning, Vol. 45 Nos 5/6, pp. 341-358.

Rigdon, E.E. (2016), “Choosing PLS path modeling as analytical method in European management research: a realist perspective”, European Management Journal, Vol. 34 No. 6, pp. 598-605.

Rigdon, E.E., Sarstedt, M. and Ringle, C.M. (2017), “On comparing results from CB-SEM and PLS-SEM: five perspectives and five recommendations”, Marketing ZFP – Journal of Research and Management, Vol. 39 No. 3, pp. 4-16.

Ringle, C.M. and Sarstedt, M. (2016), “Gain more insight from your PLS-SEM results: the importance-performance map analysis”, Industrial Management & Data Systems, Vol. 116 No. 9, pp. 1865-1886.

Ringle, C.M., Sarstedt, M. and Schlittgen, R. (2014), “Genetic algorithm segmentation in partial least squares structural equation modeling”, OR Spectrum, Vol. 36 No. 1, pp. 251-276.

Ringle, C.M., Sarstedt, M. and Straub, D.W. (2012), “A critical look at the use of PLS-SEM in MIS Quarterly”, MIS Quarterly, Vol. 36, pp. 3-14.

Roemer, E. (2016), “A tutorial on the use of PLS path modeling in longitudinal studies”, Industrial Management & Data Systems, Vol. 116, pp. 1901-1921.

Rönkkö, M. and Evermann, J. (2013), “A critical examination of common beliefs about partial least squares path modeling”, Organizational Research Methods, Vol. 16 No. 3, pp. 425-448.

Rönkkö, M., Antonakis, J., McIntosh, C.N. and Edwards, J.R. (2016), “Partial least squares path modeling: time for some serious second thoughts”, Journal of Operations Management, Vols 47/48, pp. 9-27.

Rönkkö, M., McIntosh, C.N. and Antonakis, J. (2015), “On the adoption of partial least squares in psychological research: caveat emptor”, Personality and Individual Differences, Vol. 87, pp. 76-84.

Ruizalba, J.L., Bermúdez-González, G., Rodríguez-Molina, M.A. and Blanca, M.J. (2014), “Internal market orientation: an empirical research in hotel sector”, International Journal of Hospitality Management, Vol. 38, pp. 11-19.

Sarstedt, M., Diamantopoulos, A. and Salzberger, T. (2016a), “Should we use single items? Better not”, Journal of Business Research, Vol. 69, pp. 3199-3203.

Sarstedt, M., Henseler, J. and Ringle, C.M. (2011), “Multi-group analysis in partial least squares (PLS) path modeling: alternative methods and empirical results”, in Sarstedt, M., Schwaiger, M. and Taylor, C.R. (Eds), Advances in International Marketing, Vol. 22, Emerald, Bingley, pp. 195-218.

Sarstedt, M., Ringle, C.M. and Gudergan, S.P. (2016c), “Guidelines for treating unobserved heterogeneity in tourism research: a comment on Marques and Reis (2015)”, Annals of Tourism Research, Vol. 57, pp. 279-284.

Sarstedt, M., Ringle, C.M. and Hair, J.F. (2017a), “Partial least squares structural equation modeling”, in Homburg, C., Klarmann, M. and Vomberg, A. (Eds), Handbook of Market Research, Springer, Heidelberg.

Sarstedt, M., Ringle, C.M. and Hair, J.F. (2017b), “Treating unobserved heterogeneity in PLS-SEM: a multi-method approach”, in Noonan, R. and Latan, H. (Eds), Partial Least Squares Structural Equation Modeling: Basic Concepts, Methodological Issues and Applications, Springer, Heidelberg.

Sarstedt, M., Ringle, C.M., Henseler, J. and Hair, J.F. (2014), “On the emancipation of PLS-SEM: a commentary on Rigdon (2012)”, Long Range Planning, Vol. 47 No. 3, pp. 154-160.

Sarstedt, M., Hair, J.F., Ringle, C.M., Thiele, K.O. and Gudergan, S.P. (2016b), “Estimation issues with PLS and CBSEM: where the bias lies!”, Journal of Business Research, Vol. 69, pp. 3998-4010.

Šerić, M., Gil, S.I. and Ozretić-Došen, Đ. (2015), “Insights on integrated marketing communications: implementation and impact in hotel companies”, International Journal of Contemporary Hospitality Management, Vol. 27, pp. 958-979.

Shmueli, G., Ray, S., Velasquez Estrada, J.M. and Chatla, S.B. (2016), “The elephant in the room: evaluating the predictive performance of PLS models”, Journal of Business Research, Vol. 69 No. 10, pp. 4552-4564.

So, K.K.F. and King, C. (2010), “‘When experience matters’: building and measuring hotel brand equity: the customers’ perspective”, International Journal of Contemporary Hospitality Management, Vol. 22 No. 5, pp. 589-608.

Steenkamp, J.-B.E.M. and Baumgartner, H. (2000), “On the use of structural equation models for marketing modeling”, International Journal of Research in Marketing, Vol. 17 Nos 2/3, pp. 195-202.

Streukens, S. and Leroi-Werelds, S. (2016), “Bootstrapping and PLS-SEM: a step-by-step guide to get more out of your bootstrap results”, European Management Journal, Vol. 34 No. 6, pp. 618-632.

Sui, J.J. and Baloglu, S. (2003), “The role of emotional commitment in relationship marketing: an empirical investigation of a loyalty model for casinos”, Journal of Hospitality & Tourism Research, Vol. 27 No. 4, pp. 470-489.

Tenenhaus, M., Esposito Vinzi, V., Chatelin, Y.-M. and Lauro, C. (2005), “PLS path modeling”, Computational Statistics & Data Analysis, Vol. 48 No. 1, pp. 159-205.

Úbeda-García, M., Claver Cortés, E., Marco-Lajara, B. and Zaragoza-Sáez, P. (2014), “Strategy, training and performance fit”, International Journal of Hospitality Management, Vol. 42, pp. 100-116.

Wendy Gao, B. and Lai, I.K.W. (2015), “The effects of transaction-specific satisfactions and integrated satisfaction on customer loyalty”, International Journal of Hospitality Management, Vol. 44, pp. 38-47.

Wooldridge, J.M. (2009), Introductory Econometrics. A Modern Approach, 4th ed., South Western, Cengage Learning, Mason, OH.

Wright, R.T., Campbell, D.E., Thatcher, J.B. and Roberts, N. (2012), “Operationalizing multidimensional constructs in structural equation modeling: recommendations for IS research”, Communications of the Association for Information Systems, Vol. 30, pp. 367-412.

Wu, H.-C., Li, M.-Y. and Li, T. (2014), “A study of experiential quality, experiential value, experiential satisfaction, theme park image, and revisit intention”, Journal of Hospitality & Tourism Research, doi: 10.1177/1096348014563396.

Zhao, X., Lynch, J.G. and Chen, Q. (2010), “Reconsidering Baron and Kenny: myths and truths about mediation analysis”, Journal of Consumer Research, Vol. 37 No. 2, pp. 197-206.

Acknowledgements

Even though this research does not explicitly refer to the use of the SmartPLS software (www.smartpls.com), Ringle acknowledges a financial interest in SmartPLS.

Corresponding author

Kisang Ryu can be contacted at: kryu11@sejong.ac.kr