Double moderated mediation models: problems and (part) remedies
Abstract
Purpose
Researchers in management regularly face modelling issues that involve double-moderated mediation models. Here, the author illustrates how to conceptualise, specify and empirically estimate mediation effects when having simultaneously to account for continuous (Likert-type) and nominal (i.e. group) moderator variables. Estimates of mediation effects can suffer serious bias because of unaccounted confounders, an issue that plagues management research, and this study aims to show how to address these valid reservations for its focus models. In aiming to inform a wider management audience, the study deliberately uses the rich context of a focus case, as this allows the author to clarify the nuances that management researchers face when applying double-moderated mediation models. Specifically, the study's focus case concerns professionals' willingness to implement a new government policy. The study also combines traditional and Bayesian statistical approaches and explains the differences in estimation and interpretation associated with the Bayesian approach. In explaining, and exemplifying the use of, the models, the author focuses on how one can substantially increase the robustness of the methods used in management research and considerably improve the quality of the generated theoretical insights. The study also clarifies important assumptions and solutions.
Design/methodology/approach
The study uses a double-moderated mediation Bayesian approach and draws its sample data from a population of 5,199 professionals, all members of either the Dutch Association of Psychologists or the Dutch Association for Psychiatry. The data collection process resulted in 1,307 questionnaires being returned, a response rate of 25 per cent. All the items were measured using a five-point Likert scale, ranging from "strongly disagree" to "strongly agree", unless stated otherwise.
Findings
Explaining, and exemplifying the use of, the models the study focuses on how one can substantially increase the robustness of the methods used in management research and can considerably improve the quality of the generated theoretical insights.
Originality/value
This is an original approach exemplified for wider use by management researchers.
Citation
Chryssochoidis, G. (2018), "Double moderated mediation models: problems and (part) remedies", Journal of Modelling in Management, Vol. 13 No. 1, pp. 50-80. https://doi.org/10.1108/JM2-06-2016-0053
Copyright © 2018, Emerald Publishing Limited
Introduction
Management researchers regularly face two important problems in their modelling endeavours.
The first problem
This relates to conceptualising, specifying and empirically estimating indirect (mediation) effects where one moderator is continuous (e.g. a psychological construct) and a second, simultaneous, moderator is nominal (e.g. gender). Traditionally, researchers follow Baron and Kenny (1986) and adopt the logic of an antecedent variable (X) influencing an outcome (Y) via an intervening mediator variable (M). A "moderated mediation" model is one where a covariate (Z) moderates the mediation effect (MacKinnon et al., 2007): the mediated effect varies with the level of the covariate (Valeri and VanderWeele, 2013, p. 142; also see Edwards and Lambert, 2007, p. 4). Graphically, mediation is depicted in "Model 4" in Hayes (2013), and moderated mediation is conceptualised in, for instance, Models 8 or 59 in Hayes (2013). A high-profile case used by Kline (2011, p. 333) in explaining the problem is Lance's (1988) study, which focused on the relationship between recall accuracy of a lecture script (Y), memory demand (X), complexity of social perception (Z) and an interaction effect (between X and Z). The model also included a mediator, namely, "recollection of behaviours mentioned in the script" (M).
However, testing mediation without simultaneously controlling for both a continuous and a nominal moderator (for instance, gender, as in Lance, 1988) is neither easy nor without biases. Including both moderators enables investigating the complex pathways of co-influence. For instance, a continuous moderator may influence the mediation effect differently in Group A than in Group B. This refers to the direction of the effect, its shape and its lower/upper bounds. Here, we demonstrate how to conceptualise, specify and empirically test such double-moderated mediation models using our context case.
The second problem
This refers to the substantive, and untenable, assumptions implicitly made when identifying direct and indirect effects while modelling mediation (Baron and Kenny, 1986). The validity of commonly used analyses critically relies on safeguarding the so-called sequential ignorability assumption (Imai et al., 2010a, 2010b). Safeguarding, explained simply, has two parts (Imai et al., 2010a, 2010b, p. 310): ensuring that there is no unmeasured confounder (meaning a co-influencing, but non-measured, variable) of the M–Y relationship and that any M–Y confounder is unaffected by X (Muthén, 2011, p. 8). There is consensus that the latter cannot, under any circumstances, be ensured, and this implies that causal effects cannot be identified (VanderWeele and Vansteelandt, 2009; Imai et al., 2010a, 2010b; VanderWeele, 2010; Muthén, 2011). There are several reasons for this. Firstly, participants' attribution of scores to questions on predictors and outcomes means that counterfactual outcomes are never observed (Yamamoto, 2012, p. 239), and so these remain an unobservable quantity. Next, the selection of X and M variables is rarely random. Management researchers may simply be unable to randomise the studied variables in observational studies (Imai et al., 2011, p. 53). Theoretical frameworks in management may contain variables that do not vary randomly, and some may even stem from one another (Antonakis et al., 2014). One also cannot preclude the possibility of multiple covariates (i.e. additional predictors) confounding the estimates (Imai et al., 2011). "Confounding" has been defined primarily as failing to model relevant variables ("confounders"; VanderWeele and Shpitser, 2013), resulting in inaccurate estimates (Antonakis et al., 2010). Next, even if the X and M variables are randomised, the mediation effects cannot be identified unless an additional constraint, that there is no interaction effect between X and M, is assumed (Robins, 2003; Imai et al., 2010b, p. 56). Simply put, without testing the impact of unobserved covariates, the estimates may be distorted and the produced theory may be biased. This plagues current management research (Antonakis et al., 2010). Antonakis et al. (2010, 2014) observe that many simply fail to understand the seriousness of the matter.
Does using moderators diminish the strength of these problems? No, on the contrary; their existence increases the limitations. This is nicely explained by Valeri and VanderWeele (2013, p. 138):
While the concept of mediation […] is theoretically appealing, the methods traditionally used to study mediation empirically have important limitations concerning their applicability in models with interactions or nonlinearities (Pearl, 2001; Robins and Greenland, 1992).
In essence, if there are confounders of the X–Y, M–Y or X–M relationships, these should be controlled for and the sensitivity of the estimates must be tested (Valeri and VanderWeele, 2013, p. 142). Moreover, sensitivity is about confidence. Even when all the above issues have been addressed, and parameter estimates are adjusted, the degree of confidence in the results is still unknown. A sensitivity test identifies upper and lower bounds and quantifies confidence regarding the estimates. These reservations must be addressed to secure robust results. We demonstrate how to adjust – in a tripartite manner – the double-moderated mediation estimates for the effect of unaccounted confounders. Specifically, we adapt the Muthén (2011) procedure for estimating the tripartite effects of unaccounted confounders. We also calculate the confidence one can place on the mediation estimates. Summarising, we, therefore, aim to contribute by explaining and exemplifying:
the use of double-moderated mediation models accounting for both continuous (note that this is of Likert type in our data) and nominal moderating variables; and
how to address reservations in such models arising from sequential ignorability issues, focusing on the M–Y link.
A relevant new aspect is also demonstrating the tripartite manner by which to control for confounders in such models while calculating the confidence in the estimates.
We aim to make these developments accessible to a wide audience of management researchers, and we link to graphical representations provided by Hayes (2013) and demonstrate our approach using a context case, as explained next.
The context case
In 2008, as part of a wider new Health Market Organization Law, the Dutch government introduced Diagnosis Related Groups (DRGs) in mental healthcare. Implementing DRGs to improve transparency and to control costs is in line with a trend seen in various countries (such as Australia, China, Germany and the USA; Kimberly et al., 2009). The previous system meant that the more sessions a mental healthcare professional (such as a psychologist or psychiatrist) had with a patient, the more recompense could be claimed, but this was judged to be inefficient (Kimberly et al., 2009). The DRG policy changed the situation and stipulated a standard rate for each disorder. For instance, for a mild depression, the mental healthcare organisation receives a standard rate, and can treat the patient, directly and indirectly, for between 250 and 800 minutes. This policy has been seen as a shift to more efficient resource use (Hood, 1991, p. 5). However, rather than simply implementing this new DRG policy, psychologists and psychiatrists started to forcefully resist it: they demonstrated against it, set up negative press websites and some even quit their job (Smullen, 2013). In one large-scale survey, about 90 per cent of such professionals wanted the DRG policy to be abandoned (Palm et al., 2008). The following quotation from a health-care professional is illustrative (cited in Tummers, 2012, p. 516):
Within the new healthcare system, economic values are leading. Too little attention is being paid to the content: professionals helping patients. The result is that professionals become more aware of the costs and revenues of their behaviour. This comes at the expense of acting according to professional standards.
Willingness to implement the policy (our dependent variable Y)
We use “willingness to implement the policy” (Will) as our dependent variable (Y) to reflect our context case of professionals’ behavioural intention towards adopting the proposed government policy. Drawing on Metselaar (1997, p. 42), we define willingness to implement a policy as a:
[…] positive behavioural intention towards the implementation of modifications in an organization’s structure, or work and administrative processes, resulting in efforts from the organization member’s side to support or enhance the change process.
In our context, willingness to implement the DRG policy amounts to professionals being willing to invest energy in implementing this policy, not intending to sabotage it and being willing to convince colleagues of the benefits of the policy. As a reflection of this intended behaviour, willingness to implement the policy can be assumed to lead to actual behaviour (Fishbein and Ajzen, 2009). Willingness to implement the policy is also a function of both institutional social norms and individual aspects, such as attitudes (Ajzen, 1991; Fishbein and Ajzen, 1975, 2009), which we explain below.
Institutional social norms (our variable X)
Institutionally based social norms such as colleagues’ opinions (COL) span a continuum from negative to positive, and such opinions can capture the prevalent institutional stance towards altering institutional logics (DiMaggio and Powell, 1983, 1991). A social norm can be defined as “the perceived social pressure to perform or not to perform a behaviour” (Ajzen, 1991, p. 188). Such a social norm is based on the beliefs of “significant others” towards the focus behaviour. In the case of professionals implementing a policy, the relevant “significant others” are their own professional colleagues. These colleagues constitute the institutional field in which the individual professionals work (Muzio et al., 2013). Thus, in our case, when colleagues are extremely positive about the new governmental policy, other individual professionals may, because of peer pressure, be more willing to engage in implementing the new policy. Hence, relevant questions will include: Do colleagues support the policy, or do they talk negatively about the change during meetings? This “social norm” is our independent variable (X) and would be graphically represented by a direct pathway (X→Y), where colleagues’ opinions (COL) affect willingness (Will) to implement the new policy.
Attitudes (our mediator variable M)
Individuals interpret institutional social norms in deciding their own behavioural intentions towards institutional logics. Individuals may not be willing to implement suggested changes (Dent and Goldberg, 1999; Ford et al., 2008; Higgs and Rowland, 2005; Piderit, 2000) because their personal attitudes towards the focus behaviour are contrary to the social norms. Individuals may have their own interpretation of aspects relevant to the proposed institutional logics on the basis of their own knowledge or beliefs. Conversely, positive personal attitudes may positively affect one's willingness to implement a proposed change. In our case, such an attitudinal element is the meaningfulness of the policy for society as perceived by the individual professionals (May et al., 2004). In other words, societal meaningfulness (SM) for the professionals is the perception that the policy contributes to socially relevant goals. That is, does the DRG policy benefit society? Does it really contribute to, for instance, greater efficiency or transparency? Attitudes are then formed within the framework of a self-expected personal stance towards professional matters. SM impacts upon their subsequent willingness, or otherwise, to implement the new policy.
What is a possible mediational mechanism and working pathway for the functioning of societal meaningfulness?
We theorise that institutional social norms are precursors to singular views, but individual attitudes filter and channel the influence of antecedent social norms through their own individual interpretations of the outcome these norms may bring (Meyers and Vorsanger, 2003; Higgs and Rowland, 2005). Such a conceptualisation can be specified in terms of a mediation effects model (Preacher et al., 2007) in which the positive behaviour of colleagues (X) results in the willingness of professionals to adopt government plans (Y), although this relationship is mediated by the degree of societal meaningfulness (our SM).
Moderation effects (our moderator Z and N variables)
We have argued that individual attitudinal processes, wholly or partially, substitute for and reconfigure the impact of logics to produce an eventual outcome. However, we cannot assume that such impact and reconfiguration take place irrespective of the context. We would expect aspects of the context, such as professional work context and individual issues related to work, to have an impact. These, it is argued, condition the relationship linking social norms, attitudes and intended behaviour. For instance, Freidson (2001) and Powell and Colyvas (2008) suggest that the environment’s impact on attitudes and actions is dependent on contexts. This introduces the notion of moderation as an influence in our mediation framework.
The first moderator
Job satisfaction (JS) is our first moderator (our variable Z), and its interaction term with COL (X) is expressed as COL×JS (XZ). Job satisfaction is seen as one of the core attitudinal outcomes in the work context and as a prime candidate to reflect an individual person's contexts and interpretation of such professional contexts (Griffin et al., 1999). More specifically, social exchange theories (Janssen and Van Yperen, 2004) and identity theory (Ashforth and Mael, 1989; Tyler and Blader, 2001; Ashforth et al., 2008) argue that satisfied employees often have stronger ties with their colleagues. As such, they are more influenced by the attitudes and behaviours of their colleagues, and this provides strong support for accepting JS as reflecting individual contexts within a profession and the personal interpretation of the role of that profession. It is, thus, expected that, particularly for satisfied employees, the behaviour of colleagues will be important for shaping their perceptions of the value of the DRG policy, in turn influencing their willingness to implement it. That is because satisfied people generally feel more attached to their environment, as evidenced in work on social exchange (Janssen and Van Yperen, 2004) and identity theory (Ashforth and Mael, 1989; Tyler and Blader, 2001; Ashforth et al., 2008). Satisfied people are less isolated and care more about what others think and do, and this, therefore, more strongly shapes their own attitudes and actions. Our theoretical formulation, therefore, indicates a moderation effect upon two paths, namely, X→M and X→Y, while, owing to a lack of clear theoretical support, excluding a moderating influence of Z on the M→Y path. In doing so, our model resembles Model 8 of Hayes (2013).
The second moderator
Profession (a nominal variable) is our second moderator (our variable N). It has been established that for people working in individualistic as opposed to collectivistic settings, the influence of social norms on attitudes and behavioural intention is lower (Triandis, 1989; Markus and Kitayama, 1991). In our illustrative case, there are two distinct professional groups that were expected to adopt the proposed government plan: the psychiatry and the psychology professions. These professions can be considered quite different, thereby providing a solid base to treat them as distinct professional fields (Neukrug, 2011). Psychiatrists usually undergo a medical education and are, thus, medical doctors, whereas psychologists are not. Psychologists have usually received a scientific education before subsequent professional training. Onyett et al. (1997) have shown that, of the two groups, psychiatrists work more individualistically and less intensively in teams. They score higher on depersonalisation, a quality which lessens the impact of others on one’s own beliefs (Deary et al., 1996; Onyett et al., 1997; Guthrie et al., 1999). On this basis, we would expect the relationship between the behaviour of colleagues and willingness to implement, mediated by societal meaningfulness, to be stronger for psychologists than for psychiatrists.
Answers to the two problems
Answering the first problem, namely, modelling double-moderated mediation
Our theoretical stance requires a mediational model that simultaneously takes account of two co-influencing conditional processes. The problem is exacerbated because one of these moderators is nominal (profession) and the other is continuous (in our case, Likert type). The solution we propose is to specify the above conceptual framing as a double-moderated mediation model. This can be summarised using two regression equations. The first regression equation predicts the outcome Y, namely, the willingness to implement the proposed government plan (Will), using the four predictors we have selected as follows:
Will = β0 + β1·SM + β2·COL + β3·JS + β4·(COL × JS) + ε1
The second regression equation predicts the mediator, societal meaningfulness (SM):
SM = γ0 + γ1·COL + γ2·JS + γ3·(COL × JS) + ε2
Here, the indirect (mediated) effect of COL on Will, conditional on the level of JS, is (γ1 + γ3·JS)·β1, and it is estimated separately within each professional group (the nominal moderator N). Thus, the direct moderation effect is then:
β2 + β4·JS
Adjustments to the demonstrated equations will be required if the researcher follows a different (for instance Model 59 of Hayes, 2013) conceptualisation.
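As an illustration only, the two regression equations can be estimated with ordinary least squares on simulated data and the conditional effects computed at a chosen moderator level; all variable roles follow the model above, but the data and coefficient values are hypothetical, and the nominal moderator (profession) is handled by fitting the model separately within each group:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit(y, *cols):
    # OLS coefficients (intercept first) via least squares
    X = np.column_stack((np.ones(len(y)),) + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

def conditional_effects(col, js, sm, will, js_value):
    """Fit the two moderated regressions for one professional group and
    return the indirect and direct effects of COL at a given JS level."""
    xz = col * js
    g = fit(sm, col, js, xz)          # SM   = g0 + g1*COL + g2*JS + g3*COL*JS
    b = fit(will, sm, col, js, xz)    # Will = b0 + b1*SM + b2*COL + b3*JS + b4*COL*JS
    indirect = (g[1] + g[3] * js_value) * b[1]   # (gamma1 + gamma3*JS) * beta1
    direct = b[2] + b[4] * js_value              # beta2 + beta4*JS
    return indirect, direct

# Simulated data for a single group (coefficient values hypothetical)
n = 10_000
col = rng.normal(size=n)
js = rng.normal(size=n)
sm = 0.5 * col + 0.1 * js + 0.2 * col * js + rng.normal(size=n)
will = 0.6 * sm + 0.3 * col + 0.1 * js + 0.1 * col * js + rng.normal(size=n)

indirect, direct = conditional_effects(col, js, sm, will, js_value=1.0)
```

Repeating the call on each group's subsample yields the group-specific conditional effects that the nominal moderator N implies.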
Answering the second problem: satisfying the sequential ignorability assumption modelling issue and calculating sensitivity
The classical mediation analysis (usually based upon Baron and Kenny, 1986; and MacKinnon et al., 2002, 2007), or Bollen (1989) in a SEM context, is seriously questioned. The direct and indirect effects identified through the traditional method may not actually be causal (Holland, 1988; Sobel, 2008). There are important issues at stake, and the existing assumptions are simply untenable and unfulfilled in practice (Muthén, 2011, p. 7). VanderWeele and Vansteelandt (2009) and Imai et al. (2010a, 2011) provide a detailed technical and formal background to the assumptions behind the causally defined direct and indirect effects. Focusing on research contexts involving experimental treatments (mostly binary), Valeri and VanderWeele (2013) summarise the assumptions in the modelling as:
there is no unmeasured confounding factor in the treatment (independent X) – outcome (Y) relationship;
no unmeasured confounding within the mediator (M) – outcome (Y) relationship;
no unmeasured treatment (independent X) – mediator (M) confounding; and
no mediator (M) – outcome (Y) confounder affected by treatment (independent X).
The last assumption is almost certainly violated even in "random" data (Holland, 1988; Sobel, 2008; Bullock et al., 2010). In brief, it is difficult to defend the position that the model we investigate here is immune to unobservable confounder effects. Antonakis et al. (2010, p. 1091) argue that such confounders may relate to group/sample selection, reverse causality, imperfect measures, common-method variance, heteroscedasticity or cluster-robust standard errors in panel data or, simply, model misspecification.
How can this gap be addressed?
Causally defined effects can only be inferred more accurately by conducting additional analyses and subjecting the specified models to further constraints (see also Emsley et al., 2010; Muthén, 2011, p. 3; Valeri and VanderWeele, 2013). Imai et al. (2010b) and Muthén (2011) propose different methods to account for the potential confounding effects of unobserved covariates in moderated mediation, albeit their focus is on the M–Y link. They provide a method to calculate the extent of the impact due to the residual covariance of non-identified covariates. They also suggest an additional sensitivity analysis to test for the lower and upper statistical boundaries of the impact from violating the basic assumptions. We implement, in a tripartite way, the Muthén (2011) procedure to measure the impact of unobserved covariates (the variable denoted "u" in Figure 1). "Tripartite" refers to estimating the effects of confounders and sensitivity for the mediation pathway γ1*β1 while controlling for two additional pathways, namely, γ2*β1 and γ3*β1 (Figure 2).
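The intuition behind such a sensitivity check can be sketched with simulated data: introduce a hypothetical M–Y confounder u at increasing strengths and compare the naive indirect-effect estimate (u omitted) with the adjusted benchmark (u controlled). This is an illustrative sketch of the underlying logic, not the Muthén (2011) procedure itself; all names and coefficients are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
x = rng.normal(size=n)

def slopes(y, *cols):
    # OLS slope coefficients (intercept dropped)
    X = np.column_stack((np.ones(len(y)),) + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

naive, adjusted = [], []
for lam in (0.0, 0.3, 0.6):              # hypothetical confounder strengths
    u = rng.normal(size=n)               # unmeasured M–Y confounder
    m = 0.5 * x + lam * u + rng.normal(size=n)
    y = 0.4 * m + 0.2 * x + lam * u + rng.normal(size=n)
    a = slopes(m, x)[0]                  # X -> M path
    b_naive = slopes(y, x, m)[1]         # M -> Y path, u omitted (biased if lam > 0)
    b_adj = slopes(y, x, m, u)[1]        # M -> Y path, u controlled (benchmark)
    naive.append(a * b_naive)
    adjusted.append(a * b_adj)
```

Plotting the naive estimates against the confounder strength traces out the kind of lower/upper bounds the sensitivity analysis reports.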
Assumptions
Like Muthén (2011), our sensitivity analysis concentrates only on the possibility of hidden confounding in the M–Y relationship and, by definition, disallows other confounding, especially confounding affecting the independent variable (X) or the X–M relationship (Antonakis et al., 2010, p. 1091). This implicit focus on the M–Y relationship goes back to the logic and research traditions of areas such as clinical trials and epidemiology, where experiments (seen as the gold standard) measure the effects of health interventions. The design of such experiments, with random assignment of participants to control and treatment groups, permitted the X and M variables to be conceptualised and treated as exogenous. Later, though, researchers suggested that corrections are also required for the effect of hidden confounding in the X–M relationship (Jo et al., 2011). In addition, economists realised that the assumption of exogeneity regarding the independent variable X may not hold for a variety of reasons (an issue also applicable to the mediator). Sample-selection bias may be an issue, and Heckman (1979) provided a solution. Another assumption is that X is unaffected by random disturbances or measurement error; the use of instrumental variables to correct the estimates was suggested (Sargan, 1958). Exogeneity of the moderating variables (here Z and N) is also assumed, as is the absence of mediated moderation (i.e. no interactions in the effect on the outcome) and, obviously, of further hidden confounding on the direct effect of X on Y. A separate source of endogeneity, relevant in management research, is the assumption of a lack of common method variance (CMV) bias (Podsakoff et al., 2003). CMV bias is attributed to the simultaneous measurement of multiple constructs and the use of single respondents.
What further steps can be taken to test our assumptions?
These are explained next. To test and correct for endogeneity of the independent variable (X), a researcher can test for sample-selection bias using Heckman's procedure (see, for instance, the procedure "heckman" in Stata; Clougherty et al., 2015 provide further details). Garen (1984) has provided a remedy for continuous variables. Testing can use 2SLS or 3SLS estimation (Antonakis et al., 2010; see, for instance, the procedure "reg3" in Stata). Bascle (2008) explains relevant testing and comments on the problem of weak instruments. Testing and correcting for hidden confounders in the X–M relationship can use methods such as propensity scores (see Li, 2013 for further details). Testing and correcting for CMV bias can be implemented via several methods, some of which cater for variance that is congeneric (i.e. coming from the same sources of method bias) or non-congeneric (i.e. coming from different sources of method bias). An excellent start is Lindell and Whitney (2001), who use the correlation marker approach, albeit the CFA marker approach may be superior in detecting CMV biases (Richardson et al., 2009; Williams et al., 2010). Antonakis et al. (2010, p. 1106, Figure A) also provide a correction for CMV bias using instrumental variables. Further testing is needed when the links between the independent variable and the moderators Z and N are not orthogonal (i.e. they are correlated). Such an assumption (sometimes strong and implausible) is almost certainly violated when several mediators and/or moderators are introduced in the model, or if these have common causes themselves; non-zero error covariance will then likely remain even after correction is applied. Another assumption refers to causal identification, which is a different concept from statistical identification (i.e. seeking unique values for each parameter). Additional instrumental variables may be required to help establish causal identification.
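The logic of the instrumental-variable correction mentioned above can be illustrated with a minimal two-stage least squares (2SLS) sketch on simulated data; the instrument z1, the data-generating coefficients and the sample are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
z1 = rng.normal(size=n)                          # hypothetical instrument for X
e = rng.normal(size=n)                           # structural error shared by X and Y
x = 0.8 * z1 + 0.5 * e + rng.normal(size=n)      # X is endogenous: correlated with e
y = 0.4 * x + e                                  # true effect of X on Y is 0.4

def with_const(*cols):
    return np.column_stack((np.ones(n),) + cols)

# OLS is biased upwards here because Cov(X, e) > 0
b_ols = np.linalg.lstsq(with_const(x), y, rcond=None)[0][1]

# 2SLS: stage 1 projects X on the instrument; stage 2 regresses Y on the fitted values
x_hat = with_const(z1) @ np.linalg.lstsq(with_const(z1), x, rcond=None)[0]
b_2sls = np.linalg.lstsq(with_const(x_hat), y, rcond=None)[0][1]
```

In practice one would use a dedicated routine (e.g. "ivregress" in Stata) that also produces correct standard errors; the sketch only shows why the instrument removes the bias.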
Every parameter should be "causally identified" (Semnet, 2016). Last but not least, in causal reasoning (unlike the associational reasoning mostly practised under a SEM framework), the definition of direct and indirect effects involves quantities that are not all observable: Y(x), the potential value of Y that would have occurred had X been set, possibly counter to fact, to the value x; and M(x), the potential value of M that would have occurred had X been set, possibly counter to fact, to the value x. Similarly for Y(x, m) and Y(x, M(x*)). Pearl (2009) clarifies this logic, and Bollen and Pearl (2013) provide an overview and delineate the causal assumptions in current SEM practice.
In sum, our effort offers a focused insight into correcting for confounding in specific parts of a moderated mediation modelling effort, an effort which is, however, also characterised by its own assumptions. Researchers are, therefore, advised to clarify the exact nature of their moderated mediation model and carefully consider the assumptions in their effort and the necessary corrections.
Data and measures
We draw our sample data from a population of 5,199 professionals, all members of either the Dutch Association of Psychologists (NIP) or the Dutch Association for Psychiatry (NVvP). The data collection process resulted in 1,307 questionnaires being returned, a response rate of 25 per cent. These included 761 psychologists (Group A) and 546 psychiatrists (Group B). All the items were measured using a five-point Likert scale, ranging from “strongly disagree” to “strongly agree”, unless stated otherwise. The dependent variable (Y) was measured using the validated four-item scale of Metselaar (1997), which is based on Ajzen (1991). A sample item was “I am willing to contribute to the introduction of the DRG policy”. The antecedent variable (X) was measured using a validated eight-item scale by Metselaar (1997). Here, the respondents could answer either yes (1) or no (0). Sample items were “Colleagues talk negatively about the DRG policy during meetings” (reversed) and “Colleagues support the DRG policy”. The collegial behaviour score, a formative measure, is calculated by summing the eight-item scores and ranges from 0 (very negative) to 8 (very positive) (Diamantopoulos and Winklhofer, 2001). The mediation variable (M) was measured using a five-item validated scale (Tummers, 2012) that allows the researcher to use templates to specify the goal (here, enhancing efficiency in mental healthcare) and the policy to achieve this goal (the DRG policy). A sample item is “Overall, I think that the DRG policy leads to more efficiency in mental healthcare”. Our first moderator variable (JS) (Z) was measured using a single item: “Overall, I am satisfied with my job”. We opted for a single item measure on the basis that Nagy (2002, p. 85) states that measuring job satisfaction with one item “is more efficient, is more cost-effective, contains more face validity and is better able to measure changes in job satisfaction”. 
Furthermore, we asked the professionals to indicate their profession (our second, nominal moderator N).
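As a small illustration of the formative scoring described above, the eight yes/no items are simply summed (with negatively worded items reverse-coded first); the responses below are hypothetical:

```python
# Hypothetical yes (1) / no (0) answers to the eight collegial-behaviour items,
# with negatively worded items (e.g. "talk negatively ...") already reverse-coded
items = [1, 0, 1, 1, 0, 1, 1, 0]
col_score = sum(items)  # formative COL score: 0 (very negative) to 8 (very positive)
```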
Analysis
Measures
Firstly, we present descriptive statistics of the variables in Table I. Psychologists were more positive than psychiatrists about the DRG policy, for instance, by scoring more positively (by 0.24, p < 0.01) regarding its implementation. All the bivariate correlations for the main variables were statistically significant.
We subsequently carried out a confirmatory factor analysis (CFA) of the latent constructs to be able to report validity and reliability estimates of our factorial structures in line with current practices. The CFA of the latent construct of the dependent Y, using maximum likelihood (ML) estimation, exhibited a good fit to the data (RMSEA = 0.08; CFI = 0.99; TLI = 0.98), with standardised factor loadings between 0.58 and 0.86. The average variance extracted (AVE) was 0.56 and 0.57 and the composite reliability (CR) was 0.83 and 0.84 for the two groups, respectively: values that indicate the measure is valid and reliable. The loadings were also high (>0.86) for our mediator M (SM), with AVE of 0.83 and 0.82 and CR of 0.96 and 0.95, respectively. Finally, a multiple-group model, assuming measurement invariance (Van de Schoot et al., 2012), also demonstrated a good fit to the data (RMSEA = 0.07; CFI = 0.98; TLI = 0.98). Figure 3 shows the loadings on the SM and Y constructs.
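The AVE and CR figures reported above follow the standard formulas applied to standardised loadings; a quick sketch, in which the four loadings are hypothetical values within the reported 0.58–0.86 range rather than the actual estimates:

```python
def ave_and_cr(loadings):
    """Average variance extracted and composite reliability computed from
    standardised factor loadings, assuming uncorrelated error terms."""
    lam_sq = [l * l for l in loadings]
    ave = sum(lam_sq) / len(loadings)
    cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(1 - l2 for l2 in lam_sq))
    return ave, cr

# Hypothetical loadings within the reported range for the four Y items
ave, cr = ave_and_cr([0.58, 0.72, 0.80, 0.86])
```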
The item scores for the exogenous measure involved in the interaction XZ and for the endogenous M measure were centred before the subsequent models' estimation. We centred to eliminate any impact on the statistical identification of priors regarding our variables. "Priors" refer to the type (shape) of distribution we declare to express our initial uncertainty about our parameters. Bayesian estimation combines prior distributions of parameters with the data likelihood to form posterior distributions for the parameter estimates. Thus, the first reason to centre was to decrease the impact on the distribution of priors used in the estimations. A second reason was to minimise any effect of multicollinearity between the independent variables, the moderator and the interaction effects. Grand-mean centring was used, as the alternative (group centring) would introduce group-inequality bias.
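The multicollinearity rationale for grand-mean centring can be sketched on simulated item scores (the data below are hypothetical, not our survey responses): centring sharply reduces the correlation between a variable and its own interaction term.

```python
import numpy as np

rng = np.random.default_rng(3)
col = rng.integers(0, 9, size=1_000).astype(float)  # COL score, range 0-8
js = rng.integers(1, 6, size=1_000).astype(float)   # five-point JS item

# Grand-mean centring: one mean over the pooled sample (not per-group means,
# which would remove the group differences the model is meant to capture)
col_c = col - col.mean()
js_c = js - js.mean()
xz = col_c * js_c                                   # interaction built from centred scores

r_raw = np.corrcoef(col, col * js)[0, 1]            # raw score vs raw interaction
r_centred = np.corrcoef(col_c, xz)[0, 1]            # centred score vs centred interaction
```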
Structural equation models: use, estimation and interpretation of Bayesian estimates
Why use Bayesian statistics, and how does their interpretation differ? Using Mplus v7.11 (Muthén and Muthén, 1998/2014; Muthén and Asparouhov, 2012), we used Bayesian credibility intervals (CI; Gelman et al., 2004; Yuan and MacKinnon, 2009) rather than maximum-likelihood-based confidence intervals in all the subsequent analyses. We opted for Bayesian statistics primarily because of the usefulness of the interpretations of the Bayesian parameter estimates. Here, one should be aware of the differences in interpretation between the frequentist and the Bayesian approaches. For example, the 95 per cent Bayesian CI can be interpreted as the interval that contains the population parameter with 95 per cent probability, and this can be used to determine a significant difference from zero (i.e. the 95 per cent CI does not include zero) or significant differences between groups (the 95 per cent CIs do not overlap). Secondly, and quite importantly, we favour Bayesian statistics because when indirect effects are being estimated (for mediation), or interaction effects (for moderation), the parameter estimates are never normally distributed and, therefore, should not be tested using the default Wald test (MacKinnon et al., 2002). Frequentist estimation techniques usually produce symmetric confidence intervals and, therefore, conclusions based on these will be biased. To accommodate the non-normal distribution of indirect or interaction effects, most scholars use bootstrapping to compute asymmetric confidence intervals (Preacher and Hayes, 2008). An alternative is a Bayesian approach. Both methods use an iterative process in which all the parameter estimates of the model (e.g. regression parameters, variances, etc.) are estimated; these can then be summarised by plotting the results obtained in each iteration and using this distribution to compute means and CIs.
Moreover, technically, a Bayesian approach estimates posterior distributions, whereas a frequentist approach computes only one estimate per parameter. In the Bayesian approach, conditional sampling is used, in which each iteration depends on the previous iteration; this is not the case with bootstrapping (for an in-depth discussion of the differences between Bayesian and maximum likelihood parameters, see Kruschke et al., 2012 or Van de Schoot et al., 2013). When we reanalysed all our models using bootstrapping, there were some numerical differences in the estimates, but the conclusions drawn would not have been any different. In addition, uninformative priors and large samples result in Bayesian and frequentist results that are numerically very similar, but the two approaches allow very different interpretations: while the numerical point estimates may be similar, the Bayesian results allow one to draw inferences about the probability of the parameters themselves. Furthermore, there is no reason not to perform the Bayesian computation using construct measures that have been validated using traditional methods.
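As an illustration of the bootstrapping alternative discussed above, the following Python sketch computes an asymmetric percentile interval for a simple indirect effect a*b on simulated (hypothetical) data; a Bayesian analysis would summarise posterior draws of a*b in the same percentile fashion.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a simple X -> M -> Y mediation (hypothetical data, not the study's).
n = 300
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)
y = 0.5 * m + 0.2 * x + rng.normal(size=n)

def ab_path(x, m, y):
    """Indirect effect a*b from two OLS regressions."""
    a = np.polyfit(x, m, 1)[0]                 # slope of M on X
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]  # slope of Y on M, given X
    return a * b

boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)                # resample rows with replacement
    boot[i] = ab_path(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])      # asymmetric percentile interval
est = ab_path(x, m, y)
print(f"a*b = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The percentile interval need not be symmetric around the point estimate, which is precisely why it is preferred over the Wald interval for products of coefficients.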
Decisions to take.
Bayesian estimation requires decisions on several issues, explained next. The first decision is whether to use specific (i.e. informative) or non-specific (i.e. uninformative) priors; the prior constrains the possible range of values that the algorithm can sample from. We used the default uninformative, diffuse (i.e. vague) priors (e.g. β_{i} ∼ N(0, 1.0E+6); σ_{i}^{2} ∼ IGamma(0.001, 0.001); Congdon, 2006; Wang and Preacher, 2015). When theoretically driven and empirically tested in previous research, informative priors can make the parameter estimates more accurate and the estimation more efficient. The use of diffuse distributions is, however, advisable when (as in our case) past theory cannot confidently suggest the distribution shape or the numerical values of the target variables.
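The effect of diffuse versus informative priors can be illustrated with the textbook conjugate update for a normal mean (a Python sketch with simulated data, not our model): a diffuse N(0, 1.0E+6)-style prior leaves the posterior essentially at the likelihood, whereas a tight informative prior shrinks the estimate towards the prior mean.

```python
import numpy as np

def posterior_normal_mean(data, prior_mean, prior_var, sigma2=1.0):
    """Conjugate prior-to-posterior update for a normal mean with known
    error variance: precisions add, means are precision-weighted."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / sigma2)
    return post_mean, post_var

rng = np.random.default_rng(1)
data = rng.normal(0.3, 1.0, size=200)

# Diffuse prior (akin to beta ~ N(0, 1.0E+6)): posterior is essentially
# the likelihood, i.e. very close to the sample mean.
m_diffuse, _ = posterior_normal_mean(data, 0.0, 1.0e6)

# Informative prior centred on 0 with small variance: visible shrinkage.
m_inform, _ = posterior_normal_mean(data, 0.0, 0.01)

print(round(m_diffuse, 3), round(m_inform, 3))  # shrinkage only in the latter
```

With the sample size here (n = 200), the informative prior pulls the estimate to two-thirds of the sample mean; with the study's n = 1,307, even this prior would matter far less, which is why diffuse priors and large samples give near-frequentist numbers.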
A second issue relates to starting values. As iterations may perform better if one commences from a suitable starting point, we used the maximum likelihood (ML) estimates as starting values. To improve the situation further, we also specified that 50 random sets of starting values (all around the ML estimates) were to be generated in the initial stage, with 10 optimisations carried out in the final stage, before the Markov chain Monte Carlo (MCMC) chains were initiated. A Markov chain is a mathematical system that transits from one state to another in a memory-less manner, such that the next state depends only on the current state and not on the sequence of events that preceded it (Norris, 1998). MCMC methods are algorithms (i.e. step-by-step calculation procedures) that build a Markov chain whose draws sample from a target probability distribution (Fishman, 1995). For our sampling, we used the Gibbs procedure (Gilks et al., 1996), which explores the possible numerical values by iteratively drawing each parameter from its conditional distribution given the current values of all the other parameters; it therefore requires that these conditional distributions can be sampled from exactly. When used with diffuse priors, as here, it ensures representation of all potential numerical values.
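A minimal Gibbs sampler (illustrative Python, not the Mplus implementation) makes the idea concrete for a standard bivariate normal target, where both conditional distributions are known exactly:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=20000, burn_in=2000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Each step draws one coordinate from its exact conditional distribution
    given the current value of the other (a memory-less Markov chain)."""
    rng = np.random.default_rng(seed)
    x = y = 0.0                               # starting values
    cond_sd = np.sqrt(1.0 - rho ** 2)
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(rho * y, cond_sd)      # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, cond_sd)      # y | x ~ N(rho*x, 1 - rho^2)
        draws[t] = x, y
    return draws[burn_in:]                    # discard burn-in iterations

draws = gibbs_bivariate_normal(rho=0.6)
print(np.corrcoef(draws.T)[0, 1].round(2))    # should be near 0.6
```

Discarding the burn-in iterations mirrors standard MCMC practice: the early part of the chain still reflects the starting values rather than the target distribution.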
A third issue concerns how many of these Gibbs-sampling MCMC chains to use. We requested as many chains as our PC had processors (namely, eight), as this allows faster computation. A fourth issue relates to the number of iterations undertaken by each MCMC chain. We requested a minimum of 20,000 and a maximum of 100,000 iterations. Convergence (with a convergence criterion of 0.01) is confirmed graphically by checking the trace plots and through the Gelman–Rubin test (Gelman et al., 2004), which creates a potential scale reduction (PSR) factor for each parameter. Smaller PSR values reflect smaller between-chain variation, i.e. greater convergence (values should reach < 1.05).
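The PSR diagnostic itself is straightforward to compute; the following Python sketch (with simulated chains, not our model's output) shows how between-chain and within-chain variances combine:

```python
import numpy as np

def psr(chains):
    """Potential scale reduction factor (Gelman-Rubin) for one parameter.
    `chains` is an (m, n) array: m chains, n iterations each."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(7)
# Hypothetical example: 8 chains (one per processor) sampling the same target.
converged = rng.normal(0.0, 1.0, size=(8, 5000))
print(psr(converged).round(3))                # close to 1.0 -> converged

# Chains stuck at different locations inflate the between-chain variance.
stuck = converged + np.arange(8)[:, None]
print(psr(stuck) > 1.05)                      # True -> not converged
```

This is why PSR values below the 1.05 threshold cited in the text, together with well-mixed trace plots, are taken as evidence of convergence.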
Interpretation of the coefficients, especially with reference to the moderating effect.
This is an important topic, as centring alters the meaning of the coefficients. The change in the dependent Y (Will) as a function of a one standard deviation change in the independent X (behaviour of colleagues, COL) can be interpreted at different values of the moderator (job satisfaction, JS) through the interaction term (COL*JS). At the (zero) mean of JS, a one standard deviation increase in COL leads to a b_col standard deviation increase in Will. At one standard deviation above the mean of JS (where centred JS = 1), the same increase in COL leads to a (b_col + b_int) standard deviation change in Will, where b_int is the interaction coefficient. At one standard deviation below the mean of JS, it leads to a (b_col − b_int) standard deviation change in Will.
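These conditional (simple-slope) effects can be sketched as follows; the coefficient values below are hypothetical, chosen merely to be in the range reported in Table II.

```python
def simple_slope(b_col, b_int, js_value):
    """Conditional effect of a one-SD change in COL on Will at a given
    (centred, standardised) value of the moderator JS."""
    return b_col + b_int * js_value

# Hypothetical standardised coefficients (illustrative, not the study's).
b_col, b_int = 0.25, 0.05

for js in (-1.0, 0.0, 1.0):                   # -1 SD, mean, +1 SD of centred JS
    print(js, round(simple_slope(b_col, b_int, js), 2))
```

Because JS is grand-mean centred, zero is a meaningful value (the overall average level of job satisfaction), which is what licenses the "at the mean of JS" reading of b_col.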
Our models.
We specified and empirically estimated three models in our main analysis, and these are explained and interpreted below (Table II). Convergence was achieved in all three models with PSR factors < 1.03 and excellent trace plot graphs (omitted owing to space constraints).
Model 0 (direct effects of X→Y only) identified that professionals being in favour of the DRG policy (i.e. a high COL) was positively associated with a willingness to implement (Will), both for psychologists (b = 0.42; 95 per cent CI = 0.35-0.48) and for psychiatrists (b = 0.37; 95 per cent CI = 0.29 – 0.45). In this model, a larger proportion of the variance was explained for psychologists (R^{2} = 17 per cent; 95 per cent CI = 12 – 23 per cent) than for psychiatrists (R^{2} = 14 per cent; 95 per cent CI = 8 – 20 per cent). However, as the 95 per cent CIs for the direct associations of the psychologists and the psychiatrists overlap, one cannot claim that the direct effect is different for psychologists and for psychiatrists.
Model 1 specifies SM as a mediator together with profession as a moderator variable (see Model 1 in Table II). The direct effects (COL→Will) had lower coefficients for both psychologists and psychiatrists than in Model 0. Specifically, in standardised form, the b coefficients decreased to 0.27 (from 0.42; 95 per cent CI = 0.19-0.34) and to 0.30 (from 0.37; 95 per cent CI = 0.22-0.37), respectively. Similarly, the unstandardised β coefficients decreased from 1.61 to 1.03 and from 1.45 to 1.24, respectively. The mediating effect of SM is significantly different from zero both for psychologists (β = 0.62; 95 per cent CI = 0.45-0.81) and for psychiatrists (β = 0.28; 95 per cent CI = 0.08-0.49; see also Figure 3). These results indicate that the direct impact (COL→Will) is not dissipated, suggesting partial mediation: a dual process of influence, both direct and indirect. Further, the mediating effect appears to be higher for psychologists than for psychiatrists; their CIs only just overlap (the upper 95 per cent CI boundary for psychiatrists is 0.49, while the lower 95 per cent CI boundary for psychologists is 0.45). The R^{2} of the outcome (willingness) explained when SM is added more than doubles for psychologists (from 17 to 39 per cent; 95 per cent CI = 32-45 per cent) and triples for psychiatrists (from 14 to 45 per cent; 95 per cent CI = 37-51 per cent). Thus, the partially mediated relationship is strongly dependent on profession.
Model 2 specifies SM as a mediator together with both profession and JS as moderator variables (see Model 2 in Table II). The explained variance remained largely at the same levels as in Model 1 for both psychologists (R^{2} = 37 per cent; 95 per cent CI = 31-44 per cent) and psychiatrists (R^{2} = 44 per cent; 95 per cent CI = 38-51 per cent; also see Figure 4). However, Model 2 per se does not unveil the exact manner in which the moderator JS operates to produce these results. One cannot assume that the moderation effects are in the same direction, of similar shape, or that they have similar lower and upper bounds across the range of values of the moderator. To assess this, we generated a loop using the respondents' moderator scores to test the direction and the shape of the effects for the two groups. A loop is a sequence of repeated instructions, and the appendix provides the syntax used to estimate it (see Model 2, under the heading "MODEL CONSTRAINT"). We used this loop to see how the effect evolves over a range of possible values. Our interest here was in the Likert-type moderator (JS), as we wanted to see its effect on the mediated relationship. We could not use the range of the original Likert scale that measured the construct as possible values because the moderator JS is centred (= cJS) with a mean of zero. Instead, we set upper and lower bounds of ±2 standard deviations from the mean (i.e. −2 to +2), as this avoids outlier observations. We also used small steps (0.1), giving 40 steps from −2 to +2, to ensure sufficient coverage between the upper and lower bounds (see Figure 5). As can be seen in Figure 4, JS apparently has a small but negative effect on SM for both psychologists and psychiatrists. Its influence on Will is only evident, and again small, for Group B.
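The logic of the MODEL CONSTRAINT loop can be mimicked outside Mplus. This Python sketch evaluates the conditional indirect effect (γ1 + γ3·cJS)·β1 over the same −2 to +2 grid using simulated posterior draws; the means and spreads are hypothetical, loosely inspired by Table II, not the study's actual posteriors.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical posterior draws for one group (illustrative values only).
n_draws = 5000
g1 = rng.normal(-0.32, 0.04, n_draws)   # COL -> SM
g3 = rng.normal(0.03, 0.04, n_draws)    # COLxJS -> SM
b1 = rng.normal(-0.47, 0.04, n_draws)   # SM -> Will

# Mirror of the MODEL CONSTRAINT loop: conditional indirect effect of COL
# on Will through SM, over centred JS from -2 to +2 in steps of 0.1.
grid = np.arange(-2.0, 2.0 + 1e-9, 0.1)
for z in (grid[0], grid[len(grid) // 2], grid[-1]):   # show a few grid points
    effect = (g1 + g3 * z) * b1                       # draws of the effect at z
    lo, hi = np.percentile(effect, [2.5, 97.5])
    print(f"cJS={z:+.1f}: {effect.mean():.2f} [{lo:.2f}, {hi:.2f}]")
```

Plotting the mean and the percentile band over the whole grid reproduces the kind of figure the text refers to as Figure 5: the band shows, for each moderator value, whether the conditional indirect effect is credibly different from zero.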
Here, however, the loop results (see Figure 5) reveal that the role of the moderator JS upon the COL→Will link, mediated by SM, varies considerably in terms of the direction, the shape and the CI bounds of the influence. In more detail, for the psychologists (Group A), the influence of JS decreases but never becomes negative. For the psychiatrists (Group B), the influence increases; yet it is initially negative, and the lower 95 per cent CI bound is only positive for respondents' raw scores of 4 (satisfied) and 5 (very satisfied).
Loop generation.
Generating the loops helps to develop and refine theory. In our example case, JS attenuates the effect of colleagues' behaviour, indicating that the more satisfied psychologists are with their job, the less interested they will be in agreeing to action. For psychiatrists, JS has its own direct positive influence on Will and, only for those who are satisfied or very satisfied, a simultaneous accentuating effect in shaping their perceptions of the value of the policy.
Model 3 tests (Table II) whether the above findings can be sustained under the important condition of ignorability, i.e. whether there are any (unaccounted for) confounders. Muthén (2011) argues that to be able to claim that effects are causal, it is not sufficient to use causally defined effects; rather, their identification requires stringent, unverifiable assumptions. We adopted a procedure developed by Muthén (2011) to simultaneously test, in a tripartite manner, for the confounding impact of ignored covariates and to assess the sensitivity of the estimates. The basis of the procedure is as follows. Based upon Pearl's (2009, 2012) mediation formula, the direct effect of X (for ease expressed in binary form here; see Muthén, 2011 regarding how this is expressed) is calculated, given the covariate, as the difference between the outcomes when X = 1 and X = 0 while the mediator is held constant at the value it would obtain for the control group. The total indirect effect is defined, following Robins (2003) and Muthén (2011), as the difference, given the covariate, between the outcomes with X = 1 when the mediator changes from the value it would obtain in the X = 1 group to the value it would obtain in the X = 0 group.
Conducting the sensitivity analysis.
A sensitivity analysis (Imai et al., 2010b) is subsequently carried out, in which the effects are computed for different fixed values of the residual covariance, commencing from a residual correlation of zero (Muthén, 2011). We are interested in the indirect effect of COL (γ1*β1), labelled g1Acol*b1Acol for Group A and g1Bcol*b1Bcol for Group B (Figure 2), and so there is a need to control for any additional existing pathways. These relate to the indirect effects of the moderator (JS) (γ2*β1) and of its interaction (COL*JS) (γ3*β1) on Y through SM. Specifically, for the two groups, these two controlled pathways become:
γ2*β1 (labelled g2Ajs*b1Ajs for Group A and g2Bjs*b1Bjs for Group B).
γ3*β1 (labelled g3Axz*b1Axz for Group A and g3Bxz*b1Bxz for Group B).
To control for the two additional pathways, a concurrent tripartite estimation is required. Muthén (2011, pp. 39-40) provides a detailed and technically complex explanation for the single-mediation estimation. In a double-moderated mediation model, any ignored covariates affect each pathway differently, and so the estimation of the mediation effect γ1*β1 (our primary focus) must account for ignored covariates in all three pathways. Figure 2 shows the location of each pathway in this concurrent tripartite estimation. The numerical sensitivity is estimated at the same time, and this supplies the 95 per cent CI upper and lower bounds of the unbiased mediation effects for each pathway (see also Model 3 in Table II). The appendix provides the syntax used (see under the heading "MODEL CONSTRAINT" in Model 3). Although our primary focus is on estimating indAcol and indBcol, the syntax demonstrates how to estimate the additional pathways.
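To convey the intuition (only the intuition; Muthén's 2011 estimator is considerably more involved), the following simplified Python sketch fixes a residual correlation ρ between the M and Y equations, corrects the naive M→Y slope by ρ·sd(e1)/sd(e2), and recomputes the focal pathway product. All numerical values are illustrative, merely in the neighbourhood of Table II.

```python
def adjusted_indirect(gamma1, beta1_naive, sd_e1, sd_e2, rho):
    """Simplified sensitivity correction: with residual correlation rho
    between the Y-equation error (e1) and the M-equation error (e2), the
    naive M -> Y slope absorbs rho * sd(e1) / sd(e2); subtracting that
    term gives an adjusted b path and hence an adjusted indirect effect.
    (A simplification of the Muthen 2011 / Imai et al. 2010b machinery.)"""
    beta1_adj = beta1_naive - rho * sd_e1 / sd_e2
    return gamma1 * beta1_adj

# Illustrative values loosely in the range of Table II (not exact estimates):
# gamma1 = COL -> SM, beta1 = SM -> Will, sd_e1/sd_e2 = residual SDs.
gamma1, beta1, sd_e1, sd_e2 = -1.7, -0.35, 0.55, 0.88

# Sweep fixed residual correlations, starting from rho = 0 (no confounding).
for rho in (0.0, 0.2, 0.4):
    print(f"rho={rho:.1f}: indirect = "
          f"{adjusted_indirect(gamma1, beta1, sd_e1, sd_e2, rho):.3f}")
```

The question such a sweep answers is the one the text poses: how large would the confounder-induced residual correlation have to be before the focal indirect effect's interval covers zero?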
What is the outcome of testing for non-accounted confounders and the sensitivity analysis?
The results showed that the "purified" mediation effects for the pathway through SM (unstandardised β), γ1col*β1col (= γ1*β1), are 0.59 (95 per cent CI: 0.42-0.78) for psychologists and 0.24 (95 per cent CI: 0.04-0.45) for psychiatrists. Thus, the effects are always positive for both groups. These results are not that dissimilar to the originally estimated mediation effects of societal meaningfulness (β = 0.62; 95 per cent CI = 0.45-0.81 for psychologists and β = 0.28; 95 per cent CI = 0.08-0.49 for psychiatrists; see Models 2 and 3 in Table II). The reductions in the mediation effect because of the previously ignored covariates are not large. Nonetheless, the explained variances are substantially reduced, both for psychologists (R^{2} = 20 per cent (from 37 per cent); 95 per cent CI = 14-26 per cent) and for psychiatrists (R^{2} = 16 per cent (from 44 per cent); 95 per cent CI = 10-22 per cent). This decrease is 17 per cent for Group A and 28 per cent for Group B, and suggests that unaccounted confounders linked to profession-related variables play a stronger role in Group B. There are also still clear effects of professional-context moderation on the mediation pathway (the two groups' 95 per cent CIs do not overlap, although they come close, with end values of 0.42 and 0.45). The sensitivity of the interaction pathway γ3*β1 (for psychologists: −0.06 with 95 per cent CI: −0.23 to +0.10; for psychiatrists: 0.13 with 95 per cent CI: −0.06 to +0.33) crossed zero in both groups. We interpret this as indicating a lack of simultaneous effect from the confounding influence of covariates upon the mediation because of the interaction XZ, in both professions. The sensitivity of pathway γ2*β1 (psychologists: 0.04 with 95 per cent CI: 0.01-0.08; psychiatrists: 0.05 with 95 per cent CI: 0.01-0.08) is always positive for both groups.
We interpret this as indicating a simultaneous effect in terms of the confounding influence of covariates upon the mediation because of Z, and this occurs equally for both professions.
Our conclusion is that the SM mediation effects pass the sensitivity test, and the changes to the coefficients are small. This was also the case for the moderating effect of professional context, although the explained variance decreased. The decrease in R^{2} indicates that the original error terms contained variance attributable to the previously unaccounted confounders.
Discussion
We aimed to provide an example of how to conceptualise, specify and estimate models when needing to simultaneously account for double-moderated mediation involving nominal and continuous (Likert type) variables (see Cox, 1980 and Matell and Jacoby, 1972 for the properties of Likert scale measures). We also address reservations concerning the biases inherent in the implicit sequential ignorability assumption that is regularly made in management research. Management researchers regularly address similar contexts, and an awareness of the available solutions is important. Our use of a context case highlights the complexities that regularly face management researchers; new methods, such as those proposed here, are best unveiled through similarly detailed explanations.
In so doing, we first demonstrate the simultaneous functioning, and concurrent impact, of the relevant causal pathways with both mediation and moderation natures. Using such moderator variables offers the intellectual debate something over and above their contribution as individual elements. For instance, the selected variables enabled us to contrast facets of institutional influence and of self-influence on the individual decisions taken, and to capture their combined effects relative to simpler mediation models. Such an approach exposes the intermingled nature of the impacts of contexts and the complex nature of the resulting causal pathways of concurrent influence.
Secondly, our empirical implementation provides a way to resolve important empirical problems facing researchers by applying novel statistical techniques. We demonstrate in our modelling how to use Bayesian statistics and their value when accounting for both dichotomous and continuous moderators in a mediation context.
Thirdly, we demonstrate how to formulate such a model for an investigation of confounding effects. We demonstrated, by testing for variables that are conventionally ignored, how the explained variance of the dependent variable Y can change substantially. Here, Imai et al. (2010b) and Muthén (2011) argued that the assessed impact of confounding effects should be supplemented by an estimation of the sensitivity of these results. Our study is one of the first to investigate the problematic issue of confounder ignorability that plagues much past management research (Antonakis et al., 2010). We concentrated on only the M–Y relationship and clarified the assumptions inherent in such modelling efforts.
On the basis of our findings, we would advise researchers to carefully address the following issues when conceptualising their theoretical problem as a double-moderated model mediation:
What is the nature of the mediation paths, the independent and dependent variables, and the pattern of responses for these variables? Categorical or nominal variables introduce important statistical estimation issues, especially when the interaction terms or mediation variables are non-continuous. Following on from this, care is needed in checking the pattern of responses (these may also be censored or truncated). In addition, if respondents use fewer points than a scale offers (say, four responses on a seven-point Likert scale), the responses cannot be assumed to be continuous. Patterns of missing values are also a pertinent aspect.
What is the exact nature of the moderation variables, and what is the pattern of responses?
What is the nature of the interface among the moderator variables? One should attempt to identify any multilevel effects among moderators and/or the mediator and the dependent variable (Preacher et al., 2007). The interface between two simultaneously controlled moderator variables may hide substantial conceptual causal links between them and disguise data pattern issues. Constructs/variables on different levels (e.g. level 0/1/2) all inherit variance that is attributable to their conceptual location and theoretical role. Thus, model conceptualisation and specification at the same level will inevitably confound variances attributable to the conceptual level of the construct/variable. Consequently, double-moderated mediation models should be used with care and researchers should be alert to clustering effects that are inherited by variables/latent constructs on different levels.
What are the multipartite pathways of concurring unaccounted covariates? Research should specify and estimate mediation effects while also simultaneously controlling for theoretically driven co-influencing pathways.
What are the correct interpretations of the identified coefficients? Grand-centring effects lead to different interpretations than group-centring effects. Similarly, interactions bear a meaning that is pertinent not only to the underlying nature of the involved variables but also to the distribution of respondent responses. Again, attention needs to be given to interpretation difficulties with higher-order interactions. Confusion can easily result and theoretical interpretations become less than robust.
What analytical approaches should one use? We used a combination of traditional and Bayesian estimation approaches to reap the benefits of both. Research can benefit greatly from the increased sophistication and precision allowed by Bayesian approaches. For instance, research could use different informative priors (i.e. different averages) and/or different breadths (e.g. narrower versus wider standard deviations) as well as distributional shapes to contrast alternative theoretical stances.
What are the direction, the shape and the lower/upper bounds across the entire range of moderator values?
Assumptions inherent in the model, and potential biases, require testing and correction. These assumptions may be strong and sometimes implausible, and addressing them makes modelling efforts complex and demands researcher effort, but doing so is important to secure the accuracy of the estimates.
Having highlighted issues for consideration (see also Kline, 2015), we see the current endeavour as only a potential stepping-stone towards improved conceptualisation and reduction in analytical errors that can all, too easily, occur.
Figures
Variables, means and (standardized) correlation coefficients
Variable | Y(Will) | X(COL) | M(SM) | Z(JS) |
---|---|---|---|---|
Y = Willingness to implement (Will)^{a} | 0.24/0.00^{b} | |||
X = Colleagues’ behaviour (COL) | 0.44/0.35 | 0.49/0.45 | ||
M = Societal meaningfulness (SM)^{a} | −0.57/−0.59 | −0.34/−0.11^{*} | −0.37/0.00^{b} | |
Z = Job satisfaction (JS) | 0.18/0.18 | 0.22/0.17 | −0.17/−0.10^{*} | 4.22/3.95 ^{b} |
For Group A (psychologists) and Group B (psychiatrists); all correlation coefficients p < 0.001, unless otherwise stated; in each table cell, the left-hand value relates to estimates for Group A (psychologists) and the right-hand value to estimates for Group B (psychiatrists). The means for Job satisfaction (JS) are based on a single item, while the means for COL are calculated from a formative index. The latent mean scores for Will and SM were obtained using CFA, where the latent mean scores for Group A are estimated but fixed at zero by default for Group B;
*p < 0.05;
^{a} as these constructs are latent variables, the means are standardized with Group B (psychiatrists) being used as the reference group;
^{b} significant group difference, p < 0.01
Structural paths: Unstandardized β (Standardized b) parameter estimates per group
Structural path (coefficient) | Group A | Group B | ||
---|---|---|---|---|
β (b) | b 95% C.I. | β (b) | b 95% C.I. | |
Model 0 (No Mediation) | ||||
Intercept Will (β_{0i}) | 0.15* (0.21*) | 0.09-0.33 | 0^{a} | – |
COL → Will (β_{2}) | 1.61* (0.42*) | 0.35-0.48 | 1.45* (0.37*) | 0.29-0.45 |
Residual variance Will (e_{1}) | 0.42* (0.82*) | 0.76-0.87 | 0.42* (0.86*) | 0.79-0.91 |
Explained R^{2} of Will | 0.17 | 0.12-0.23 | 0.14 | 0.08-0.20 |
Model 1 (Single moderated mediation) | ||||
Intercept Will (β_{0}) | 0.05 (0.07) | −0.04-0.18 | 0^{a} | – |
SM → Will (β_{1}) | −0.35* (−0.47*) | −0.54-−0.40 | −0.46* (−0.56*) | −0.61-−0.49 |
COL → Will (β_{2}) | 1.03* (0.27*) | 0.19-0.34 | 1.24* (0.30*) | 0.22-0.37 |
Intercept SM (γ_{0}) | −0.30* (−0.32*) | −0.44- −0.20 | 0^{a} | – |
COL → SM (γ_{1}) | −1.76* (−0.34*) | −0.41- −0.27 | −0.60* (−0.12*) | −0.20- −0.03 |
Residual variance Will (e_{1}) | 0.30* (0.60*) | 0.54-0.67 | 0.30* (0.55*) | 0.49-0.62 |
Residual variance SM (e_{2}) | 0.78* (0.87*) | 0.82-0.92 | 0.78* (0.98*) | 0.95-0.99 |
Explained R^{2} of Will | 0.39 | 0.32-0.45 | 0.45 | 0.37-0.51 |
Explained R^{2} of SM | 0.12 | 0.07-0.17 | 0.01 | 0.001-0.04 |
Indirect (mediation β effect) (COL→SM→Will) |
0.62* | 0.45-0.81 | 0.28* | 0.08-0.49 |
Model 2 (Double-moderated mediation) | ||||
Intercept Will (β_{0}) | 0.03 (0.05) | −0.05-1.16 | 0^{a} | – |
SM → Will (β_{1}) | −0.35* (−0.47*) | −0.54-−0.40 | −0.45* (−0.55*) | −0.61- −0.48 |
COL → Will (β_{2}) | 0.97* (0.25*) | 0.17-0.32 | 1.17* (0.28*) | 0.21-0.35 |
JS → Will (β_{3}) | 0.03 (0.04) | −0.03-0.12 | 0.08* (0.09*) | 0.02-0.16 |
COLxJS → Will (β_{4}) | 0.22 (0.05) | −0.02-0.13 | 0.24 (0.05) | −0.01-0.12 |
Intercept SM (γ_{0}) | −0.28 (−0.30) | −0.42- −0.18 | 0^{a} | – |
COL → SM (γ_{1}) | −1.68* (−0.32*) | −0.39- −0.24 | −0.53* (−0.10*) | −0.19- −0.01 |
JS → SM (γ_{2}) | −0.12* (−0.11*) | −0.19- −0.02 | −0.11* (−0.10*) | −0.19- −0.02 |
COLxJS → SM (γ_{3}) | 0.17 (0.03) | −0.05-0.11 | −0.31 (−0.05) | −0.14-0.02 |
Residual variances Will (e_{1}) | 0.30* (0.62*) | 0.55-0.68 | 0.30* (0.55*) | 0.48-0.62 |
Residual variances SM (e_{2}) | 0.78* (0.87*) | 0.82-0.92 | 0.78* (0.97*) | 0.94-0.99 |
Explained R^{2} of Will | 0.37 | 0.31-0.44 | 0.44 | 0.38-0.51 |
Explained R^{2} of SM | 0.12 | 0.08-0.17 | 0.02 | 0.008-0.06 |
Model 3 (Sensitivity of mediation effects in the double-moderated mediation) | ||||
Mediation pathway: γ1col*β1col | 0.59* | 0.42-0.78 | 0.24* | 0.04-0.45 |
Controlled pathway: γ3xz*β1xz | −0.06 | −0.23-0.10 | 0.13 | −0.06-0.35 |
Controlled pathway: γ2js*β1js | 0.04* | 0.01-0.08 | 0.05* | 0.01-0.09 |
Explained R^{2} of Will | 0.20 | 0.14-0.26 | 0.16 | 0.10-0.22 |
Explained R^{2} of SM | 0.13 | 0.08-0.19 | 0.03 | 0.00-0.06 |
Fit indices | Model 0 | Model 1 | Model 2 | Model 3 |
Df | 15 | 35 | 50 | 45 |
Bayesian posterior predictive p-value | 0.001 | 0.000 | 0.000 | 0.000 |
Deviance (DIC) | 9703.891 | 17264.512 | 18371.851 | 18.003 |
Estimated number of parameters (pD) | 11.951 | 25.335 | 50.764 | 32.499 |
Bayesian (BIC) | 9784.543 | 17457.440 | 18623.651 | 18251.856 |
Group A | ||||
Posterior predictive p-value | 0.055 | 0.000 | 0.000 | |
Deviance (DIC) | 4013.032 | 8430.328 | 10403.997 | |
Estimated number of parameters (pD) | 8.975 | 3.529 | 18.579 | |
Group B | ||||
Posterior predictive p-value | 0.011 | 0.000 | 0.000 | |
Deviance (DIC) | 3825.896 | 7051.773 | 7885.651 | |
Estimated number of parameters (pD) | 6.111 | 28.655 | −3.559 |
Group A = psychologists; Group B = psychiatrists;
^{a} these parameters are fixed at zero so that they can serve as a reference category
Appendix – input syntax
References
Ajzen, I. (1991), “The theory of planned behavior”, Organizational Behavior and Human Decision Processes, Vol. 50 No. 2, pp. 179-211.
Antonakis, J., Bendahan, S., Jacquart, P. and Lalive, R. (2010), “On making causal claims: a review and recommendations”, The Leadership Quarterly, Vol. 21 No. 6, pp. 1086-1120.
Antonakis, J., Bendahan, S., Jacquart, P. and Lalive, R. (2014), “Causality and endogeneity: problems and solutions”, in Day, D.V. (Ed.), The Oxford Handbook of Leadership and Organizations, Oxford University Press, New York, NY.
Ashforth, B.E. and Mael, F. (1989), “Social identity theory and the organization”, Academy of Management Review, Vol. 14 No. 1, pp. 20-39.
Ashforth, B.E., Harrison, S.H. and Corley, K.G. (2008), “Identification in organizations: an examination of four fundamental questions”, Journal of Management, Vol. 34 No. 3, pp. 325-374.
Baron, R.M. and Kenny, D.A. (1986), “The moderator–mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations”, Journal of Personality and Social Psychology, Vol. 51 No. 6, pp. 1173-1182.
Bascle, G. (2008), “Controlling for endogeneity with instrumental variables in strategic management research”, Strategic Organization, Vol. 6 No. 3, pp. 285-327.
Bollen, K.A. (1989), Structural Equations with Latent Variables, John Wiley & Sons, New York, NY.
Bollen, K.A. and Pearl, J. (2013), “Eight myths about causality and structural equations models”, in Morgan, S.L. (Ed.), Handbook of Causal Analysis for Social Research, Springer, Chapter, Vol. 15, pp. 301-328.
Bullock, J.G., Green, D.P. and Ha, S.E. (2010), “Yes, but what’s the mechanism? (Don’t expect an easy answer)”, Journal of Personality and Social Psychology, Vol. 98 No. 4, pp. 550-558.
Clougherty, J.A., Duso, T. and Muck, J. (2015), “Correcting for self-selection based endogeneity in management research: review, recommendations and simulations”, Organizational Research Methods, p. 1094428115619013.
Congdon, P. (2006), Bayesian Statistical Modelling, Wiley Series in Probability and Statistics, John Wiley & Sons, Chichester.
Cox, E.P. (1980), “The optimal number of response alternatives for a scale: a review”, Journal of Marketing Research, Vol. 17 No. 4, pp. 407-442.
Deary, I.J., Agius, R.M. and Sadler, A. (1996), “Personality and stress in consultant psychiatrists”, International Journal of Social Psychiatry, Vol. 42 No. 2, pp. 112-123.
Dent, E.B. and Goldberg, S.G. (1999), “Challenging ‘resistance to change’”, The Journal of Applied Behavioral Science, Vol. 35 No. 1, pp. 25-41.
Garen, J. (1984), “The returns to schooling: a selectivity bias approach with a continuous choice variable”, Econometrica, Vol. 52 No. 5, pp. 1199-1218.
Diamantopoulos, A. and Winklhofer, H.M. (2001), “Index construction with formative indicators: an alternative to scale development”, Journal of Marketing Research, Vol. 38 No. 2, pp. 269-277.
DiMaggio, P.J. and Powell, W.W. (1983), “The iron cage revisited: institutional isomorphism and collective rationality in organizational fields”, American Sociological Review, Vol. 48 No. 2, pp. 147-160.
DiMaggio, P.J. and Powell, W.W. (1991), “Introduction”, in Powell, W.W. and DiMaggio, P.J. (Eds), The New Institutionalism in Organization Analysis, University of Chicago Press, Chicago, IL, pp. 1-38.
Edwards, J.R. and Lambert, L.S. (2007), “Methods for integrating moderation and mediation: a general analytical framework using moderated path analysis”, Psychological Methods, Vol. 12 No. 1, pp. 1-22.
Emsley, R.A., Dunn, G. and White, I.R. (2010), “Modelling mediation and moderation of treatment effects in randomised controlled trials of complex interventions”, Statistical Methods in Medical Research, Vol. 19 No. 3, pp. 237-270.
Fishbein, M. and Ajzen, I. (1975), Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research, Addison-Wesley, Reading, MA.
Fishbein, M. and Ajzen, I. (2009), Predicting and Changing Behavior, Taylor & Francis, London.
Fishman, G.S. (1995), Monte Carlo: Concepts, Algorithms, and Applications, Springer, New York, NY.
Ford, J.D., Ford, L.W. and D’Amelio, A. (2008), “Resistance to change: the rest of the story”, Academy of Management Review, Vol. 33 No. 2, pp. 362-377.
Freidson, E. (2001), Professionalism: The Third Logic, Cambridge University Press, Cambridge.
Gelman, A., Carlin, J.B., Stern, H.S. and Rubin, D.B. (2004), Bayesian Data Analysis, 2nd ed., Chapman & Hall/CRC, London.
Gilks, W.R., Richardson, S. and Spiegelhalter, D.J. (1996), Markov Chain Monte Carlo in Practice, Chapman & Hall/CRC.
Griffin, R.J., Dunwoody, S. and Neuwirth, K. (1999), “Proposed model of the relationship of risk information seeking and processing to the development of preventive behaviors”, Environmental Research, Vol. 80 No. 2, pp. S230-S245.
Guthrie, E., Tattan, T., Williams, E., Black, D. and Bacliocotti, H. (1999), “Sources of stress, psychological distress and burnout in psychiatrists: comparison of junior doctors, senior registrars and consultants”, Psychiatric Bulletin, Vol. 23 No. 4, pp. 207-212.
Hayes, A.F. (2013), Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach, The Guilford Press, New York, NY, model templates available at: http://afhayes.com/public/templates.pdf (accessed 21 October 2016).
Heckman, J. (1979), “Sample selection bias as a specification error”, Econometrica, Vol. 47 No. 1, pp. 153-161.
Higgs, M. and Rowland, D. (2005), “All changes great and small: exploring approaches to change and its leadership”, Journal of Change Management, Vol. 5 No. 2, pp. 121-151.
Holland, P.W. (1988), “Causal inference, path analysis, and recursive structural equations models”, Sociological Methodology, Vol. 18, pp. 449-484.
Hood, C. (1991), “A public management for all seasons”, Public Administration, Vol. 69 No. 1, pp. 3-19.
Imai, K., Keele, L. and Tingley, D. (2010a), “A general approach to causal mediation analysis”, Psychological Methods, Vol. 15, pp. 309-334.
Imai, K., Keele, L. and Yamamoto, T. (2010b), “Identification, inference, and sensitivity analysis for causal mediation effects”, Statistical Science, Vol. 25, pp. 51-71.
Imai, K., Keele, L., Tingley, D. and Yamamoto, T. (2011), “Unpacking the black box of causality: learning about causal mechanisms from experimental and observational studies”, American Political Science Review, Vol. 105 No. 4, pp. 765-789.
Janssen, O. and Van Yperen, N.W. (2004), “Employees’ goal orientations, the quality of leader-member exchange, and the outcomes of job performance and job satisfaction”, Academy of Management Journal, Vol. 47 No. 3, pp. 368-384.
Jo, B., Stuart, E.A., MacKinnon, D.P. and Vinokur, A.D. (2011), “The use of propensity scores in mediation analysis”, Multivariate Behavioral Research, Vol. 46 No. 3, pp. 425-452.
Kimberly, J.R., De Pouvourville, G. and Thomas, A.D.A. (2009), The Globalization of Managerial Innovation in Health Care, Cambridge University Press, Cambridge.
Kline, R.B. (2011), Principles and Practice of Structural Equation Modeling, Guilford Press, New York, NY.
Kline, R.B. (2015), “The mediation myth”, Basic and Applied Social Psychology, Vol. 37 No. 4, pp. 202-213.
Kruschke, J.K., Aguinis, H. and Joo, H. (2012), “The time has come: Bayesian methods for data analysis in the organizational sciences”, Organizational Research Methods, Vol. 15 No. 4, pp. 722-752.
Lance, C.E. (1988), “Residual centering, exploratory and confirmatory moderator analysis, and decomposition of effects in path models containing interaction effects”, Applied Psychological Measurement, Vol. 12 No. 2, pp. 163-175.
Li, M. (2013), “Using the propensity score method to estimate causal effects a review and practical guide”, Organizational Research Methods, Vol. 16 No. 2, pp. 188-226.
Lindell, M.K. and Whitney, D.J. (2001), “Accounting for common method variance in cross-sectional research designs”, Journal of Applied Psychology, Vol. 86 No. 1, pp. 114-121.
MacKinnon, D.P., Fairchild, A.J. and Fritz, M.S. (2007), “Mediation analysis”, Annual Review of Psychology, Vol. 58 No. 1, pp. 593-614.
MacKinnon, D.P., Lockwood, C.M., Hoffman, J.M., West, S.G. and Sheets, V. (2002), “A comparison of methods to test mediation and other intervening variable effects”, Psychological Methods, Vol. 7 No. 1, pp. 83-104.
Markus, H.R. and Kitayama, S. (1991), “Culture and the self: implications for cognition, emotion, and motivation”, Psychological Review, Vol. 98 No. 2, pp. 224-253.
Matell, M.S. and Jacoby, J. (1972), “Is there an optimal number of alternatives for Likert-scale items? Effects of testing time and scale properties”, Journal of Applied Psychology, Vol. 56 No. 6, pp. 506-509.
May, D.R., Gilson, R.L. and Harter, L.M. (2004), “The psychological conditions of meaningfulness, safety and availability and the engagement of the human spirit at work”, Journal of Occupational and Organizational Psychology, Vol. 77 No. 1, pp. 11-37.
Metselaar, E.E. (1997), “Assessing the willingness to change: construction and validation of the DINAMO”, Doctoral dissertation, Free University of Amsterdam.
Meyers, M.K. and Vorsanger, S. (2003), “Street-level bureaucrats and the implementation of public policy”, in Peters, B.G. and Pierre, J. (Eds), Handbook of Public Administration, Sage, London, pp. 245-254.
Muthén, B. (2011), “Applications of causally defined direct and indirect effects in mediation analysis using SEM in Mplus”, Unpublished Manuscript, pp. 1-110, available at: www.statmodel.com/download/causalmediation.pdf
Muthén, B. and Asparouhov, T. (2012), “Bayesian structural equation modeling: a more flexible representation of substantive theory”, Psychological Methods, Vol. 17 No. 3, pp. 313-335.
Muthén, L. and Muthén, B. (1998/2014), Mplus User’s Guide, 7th ed., Muthén & Muthén, Los Angeles, CA.
Muzio, D., Brock, D. and Suddaby, R. (2013), “Professions and institutional change: towards an institutionalist sociology of the professions”, Journal of Management Studies, Vol. 50 No. 5, pp. 699-721.
Nagy, M.S. (2002), “Using a single-item approach to measure facet job satisfaction”, Journal of Occupational and Organizational Psychology, Vol. 75 No. 1, pp. 77-86.
Neukrug, E.S. (2011), The World of the Counselor: An Introduction to the Counseling Profession, Brooks Cole, Belmont, CA.
Norris, J.R. (1998), Markov Chains, Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press, New York, NY.
Onyett, S., Pillinger, T. and Muijen, M. (1997), “Job satisfaction and burnout among members of community mental health teams”, Journal of Mental Health, Vol. 6 No. 1, pp. 55-66.
Palm, I., Leffers, F., Emons, T., Van Egmond, V. and Zeegers, S. (2008), De GGZ Ontwricht: Een Praktijkonderzoek Naar De Gevolgen Van Het Nieuwe Zorgstelsel in De Geestelijke Gezondheidszorg [Mental Healthcare Disrupted: A Field Study of the Consequences of the New Care System in Mental Healthcare], SP, Den Haag.
Pearl, J. (2001), “Direct and indirect effects”, in Breese, J. and Koller, D. (Eds), Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San Francisco, CA, pp. 411-420.
Pearl, J. (2009), Causality: Models, Reasoning, and Inference, 2nd ed., Cambridge University Press, New York, NY.
Pearl, J. (2012), “The mediation formula: a guide to the assessment of causal pathways in nonlinear models”, in Berzuini, C., Dawid, P. and Bernardinelli, L. (Eds), Causality: Statistical Perspectives and Applications, John Wiley and Sons, Chichester, pp. 151-179.
Piderit, S.K. (2000), “Rethinking resistance and recognizing ambivalence: a multidimensional view of attitudes toward an organizational change”, The Academy of Management Review, Vol. 25 No. 4, pp. 783-794.
Podsakoff, P.M., MacKenzie, S.B., Lee, J.Y. and Podsakoff, N.P. (2003), “Common method biases in behavioral research: a critical review of the literature and recommended remedies”, Journal of Applied Psychology, Vol. 88 No. 5, pp. 879-903.
Powell, W.W. and Colyvas, J.A. (2008), “Microfoundations of institutional theory”, in Greenwood, R., Oliver, C., Sahlin, K. and Suddaby, R. (Eds), The SAGE Handbook of Organizational Institutionalism, Sage, London, pp. 276-298.
Preacher, K.J. and Hayes, A.F. (2008), “Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models”, Behavior Research Methods, Vol. 40 No. 3, pp. 879-891.
Preacher, K.J., Rucker, D.D. and Hayes, A.F. (2007), “Addressing moderated mediation hypotheses: theory, methods, and prescriptions”, Multivariate Behavioral Research, Vol. 42 No. 1, pp. 185-227.
Richardson, H.A., Simmering, M.J. and Sturman, M.C. (2009), “A tale of three perspectives: examining post hoc statistical techniques for detection and correction of common method variance”, Organizational Research Methods, Vol. 12 No. 4, pp. 762-800.
Robins, J.M. (2003), “Semantics of causal DAG models and the identification of direct and indirect effects”, in Green, P., Hjort, N.L. and Richardson, S. (Eds), Highly Structured Stochastic Systems, Oxford University Press, New York, NY, pp. 70-81.
Robins, J.M. and Greenland, S. (1992), “Identifiability and exchangeability for direct and indirect effects”, Epidemiology, Vol. 3 No. 2, pp. 143-155.
Sargan, J.D. (1958), “The estimation of economic relationships using instrumental variables”, Econometrica, Vol. 26 No. 3, pp. 393-415.
Semnet (2016), “SEMNET: structural equation modeling discussion network”, exchanges between researchers (accessed online 20 October 2016); also see Rigdon, E.E. (1994), Structural Equation Modeling: A Multidisciplinary Journal, Vol. 1 No. 2.
Smullen, A. (2013), “Institutionalizing professional conflicts through financial reforms: the case of DBC’s in Dutch mental healthcare”, in Noordegraaf, M. and Steijn, A.J. (Eds), Professionals under Pressure: The Reconfiguration of Professional Work in Changing Public Services, Amsterdam University Press, Amsterdam.
Sobel, M.E. (2008), “Identification of causal parameters in randomized studies with mediating variables”, Journal of Educational and Behavioral Statistics, Vol. 33 No. 2, pp. 230-251.
Triandis, H.C. (1989), “The self and social behavior in differing cultural contexts”, Psychological Review, Vol. 96 No. 3, pp. 506-520.
Tummers, L.G. (2012), “Policy alienation of public professionals: the construct and its measurement”, Public Administration Review, Vol. 72 No. 4, pp. 516-525.
Tyler, T.R. and Blader, S.L. (2001), “Identity and cooperative behavior in groups”, Group Processes and Intergroup Relations, Vol. 4 No. 3, pp. 207-226.
Valeri, L. and VanderWeele, T.J. (2013), “Mediation analysis allowing for exposure–mediator interactions and causal interpretation: theoretical assumptions and implementation with SAS and SPSS Macros”, Psychological Methods, Vol. 18 No. 2, pp. 137-150.
Van de Schoot, R., Kaplan, D., Denissen, J., Asendorpf, J.B., Neyer, F.J. and Van Aken, M.A.G. (2013), “A gentle introduction to Bayesian analysis: applications to research in child development”, Child Development, doi: 10.1111/cdev.12169.
Van de Schoot, R., Lugtig, P. and Hox, J. (2012), “A checklist for testing measurement invariance”, European Journal of Developmental Psychology, Vol. 9 No. 4, pp. 486-492.
VanderWeele, T.J. (2010), “Bias formulas for sensitivity analysis for direct and indirect effects”, Epidemiology, Vol. 21 No. 4, pp. 540-551.
VanderWeele, T.J. and Shpitser, I. (2013), “On the definition of a confounder”, Annals of Statistics, Vol. 41 No. 1, pp. 196-220.
VanderWeele, T.J. and Vansteelandt, S. (2009), “Conceptual issues concerning mediation, interventions and composition”, Statistics and Its Interface, Vol. 2 No. 4, pp. 457-468.
Wang, L. and Preacher, K.J. (2015), “Moderated mediation analysis using Bayesian methods”, Structural Equation Modeling, Vol. 22 No. 2, pp. 249-263.
Williams, L.J., Hartman, N. and Cavazotte, F. (2010), “Method variance and marker variables: a review and comprehensive CFA marker technique”, Organizational Research Methods, Vol. 13 No. 3, pp. 477-514.
Yamamoto, T. (2012), “Understanding the past: statistical analysis of causal attribution”, American Journal of Political Science, Vol. 56 No. 1, pp. 237-256.
Yuan, Y. and MacKinnon, D.P. (2009), “Bayesian mediation analysis”, Psychological Methods, Vol. 14 No. 4, pp. 301-322.
Further reading
Hamilton, B.H. and Nickerson, J.A. (2003), “Correcting for endogeneity in strategic management research”, Strategic Organization, Vol. 1 No. 1, pp. 51-78.
Levin, I.P., Schneider, S.L. and Gaeth, G.J. (1998), “All frames are not created equal: a typology and critical analysis of framing effects”, Organizational Behavior and Human Decision Processes, Vol. 76 No. 2, pp. 149-188.
Muller, D., Judd, C.M. and Yzerbyt, V.Y. (2005), “When moderation is mediated and mediation is moderated”, Journal of Personality and Social Psychology, Vol. 89 No. 6, pp. 852-863.
Winship, C. and Morgan, S.L. (1999), “The estimation of causal effects from observational data”, Annual Review of Sociology, Vol. 25 No. 1, pp. 659-706.
Acknowledgements
The author thanks Lars Tummers for providing the data; Tummers was supported by the Netherlands Organization for Scientific Research (NWO) through grant VENI-451-14-004. Lars Tummers and Rens van de Schoot also worked on an earlier, work-in-progress version of this article.