Copyright © 2019, Emerald Publishing Limited
Meta-analysis for audit research
This special issue comprises four articles. Two use common meta-analysis techniques to perform quantitative reviews of topics in the audit research literature (Alareeni, 2018; Durand, 2018). One is a commentary (Hay, 2018) that encourages audit researchers to use a newer technique called meta-regression (Hay and Knechel, 2017) and highlights the potential role that a well-designed meta-analysis can play in informing evidence-based standard setting. The last is not a meta-analysis per se but introduces to audit researchers a quantitative technique, citation count regression, that, like meta-analysis, can be used to enhance a narrative literature review (Staszkiewicz, 2018). All of the articles focus, for the most part, on the archival audit literature.
It is heartening to observe growth in the number of meta-analysis studies published in accounting and audit research journals (Khlif and Chalmers, 2015; Hay, 2018) and to know that this special issue will add to their number. That being said, there continues to be a paucity of meta-analysis studies focused on the experiment-based audit literature. This is perplexing, as the advantages of meta-analysis should be evident to audit researchers. Specifically, studies of audit judgment and decision-making (JDM) typically draw conclusions from small samples of difficult-to-access participants (e.g. experienced auditors), so, by combining the results of many individual studies, meta-analysis offers researchers more statistical power than any individual study to detect whether differences between groups (e.g. control group vs treatment group) are greater than chance. An audit researcher could use meta-analysis to determine not only whether the literature as a whole has rejected the null hypothesis that a manipulated independent variable, such as client pressure, has no effect on, for example, auditors’ willingness to waive an audit adjustment, but also the magnitude of this effect and the interventions, such as accountability, that moderate it.
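The power argument above can be illustrated with a minimal sketch in Python. The effect sizes and standard errors are invented for illustration; the point is only that inverse-variance pooling yields a standard error smaller than that of any contributing study.

```python
# Minimal sketch (hypothetical numbers): inverse-variance pooling of
# standardized effect sizes from three small audit JDM experiments.
# Each tuple is (effect size d, standard error) -- illustrative values only.
studies = [(0.45, 0.30), (0.30, 0.28), (0.55, 0.33)]

# Fixed-effect weights are the inverse of each study's variance.
weights = [1 / se**2 for _, se in studies]
pooled_d = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled d  = {pooled_d:.3f}")
print(f"pooled se = {pooled_se:.3f}")  # smaller than any single study's SE
```

With these invented inputs, the pooled standard error (about 0.17) is well below the smallest individual standard error (0.28), which is precisely the gain in power that motivates combining small-sample studies.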
With more than 300 experiment-based audit studies published in the four most highly regarded general interest accounting journals (TAR, CAR, AOS and JAR; Trotman et al., 2011) and countless studies published in other well-regarded journals, the potential for meta-analysis to offer new insights to audit researchers is substantial. So, why is it so rare to find a meta-analysis focused on the experiment-based audit literature?
My co-author and I asked a similar question when we attempted to use meta-analysis to review the archival audit committee independence literature (Pomeroy and Thornton, 2008). One of our posited explanations for the low number of meta-analysis studies in accounting, relative to other business disciplines, was the diversity in research design across studies, which makes it difficult, if not futile, to meaningfully combine the results of individual studies that use a variety of measures to capture the same underlying construct (e.g. financial reporting quality). Of course, this is a barrier that can be overcome by focusing the meta-analysis on the source of the variation in effect sizes across studies (e.g. differences in the measurement of the dependent variable) rather than on the overall effect size in isolation. However, an argument could be made that extreme research design diversity across experiment-based audit studies poses an insurmountable barrier to conducting a meta-analysis of the audit JDM literature. Below, I describe three hypothetical meta-analysis studies to argue that research design differences across experiment-based audit studies do not represent such a barrier.
First, consider a hypothetical meta-analysis focused on a commonly investigated dependent variable in experiment-based audit studies, such as auditors’ propensity to propose audit adjustments. The sizable number of studies that have captured this outcome variable potentially makes it a good candidate for meta-analysis. However, there exists diversity in its measurement – some studies capture it by asking participants to record a specific adjustment amount (Ng and Tan, 2003), while others use a binary measure (Nelson et al., 2005), an ordinal measure (Libby and Kinney, 2000), a Likert-type scale (Abdolmohammadi and Wright, 1987) or some combination of these measures.
Meta-analysis can accommodate such variable measurement differences by using a random-effects model that accounts for variation in the true effect size across studies and by converting the effect sizes of individual studies to a standardized metric, such as Hedges’ g (Borenstein et al., 2009). However, in audit JDM research, research design differs across studies not only in terms of variable measurement but also in many other respects – main task (e.g. decision to propose vs waive audit adjustments), focal accounting issue (e.g. inventory obsolescence vs revenue recognition), participant group (e.g. audit seniors vs audit managers) and time period (e.g. pre-SOX vs post-SOX), to name a few.
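As a rough illustration of the standardization step described above, the following sketch computes Hedges’ g from summary statistics and pools studies under a DerSimonian–Laird random-effects model (Borenstein et al., 2009). All summary statistics in the usage example are invented.

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference with small-sample correction (Hedges' g)."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)     # small-sample correction factor
    g = j * d
    var = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird tau^2."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed)**2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2

# Hypothetical summary statistics from two studies that measure the same
# construct on different scales, standardized to Hedges' g and then pooled.
g1, v1 = hedges_g(4.1, 3.5, 1.2, 1.1, 24, 25)
g2, v2 = hedges_g(0.62, 0.48, 0.31, 0.29, 40, 38)
pooled, se, tau2 = dersimonian_laird([g1, g2], [v1, v2])
print(f"pooled g = {pooled:.2f} (se = {se:.2f}, tau^2 = {tau2:.3f})")
```

Because each study is first converted to the same standardized metric, a Likert-scale study and a continuous-amount study can contribute to the same pooled estimate, which is exactly the accommodation described above.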
A well-designed meta-analysis can quantify the impact of these research design differences; the researcher would then focus the meta-analysis on investigating them with subgroup analyses rather than on emphasizing the overall effect size (Borenstein et al., 2009). Still, as the number of subgroups expands and as researchers seek to compare studies that vary on only a few features (e.g. a subgroup analysis of the focal accounting issue only for studies with audit managers as the participant group and a binary outcome variable), it becomes increasingly difficult to detect differences (if any) across a small sample of focal studies because of low power (Borenstein et al., 2009).
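The subgroup comparison described above can be sketched as a fixed-effect Q-between test. The two subgroups and their pooled effects below are invented; note that in this example the statistic falls short of the chi-square critical value, illustrating how small subgroups leave the analysis underpowered.

```python
# Sketch of a fixed-effect subgroup comparison (Q-between), assuming each
# subgroup has already been pooled; all numbers are hypothetical.
subgroups = {                     # name: (pooled effect, pooled variance)
    "binary outcome": (0.25, 0.02),
    "Likert outcome": (0.55, 0.03),
}

w = {k: 1 / v for k, (_, v) in subgroups.items()}
grand = sum(w[k] * e for k, (e, _) in subgroups.items()) / sum(w.values())
q_between = sum(w[k] * (e - grand)**2 for k, (e, _) in subgroups.items())

# With two subgroups, Q_between has one degree of freedom; the chi-square
# critical value at alpha = .05 is 3.84, so this difference is not detected.
print(f"Q_between = {q_between:.2f}")
```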
Next, consider a hypothetical meta-analysis focused on a commonly investigated manipulated independent variable in experiment-based audit research, such as engagement risk. One of the key decisions the researcher would need to make when preparing the meta-analysis is the study inclusion criteria – the scope of the meta-analysis. While some studies seek to manipulate the broad construct of engagement risk (Hackenbrack and Nelson, 1996), most seek to manipulate a specific factor, such as management’s negotiation style (Hatfield et al., 2008). This factor could be characterized as capturing a feature of management’s attitude, which would influence auditors’ assessment of client business risk – a component of engagement risk (Colbert et al., 1996). The researcher could develop inclusion criteria that scope in all studies that manipulate an independent variable that could be characterized as an engagement risk factor. Alternatively, the researcher could narrow the scope of the meta-analysis to include only studies that manipulate a feature of client business risk, or only studies that manipulate client business risk factors that relate to management. Hence, the researcher would need to develop inclusion criteria that capture only studies relevant to the research question the meta-analysis seeks to address, but it is not necessary to include only studies that measure the outcome variable the exact same way or that use the exact same description for the manipulated independent variable. Such restrictive inclusion criteria would inevitably lead the researcher to conclude that there are too few studies to conduct a meaningful meta-analysis.
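One way to think about the scoping decision is to tag each study with the constructs its manipulation captures and then filter on those tags. The records and construct labels below are entirely hypothetical, but the broad-versus-narrow trade-off is the one described above: broader criteria scope in more studies.

```python
# Hypothetical study records tagged by the constructs each manipulation
# captures; study IDs and tags are invented for illustration.
studies = [
    {"id": "A", "constructs": ["engagement risk"]},
    {"id": "B", "constructs": ["engagement risk", "client business risk"]},
    {"id": "C", "constructs": ["engagement risk", "client business risk",
                               "management attitude"]},
]

def in_scope(study, required):
    """A study is in scope if it carries every required construct tag."""
    return all(tag in study["constructs"] for tag in required)

broad  = [s["id"] for s in studies if in_scope(s, ["engagement risk"])]
narrow = [s["id"] for s in studies if in_scope(s, ["client business risk",
                                                   "management attitude"])]
print(broad, narrow)   # the narrower criteria retain far fewer studies
```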
Finally, consider a hypothetical meta-analysis that focuses on another commonly investigated manipulated independent variable in experiment-based audit research – directional goals (Kadous et al., 2003). Although the researcher would likely identify, depending on the study inclusion criteria, many studies that investigate the effect of directional goals on auditor judgment, the vast majority of these studies include an intervention – a second manipulated independent variable – that is designed to moderate the effect of directional goals. Furthermore, most of these studies focus on the moderating effect of the intervention (i.e. the interaction) rather than on the main effect of directional goals. The concern raised here is that the overall effect size provided by a meta-analysis may not be meaningful if it is based on combining studies that use a variety of different interventions to moderate the effect of directional goals.
To address this concern, the researcher could treat studies that use, for example, a 2 × 2 experimental design (directional goals × intervention) as consisting of two subgroups (Borenstein et al., 2009) – one 1 × 2 subgroup that focuses on the effect of directional goals absent the intervention and a second 1 × 2 subgroup that focuses on the effect of directional goals controlling for the intervention. The researcher could then perform two subgroup analyses to investigate the combined effect of directional goals on auditor judgment and the combined effect of directional goals controlling for the intervention. Further subgroup analyses could be performed to evaluate the efficacy of the variety of interventions that have been investigated across studies.
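The splitting approach described above can be sketched as follows, with invented effect sizes: each 2 × 2 study contributes one effect to the “intervention absent” subgroup and one to the “intervention present” subgroup, and each subgroup is then pooled separately with fixed-effect inverse-variance weights.

```python
# Sketch: treat each hypothetical 2x2 study (directional goals x intervention)
# as two 1x2 comparisons, then pool within each subgroup.
# Effect sizes and variances are invented for illustration.
studies_2x2 = [
    # (g without intervention, var, g with intervention, var)
    (0.60, 0.05, 0.15, 0.05),
    (0.50, 0.04, 0.10, 0.04),
]

def pool(pairs):
    """Fixed-effect inverse-variance pooling of (effect, variance) pairs."""
    w = [1 / v for _, v in pairs]
    return sum(wi * g for (g, _), wi in zip(pairs, w)) / sum(w)

absent  = pool([(g0, v0) for g0, v0, _, _ in studies_2x2])
present = pool([(g1, v1) for _, _, g1, v1 in studies_2x2])
print(f"goals effect, intervention absent:  {absent:.2f}")
print(f"goals effect, intervention present: {present:.2f}")
```

Comparing the two pooled estimates then speaks directly to the efficacy of the interventions, which is the further subgroup analysis suggested above.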
To summarize, the diversity in research design across experiment-based audit studies makes the overall effect size produced by meta-analysis less meaningful but does not make the meta-analysis exercise itself meaningless. Far from it. Research design diversity offers researchers opportunities to conduct subgroup analyses that investigate how design choices moderate the combined effect. That being said, in planning the meta-analysis, the researcher must obtain a rich understanding of the audit JDM literature so that this diversity is captured, recognized and investigated in the meta-analysis.
This special issue not only adds to the number of meta-analysis studies but also highlights best practice. For example, Durand (2018) uses the Bamber et al. (1993) model of the determinants of audit report lag to identify subgroups of studies to investigate as moderators in the meta-analysis. The use of a model, rather than researcher judgment, to identify subgroups is consistent with best practice recommendations that encourage researchers to use logic models to plan and perform meta-analysis studies (Anderson et al., 2011; Pigott, 2012).
Abdolmohammadi, M. and Wright, A. (1987), “An examination of the effects of experience and task complexity on audit judgments”, The Accounting Review, Vol. 62 No. 1, pp. 1-13.
Alareeni, B. (2018), “The associations between audit firm attributes and audit quality-specific indicators: a meta-analysis”, Managerial Auditing Journal.
Anderson, L.M., Petticrew, M., Rehfuess, E., Armstrong, R., Ueffing, E., Baker, P., Francis, D. and Tugwell, P. (2011), “Using logic models to capture complexity in systematic reviews”, Research Synthesis Methods, Vol. 2 No. 1, pp. 33-42.
Bamber, E.M., Bamber, L.S. and Schoderbek, M.P. (1993), “Audit structure and other determinants of audit report lag: an empirical analysis”, Auditing: A Journal of Practice and Theory, Vol. 12 No. 1, pp. 1-23.
Borenstein, M., Hedges, L.V., Higgins, J.P. and Rothstein, H.R. (2009), Introduction to Meta-Analysis, John Wiley and Sons.
Colbert, J.L., Luehlfing, M.S. and Alderman, C.W. (1996), “Engagement risk”, The CPA Journal, Vol. 66 No. 3, pp. 54-56.
Durand, G. (2018), “The determinants of audit report lag: a meta-analysis”, Managerial Auditing Journal.
Hackenbrack, K. and Nelson, M.W. (1996), “Auditors’ incentives and their application of financial accounting standards”, The Accounting Review, Vol. 71 No. 1, pp. 43-59.
Hatfield, R.C., Agoglia, C.P. and Sanchez, M.H. (2008), “Client characteristics and the negotiation tactics of auditors: implications for financial reporting”, Journal of Accounting Research, Vol. 46 No. 5, pp. 1183-1207.
Hay, D. (2018), “The potential for greater use of meta-analysis in archival auditing research”, Managerial Auditing Journal.
Hay, D.C. and Knechel, W.R. (2017), “Meta-regression in auditing research: evaluating the evidence on the big N audit firm premium”, Auditing: A Journal of Practice and Theory, Vol. 36 No. 2, pp. 133-159.
Kadous, K., Kennedy, S.J. and Peecher, M.E. (2003), “The effect of quality assessment and directional goal commitment on auditors’ acceptance of client-preferred accounting methods”, The Accounting Review, Vol. 78 No. 3, pp. 759-778.
Khlif, H. and Chalmers, K. (2015), “A review of meta-analytic research in accounting”, Journal of Accounting Literature, Vol. 35, pp. 1-27.
Libby, R. and Kinney, W.R., Jr (2000), “Does mandated audit communication reduce opportunistic corrections to manage earnings to forecasts?”, The Accounting Review, Vol. 75 No. 4, pp. 383-404.
Nelson, M.W., Smith, S.D. and Palmrose, Z.V. (2005), “The effect of quantitative materiality approach on auditors’ adjustment decisions”, The Accounting Review, Vol. 80 No. 3, pp. 897-920.
Ng, T.B.P. and Tan, H.T. (2003), “Effects of authoritative guidance availability and audit committee effectiveness on auditors’ judgments in an auditor-client negotiation context”, The Accounting Review, Vol. 78 No. 3, pp. 801-818.
Pigott, T.D. (2012), Advances in Meta-Analysis, Springer.
Pomeroy, B. and Thornton, D.B. (2008), “Meta-analysis and the accounting literature: the case of audit committee independence and financial reporting quality”, European Accounting Review, Vol. 17 No. 2, pp. 305-330.
Staszkiewicz, P. (2018), “The application of citation count regression to identify important papers in the literature on non-audit fees”, Managerial Auditing Journal.
Trotman, K.T., Tan, H.C. and Ang, N. (2011), “Fifty‐year overview of judgment and decision‐making research in accounting”, Accounting and Finance, Vol. 51 No. 1, pp. 278-360.