When there are few treated clusters in a pure treatment or difference-in-differences setting, t tests based on a cluster-robust variance estimator can severely over-reject. Although procedures based on the wild cluster bootstrap often work well when the number of treated clusters is not too small, they can seriously over-reject or under-reject when it is very small. In a previous paper, we showed that procedures based on randomization inference (RI) can work well in such cases. However, RI can be impractical when the number of possible randomizations is small. We propose a bootstrap-based alternative to RI, which mitigates the discrete nature of RI p-values in the few-clusters case. We also compare it to two other procedures. None of them works perfectly when the number of clusters is very small, but they can work surprisingly well.
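The discreteness problem mentioned above can be made concrete with a small counting sketch. Assuming pure cluster-level re-randomization of treatment (the numbers below are illustrative, not taken from the paper), the set of attainable RI p-values is a coarse grid whose spacing is one over the number of possible treatment assignments:

```python
from math import comb

def ri_p_value_grid(G, G1):
    """With G clusters of which G1 are treated, cluster-level
    randomization inference has comb(G, G1) possible assignments,
    so attainable p-values lie on a grid of width 1 / comb(G, G1)."""
    n_assignments = comb(G, G1)
    return n_assignments, 1.0 / n_assignments

# Illustrative case: 10 clusters, 1 treated.
# Only 10 assignments exist, so the smallest attainable
# p-value is 0.1 and a 5% test can never reject exactly.
n, step = ri_p_value_grid(10, 1)
```

With so few attainable p-values, conventional significance levels such as 5% cannot be achieved exactly, which is the motivation for the bootstrap-based alternative proposed in the paper.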
The WBRI procedure discussed in this paper was originally proposed in a working paper circulated as “Randomization Inference for Difference-in-Differences with Few Treated Clusters.” However, a revised version of that paper no longer discusses the WBRI procedure. We are grateful to Jeffrey Wooldridge, seminar participants at the Complex Survey Data conference on October 19–20, 2017, and at New York Camp Econometrics XIII on April 6–8, 2018, and two anonymous referees for helpful comments. This research was supported, in part, by grants from the Social Sciences and Humanities Research Council of Canada. Joshua Roxborough and Oladapo Odumosu provided excellent research assistance.
MacKinnon, J.G. and Webb, M.D. (2019), "Wild Bootstrap Randomization Inference for Few Treated Clusters", The Econometrics of Complex Survey Data (Advances in Econometrics, Vol. 39), Emerald Publishing Limited, Leeds, pp. 61-85. https://doi.org/10.1108/S0731-905320190000039003
Copyright © 2019 Emerald Publishing Limited