We study research designs where a binary treatment changes discontinuously at the border between administrative units such as states, counties, or municipalities, creating a treated and a control area. This type of geographically discontinuous treatment assignment can be analyzed in a standard regression discontinuity (RD) framework if the exact geographic location of each unit in the dataset is known. Such data, however, is often unavailable due to privacy considerations or measurement limitations. In the absence of geo-referenced individual-level data, two scenarios can arise depending on what kind of geographic information is available. If researchers have information about each observation’s location within aggregate but small geographic units, a modified RD framework can be applied, where the running variable is treated as discrete instead of continuous. If researchers lack this type of information and instead only have access to the location of units within coarse aggregate geographic units that are too large to be considered in an RD framework, the available coarse geographic information can be used to create a band or buffer around the border, only including in the analysis observations that fall within this band. We characterize each scenario, and also discuss several methodological challenges that are common to all research designs based on geographically discontinuous treatment assignments. We illustrate these issues with an original geographic application that studies the effect of introducing copayments for the use of the Children’s Health Insurance Program in the United States, focusing on the border between Illinois and Wisconsin.
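The buffer-based design described in the coarse-information scenario can be sketched in a few lines: keep only observations whose distance to the border falls within a band, then compare treated and control means. This is a minimal illustration with hypothetical variable names, not the authors' estimator.

```python
import numpy as np

def border_band_estimate(dist_to_border, outcome, treated, bandwidth):
    """Mean-difference estimate using only observations that fall
    within `bandwidth` of the border (all names are illustrative)."""
    in_band = np.abs(dist_to_border) <= bandwidth
    y, d = outcome[in_band], treated[in_band]
    return y[d == 1].mean() - y[d == 0].mean()

# Toy data: units located on a line, border at 0, true effect = 2.
rng = np.random.default_rng(0)
dist = rng.uniform(-10, 10, 5000)
treated = (dist > 0).astype(int)
outcome = 2.0 * treated + rng.normal(0.0, 1.0, 5000)
estimate = border_band_estimate(dist, outcome, treated, bandwidth=3.0)
```

In practice the bandwidth choice matters, and units just inside versus just outside the band may still differ systematically, which is one of the methodological challenges the abstract flags.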
Regression discontinuity (RD) models are commonly used to nonparametrically identify and estimate a local average treatment effect. Dong and Lewbel (2015) show how a…
Regression discontinuity (RD) models are commonly used to nonparametrically identify and estimate a local average treatment effect. Dong and Lewbel (2015) show how a derivative of this effect, called the treatment effect derivative (TED), can be estimated. We argue here that TED should be employed in most RD applications, as a way to assess the stability and hence external validity of RD estimates. Closely related to TED, we define the complier probability derivative (CPD). Just as TED measures stability of the treatment effect, the CPD measures stability of the complier population in fuzzy designs. TED and CPD are numerically trivial to estimate. We provide relevant Stata code, and apply it to some real datasets.
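In a sharp design, the TED idea reduces to comparing slopes as well as intercepts at the cutoff: the RD estimate is the jump in intercepts and the TED is the jump in slopes. The sketch below (plain local-linear fits, no kernel weighting; the paper supplies Stata code, not this) illustrates why the quantity is "numerically trivial":

```python
import numpy as np

def rd_and_ted(x, y, cutoff=0.0, bandwidth=1.0):
    """Sharp-RD jump and treatment effect derivative (TED) from
    separate local-linear fits on each side of the cutoff (a sketch)."""
    left = (x < cutoff) & (x > cutoff - bandwidth)
    right = (x >= cutoff) & (x < cutoff + bandwidth)
    bl, al = np.polyfit(x[left] - cutoff, y[left], 1)   # slope, intercept
    br, ar = np.polyfit(x[right] - cutoff, y[right], 1)
    tau = ar - al   # RD estimate: jump in intercepts at the cutoff
    ted = br - bl   # TED: jump in slopes at the cutoff
    return tau, ted

# Toy data: treatment effect 2 + 1*x, so the TED at the cutoff is 1.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 20000)
d = (x >= 0).astype(float)
y = 1.0 + 0.5 * x + d * (2.0 + 1.0 * x) + rng.normal(0.0, 0.5, 20000)
tau, ted = rd_and_ted(x, y)
```

A large TED signals that the estimated effect changes quickly as one moves away from the cutoff, which is exactly the stability concern the abstract raises.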
Several approaches have been proposed to evaluate treatment effect, relying on matching methods propensity score, quantile regression, influence function, bootstrap and…
Several approaches have been proposed to evaluate treatment effects, relying on matching methods, propensity scores, quantile regression, influence functions, the bootstrap, and various combinations of the above. This paper considers two of these approaches to define the quantile doubly robust (DR) estimator: inverse propensity score weights, to compare the potential outcomes of treated and untreated groups; and the Machado and Mata quantile decomposition approach, to compute the unconditional quantiles within each group, treated and control. Two Monte Carlo studies and an empirical application to the Italian labor market conclude the analysis.
The DR estimator is extended to analyze the tails of the distribution, comparing treated and untreated groups, thus defining the quantile-based DR estimator. It allows us to measure the treatment effect along the entire outcome distribution. Such a detailed analysis uncovers the presence of heterogeneous impacts of the treatment along the outcome distribution. The computation of the treatment effect at the quantiles points out variations in the impact of treatment along the outcome distribution; indeed, it is often the case that the impact in the tails sizably differs from the average treatment effect.
Two Monte Carlo studies show that away from average, the quantile DR estimator can be profitably implemented. In the real data example, the nationwide results are compared with the analysis at a regional level. While at the median and at the upper quartile the nationwide impact is similar to the regional impacts, at the first quartile – the lower incomes – the nationwide effect is close to the North-Center impact but undervalues the impact in the South.
The computation of the treatment effect at various quantiles allows us to point out discrepancies between treatment and control along the entire outcome distribution. The discrepancy in the tails may differ from the divergence between the average values: treatment can be more effective at the lower or higher quantiles. The simulations show the performance of the quantile DR estimator at the quartiles. In a wage equation comparing long- and short-term contracts, this estimator reveals a heterogeneous impact of short-term contracts: their impact changes with the income level, the outcome quantile, and the geographical region.
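The core idea of comparing treated and control distributions quantile by quantile can be sketched with plain inverse-propensity weighting. This is a simplified illustration, not the paper's Machado-Mata decomposition, and all names are hypothetical:

```python
import numpy as np

def ipw_quantile_effect(y, d, pscore, q):
    """Quantile treatment effect at quantile q: difference between
    the inverse-propensity-weighted q-th quantiles of the treated and
    control outcome distributions (a simplified sketch)."""
    def wq(vals, w):
        order = np.argsort(vals)
        cum = np.cumsum(w[order]) / w.sum()   # weighted empirical CDF
        return vals[order][np.searchsorted(cum, q)]
    wt = 1.0 / pscore[d == 1]                 # weights for treated
    wc = 1.0 / (1.0 - pscore[d == 0])         # weights for controls
    return wq(y[d == 1], wt) - wq(y[d == 0], wc)

# Toy randomized data: treatment shifts the whole distribution by 1.
rng = np.random.default_rng(2)
n = 20000
d = rng.binomial(1, 0.5, n)
y = rng.normal(0.0, 1.0, n) + 1.0 * d
ps = np.full(n, 0.5)
qte_median = ipw_quantile_effect(y, d, ps, 0.5)
```

Evaluating the same function at q = 0.25 and q = 0.75 is how one would detect the tail heterogeneity the abstract emphasizes.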
Job Corps is the United States' largest and most comprehensive training program for disadvantaged youth aged 16–24 years old. A randomized social experiment concluded that, on average, individuals benefited from the program in the form of higher weekly earnings and employment prospects. At the same time, “young adults” (ages 20–24) realized much higher impacts relative to “adolescents” (ages 16–19). Employing recent nonparametric bounds for causal mediation, we investigate whether these two groups’ disparate effects correspond to them benefiting differentially from distinct aspects of Job Corps, with a particular focus on the attainment of a degree (GED, high school, or vocational). We find that, for young adults, the part of the total effect of Job Corps on earnings (employment) that is due to attaining a degree within the program is at most 41% (32%) of the total effect, whereas for adolescents that part can account for up to 87% (100%) of the total effect. We also find evidence that the magnitude of the part of the effect of Job Corps on the outcomes that works through components of Job Corps other than degree attainment (e.g., social skills, job placement, residential services) is likely higher for young adults than for adolescents. That those other components likely play a more important role for young adults has policy implications for more effectively servicing participants. More generally, our results illustrate how researchers can learn about particular mechanisms of an intervention.
Without controlling for selection bias and the potential endogeneity of the treatment by using proper methods, the estimation of treatment effect could lead to biased or incorrect conclusions. However, these issues are not addressed adequately and properly in higher education research. This study reviews the essence of self-selection bias, treatment assignment endogeneity, and treatment effect estimation. We introduce three treatment effect estimators – propensity score matching analysis, doubly robust estimation (augmented inverse probability weighted approach), and endogenous treatment estimator (control-function approach) – and examine literature that applies these methods to research in higher education. We then use the three methods in a case study that estimates the effects of transfer student pre-enrollment debt on persistence and first year grades. The final discussion provides guidelines and recommendations for causal inference research studies that use such quasi-experimental methods.
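Of the three estimators the abstract lists, the doubly robust AIPW estimator is the most compact to illustrate. The sketch below uses OLS outcome regressions and a clipped linear probability model for the propensity score purely to stay dependency-free; a logistic fit is the standard choice in practice, and none of the names below come from the paper.

```python
import numpy as np

def aipw_ate(y, d, X):
    """Doubly robust ATE via augmented inverse probability weighting:
    regression predictions for each arm, corrected by weighted
    residuals of the observed arm (a sketch)."""
    Z = np.column_stack([np.ones(len(y)), X])
    ps = np.clip(Z @ np.linalg.lstsq(Z, d, rcond=None)[0], 0.05, 0.95)
    mu1 = Z @ np.linalg.lstsq(Z[d == 1], y[d == 1], rcond=None)[0]
    mu0 = Z @ np.linalg.lstsq(Z[d == 0], y[d == 0], rcond=None)[0]
    return np.mean(mu1 - mu0
                   + d * (y - mu1) / ps
                   - (1 - d) * (y - mu0) / (1 - ps))

# Toy confounded data: x raises both treatment take-up and the outcome.
rng = np.random.default_rng(3)
n = 20000
x = rng.normal(0.0, 1.0, n)
d = rng.binomial(1, np.clip(0.5 + 0.2 * x, 0.05, 0.95))
y = 1.0 + x + 2.0 * d + rng.normal(0.0, 1.0, n)
ate = aipw_ate(y, d, x.reshape(-1, 1))
```

The "doubly robust" property is that the estimator remains consistent if either the outcome model or the propensity model is correctly specified, which is why it is often recommended over plain matching or plain weighting.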
This paper replicates four highly cited, classic lab experimental studies in the provision of public goods. The studies consider the impact of marginal per capita return and group size; framing (as donating to or taking from the public good); the role of confusion in the public goods game; and the effectiveness of peer punishment. Considerable attention has focused recently on the problem of publication bias, selective reporting, and the importance of research transparency in social sciences. Replication is at the core of any scientific process and replication studies offer an opportunity to reevaluate, confirm or falsify previous findings. This paper illustrates the value of replication in experimental economics. The experiments were conducted as class projects for a PhD course in experimental economics, and follow exact instructions from the original studies and current standard protocols for lab experiments in economics. Most results show the same pattern as the original studies, but in all cases with smaller treatment effects and lower statistical significance, sometimes falling below accepted levels of significance. In addition, we document a “Texas effect,” with subjects consistently exhibiting higher levels of contributions and lower free-riding than in the original studies. This research offers new evidence on the attenuation effect in replications, well documented in other disciplines and from which experimental economics is not immune. It also opens the discussion over the influence of unobserved heterogeneity in institutional environments and subject pools that can affect lab results.
In this paper, we study partial identification of the distribution of treatment effects of a binary treatment for ideal randomized experiments, ideal randomized experiments with a known value of a dependence measure, and data satisfying the selection-on-observables assumption, respectively. For ideal randomized experiments, (i) we propose nonparametric estimators of the sharp bounds on the distribution of treatment effects and construct asymptotically valid confidence sets for the distribution of treatment effects; (ii) we propose bias-corrected estimators of the sharp bounds on the distribution of treatment effects; and (iii) we investigate the finite-sample performance of the proposed confidence sets and bias-corrected estimators via simulation.
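The sharp bounds in the randomized-experiment case are the classical Makarov bounds, which constrain the distribution of the effect Y1 - Y0 using only the two marginal outcome distributions. A population-level sketch (ignoring the bias corrections and inference the paper develops):

```python
import numpy as np

def makarov_bounds(y1, y0, delta, n_grid=400):
    """Sharp Makarov bounds on P(Y1 - Y0 <= delta) computed from the
    two marginal empirical CDFs of an ideal randomized experiment
    (a sketch, without sampling corrections)."""
    grid = np.linspace(min(y1.min(), y0.min()) - abs(delta),
                       max(y1.max(), y0.max()) + abs(delta), n_grid)
    F1 = np.searchsorted(np.sort(y1), grid, side="right") / len(y1)
    F0 = np.searchsorted(np.sort(y0), grid - delta, side="right") / len(y0)
    diff = F1 - F0
    return max(diff.max(), 0.0), 1.0 + min(diff.min(), 0.0)

# With only the marginals, a constant location shift of 1 leaves
# P(Y1 - Y0 <= 1) almost unidentified: bounds close to [0, 1].
rng = np.random.default_rng(4)
y0 = rng.normal(0.0, 1.0, 5000)
y1 = rng.normal(1.0, 1.0, 5000)
lo, hi = makarov_bounds(y1, y0, 1.0)
```

The wide interval in this example is the point of partial identification: the marginals alone pin down the average effect but not the distribution of effects, which is why a known dependence measure (the paper's second scenario) can tighten the bounds.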
The purpose of this paper is to estimate the impact of one productive development program (PROPYME) in a developing nation like Costa Rica. This program seeks to increase the capacity of small and medium-sized firms (SMEs) to innovate.
Impacts have been estimated assuming that beneficiary firms try to maximize their profits and that PROPYME aims to increase these firms' productivity. The impacts were measured in terms of three outcome variables: real average wages, employment demand, and the probability of exporting. A combination of fixed effects and propensity score matching techniques was used in the estimations to correct for any selection bias. The authors worked with panel data on treated and untreated companies for the period 2001-2011.
PROPYME’s beneficiaries performed better than other firms in terms of labor demand and their probability of exporting. In addition, the dose and the duration of the effects of the treatment (timing effects) are important.
The authors study the impact in ways that go beyond the average treatment effect on the treated (ATT) usually estimated in the existing literature. Specifically, the research focuses on the identification of timing or dynamic effects (i.e., how long should we wait to see results?) and treatment intensity (dosage effects).
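The "fixed effects plus matching" strategy can be sketched as nearest-neighbor propensity-score matching applied to first-differenced outcomes, so that firm fixed effects drop out before matched comparisons are made. This is an illustrative simplification with hypothetical names, not the paper's specification:

```python
import numpy as np

def matched_did_att(dy, d, pscore):
    """ATT from nearest-neighbor propensity-score matching applied to
    first-differenced outcomes: differencing removes firm fixed
    effects, matching removes selection on observables (a sketch)."""
    treated = np.where(d == 1)[0]
    control = np.where(d == 0)[0]
    gaps = [dy[i] - dy[control[np.argmin(np.abs(pscore[control]
                                                - pscore[i]))]]
            for i in treated]
    return float(np.mean(gaps))

# Toy panel in differences: x drives both programme take-up and the
# outcome trend; the programme adds 1.5 to treated firms' growth.
rng = np.random.default_rng(5)
n = 4000
x = rng.uniform(0.0, 1.0, n)
p = 0.2 + 0.6 * x
d = rng.binomial(1, p)
dy = 2.0 * x + 1.5 * d + rng.normal(0.0, 0.5, n)
att = matched_did_att(dy, d, p)
```

Extending this to dosage and timing effects, as the paper does, amounts to replacing the binary `d` with treatment intensity or years since treatment.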
Lechner and Miquel (2001) approached the causal analysis of sequences of interventions from a potential outcome perspective based on selection-on-observables-type assumptions (sequential conditional independence assumptions). Lechner (2004) proposed matching estimators for this framework. However, many practical issues that might have substantial consequences for the interpretation of the results have not been thoroughly investigated so far. This chapter discusses some of these practical issues. The discussion is related to estimates based on an artificial data set for which the true values of the parameters are known and that shares many features of data that could be used for an empirical dynamic matching analysis.
We describe a new Bayesian estimation algorithm for fitting a binary treatment, ordered outcome selection model in a potential outcomes framework. We show how recent advances in simulation methods, namely data augmentation, the Gibbs sampler and the Metropolis-Hastings algorithm can be used to fit this model efficiently, and also introduce a reparameterization to help accelerate the convergence of our posterior simulator. Conventional “treatment effects” such as the Average Treatment Effect (ATE), the effect of treatment on the treated (TT) and the Local Average Treatment Effect (LATE) are adapted for this specific model, and Bayesian strategies for calculating these treatment effects are introduced. Finally, we review how one can potentially learn (or at least bound) the non-identified cross-regime correlation parameter and use this learning to calculate (or bound) parameters of interest beyond mean treatment effects.
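The Bayesian strategy of "simulate the parameters, then transform the draws" can be shown in a toy Gaussian model: draw the two arm means from their conjugate posteriors and take the difference draw by draw to get the ATE posterior. This is only a sketch of the general strategy; the paper's actual sampler (ordered outcomes, data augmentation, Metropolis-Hastings steps) is far richer, and every name below is hypothetical.

```python
import numpy as np

def posterior_ate_draws(y, d, n_draws=2000, tau2=100.0, sigma2=1.0,
                        seed=0):
    """Posterior draws of the ATE in a toy Gaussian model:
    mu_d ~ N(0, tau2) priors, y | d ~ N(mu_d, sigma2) with known
    variance, so each arm mean has a conjugate normal posterior."""
    rng = np.random.default_rng(seed)
    mus = []
    for grp in (1, 0):
        yg = y[d == grp]
        prec = 1.0 / tau2 + len(yg) / sigma2     # posterior precision
        mean = (yg.sum() / sigma2) / prec        # posterior mean
        mus.append(rng.normal(mean, np.sqrt(1.0 / prec), n_draws))
    return mus[0] - mus[1]                       # ATE, draw by draw

# Toy randomized data with a true ATE of 2.
rng = np.random.default_rng(6)
n = 2000
d = rng.binomial(1, 0.5, n)
y = 2.0 * d + rng.normal(0.0, 1.0, n)
ate_draws = posterior_ate_draws(y, d)
```

Other estimands (TT, LATE) follow the same pattern: express the estimand as a function of the model parameters and evaluate it on each posterior draw.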