Regression discontinuity designs have become popular in empirical studies due to their attractive properties for estimating causal effects under transparent assumptions. Nonetheless, most popular procedures assume i.i.d. data, which is unreasonable in many common applications. To fill this gap, we derive the properties of traditional local polynomial estimators in a setting that allows for cluster dependence in the error term. Simulation results demonstrate that accounting for clustering in the data while selecting bandwidths may lead to lower MSE while maintaining proper coverage. We then apply our cluster-robust procedure to an application examining the impact of Low-Income Housing Tax Credits on neighborhood characteristics and low-income housing supply.
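The local polynomial RD estimator the abstract refers to can be sketched in a few lines. This is a minimal illustration on simulated data (not the paper's), assuming a sharp design, a uniform kernel, and an arbitrary fixed bandwidth; the cluster-aware bandwidth selection and cluster-robust inference studied in the paper are not reproduced here.

```python
import numpy as np

def rd_local_linear(x, y, cutoff=0.0, h=0.5):
    """Sharp RD effect: separate local linear fits on each side of the cutoff.

    Uniform kernel with bandwidth h (a simplification; the paper's procedure
    selects h accounting for cluster dependence, which is not shown here).
    """
    def intercept_at_cutoff(mask):
        xs, ys = x[mask] - cutoff, y[mask]
        X = np.column_stack([np.ones_like(xs), xs])
        beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
        return beta[0]  # fitted conditional mean at the cutoff

    right = (x >= cutoff) & (x <= cutoff + h)
    left = (x < cutoff) & (x >= cutoff - h)
    return intercept_at_cutoff(right) - intercept_at_cutoff(left)

# Simulated sharp design with a true jump of 2.0 at the cutoff
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 5000)
y = 1.0 + 0.5 * x + 2.0 * (x >= 0) + rng.normal(0, 0.2, x.size)
effect = rd_local_linear(x, y)
```

With clustered errors the point estimator is unchanged; what changes is the variance estimator and, as the abstract emphasizes, the bandwidth choice.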
In this article we design an econometric test for monotone comparative statics (MCS) often found in models with multiple equilibria. Our test exploits the observable implications of the MCS prediction: that the extreme (high and low) conditional quantiles of the dependent variable increase monotonically with the explanatory variable. The main contribution of the article is to derive a likelihood-ratio test, which to the best of our knowledge is the first econometric test of MCS proposed in the literature. The test is an asymptotic “chi-bar squared” test for order restrictions on intermediate conditional quantiles. The key features of our approach are: (1) we do not need to estimate the underlying nonparametric model relating the dependent and explanatory variables to the latent disturbances; (2) we make few assumptions on the cardinality, location, or probabilities over equilibria. In particular, one can implement our test without assuming an equilibrium selection rule.
Improving healthcare services by developing assistive technologies involves both the health aid devices themselves and the analysis of the data they collect. The acquired data, modeled as a knowledge base, give more insight into each patient’s health status and needs. Therefore, the ultimate goal of a healthcare system is obtaining recommendations provided by an assistive decision support system using such a knowledge base, benefiting the patients, the physicians and the healthcare industry. This paper aims to define the knowledge flow for a medical assistive decision support system by structuring raw medical data and leveraging the knowledge contained in the data, proposing solutions for efficient data search, medical investigation and diagnosis, medication prediction, and relationship identification.
The solution this paper proposes for implementing a medical assistive decision support system can analyze any type of unstructured medical document. Documents are processed by applying Natural Language Processing (NLP) tasks followed by semantic analysis, which identifies medical concepts and thus imposes a structure on the input documents. The structured information is filtered and classified so that custom decisions regarding patients’ health status can be made. The current research focuses on identifying relationships between medical concepts as defined by the REMed (Relation Extraction from Medical documents) solution, which aims at finding the patterns that lead to the classification of concept pairs into concept-to-concept relations.
This paper proposes the REMed solution, expressed as a multi-class classification problem tackled with a support vector machine classifier. Experimentally, the paper determines the most appropriate setup for the multi-class classification problem: a combination of lexical, context, syntactic and grammatical features, as each feature category is good at representing particular relations, but not all of them. The best results obtained are expressed as an F1-measure of 74.9 per cent, which is 1.4 per cent better than the results reported by similar systems.
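The multi-class setup described above can be sketched as follows. This is a toy illustration only: the sentences and labels are invented, and simple bag-of-words n-grams stand in for the lexical, context, syntactic and grammatical features the paper actually engineers. The relation labels (TrIP, TrAP) follow the scheme mentioned later in the text.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training sentences, each containing a treatment-problem
# concept pair, with illustrative relation labels:
# TrIP = treatment improves problem, TrAP = treatment administered for
# problem, NONE = no relation.
sentences = [
    "the medication improved the patient's rash",
    "antibiotics improved the infection",
    "insulin was administered for diabetes",
    "the drug was given for hypertension",
    "the patient reported a mild cough",
    "no treatment was mentioned for the fever",
]
labels = ["TrIP", "TrIP", "TrAP", "TrAP", "NONE", "NONE"]

# One-vs-rest linear SVM over unigram/bigram counts; the paper's richer
# feature categories would replace the plain CountVectorizer here.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)
prediction = clf.predict(["the cream improved the rash"])[0]
```

In practice each feature category is extracted per concept pair and concatenated into one vector before the SVM, which is why the paper can compare how well each category represents particular relations.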
The difficulty in discriminating between TrIP and TrAP relations revolves around the hierarchical relationship between the two classes, as TrIP is a particular type (an instance) of TrAP. The intuition behind this behavior is that the classifier cannot discern the correct relations because of its bias toward the majority classes. The analysis was conducted using only sentences from electronic health records that contain at least two medical concepts. This limitation was imposed by the availability of annotated data with reported results, as relations were defined at the sentence level.
The originality of the proposed solution lies in the methodology to extract valuable information from the medical records via semantic searches; concept-to-concept relation identification; and recommendations for diagnosis, treatment and further investigations. The REMed solution introduces a learning-based approach for the automatic discovery of relations between medical concepts. We propose an original list of features: lexical (3), context (6), grammatical (4) and syntactic (4). The similarity feature introduced in this paper has a significant influence on the classification and, to the best of the authors’ knowledge, has not been used as a feature in similar solutions.
Identification in a regression discontinuity (RD) design hinges on the discontinuity in the probability of treatment when a covariate (assignment variable) exceeds a known threshold. If the assignment variable is measured with error, however, the discontinuity in the relationship between the probability of treatment and the observed mismeasured assignment variable may disappear. Therefore, the presence of measurement error in the assignment variable poses a challenge to treatment effect identification. This chapter provides sufficient conditions to identify the RD treatment effect using the mismeasured assignment variable, the treatment status and the outcome variable. We prove identification separately for discrete and continuous assignment variables and study the properties of various estimation procedures. We illustrate the proposed methods in an empirical application, where we estimate Medicaid takeup and its crowdout effect on private health insurance coverage.
Many dynamic problems in economics are characterized by large state spaces which make both computing and estimating the model infeasible. We introduce a method for approximating the value function of high-dimensional dynamic models based on sieves and establish results for the (a) consistency, (b) rates of convergence, and (c) bounds on the error of approximation. We embed this method for approximating the solution to the dynamic problem within an estimation routine and prove that it provides consistent estimates of the model’s parameters. We provide Monte Carlo evidence that our method can successfully be used to approximate models that would otherwise be infeasible to compute, suggesting that these techniques may substantially broaden the class of models that can be solved and estimated.
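The sieve idea can be illustrated with fitted value iteration on a textbook model, not the high-dimensional setting of the paper: each Bellman update is projected onto a small basis by least squares. Here the model (deterministic log-utility growth) and the basis {1, ln k} are chosen so the true value function lies in the sieve space, which makes the result checkable against the known closed form.

```python
import numpy as np

# Fitted value iteration with a two-term sieve for
#   V(k) = max_{k'} ln(k^alpha - k') + beta * V(k'),
# whose true solution is V(k) = a + b*ln(k) with b = alpha/(1 - alpha*beta).
alpha, beta = 0.3, 0.95
grid = np.linspace(0.05, 1.0, 50)         # state (capital) grid
choices = np.linspace(1e-4, 1.0, 400)     # candidate next-period capital
Phi = np.column_stack([np.ones_like(grid), np.log(grid)])  # sieve basis
coef = np.zeros(2)

for _ in range(500):
    V_next = coef[0] + coef[1] * np.log(choices)       # sieve value function
    c = grid[:, None] ** alpha - choices[None, :]      # consumption matrix
    vals = np.where(c > 0,
                    np.log(np.maximum(c, 1e-12)) + beta * V_next[None, :],
                    -np.inf)
    TV = vals.max(axis=1)                              # Bellman operator on grid
    new_coef, *_ = np.linalg.lstsq(Phi, TV, rcond=None)  # project onto sieve
    if np.max(np.abs(new_coef - coef)) < 1e-8:
        coef = new_coef
        break
    coef = new_coef
```

In high-dimensional applications the basis would instead be a growing polynomial (or similar) sieve, and the paper's results bound the resulting approximation error.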
Some activities’ times cannot be reduced by conventional single minute exchange of die (SMED) tools; in such cases more advanced tools are needed. The purpose of this paper is to integrate the fuzzy Taguchi method into the SMED method in order to improve setup time. Fuzzy logic is used because experts’ assessments of factor levels are subjective, and subjective assessment carries a certain degree of uncertainty and vagueness. The fuzzy Taguchi method makes it possible to determine optimal setup-time parameters within a SMED activity, so setup time can be reduced further than with the conventional SMED method alone.
In this study, the SMED method and the fuzzy Taguchi method are used.
In this study, it is shown that the setup time is reduced (from 196 to 75 min) and that the fuzzy Taguchi method allows the optimum to be located at intermediate factor levels.
In the authors’ admittedly limited literature review, no prior study was found that applies the fuzzy Taguchi method within the SMED method.
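The Taguchi step behind the method above amounts to comparing signal-to-noise (S/N) ratios across factor levels. The sketch below uses the standard "smaller-is-better" S/N ratio on hypothetical setup-time measurements; the fuzzy part of the paper's method (expert assessments encoded as fuzzy numbers and defuzzified) is replaced here by crisp measurements for brevity.

```python
import numpy as np

# Hypothetical repeated setup-time measurements (minutes) at three levels
# of a single factor. Smaller-is-better S/N ratio: -10*log10(mean(y^2)).
times = {
    "level 1": [22.0, 24.0, 23.0],
    "level 2": [15.0, 16.0, 14.0],
    "level 3": [18.0, 19.0, 17.0],
}
sn = {lvl: -10.0 * np.log10(np.mean(np.square(y))) for lvl, y in times.items()}
best = max(sn, key=sn.get)  # the higher the S/N ratio, the better the level
```

In the full fuzzy Taguchi method, the fuzzy step additionally lets the optimum fall between the tested discrete levels, which is what the study's finding about intermediate values refers to.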
Stochastic volatility models are of great importance in the field of mathematical finance, especially for accurately explaining the dynamics of financial derivatives. A quantile-based estimator for the location parameter of a stochastic volatility model is proposed by solving an optimization problem. In this chapter, the asymptotic distribution of the estimator is derived without assuming that the density function of the noise is positive around the corresponding population quantile. We also discuss a Bayesian approach to the quantile estimation problem and establish a result regarding the nature of the posterior distribution.
The topic of volatility measurement and estimation is central to financial and more generally time-series econometrics. In this chapter, we begin by surveying models of volatility, both discrete and continuous, and then we summarize some selected empirical findings from the literature. In particular, in the first sections of this chapter, we discuss important developments in volatility models, with focus on time-varying and stochastic volatility as well as nonparametric volatility estimation. The models discussed share the common feature that volatilities are unobserved and belong to the class of missing variables. We then provide empirical evidence on “small” and “large” jumps from the perspective of their contribution to overall realized variation, using high-frequency price return data on 25 stocks in the DOW 30. Our “small” and “large” jump variations are constructed at three truncation levels, using extant methodology of Barndorff-Nielsen and Shephard (2006), Andersen, Bollerslev, and Diebold (2007), and Aït-Sahalia and Jacod (2009a, 2009b, 2009c). Evidence of jumps is found in around 22.8% of the days during the 1993–2000 period, much higher than the corresponding figure of 9.4% during the 2001–2008 period. Although the overall role of jumps is lessening, the role of large jumps has not decreased, and indeed, the relative role of large jumps, as a proportion of overall jumps, has actually increased in the 2000s.
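The truncation-based decomposition described above can be illustrated on simulated one-minute returns: squared returns above a threshold are attributed to jumps, the rest to the continuous component. The threshold's functional form below (a constant times Delta^0.49) follows the general shape used in the truncation literature the chapter cites, but the constants are illustrative, not the chapter's tuning.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 390                       # one-minute returns in a 6.5-hour trading day
delta = 1.0 / n               # sampling interval as a fraction of the day
sigma = 0.2                   # illustrative diffusive volatility

# Diffusive returns plus one injected large jump
r = sigma * np.sqrt(delta) * rng.standard_normal(n)
r[200] += 0.10

rv = np.sum(r ** 2)                       # realized variance (total variation)
threshold = 3 * sigma * delta ** 0.49     # truncation level (assumed constants)
is_jump = np.abs(r) > threshold
jump_var = np.sum(r[is_jump] ** 2)        # "jump" variation
continuous_var = rv - jump_var            # remaining "continuous" variation
```

Varying the truncation level, as the chapter does with three levels, shifts mass between the "small" and "large" jump components while leaving the total realized variation fixed.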
We study research designs where a binary treatment changes discontinuously at the border between administrative units such as states, counties, or municipalities, creating a treated and a control area. This type of geographically discontinuous treatment assignment can be analyzed in a standard regression discontinuity (RD) framework if the exact geographic location of each unit in the dataset is known. Such data, however, is often unavailable due to privacy considerations or measurement limitations. In the absence of geo-referenced individual-level data, two scenarios can arise depending on what kind of geographic information is available. If researchers have information about each observation’s location within aggregate but small geographic units, a modified RD framework can be applied, where the running variable is treated as discrete instead of continuous. If researchers lack this type of information and instead only have access to the location of units within coarse aggregate geographic units that are too large to be considered in an RD framework, the available coarse geographic information can be used to create a band or buffer around the border, only including in the analysis observations that fall within this band. We characterize each scenario, and also discuss several methodological challenges that are common to all research designs based on geographically discontinuous treatment assignments. We illustrate these issues with an original geographic application that studies the effect of introducing copayments for the use of the Children’s Health Insurance Program in the United States, focusing on the border between Illinois and Wisconsin.
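The band/buffer scenario described above reduces, in its simplest form, to keeping only observations within a fixed distance of the border and comparing the two sides. The sketch below uses an invented straight border and simulated latitudes (the actual Illinois-Wisconsin application involves real geography and coarse geographic units); note that a raw within-band mean comparison ignores trends in the running variable, one of the methodological challenges the paper discusses.

```python
import numpy as np

rng = np.random.default_rng(3)
lat = rng.uniform(41.5, 43.5, 1000)   # hypothetical unit latitudes
border = 42.5                          # hypothetical straight border
# Outcome with a true treatment effect of 0.8 on the treated (northern) side
y = 1.0 + 0.8 * (lat >= border) + rng.normal(0, 0.3, lat.size)

band = 0.25                            # buffer half-width (degrees, arbitrary)
in_band = np.abs(lat - border) <= band # keep only observations near the border
treated = in_band & (lat >= border)
control = in_band & (lat < border)
effect = y[treated].mean() - y[control].mean()
```

With geo-referenced data one would instead run a standard RD in distance to the border; with small aggregate units, a discrete-running-variable RD, as the two other scenarios in the paper describe.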
Do social movement organizations increase the supply of a public good? We address this question by investigating the role of generalist social movement organizations and technology-focused organizations for the development of the electric vehicle (EV) charging infrastructure in California from 1995 until 2012. We find that increases in the membership of Electric Auto Association (EAA) chapters in the cities of California enhanced the number of EV charging stations set up in each city. Our analyses also show that the organizational diversity of the environmental movement spurred the growth of EAA membership but did not directly increase the establishment of charging stations.