Search results 1–10 of 570
Analyses the Bureau of Municipal Research's (BMR) role in the 1912 New York City School Inquiry to show the democratic orientation of key people trying to transfer scientific management to government. Because much modern public administration literature portrays scientific management as authoritarian, some people assume its proponents wanted to shut the populace out of public-sector decision making by transferring power from elected officials to experts. The School Inquiry case shows how important reformers committed to scientific management sought to maximize the control that elected officials had over a key administrative function. The BMR stressed this democratic point of view until threats from its principal financial backer forced it to downplay its voice on educational issues and its innovative concept of efficient citizenship.
Simulation-based methods and simulation-assisted estimators have greatly increased the reach of empirical applications in econometrics. The received literature includes a thick layer of theoretical studies, including landmark works by Gourieroux and Monfort (1996), McFadden and Ruud (1994), and Train (2003), and hundreds of applications. An early and still influential application of the method is Berry, Levinsohn, and Pakes's (1995) (BLP) application to the U.S. automobile market in which a market equilibrium model is cleared of latent heterogeneity by integrating the heterogeneity out of the moments in a GMM setting. BLP's methodology is a baseline technique for studying market equilibrium in empirical industrial organization. Contemporary applications involving multilayered models of heterogeneity in individual behavior such as that in Riphahn, Wambach, and Million's (2003) study of moral hazard in health insurance are also common. Computation of multivariate probabilities by using simulation methods is now a standard technique in estimating discrete choice models. The mixed logit model for modeling preferences (McFadden & Train, 2000) is now the leading edge of research in multinomial choice modeling. Finally, perhaps the most prominent application in the entire arena of simulation-based estimation is the current generation of Bayesian econometrics based on Markov Chain Monte Carlo (MCMC) methods. In this area, heretofore intractable estimators of posterior means are routinely estimated with the assistance of simulation and the Gibbs sampler.
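The integration-by-simulation idea these applications share can be sketched with a minimal mixed logit probability: the logit choice probability, conditional on a random taste coefficient, is averaged over Monte Carlo draws of that coefficient. The function name, distributional choices, and toy data below are illustrative assumptions, not taken from any of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_mixed_logit_prob(x, beta_mean, beta_sd, n_draws=1000):
    """Approximate P(choice j) = E_beta[logit_j(beta)] by Monte Carlo.

    x                  : (J, K) attribute matrix for the J alternatives
    beta_mean, beta_sd : mean and std. dev. of the normal random coefficients
    """
    J, K = x.shape
    probs = np.zeros(J)
    for _ in range(n_draws):
        beta = rng.normal(beta_mean, beta_sd, size=K)  # one draw of random tastes
        u = x @ beta
        u -= u.max()                                   # numerical stabilization
        p = np.exp(u) / np.exp(u).sum()                # conditional logit probabilities
        probs += p
    return probs / n_draws                             # simulated unconditional probabilities

# toy example: 3 alternatives described by 2 attributes
x = np.array([[1.0, 0.5], [0.2, 1.0], [0.0, 0.0]])
p = simulated_mixed_logit_prob(x, beta_mean=0.5, beta_sd=1.0)
```

The same averaging-over-draws step is what clears the latent heterogeneity out of the moments in the BLP-style GMM setting, with the simulated probabilities entering the moment conditions in place of the intractable integrals.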
In this chapter, we utilize the residual concept of productivity measures, defined in the context of a normal-gamma stochastic frontier production model with heterogeneity, to differentiate productivity and inefficiency measures. In particular, three alternative two-way random effects panel estimators of the normal-gamma stochastic frontier model are proposed using simulated maximum likelihood estimation techniques. For the three alternative panel estimators, we use a generalized least squares procedure that estimates the variance components in the first stage and then uses the estimated variance–covariance matrix to transform the data. Empirical estimates indicate differences in the parameter coefficients of the gamma distribution, production function, and heterogeneity function variables between the pooled and the two alternative panel estimators. The difference between the pooled and panel models suggests the need to account for spatial, temporal, and within-residual variation, as in the Swamy–Arora estimator, and for within-residual variation, as in the Amemiya estimator, within a panel framework. Finally, results from this study indicate that short- and long-run variations in financial exposure (solvency, liquidity, and efficiency) play an important role in explaining the variance of inefficiency and productivity.
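The simulated-likelihood step that makes the normal-gamma model tractable can be sketched as follows: the density of the composed error ε = v − u (v normal noise, u gamma inefficiency) has no closed form, but it can be approximated by averaging the normal density over gamma draws of u. All parameter values, function names, and the toy residuals below are hypothetical illustrations, not the chapter's actual estimator.

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(1)

def simulated_normal_gamma_density(eps, sigma_v, shape, scale, n_draws=2000):
    """Simulated density of eps = v - u, v ~ N(0, sigma_v^2), u ~ Gamma(shape, scale).

    f(eps) = E_u[ phi(eps + u; sigma_v) ], approximated by Monte Carlo.
    """
    u = gamma.rvs(shape, scale=scale, size=n_draws, random_state=rng)
    return norm.pdf(eps + u, scale=sigma_v).mean()

def simulated_loglik(residuals, sigma_v, shape, scale):
    # sum of log simulated densities over observations; in practice this
    # would be maximized over (sigma_v, shape, scale) and the frontier slopes
    return sum(np.log(simulated_normal_gamma_density(e, sigma_v, shape, scale))
               for e in residuals)

# toy residuals from a hypothetical frontier regression
res = np.array([-0.3, 0.1, -0.8, 0.05])
ll = simulated_loglik(res, sigma_v=0.3, shape=1.5, scale=0.4)
```

In a full simulated maximum likelihood estimator the same set of draws would be held fixed across iterations of the optimizer so that the simulated objective is smooth in the parameters.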
It has long been recognised that humans draw from a large pool of processing aids to help manage the everyday challenges of life. It is not uncommon to observe individuals adopting simplifying strategies when faced with ever-increasing amounts of information to process, especially for decisions where the chosen outcome will have a very marginal impact on their well-being. The transaction costs associated with processing all new information often exceed the benefits of such a comprehensive review. The accumulating life experiences of individuals are also often brought to bear as reference points to assist in selectively evaluating the information placed in front of them. These features of human processing and cognition are not new to the broad literature on judgment and decision-making, where heuristics are offered up as deliberative analytic procedures intentionally designed to simplify choice. What is surprising is the limited recognition of the heuristics that individuals use to process the attributes in stated choice experiments. In this paper we present a case for a utility-based framework within which some appealing processing strategies are embedded (without the aid of supplementary self-stated intentions), as well as models conditioned on self-stated intentions represented as single items of process advice, and we illustrate the implications for the willingness to pay for travel time savings of embedding each heuristic in the choice process. Given the controversy surrounding the reliability of self-stated intentions, we introduce a framework in which mixtures of process advice embedded within a belief function might be used in future empirical studies to condition choice, as a way of judging the strength of the evidence.
In empirical research, panel (and multinomial) probit models are leading examples for the use of maximum simulated likelihood estimators. The Geweke–Hajivassiliou–Keane (GHK) simulator is the most widely used technique for this type of problem. This chapter suggests an algorithm that is based on GHK but uses an adaptive version of sparse-grids integration (SGI) instead of simulation. It is adaptive in the sense that it uses an automated change-of-variables to make the integration problem numerically better behaved, along the lines of efficient importance sampling (EIS) and adaptive univariate quadrature. The resulting integral is approximated using SGI, which generalizes Gaussian quadrature in such a way that the computational costs do not grow exponentially with the number of dimensions. Monte Carlo experiments show impressive performance compared with the original GHK algorithm, especially in difficult cases such as models with high intertemporal correlations.
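A bare-bones version of the GHK recursion can be sketched as follows: factor the covariance matrix with a Cholesky decomposition, draw each component from a one-dimensional truncated normal conditional on the earlier draws, and average the product of the one-dimensional truncation probabilities. This is a generic textbook sketch (names and the toy example are hypothetical), not the chapter's adaptive SGI variant.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def ghk_probability(b, cov, n_draws=5000):
    """GHK simulator for P(Y_1 < b_1, ..., Y_m < b_m), with Y ~ N(0, cov)."""
    L = np.linalg.cholesky(cov)
    m = len(b)
    total = 0.0
    for _ in range(n_draws):
        eta = np.zeros(m)
        weight = 1.0
        for i in range(m):
            # bound for eta_i implied by the earlier draws
            upper = (b[i] - L[i, :i] @ eta[:i]) / L[i, i]
            p = norm.cdf(upper)            # one-dimensional truncation probability
            weight *= p
            # draw eta_i from N(0, 1) truncated above at `upper`
            eta[i] = norm.ppf(rng.uniform(0.0, p)) if p > 0 else upper
        total += weight
    return total / n_draws

# toy example: bivariate normal with correlation 0.5
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
p = ghk_probability(np.array([0.0, 0.0]), cov)
```

For this bivariate example the exact orthant probability is 1/4 + arcsin(0.5)/(2π) = 1/3, which the simulator should approximate closely; the averaging of smooth weights (rather than accept/reject indicators) is what gives GHK its low variance.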
In this paper we use Monte Carlo sampling experiments to examine the properties of pretest estimators in the random parameters logit (RPL) model. The pretests are for the presence of random parameters. We study the Lagrange multiplier (LM), likelihood ratio (LR), and Wald tests, using conditional logit as the restricted model. The LM test is the fastest of the three to implement, since it uses only the restricted (conditional logit) estimates. However, the LM-based pretest estimator has poor risk properties: the ratio of its root mean squared error (RMSE) to the RMSE of the random parameters logit estimator diverges from one as the standard deviation of the parameter distribution increases. The LR and Wald tests exhibit the properties of consistent tests, with power approaching one as the specification error increases, so that the corresponding pretest estimator is consistent. We explore the power of these three tests for the random parameters by calculating the empirical percentile values, size, and rejection rates of the test statistics. We find that the power of the LR and Wald tests decreases as the mean of the coefficient distribution increases, and that the LM test has the weakest power for detecting the presence of random coefficients in the RPL model.
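The pretest-estimator logic studied here can be illustrated in a far simpler setting than the RPL model: test a restriction, report the restricted estimate when the test fails to reject and the unrestricted estimate otherwise, then compare RMSEs by Monte Carlo. The toy problem below (a scalar mean with known variance, Wald pretest) is a hypothetical illustration of the general mechanism, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(3)

def pretest_rmse_ratio(mu, n=50, reps=2000, crit=1.96):
    """RMSE(pretest) / RMSE(unrestricted) for estimating a mean mu.

    Pretest: Wald test of H0: mu = 0 (sigma known, = 1); report 0 if the
    test does not reject, otherwise the sample mean (unrestricted estimate).
    """
    sq_err_pre, sq_err_unr = [], []
    for _ in range(reps):
        y = rng.normal(mu, 1.0, size=n)
        ybar = y.mean()
        wald = np.sqrt(n) * ybar                 # Wald statistic under sigma = 1
        est = ybar if abs(wald) > crit else 0.0  # the pretest estimator
        sq_err_pre.append((est - mu) ** 2)
        sq_err_unr.append((ybar - mu) ** 2)
    return np.sqrt(np.mean(sq_err_pre)) / np.sqrt(np.mean(sq_err_unr))

ratio_at_null = pretest_rmse_ratio(0.0)   # restriction true: pretest helps
ratio_midway = pretest_rmse_ratio(0.25)   # moderate violation: pretest hurts
```

The characteristic risk pattern appears even in this toy: the ratio is below one when the restriction holds and rises above one for moderate violations, which is the same qualitative behavior reported for the LM-based pretest estimator in the RPL setting.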