Search results

1 – 10 of over 7000
Article
Publication date: 1 March 2006

L.L. Ho and A.F. Silva

Abstract

Purpose

To present a bootstrap procedure to correct biases in the maximum likelihood estimators of mean time to failure (MTTF) and percentiles in a Weibull regression model.

Design/methodology/approach

A reliability model is described by a Weibull regression model whose parameters are estimated by the maximum likelihood method and then used to estimate other quantities of interest, such as the MTTF or percentiles. When a small sample is employed, it is known that the estimates of these quantities are biased. A simulation study varying the sample size, censoring mechanism, allocation mechanism and level of censoring is designed to quantify these biases.

Findings

The bootstrap procedure corrects the biased maximum likelihood estimates of MTTF and percentiles.

Practical implications

A smaller sample may be required when the bootstrap procedure is used to produce estimators of quantities such as the MTTF and percentiles.

Originality/value

The bootstrap procedure is employed to quantify the biases, since analytical expressions for them are very difficult to derive, and smaller samples are needed to obtain unbiased estimates from the bootstrap-corrected estimator.
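The mechanics of the bootstrap bias correction are easy to illustrate. Below is a minimal Python sketch for an uncensored two-parameter Weibull sample, assuming SciPy's `weibull_min`; the paper's actual setting, a Weibull regression with censoring, is richer than this toy example.

```python
# A sketch of bootstrap bias correction for the Weibull MTTF, assuming an
# uncensored i.i.d. sample (the paper's censored regression model is not
# reproduced here).
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

def mttf(shape, scale):
    # MTTF of a Weibull(shape, scale) is scale * Gamma(1 + 1/shape).
    return scale * gamma(1.0 + 1.0 / shape)

rng = np.random.default_rng(0)
data = weibull_min.rvs(1.5, scale=100.0, size=20, random_state=rng)  # small sample

shape_hat, _, scale_hat = weibull_min.fit(data, floc=0)
theta_hat = mttf(shape_hat, scale_hat)

# Parametric bootstrap: refit the model on samples drawn from the fit.
B = 500
boot = np.empty(B)
for b in range(B):
    res = weibull_min.rvs(shape_hat, scale=scale_hat, size=len(data),
                          random_state=rng)
    c, _, s = weibull_min.fit(res, floc=0)
    boot[b] = mttf(c, s)

# Bias-corrected estimate: 2 * theta_hat - mean of bootstrap replicates.
theta_bc = 2.0 * theta_hat - boot.mean()
```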

Details

International Journal of Quality & Reliability Management, vol. 23 no. 3
Type: Research Article
ISSN: 0265-671X

Book part
Publication date: 23 June 2016

Matthew Harding, Jerry Hausman and Christopher J. Palmer

Abstract

This paper considers the finite-sample distribution of the 2SLS estimator and derives bounds on its exact bias in the presence of weak and/or many instruments. We then contrast the behavior of the exact bias expressions and the asymptotic expansions currently popular in the literature, including a consideration of the no-moment problem exhibited by many Nagar-type estimators. After deriving a finite-sample unbiased k-class estimator, we introduce a double-k-class estimator based on Nagar (1962) that dominates k-class estimators (including 2SLS), especially in the cases of weak and/or many instruments. We demonstrate these properties in Monte Carlo simulations showing that our preferred estimators outperform Fuller (1977) estimators in terms of mean bias and MSE.
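To make the estimator family concrete, here is a minimal Python sketch of the k-class estimator for a single endogenous regressor, with a simulated weak-instrument design; the notation and parameter values are illustrative and not taken from the chapter.

```python
# A sketch of the k-class family for one endogenous regressor; k = 0 is
# OLS, k = 1 is 2SLS. The weak-instrument design below is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 10                                  # observations, instruments
Z = rng.standard_normal((n, m))
pi = np.full(m, 0.05)                           # weak first stage
u, v = rng.multivariate_normal([0.0, 0.0],
                               [[1.0, 0.8], [0.8, 1.0]], size=n).T
x = Z @ pi + v                                  # endogenous regressor
y = 0.5 * x + u                                 # true beta = 0.5

def k_class(y, x, Z, k):
    # beta(k) = [x'(I - k M_Z) x]^{-1} x'(I - k M_Z) y, with M_Z = I - P_Z.
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    A = np.eye(len(y)) - k * (np.eye(len(y)) - P)
    return (x @ A @ y) / (x @ A @ x)

beta_ols = k_class(y, x, Z, 0.0)
beta_2sls = k_class(y, x, Z, 1.0)
```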

Details

Essays in Honor of Aman Ullah
Type: Book
ISBN: 978-1-78560-786-8

Article
Publication date: 2 March 2015

Michael Bleaney and Zhiyong Li

Abstract

Purpose

This paper aims to investigate the performance of estimators of the bid-ask spread in a wide range of circumstances and sampling frequencies. The bid-ask spread is important for many reasons. Because spread data are not always available, many methods have been suggested for estimating the spread. Existing papers focus on the performance of the estimators either under ideal conditions or in real data. The gap between ideal conditions and the properties of real data is usually ignored. The consistency of the estimates across various sampling frequencies is also ignored.

Design/methodology/approach

The estimators and the possible errors are analysed theoretically. Then we perform simulation experiments, reporting the bias, standard deviation and root mean square estimation error of each estimator. More specifically, we assess the effects of the following factors on the performance of the estimators: the magnitude of the spread relative to returns volatility, random variation in spreads, the autocorrelation of mid-price returns and mid-price changes caused by trade directions and feedback trading.

Findings

The best estimates come from using the highest frequency of data available. The relative performance of the estimators can vary quite markedly with the sampling frequency. In small samples, the standard deviation can contribute more to the estimation error than the bias; in large samples, the opposite tends to be true.

Originality/value

There is a conspicuous lack of simulation evidence on the comparative performance of different estimators of the spread under the less than ideal conditions that are typical of real-world data. This paper aims to fill this gap.
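As a concrete example of the kind of spread estimator such simulation studies evaluate, the following Python sketch implements one widely used estimator of this kind, the Roll (1984) serial-covariance estimator, under an idealized data-generating process; the paper's experimental design is more elaborate.

```python
# A sketch of the Roll (1984) spread estimator under an idealized market:
# a random-walk mid-price plus half-spread bounce from random trade signs.
import numpy as np

rng = np.random.default_rng(2)
n, spread, sigma = 5000, 0.02, 0.01
mid = np.cumsum(rng.normal(0.0, sigma, n))       # efficient mid-price
direction = rng.choice([-1, 1], size=n)          # buyer/seller initiated
prices = mid + direction * spread / 2            # observed trade prices

dp = np.diff(prices)
autocov = np.cov(dp[1:], dp[:-1])[0, 1]
# Roll: spread = 2 * sqrt(-cov(dp_t, dp_{t-1})), defined when autocov < 0.
roll_spread = 2.0 * np.sqrt(-autocov) if autocov < 0 else np.nan
```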

Details

Studies in Economics and Finance, vol. 32 no. 1
Type: Research Article
ISSN: 1086-7376

Book part
Publication date: 18 January 2022

Artūras Juodis

Abstract

This chapter analyzes the properties of an alternative least-squares based estimator for linear panel data models with general predetermined regressors. This approach uses backward means of regressors to approximate individual-specific fixed effects (FE). The author analyzes sufficient conditions for this estimator to be asymptotically efficient, and argues that, in comparison with the FE estimator, the use of backward means leads to a non-trivial bias-variance tradeoff. The author complements the theoretical analysis with an extensive Monte Carlo study, finding that some of the currently available results for the restricted AR(1) model cannot be easily generalized and should be extrapolated with caution.
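A heavily hedged Python sketch of one plausible reading of the backward-mean device follows: proxy the unit fixed effect with the running mean of the regressor and include it in a pooled regression. This illustrates the idea only; it is not the chapter's exact estimator.

```python
# A toy panel in which the running (backward) mean of the regressor serves
# as a proxy for the unit fixed effect in a pooled regression. Illustrative
# reading of the device, not the chapter's estimator.
import numpy as np

rng = np.random.default_rng(3)
N, T, beta = 200, 10, 0.5
alpha = rng.standard_normal(N)                     # fixed effects
x = alpha[:, None] + rng.standard_normal((N, T))   # regressor, correlated with FE
y = beta * x + alpha[:, None] + rng.standard_normal((N, T))

# Backward mean: bm[i, t] = mean of x[i, 0..t].
bm = np.cumsum(x, axis=1) / np.arange(1, T + 1)

X = np.column_stack([x.ravel(), bm.ravel(), np.ones(N * T)])
coef, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
beta_hat = coef[0]
```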

Details

Essays in Honor of M. Hashem Pesaran: Panel Modeling, Micro Applications, and Econometric Methodology
Type: Book
ISBN: 978-1-80262-065-8

Article
Publication date: 5 May 2015

Priscillia Hunt and Jeremy N.V. Miles

Abstract

Purpose

Studies in criminal psychology are inevitably undertaken in a context of uncertainty. One class of methods addressing such uncertainties is Monte Carlo (MC) simulation. This paper provides an introduction to MC simulation for representing uncertainty and focusses on its likely uses in studies of criminology and psychology. In addition to describing the method and providing a step-by-step guide to implementing an MC simulation, the paper presents examples using the Fragile Families and Child Wellbeing Survey data. Results show that MC simulation can be a useful technique for testing biased estimators and for evaluating the effect of bias on the power of statistical tests.

Design/methodology/approach

After describing MC simulation methods in detail, this paper provides a step-by-step guide to conducting a simulation. Then, a series of examples is provided. First, the authors present a brief example of how to generate data using MC simulation and the implications of alternative probability distribution assumptions. The second example uses actual data to evaluate the impact that omitted variable bias can have on least squares estimators. A third example evaluates the impact heteroskedasticity can have on the power of statistical tests.

Findings

This study shows that the means of MC-simulated variables are very similar to those in the actual data, but the standard deviations are considerably smaller in the simulated data. Using actual data on the criminal convictions and income of fathers, the authors demonstrate the impact of omitted variable bias on the standard errors of the least squares estimator. Lastly, the authors show that the p-values are systematically larger, and the rejection frequencies correspondingly smaller, in heteroskedastic error models compared to a model with homoskedastic errors.

Originality/value

The aim of this paper is to provide a better understanding of what MC simulation methods are and what can be achieved with them. A key value of this paper is that the authors focus on understanding the concepts of MC simulation for researchers of statistics and psychology in particular. Furthermore, the authors provide a step-by-step description of the MC simulation approach and provide examples using real survey data on criminal convictions and economic characteristics of fathers in large US cities.
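The omitted-variable example translates into a few lines of Python. The sketch below, with illustrative parameters rather than the paper's survey data, repeats the data generation many times and compares the OLS slope with and without the confounder.

```python
# A Monte Carlo exercise of the kind described: simulate many samples and
# compare the OLS slope with and without an omitted confounder. Parameter
# values are illustrative, not from the paper's survey data.
import numpy as np

rng = np.random.default_rng(4)
R, n = 2000, 200
b_full, b_omit = np.empty(R), np.empty(R)

for r in range(R):
    z = rng.standard_normal(n)             # confounder
    x = 0.7 * z + rng.standard_normal(n)   # regressor correlated with z
    y = x + z + rng.standard_normal(n)     # true slope on x is 1.0

    X_full = np.column_stack([x, z, np.ones(n)])
    X_omit = np.column_stack([x, np.ones(n)])
    b_full[r] = np.linalg.lstsq(X_full, y, rcond=None)[0][0]
    b_omit[r] = np.linalg.lstsq(X_omit, y, rcond=None)[0][0]

print(b_full.mean(), b_omit.mean())        # the second mean is biased upward
```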

Details

Journal of Criminal Psychology, vol. 5 no. 2
Type: Research Article
ISSN: 2009-3829

Book part
Publication date: 15 April 2020

Joshua C. C. Chan, Chenghan Hou and Thomas Tao Yang

Abstract

Importance sampling is a popular Monte Carlo method used in a variety of areas in econometrics. When the variance of the importance sampling estimator is infinite, the central limit theorem does not apply and estimates tend to be erratic even when the simulation size is large. The authors consider asymptotic trimming in such a setting. Specifically, the authors propose a bias-corrected tail-trimmed estimator such that it is consistent and has finite variance. The authors show that the proposed estimator is asymptotically normal, and has good finite-sample properties in a Monte Carlo study.
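The failure mode and the trimming idea can be sketched in Python. The example below uses a Student-t target with a normal proposal, so the importance weights have infinite variance, and then applies a naive trim of the largest terms; the chapter's bias-corrected tail-trimmed estimator uses a principled trimming rate and a correction not reproduced here.

```python
# Importance sampling with infinite-variance weights, plus a naive trim of
# the largest terms. The chapter's estimator adds a bias correction and a
# principled trimming rate; this only illustrates the underlying problem.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 100_000
x = rng.standard_normal(n)                       # proposal: N(0, 1)
w = stats.t.pdf(x, df=3) / stats.norm.pdf(x)     # target: Student-t(3)

h = x ** 2                                       # estimand: E[X^2] = 3
plain = np.mean(w * h)                           # erratic: infinite variance

k = int(n ** 0.5)                                # trim ~sqrt(n) largest terms
keep = np.argsort(w * np.abs(h))[: n - k]
trimmed = np.mean((w * h)[keep])                 # finite variance, but biased
```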

Book part
Publication date: 19 December 2012

George G. Judge and Ron C. Mittelhammer

Abstract

In the context of competing theoretical economic–econometric models and corresponding estimators, we demonstrate a semiparametric combining estimator that, under quadratic loss, has superior risk performance. The method eliminates the need for pretesting to decide between members of the relevant family of econometric models and demonstrates, under quadratic loss, the nonoptimality of the conventional pretest estimator. First-order asymptotic properties of the combined estimator are demonstrated. A sampling study is used to illustrate finite sample performance over a range of econometric model sampling designs that includes performance relative to a Hausman-type model selection pretest estimator. An important empirical problem from the causal effects literature is analyzed to indicate the applicability and econometric implications of the methodology. This combining estimation and inference framework can be extended to a range of models and corresponding estimators. The combining estimator is novel in that it directly provides minimum quadratic loss solutions.
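As a bare-bones illustration of combining rather than pretesting, the Python sketch below forms a convex combination of two competing estimates with weights based on their estimated MSEs; this weighting rule is illustrative and is not the chapter's semiparametric construction.

```python
# A convex combination of two competing estimates, weighted by inverse
# estimated MSE. The weighting rule is illustrative only.
def combine(beta1, beta2, mse1, mse2):
    a = mse2 / (mse1 + mse2)        # more weight on the lower-MSE estimator
    return a * beta1 + (1.0 - a) * beta2

# e.g. beta1 from OLS (precise, possibly biased), beta2 from IV (consistent,
# noisy); the MSE estimates would come from the data in practice.
beta_comb = combine(beta1=0.62, beta2=0.48, mse1=0.004, mse2=0.010)
```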

Book part
Publication date: 12 December 2003

Douglas Miller, James Eales and Paul Preckel

Abstract

We propose a quasi-maximum likelihood estimator for the location parameters of a linear regression model with bounded and symmetrically distributed errors. The error outcomes are restated as the convex combination of the bounds, and we use the method of maximum entropy to derive the quasi-log likelihood function. Under the stated model assumptions, we show that the proposed estimator is unbiased, consistent, and asymptotically normal. We then conduct a series of Monte Carlo exercises designed to illustrate the sampling properties of the quasi-maximum likelihood estimator relative to the least squares estimator. Although the least squares estimator has smaller quadratic risk under normal and skewed error processes, the proposed QML estimator dominates least squares for the bounded and symmetric error distribution considered in this paper.
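The sampling-study side of the chapter is straightforward to emulate. The Python sketch below simulates a regression with bounded, symmetric (uniform) errors and records the quadratic loss of least squares across replications; the maximum-entropy QML estimator itself is not reproduced here, so only the benchmark is shown.

```python
# A sampling study recording the quadratic loss of least squares under
# bounded, symmetric (uniform) errors. The chapter's maximum-entropy QML
# estimator is not reproduced here.
import numpy as np

rng = np.random.default_rng(6)
R, n, beta = 1000, 50, np.array([1.0, -0.5])
loss = np.empty(R)

for r in range(R):
    X = np.column_stack([np.ones(n), rng.standard_normal(n)])
    e = rng.uniform(-1.0, 1.0, size=n)      # bounded symmetric errors
    y = X @ beta + e
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    loss[r] = np.sum((b - beta) ** 2)       # quadratic loss

risk_ls = loss.mean()                        # Monte Carlo risk of least squares
```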

Details

Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later
Type: Book
ISBN: 978-1-84950-253-5

Book part
Publication date: 23 November 2011

Daniel L. Millimet

Abstract

Researchers in economics and other disciplines are often interested in the causal effect of a binary treatment on outcomes. Econometric methods used to estimate such effects are divided into one of two strands depending on whether they require unconfoundedness (i.e., independence of potential outcomes and treatment assignment conditional on a set of observable covariates). When this assumption holds, researchers now have a wide array of estimation techniques from which to choose. However, very little is known about their performance – both in absolute and relative terms – when measurement error is present. In this study, the performance of several estimators that require unconfoundedness, as well as some that do not, is evaluated in a Monte Carlo study. In all cases, the data-generating process is such that unconfoundedness holds with the ‘real’ data. However, measurement error is then introduced. Specifically, three types of measurement error are considered: (i) errors in treatment assignment, (ii) errors in the outcome, and (iii) errors in the vector of covariates. Recommendations for researchers are provided.
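One of the three error types is easy to demonstrate. The Python sketch below randomly misclassifies a binary treatment and shows the attenuation of a simple difference-in-means effect estimate; the flip rate and effect size are illustrative, and the study's full estimator comparison is not reproduced.

```python
# Random misclassification of a binary treatment attenuates a simple
# difference-in-means estimate. Flip rate and effect size are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n, tau = 10_000, 2.0
d = rng.integers(0, 2, n)                    # true treatment assignment
y = tau * d + rng.standard_normal(n)         # unconfounded outcome

flip = rng.random(n) < 0.15                  # 15% of assignments mismeasured
d_obs = np.where(flip, 1 - d, d)

ate_true = y[d == 1].mean() - y[d == 0].mean()
ate_obs = y[d_obs == 1].mean() - y[d_obs == 0].mean()   # attenuated toward 0
```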

Details

Missing Data Methods: Cross-sectional Methods and Applications
Type: Book
ISBN: 978-1-78052-525-9

Article
Publication date: 2 July 2020

Ingo Hoffmann and Christoph J. Börner

Abstract

Purpose

This paper aims to evaluate the accuracy of a quantile estimate. Especially when estimating high quantiles from few data points, the quantile estimator itself is a random number with its own distribution. This distribution is first determined, and it is then shown how the accuracy of the quantile estimation can be assessed in practice.

Design/methodology/approach

The paper considers the situation in which the parent distribution of the data is unknown: the tail is modeled with the generalized Pareto distribution and the quantile is then estimated using the fitted tail model. Based on well-known theoretical preliminary studies, the finite-sample distribution of the quantile estimator is determined and the accuracy of the estimator is quantified.

Findings

In general, the algebraic representation of the finite-sample distribution of the quantile estimator was found. With this distribution, all statistical quantities can be determined. In particular, the expected value, the variance and the bias of the quantile estimator are calculated to evaluate the accuracy of the estimation process. Scaling laws could be derived, and it turns out that with a fat tail and few data, the bias and the variance increase massively.

Research limitations/implications

Currently, the research is limited to the form of the tail, which is interesting for the financial sector. Future research might consider problems where the tail has a finite support or the tail is over-fat.

Practical implications

The ability to calculate error bands and the bias of the quantile estimator is equally important for financial institutions, regulators and auditors.

Originality/value

Understanding the quantile estimator as a random variable and analyzing and evaluating it based on its distribution gives researchers, regulators, auditors and practitioners new opportunities to assess risk.
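The peaks-over-threshold workflow the paper builds on can be sketched briefly in Python, assuming SciPy's `genpareto`: fit a generalized Pareto tail to the exceedances and read off a high quantile. The paper's contribution, the finite-sample distribution of this estimator, is not reproduced here.

```python
# Peaks-over-threshold: fit a generalized Pareto tail to exceedances and
# read off a high quantile. A sketch of the workflow only; the paper's
# finite-sample analysis of this estimator is not reproduced.
import numpy as np
from scipy.stats import genpareto, t as student_t

rng = np.random.default_rng(8)
x = student_t.rvs(3, size=500, random_state=rng)   # fat-tailed sample

u = np.quantile(x, 0.90)                           # tail threshold
exc = x[x > u] - u                                 # exceedances over u
xi, _, sigma = genpareto.fit(exc, floc=0)          # GPD tail fit

p, n, k = 0.99, len(x), len(exc)
# GPD quantile (xi != 0): q_p = u + (sigma/xi) * [((n/k) * (1-p))**(-xi) - 1]
q_hat = u + (sigma / xi) * (((n / k) * (1 - p)) ** (-xi) - 1.0)
```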

Details

The Journal of Risk Finance, vol. 21 no. 3
Type: Research Article
ISSN: 1526-5943
