Search results

1 – 10 of 118
Book part
Publication date: 13 May 2017

Otávio Bartalotti, Gray Calhoun and Yang He

Abstract

This chapter develops a novel bootstrap procedure to obtain robust bias-corrected confidence intervals in regression discontinuity (RD) designs. The procedure uses a wild bootstrap from a second-order local polynomial to estimate the bias of the local linear RD estimator; the bias is then subtracted from the original estimator. The bias-corrected estimator is then bootstrapped itself to generate valid confidence intervals (CIs). The CIs generated by this procedure are valid under conditions similar to Calonico, Cattaneo, and Titiunik’s (2014) analytical correction – that is, when the bias of the naive RD estimator would otherwise prevent valid inference. The chapter also provides simulation evidence that the method is as accurate as the analytical corrections, and demonstrates its use through a reanalysis of Ludwig and Miller’s (2007) Head Start dataset.
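As a stylized illustration of the mechanics (not the authors' exact procedure, which uses kernel-weighted local regressions and a further bootstrap of the bias-corrected estimator to build the CIs), the following sketch estimates the local linear bias on simulated data by wild-bootstrapping from a local quadratic fit:

```python
import numpy as np

rng = np.random.default_rng(0)

def side_fit(x, y, mask, degree, at=0.0):
    """Fit a polynomial on one side of the cutoff; predict at `at`."""
    c = np.polyfit(x[mask], y[mask], degree)
    return np.polyval(c, at)

def rd_jump(x, y, degree, h):
    """Local polynomial RD estimator of the jump at cutoff 0,
    using a uniform kernel with bandwidth h."""
    left = (x < 0) & (x >= -h)
    right = (x >= 0) & (x <= h)
    return side_fit(x, y, right, degree) - side_fit(x, y, left, degree)

# Simulated data: true jump = 1.0, with curvature only to the right of
# the cutoff, so the local linear estimator is biased.
n, h = 2000, 0.5
x = rng.uniform(-1, 1, n)
y = (x >= 0) + 0.5 * x + 1.5 * x**2 * (x >= 0) + rng.normal(0, 0.2, n)

tau_ll = rd_jump(x, y, degree=1, h=h)        # naive local linear estimate

# Wild bootstrap from the local *quadratic* fit: the quadratic model
# plays the role of the truth, so the mean bootstrap local linear
# estimate minus the quadratic jump estimates the bias.
left = (x < 0) & (x >= -h)
right = (x >= 0) & (x <= h)
inb = left | right
fit2 = np.zeros(n)
fit2[left] = np.polyval(np.polyfit(x[left], y[left], 2), x[left])
fit2[right] = np.polyval(np.polyfit(x[right], y[right], 2), x[right])
tau_q = side_fit(x, y, right, 2) - side_fit(x, y, left, 2)
resid = y - fit2

boot = np.empty(300)
for b in range(boot.size):
    w = rng.choice([-1.0, 1.0], n)           # Rademacher wild weights
    yb = fit2 + w * resid
    boot[b] = rd_jump(x[inb], yb[inb], degree=1, h=h)

bias_hat = boot.mean() - tau_q
tau_bc = tau_ll - bias_hat                   # bias-corrected estimate
```

In the chapter's full procedure this bias-corrected estimator is itself bootstrapped again to form the robust CIs; the sketch stops at the correction step.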

Details

Regression Discontinuity Designs
Type: Book
ISBN: 978-1-78714-390-6

Article
Publication date: 31 December 2018

Antonio Gil Ropero, Ignacio Turias Dominguez and Maria del Mar Cerbán Jiménez

Abstract

Purpose

The purpose of this paper is to evaluate the performance of the main Spanish and Portuguese container ports and to determine whether they are operating below their production capabilities.

Design/methodology/approach

To achieve this objective, data envelopment analysis (DEA) efficiency and scale efficiency (SE) scores are calculated, and a bootstrap scheme is applied to account for variability across different samples.

Findings

The results showed that the DEA bootstrap-based approach can not only select a suitable unit that accords with a port’s actual input capabilities, but also provide more accurate results. The bootstrapped results indicate that none of the ports needs further investment to expand its infrastructure.

Practical implications

The proposed DEA bootstrap-based approach provides useful implications for the robust measurement of port efficiency across different samples. The study demonstrates the usefulness of this approach as a decision-making tool in port efficiency analysis.

Originality/value

This study is one of the first to apply the bootstrap to measuring port efficiency in the context of Spain and Portugal. In the first stage, two DEA models have been used to obtain pure technical efficiency, technical efficiency and SE, under both input-oriented options: constant returns to scale and variable returns to scale. In the second stage, the bootstrap method has been applied to determine efficiency rankings of Iberian Peninsula container ports across different samples. Confidence interval estimates of efficiency for each port are reported. This paper provides useful insights into the application of a DEA bootstrap-based approach as a modeling tool to aid decision making in measuring port efficiency.
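A minimal sketch of the two-stage idea, using a hypothetical one-input/one-output example in which input-oriented CCR efficiency reduces to a simple output/input ratio against the best performer (the paper's multi-input DEA requires linear programming, and the DEA literature typically prefers a smoothed bootstrap, since naive resampling is known to be inconsistent at the frontier):

```python
import numpy as np

rng = np.random.default_rng(1)

def ccr_efficiency(inp, out):
    """Input-oriented CCR efficiency in the single-input/single-output
    case, where it reduces to each unit's output/input ratio divided
    by the best ratio in the sample."""
    ratio = out / inp
    return ratio / ratio.max()

# Hypothetical quay length (km) and container throughput (m TEU)
# for 8 made-up ports -- not the paper's data.
quay = np.array([1.2, 2.0, 1.5, 3.1, 0.9, 2.4, 1.8, 2.7])
teu  = np.array([0.9, 1.8, 1.1, 2.2, 0.8, 2.3, 1.2, 2.0])

eff = ccr_efficiency(quay, teu)

# Second stage: re-estimate the frontier from resampled ports and
# score every port against it, giving a bootstrap distribution and
# hence a confidence interval per port.
B = 1000
boot = np.empty((B, quay.size))
for b in range(B):
    idx = rng.integers(0, quay.size, quay.size)
    boot[b] = (teu / quay) / (teu[idx] / quay[idx]).max()
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
```

Bootstrap scores can exceed 1 when a resample omits the frontier port, which is exactly the frontier-sensitivity problem the smoothed bootstrap addresses.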

Details

Industrial Management & Data Systems, vol. 119 no. 4
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 9 January 2009

Andrea Vocino

Abstract

Purpose

The purpose of this article is to present an empirical analysis of complex sample data with regard to the biasing effect of non‐independence of observations on standard error parameter estimates. Using field data structured as repeated measurements, it is shown in a two‐factor confirmatory factor analysis model how the bias in standard errors (SE) arises when the non‐independence is ignored.

Design/methodology/approach

Three estimation procedures are compared: normal asymptotic theory (maximum likelihood); non‐parametric standard error estimation (naïve bootstrap); and sandwich (robust covariance matrix) estimation (pseudo‐maximum likelihood).
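The contrast between the naive bootstrap and a cluster-aware alternative can be illustrated on simulated repeated measurements (a sketch with made-up data and a simple mean, not the paper's CFA model): when observations within a cluster are correlated, both the iid formula and the naive observation-level bootstrap understate the standard error, while resampling whole clusters does not.

```python
import numpy as np

rng = np.random.default_rng(2)

# 50 clusters of 10 repeated measurements; a shared cluster effect
# induces non-independence within each cluster.
G, m = 50, 10
cluster_eff = rng.normal(0, 1, G).repeat(m)
y = cluster_eff + rng.normal(0, 1, G * m)

# (1) iid asymptotic formula for the SE of the mean
se_iid = y.std(ddof=1) / np.sqrt(y.size)

B = 2000
# (2) naive bootstrap: resample individual observations
naive = np.array([rng.choice(y, y.size).mean() for _ in range(B)])
# (3) cluster bootstrap: resample whole clusters, keeping the
#     within-cluster dependence intact
groups = y.reshape(G, m)
clust = np.array([groups[rng.integers(0, G, G)].mean() for _ in range(B)])

se_naive, se_cluster = naive.std(ddof=1), clust.std(ddof=1)
```

With these parameters the cluster-aware SE is roughly twice the iid-based one, mirroring the paper's point that the naive bootstrap inherits the independence assumption it is often hoped to avoid.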

Findings

The study reveals that, when using either normal asymptotic theory or non‐parametric standard error estimation, the SE bias produced by the non‐independence of observations can be noteworthy.

Research limitations/implications

Considering the methodological constraints in employing field data, the three analyses examined must be interpreted independently and as a result taxonomic generalisations are limited. However, the study still provides “case study” evidence suggesting the existence of the relationship between non‐independence of observations and standard error bias estimates.

Originality/value

Given the increasing popularity of structural equation models in the social sciences and in particular in the marketing discipline, the paper provides a theoretical and practical insight into how to treat repeated measures and clustered data in general, adding to previous methodological research. Some conclusions and suggestions for researchers who make use of partial least squares modelling are also drawn.

Details

Asia Pacific Journal of Marketing and Logistics, vol. 21 no. 1
Type: Research Article
ISSN: 1355-5855

Article
Publication date: 18 February 2021

Wenguang Yang, Lianhai Lin and Hongkui Gao

Abstract

Purpose

To solve the problem of simulation evaluation with small samples, a fresh approach to grey estimation is presented based on classical statistical theory and grey system theory. The purpose of this paper is to make full use of differences in the data distribution and to avoid ignoring marginal data.

Design/methodology/approach

Based upon the grey distribution characteristics of small sample data, a new concept, the grey relational similarity measure, is defined. At the same time, the concept of sample weight is proposed according to the grey relational similarity measure. Based on the new definition of grey weight, grey point estimation and grey confidence intervals are studied. An improved Bootstrap resampling scheme is then designed, using the uniform distribution and randomness, as an important supplement to the grey estimation. In addition, the accuracy of grey bilateral and unilateral confidence intervals is examined using the new grey relational similarity measure approach.
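The grey machinery is specific to the paper, but the Bootstrap half of the method can be sketched as a percentile interval on a small sample; the uniform jitter below is only an illustrative stand-in for the paper's uniform-distribution-based resampling improvement, not its actual scheme:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical small simulation sample (7 observations)
sample = np.array([9.8, 10.1, 10.3, 9.7, 10.0, 10.4, 9.9])

def percentile_ci(data, stat=np.mean, B=5000, alpha=0.05, jitter=0.0):
    """Percentile bootstrap CI for `stat`; optional uniform jitter
    spreads each resampled value to counter the discreteness of a
    tiny sample (only a handful of distinct resample values exist)."""
    n = data.size
    reps = np.empty(B)
    for b in range(B):
        res = data[rng.integers(0, n, n)]
        if jitter > 0:
            res = res + rng.uniform(-jitter, jitter, n)
        reps[b] = stat(res)
    return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])

lo, hi = percentile_ci(sample)               # plain bootstrap CI
lo_s, hi_s = percentile_ci(sample, jitter=0.1)  # smoothed variant
```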

Findings

The new small sample evaluation method can realize the effective expansion and enrichment of data and avoid excessive concentration of the data. The method is an organic fusion of grey estimation and the improved Bootstrap. Several examples demonstrate the feasibility and validity of the proposed methods in assessing the credibility of simulation data, without requiring knowledge of the probability distribution of the small samples.

Originality/value

This research combines grey estimation with the improved Bootstrap, making more effective use of the information in different data points than the unimproved method.

Details

Grey Systems: Theory and Application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2043-9377

Article
Publication date: 7 July 2020

Jiaming Liu, Liuan Wang, Linan Zhang, Zeming Zhang and Sicheng Zhang

Abstract

Purpose

The primary objective of this study was to recognize critical indicators in predicting blood glucose (BG) through data-driven methods and to compare the prediction performance of four tree-based ensemble models, i.e. bagging with tree regressors (bagging-decision tree [Bagging-DT]), AdaBoost with tree regressors (Adaboost-DT), random forest (RF) and gradient boosting decision tree (GBDT).

Design/methodology/approach

This study proposed a majority voting feature selection method by combining lasso regression with the Akaike information criterion (AIC) (LR-AIC), lasso regression with the Bayesian information criterion (BIC) (LR-BIC) and RF to select indicators with excellent predictive performance from the initial 38 indicators in 5,642 samples. The selected features were deployed to build the tree-based ensemble models. The 10-fold cross-validation (CV) method was used to evaluate the performance of each ensemble model.
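The voting step itself is straightforward; a sketch (with hypothetical selector outputs, since the paper's fitted models are not reproduced here) keeps any indicator retained by at least two of the three selectors:

```python
from collections import Counter

def majority_vote(selections, threshold=None):
    """Keep features chosen by a majority of the selectors.
    Each element of `selections` is the set of feature names
    one selector retained; default threshold is a strict majority."""
    if threshold is None:
        threshold = len(selections) // 2 + 1
    votes = Counter(f for s in selections for f in set(s))
    return sorted(f for f, v in votes.items() if v >= threshold)

# Hypothetical outputs of the three selectors (LR-AIC, LR-BIC, RF);
# the feature names echo the paper's indicators but the sets are made up.
lr_aic = {"age", "CHC", "RBCVDW", "platelets"}
lr_bic = {"age", "CHC", "leucocyte_count"}
rf     = {"age", "RBCVDW", "leucocyte_count", "CHC"}

selected = majority_vote([lr_aic, lr_bic, rf])
# keeps indicators with >= 2 of 3 votes; "platelets" (1 vote) is dropped
```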

Findings

The results of feature selection indicated that age, corpuscular hemoglobin concentration (CHC), red blood cell volume distribution width (RBCVDW), red blood cell volume and leucocyte count are the five most important clinical/physical indicators in BG prediction. Furthermore, this study also found that the GBDT ensemble model combined with the proposed majority voting feature selection method is better than the other three models with respect to prediction performance and stability.

Practical implications

This study proposed a novel BG prediction framework for better predictive analytics in health care.

Social implications

This study incorporated medical background and machine learning technology to reduce diabetes morbidity and formulate precise medical schemes.

Originality/value

The majority voting feature selection method combined with the GBDT ensemble model provides an effective decision-making tool for predicting BG and detecting diabetes risk in advance.

Article
Publication date: 3 January 2017

Sunil Sahadev, Keyoor Purani and Tapan Kumar Panda

Abstract

Purpose

The purpose of this paper is to explore the relationships between managerial control strategies, role-stress and employee adaptiveness among call centre employees.

Design/methodology/approach

Based on a conceptual model, a questionnaire-based survey methodology is adopted. Data were collected from call centre employees in India and analysed using PLS methodology.

Findings

The study finds that outcome control and activity control increase role-stress while capability control does not have a significant impact. The interaction between outcome control and activity control also tends to impact role-stress of employees. Role-stress felt by employees has significant negative impact on employee adaptiveness.

Research limitations/implications

The sampling approach was convenience-based, which limits the generalisability of the results.

Practical implications

The paper provides guidelines for utilising managerial control approaches in a service setting.

Originality/value

The paper looks at managerial control approaches in a service setting – a topic that has received little prior research.

Details

Employee Relations, vol. 39 no. 1
Type: Research Article
ISSN: 0142-5455

Book part
Publication date: 13 December 2013

Todd E. Clark and Michael W. McCracken

Abstract

This article surveys recent developments in the evaluation of point and density forecasts in the context of forecasts made by vector autoregressions. Specific emphasis is placed on highlighting those parts of the existing literature that are applicable to direct multistep forecasts and those parts that are applicable to iterated multistep forecasts. This literature includes advancements in the evaluation of forecasts in population (based on true, unknown model coefficients) and the evaluation of forecasts in the finite sample (based on estimated model coefficients). The article then examines in Monte Carlo experiments the finite-sample properties of some tests of equal forecast accuracy, focusing on the comparison of VAR forecasts to AR forecasts. These experiments show the tests to behave as theory predicts. For example, using critical values obtained by bootstrap methods, tests of equal accuracy in population have empirical size about equal to nominal size.
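A sketch of the bootstrap-critical-value idea for a Diebold-Mariano-type test of equal accuracy, on simulated one-step forecast errors (the article's tests, loss functions and null-imposing schemes are more elaborate; resampling the centred loss differential imposes the null of equal accuracy in the bootstrap world):

```python
import numpy as np

rng = np.random.default_rng(4)

def dm_stat(e1, e2):
    """Diebold-Mariano-type statistic on the squared-error loss
    differential (no HAC correction: one-step forecasts assumed)."""
    d = e1**2 - e2**2
    return np.sqrt(d.size) * d.mean() / d.std(ddof=1)

# Simulated forecast errors from two equally accurate forecasts
# (stand-ins for AR and VAR forecast errors)
T = 200
e_ar = rng.normal(0, 1, T)
e_var = rng.normal(0, 1, T)

stat = dm_stat(e_ar, e_var)

# Bootstrap critical value: resample the centred loss differential
# so equal accuracy holds by construction in each bootstrap sample.
d = e_ar**2 - e_var**2
d0 = d - d.mean()
B = 2000
boot = np.empty(B)
for b in range(B):
    db = d0[rng.integers(0, T, T)]
    boot[b] = np.sqrt(T) * db.mean() / db.std(ddof=1)
crit = np.percentile(np.abs(boot), 95)
reject = abs(stat) > crit
```

Here the bootstrap critical value lands near the usual normal value of 1.96; the bootstrap earns its keep in the finite-sample and nested-model settings the article surveys, where the asymptotic distribution is nonstandard.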

Details

VAR Models in Macroeconomics – New Developments and Applications: Essays in Honor of Christopher A. Sims
Type: Book
ISBN: 978-1-78190-752-8

Book part
Publication date: 29 February 2008

Tae-Hwy Lee and Yang Yang

Abstract

Bagging (bootstrap aggregating) is a smoothing method to improve predictive ability under the presence of parameter estimation uncertainty and model uncertainty. In Lee and Yang (2006), we examined how (equal-weighted and BMA-weighted) bagging works for one-step-ahead binary prediction with an asymmetric cost function for time series, where we considered simple cases with particular choices of a linlin tick loss function and an algorithm to estimate a linear quantile regression model. In the present chapter, we examine how bagging predictors work with different aggregating (averaging) schemes, for multi-step forecast horizons, with a general class of tick loss functions, with different estimation algorithms, for nonlinear quantile regression models, and for different data frequencies. Bagging quantile predictors are constructed via (weighted) averaging over predictors trained on bootstrapped training samples, and bagging binary predictors are conducted via (majority) voting on predictors trained on the bootstrapped training samples. We find that median bagging and trimmed-mean bagging can alleviate the problem of extreme predictors from bootstrap samples and have better performance than equally weighted bagging predictors; that bagging works better at longer forecast horizons; that bagging works well with highly nonlinear quantile regression models (e.g., artificial neural network), and with general tick loss functions. We also find that the performance of bagging may be affected by using different quantile estimation algorithms (in small samples, even if the estimation is consistent) and by using different frequencies of time series data.
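The three aggregation schemes can be sketched with a deliberately trivial quantile "predictor" (the empirical quantile of a bootstrap sample) standing in for the chapter's quantile regressions; median and trimmed-mean bagging differ from equal-weight bagging only in how the per-sample predictions are combined:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_quantile_predictor(train, tau):
    """'Train' a trivial quantile predictor: the empirical tau-quantile
    of the training sample (a stand-in for quantile regression)."""
    return np.quantile(train, tau)

def bagged_quantile(train, tau, B=500, scheme="median", trim=0.1):
    """Bagging: combine quantile predictors fit on bootstrap samples.
    `scheme` selects equal-weight mean, median, or trimmed-mean
    aggregation; the latter two damp extreme bootstrap predictors."""
    preds = np.array([
        sample_quantile_predictor(
            train[rng.integers(0, train.size, train.size)], tau)
        for _ in range(B)])
    if scheme == "mean":
        return preds.mean()
    if scheme == "median":
        return np.median(preds)
    k = int(trim * B)                    # trimmed mean: drop extremes
    return np.sort(preds)[k:B - k].mean()

y = rng.standard_t(df=3, size=150)       # heavy-tailed training series
q95 = {s: bagged_quantile(y, 0.95, scheme=s)
       for s in ("mean", "median", "trimmed")}
```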

Details

Forecasting in the Presence of Structural Breaks and Model Uncertainty
Type: Book
ISBN: 978-1-84950-540-6

Article
Publication date: 9 February 2010

Rahul Srivatsa, Andrew Smith and Jon Lekander

Abstract

Purpose

The purpose of this paper is to develop a more robust methodology for asset allocation for the property investment market which takes into account inherent valuation and data issues.

Design/methodology/approach

The methodology applied is that of a bootstrap, borrowed from Carlstein, applied to an investment universe consisting of UK equities, gilts and property. The bootstrap selectively re‐samples the return time series while preserving the economic cycle. The resulting return series is then used in the standard mean‐variance optimisation (MVO) on an unconstrained basis. Finally, a “sanity” test is applied to the correlation matrix to ensure that spurious instances do not skew the results.
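A sketch of the Carlstein-style resampling step with hypothetical return data (block length, asset names and parameters are all illustrative; the paper chooses blocks so as to preserve the economic cycle). Non-overlapping blocks are resampled whole, so within-block dependence survives into each bootstrap series:

```python
import numpy as np

rng = np.random.default_rng(6)

def carlstein_bootstrap(returns, block_len, B):
    """Non-overlapping (Carlstein) block bootstrap: cut the series
    into fixed blocks and resample whole blocks with replacement,
    keeping within-block dependence intact."""
    T, k = returns.shape
    n_blocks = T // block_len
    blocks = returns[:n_blocks * block_len].reshape(n_blocks, block_len, k)
    out = np.empty((B, n_blocks * block_len, k))
    for b in range(B):
        pick = rng.integers(0, n_blocks, n_blocks)
        out[b] = blocks[pick].reshape(-1, k)
    return out

# Hypothetical annual returns for equities, gilts, property (20 years)
rets = rng.normal([0.08, 0.04, 0.06], [0.18, 0.06, 0.10], size=(20, 3))
samples = carlstein_bootstrap(rets, block_len=5, B=200)
mean_per_sample = samples.mean(axis=1)   # feeds the MVO run per sample
```

Running the MVO on each bootstrap sample, rather than once on the historical series, is what yields the range of portfolio weights the paper reports instead of a single static allocation.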

Findings

The bootstrapped optimisation provides a range within which the portfolio weights can be manoeuvred, instead of the static point given by the standard MVO. It provides a more robust methodology for asset allocation without giving undue weight to a single year of extreme results.

Research limitations/implications

The current analysis is based on unconstrained portfolio optimisation, with a very limited investment universe. Additionally, by conforming with the MVO methodology, normality of asset returns is implicitly assumed, which is clearly not the case in the data used. Future work will also focus on an all‐property portfolio.

Practical implications

The proposed methodology will prove to be useful for making asset allocation decisions, particularly in turbulent financial markets.

Originality/value

The paper focuses solely on bootstrapping with the IPD UK annual index and is particularly significant after one year of extremely poor performance of UK property. The results will be of use to fund managers and portfolio analysts.

Details

Journal of Property Investment & Finance, vol. 28 no. 1
Type: Research Article
ISSN: 1463-578X
