Search results
Otávio Bartalotti, Gray Calhoun and Yang He
Abstract
This chapter develops a novel bootstrap procedure to obtain robust bias-corrected confidence intervals in regression discontinuity (RD) designs. The procedure uses a wild bootstrap from a second-order local polynomial to estimate the bias of the local linear RD estimator; the bias is then subtracted from the original estimator. The bias-corrected estimator is then bootstrapped itself to generate valid confidence intervals (CIs). The CIs generated by this procedure are valid under conditions similar to Calonico, Cattaneo, and Titiunik’s (2014) analytical correction – that is, when the bias of the naive RD estimator would otherwise prevent valid inference. This chapter also provides simulation evidence that our method is as accurate as the analytical corrections and we demonstrate its use through a reanalysis of Ludwig and Miller’s (2007) Head Start dataset.
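The procedure can be sketched numerically. In the sketch below the simulated design, bandwidth h and bootstrap size B are illustrative assumptions, not the authors' exact implementation, and the two bootstrap stages are compressed into one resampling pass:

```python
# Hedged sketch of a wild-bootstrap bias correction for the local linear
# RD estimator. The DGP, bandwidth h and bootstrap size B are invented
# for illustration; they are not from Bartalotti, Calhoun and He.
import numpy as np

rng = np.random.default_rng(0)

def rd_jump(x, y, h, degree):
    """Side-specific polynomial fits within bandwidth h; returns the
    estimated jump in the conditional mean at the cutoff x = 0."""
    left = (x < 0) & (x > -h)
    right = (x >= 0) & (x < h)
    bl = np.polyfit(x[left], y[left], degree)
    br = np.polyfit(x[right], y[right], degree)
    return np.polyval(br, 0.0) - np.polyval(bl, 0.0)

# Simulated RD data with a true jump of 1 and different curvature on
# each side of the cutoff (so the local linear estimator is biased).
n, h, B = 2000, 0.8, 200
x = rng.uniform(-1, 1, n)
y = (x >= 0) * 1.0 + 0.5 * x + np.where(x >= 0, 0.8, -0.6) * x**2 \
    + rng.normal(0, 0.3, n)

tau_ll = rd_jump(x, y, h, 1)              # naive local linear estimate

# Step 1: wild bootstrap from a second-order (quadratic) local fit.
left, right = (x < 0) & (x > -h), (x >= 0) & (x < h)
bl = np.polyfit(x[left], y[left], 2)
br = np.polyfit(x[right], y[right], 2)
fitted = np.where(x < 0, np.polyval(bl, x), np.polyval(br, x))
resid = y - fitted
tau_quad = rd_jump(x, y, h, 2)

draws = np.empty(B)
for b in range(B):
    signs = rng.choice([-1.0, 1.0], n)    # Rademacher wild weights
    draws[b] = rd_jump(x, fitted + signs * resid, h, 1)

# Step 2: subtract the estimated bias, then form a CI from the
# recentred bootstrap distribution.
bias = draws.mean() - tau_quad
tau_bc = tau_ll - bias
ci_lo, ci_hi = np.quantile(draws - bias, [0.025, 0.975])
```

With the asymmetric curvature above the naive estimate sits visibly below the true jump of 1, and subtracting the bootstrap bias estimate recovers it; the chapter's full procedure additionally bootstraps the bias-corrected estimator itself to build the CI.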
Antonio Gil Ropero, Ignacio Turias Dominguez and Maria del Mar Cerbán Jiménez
Abstract
Purpose
The purpose of this paper is to evaluate the functioning of the main Spanish and Portuguese container ports to determine whether they are operating below their production capabilities.
Design/methodology/approach
To achieve this objective, one possible method is to calculate the data envelopment analysis (DEA) efficiency and the scale efficiency (SE) of the target ports; to account for variability across different samples, a bootstrap scheme has been applied.
Findings
The results showed that the DEA bootstrap-based approach can not only select a suitable unit that accords with a port’s actual input capabilities, but also provide more accurate results. The bootstrapped results indicate that none of the ports needs future investment to expand its infrastructure.
Practical implications
The proposed DEA bootstrap-based approach provides useful guidance for the robust measurement of port efficiency across different samples. The study demonstrates the usefulness of this approach as a decision-making tool in port efficiency analysis.
Originality/value
This study is one of the first to apply the bootstrap to measuring port efficiency in the context of Spain and Portugal. In the first stage, two DEA models were used to obtain pure technical efficiency as well as technical efficiency and SE, under both input-oriented options: constant returns to scale and variable returns to scale. In the second stage, the bootstrap method was applied to determine efficiency rankings of Iberian Peninsula container ports, taking different samples into consideration. Confidence interval estimates of efficiency for each port are reported. This paper provides useful insights into the application of a DEA bootstrap-based approach as a modelling tool to aid decision making in measuring port efficiency.
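The second-stage idea, bootstrapping efficiency scores to obtain confidence intervals, can be shown in miniature. For brevity the sketch below replaces the DEA linear programs with a single-input, single-output ratio efficiency, uses invented port data, and resamples ports naively:

```python
# Second-stage sketch only: resample ports, re-estimate the frontier and
# form percentile CIs. A single-input/single-output ratio efficiency
# stands in for the DEA linear programs, and the port data are invented.
import numpy as np

rng = np.random.default_rng(1)

inputs = np.array([1.0, 1.5, 2.0, 2.5, 3.0])    # e.g. berth capacity
outputs = np.array([0.9, 1.6, 1.7, 2.6, 2.4])   # e.g. container throughput

def efficiency(inp, out, ref_inp, ref_out):
    """Output/input ratio relative to the best ratio in a reference set."""
    frontier = np.max(ref_out / ref_inp)
    return (out / inp) / frontier

point = efficiency(inputs, outputs, inputs, outputs)

B = 1000
boot = np.empty((B, inputs.size))
for b in range(B):
    idx = rng.integers(0, inputs.size, inputs.size)  # resample ports
    boot[b] = efficiency(inputs, outputs, inputs[idx], outputs[idx])

ci_lo, ci_hi = np.quantile(boot, [0.025, 0.975], axis=0)
```

Note that naive resampling of this kind is known to be inconsistent at the DEA frontier; smoothed bootstrap schemes in the efficiency literature (e.g. Simar and Wilson's) exist precisely to correct for that.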
Abstract
Purpose
The purpose of this article is to present an empirical analysis of complex sample data with regard to the biasing effect of non‐independence of observations on standard error parameter estimates. Using field data structured as repeated measurements, it shows, in a two‐factor confirmatory factor analysis model, how the bias in standard errors (SE) can be derived when the non‐independence is ignored.
Design/methodology/approach
Three estimation procedures are compared: normal asymptotic theory (maximum likelihood); non‐parametric standard error estimation (naïve bootstrap); and sandwich (robust covariance matrix) estimation (pseudo‐maximum likelihood).
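The mechanism at issue can be shown with a much simpler estimator than a CFA. The sketch below compares an iid-formula SE, a naive bootstrap SE, and a cluster (block) bootstrap SE for a sample mean computed from simulated repeated measures:

```python
# The biasing effect in miniature: iid-formula and naive-bootstrap SEs of
# a sample mean ignore clustering; a cluster (block) bootstrap does not.
# A sample mean stands in for the paper's CFA estimates; data simulated.
import numpy as np

rng = np.random.default_rng(2)
G, m = 50, 20                       # 50 clusters of 20 repeated measures
cluster_effect = rng.normal(0, 1.0, G)
y = (cluster_effect[:, None] + rng.normal(0, 1.0, (G, m))).ravel()
n = y.size

se_iid = y.std(ddof=1) / np.sqrt(n)       # normal asymptotic-theory SE

B = 2000
naive = [rng.choice(y, n).mean() for _ in range(B)]      # naive bootstrap
se_naive = np.std(naive, ddof=1)

clustered = []
for _ in range(B):                        # resample whole clusters
    gs = rng.integers(0, G, G)
    clustered.append(y.reshape(G, m)[gs].mean())
se_cluster = np.std(clustered, ddof=1)
```

Here the iid and naive-bootstrap SEs agree with each other but understate the cluster-aware SE roughly threefold, which is the pattern of SE bias the study reports in the CFA setting.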
Findings
The study reveals that, when using either normal asymptotic theory or non‐parametric standard error estimation, the SE bias produced by the non‐independence of observations can be noteworthy.
Research limitations/implications
Considering the methodological constraints in employing field data, the three analyses examined must be interpreted independently and as a result taxonomic generalisations are limited. However, the study still provides “case study” evidence suggesting the existence of the relationship between non‐independence of observations and standard error bias estimates.
Originality/value
Given the increasing popularity of structural equation models in the social sciences and in particular in the marketing discipline, the paper provides a theoretical and practical insight into how to treat repeated measures and clustered data in general, adding to previous methodological research. Some conclusions and suggestions for researchers who make use of partial least squares modelling are also drawn.
Wenguang Yang, Lianhai Lin and Hongkui Gao
Abstract
Purpose
To solve the problem of simulation evaluation with small samples, a fresh approach of grey estimation is presented based on classical statistical theory and grey system theory. The purpose of this paper is to make full use of the difference of data distribution and avoid the marginal data being ignored.
Design/methodology/approach
Based upon the grey distribution characteristics of small-sample data, a new concept, the grey relational similarity measure, is defined. The concept of sample weight is then proposed according to this measure. Based on the new definition of grey weight, grey point estimation and grey confidence intervals are studied. An improved bootstrap resampling scheme, designed around uniform distribution and randomness, is then introduced as an important supplement to the grey estimation. In addition, the accuracy of grey bilateral and unilateral confidence intervals is assessed using the new grey relational similarity measure.
Findings
The new small-sample evaluation method can effectively expand and enrich the data while avoiding excessive concentration of the data. The method is an organic fusion of grey estimation and the improved bootstrap method. Several examples demonstrate the feasibility and validity of the proposed methods in assessing the credibility of simulation data, with no need to know the probability distribution of the small samples.
Originality/value
This research combines grey estimation with an improved bootstrap, making more reasonable use of the value of different data than the unimproved method.
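The grey relational weights and the improved resampling scheme are specific to this paper and are not reproduced here; for orientation, the unimproved baseline it builds on is the ordinary percentile bootstrap for a small sample, sketched below with invented data:

```python
# Baseline only: the ordinary percentile bootstrap that the paper's
# improved scheme supplements. The grey weights are not reproduced;
# the small sample below is invented.
import numpy as np

rng = np.random.default_rng(3)
sample = np.array([9.8, 10.1, 10.4, 9.6, 10.2, 10.0, 9.9])   # n = 7

B = 5000
means = np.array([rng.choice(sample, sample.size).mean() for _ in range(B)])

ci_lo, ci_hi = np.quantile(means, [0.025, 0.975])   # bilateral 95% CI
lo_one = np.quantile(means, 0.05)                   # unilateral lower bound
```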
Jiaming Liu, Liuan Wang, Linan Zhang, Zeming Zhang and Sicheng Zhang
Abstract
Purpose
The primary objective of this study was to recognize critical indicators in predicting blood glucose (BG) through data-driven methods and to compare the prediction performance of four tree-based ensemble models, i.e. bagging with tree regressors (bagging-decision tree [Bagging-DT]), AdaBoost with tree regressors (Adaboost-DT), random forest (RF) and gradient boosting decision tree (GBDT).
Design/methodology/approach
This study proposed a majority voting feature selection method by combining lasso regression with the Akaike information criterion (AIC) (LR-AIC), lasso regression with the Bayesian information criterion (BIC) (LR-BIC) and RF to select indicators with excellent predictive performance from an initial set of 38 indicators in 5,642 samples. The selected features were deployed to build the tree-based ensemble models. The 10-fold cross-validation (CV) method was used to evaluate the performance of each ensemble model.
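The majority-voting step itself is easy to sketch. Below, the LR-AIC, LR-BIC and RF selectors are replaced by simple correlation-threshold stand-ins on simulated data; a feature is kept when at least two of the three selectors nominate it:

```python
# Sketch of the majority-voting step only. The LR-AIC, LR-BIC and RF
# selectors are replaced by correlation-threshold stand-ins; the data are
# simulated, with only features 0-2 truly driving the outcome.
import numpy as np

rng = np.random.default_rng(4)
n, p = 500, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 1.0 * X[:, 2] + rng.normal(0, 1, n)

def select(X, y, threshold):
    """Stand-in selector: keep features with |corr(x_j, y)| > threshold."""
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return {j for j, c in enumerate(corr) if c > threshold}

votes = [select(X, y, t) for t in (0.10, 0.15, 0.20)]   # three "selectors"
selected = sorted(j for j in range(p)
                  if sum(j in v for v in votes) >= 2)   # majority vote
```

The vote makes selection robust to any single selector's idiosyncrasies, which is the rationale for combining three heterogeneous selectors in the study.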
Findings
The results of feature selection indicated that age, corpuscular hemoglobin concentration (CHC), red blood cell volume distribution width (RBCVDW), red blood cell volume and leucocyte count are the five most important clinical/physical indicators in BG prediction. Furthermore, this study also found that the GBDT ensemble model combined with the proposed majority voting feature selection method is better than the other three models with respect to prediction performance and stability.
Practical implications
This study proposed a novel BG prediction framework for better predictive analytics in health care.
Social implications
This study incorporated medical background and machine learning technology to reduce diabetes morbidity and formulate precise medical schemes.
Originality/value
The majority voting feature selection method combined with the GBDT ensemble model provides an effective decision-making tool for predicting BG and detecting diabetes risk in advance.
Qinghua Xia, Yi Xie, Shuchuan Hu and Jianmin Song
Abstract
Purpose
Under extensive pressure from normal market competition, frequent technological change and extreme exogenous shocks, firms face severe challenges today. How to withstand discontinuous crises and respond to normal risks by improving resilience (RE) is an important research question. Drawing on strategic entrepreneurship theory, the purpose of this study is to explore the relationship between entrepreneurial orientation (EO) and RE and, in the context of digitization, to discuss the roles of digital business capability (DBC), digital business model innovation (DBMI) and environmental hostility (EH).
Design/methodology/approach
Based on survey data from 203 Chinese firms, linear regression and bootstrap methods are used to test the hypotheses. Furthermore, fuzzy-set qualitative comparative analysis (fsQCA) is used to identify previously unknown combinations that lead to strong or weak RE in a digital context.
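The regression-plus-bootstrap design can be sketched as a bootstrap test of an indirect (mediated) effect. The sequential EO → DBC → DBMI → RE chain is simplified here to a single mediator, and all data are simulated; only the sample size matches the paper:

```python
# Sketch of the bootstrap mediation test implied by the design, with the
# EO -> DBC -> DBMI -> RE chain simplified to one mediator. All data are
# simulated; only the sample size (203 firms) matches the paper.
import numpy as np

rng = np.random.default_rng(5)
n = 203
eo = rng.normal(size=n)                        # entrepreneurial orientation
dbc = 0.5 * eo + rng.normal(0, 1, n)           # digital business capability
re_ = 0.4 * dbc + 0.2 * eo + rng.normal(0, 1, n)   # resilience

def indirect(eo, dbc, re_):
    """a*b indirect effect from two OLS fits: dbc ~ eo and re ~ dbc + eo."""
    a = np.polyfit(eo, dbc, 1)[0]
    X = np.column_stack([np.ones(len(eo)), dbc, eo])
    b = np.linalg.lstsq(X, re_, rcond=None)[0][1]
    return a * b

B = 2000
boot = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, n)                # resample firms
    boot[i] = indirect(eo[idx], dbc[idx], re_[idx])

ci_lo, ci_hi = np.quantile(boot, [0.025, 0.975])
significant = ci_lo > 0 or ci_hi < 0           # CI excludes zero?
```

A percentile bootstrap CI for the product of path coefficients is the standard way to test mediation, since the product's sampling distribution is non-normal in finite samples.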
Findings
First, EO positively influenced DBC and RE. Second, DBMI promoted RE, and DBC and DBMI served as sequential mediators linking EO and RE. Third, EH positively moderated the effect of EO on RE. Further, the study revealed that different configurations of DBMI and of the dimensions of EO and DBC can explain RE.
Originality/value
The study explains the mechanism of RE from the perspective of digitization. The conclusions help to consolidate strategic entrepreneurship theory further and provide a new frame for firms to build antifragility.
Sunil Sahadev, Keyoor Purani and Tapan Kumar Panda
Abstract
Purpose
The purpose of this paper is to explore the relationships between managerial control strategies, role-stress and employee adaptiveness among call centre employees.
Design/methodology/approach
Based on a conceptual model, a questionnaire-based survey methodology is adopted. Data were collected from call centre employees in India and the data were analysed through PLS methodology.
Findings
The study finds that outcome control and activity control increase role-stress while capability control does not have a significant impact. The interaction between outcome control and activity control also tends to impact role-stress of employees. Role-stress felt by employees has significant negative impact on employee adaptiveness.
Research limitations/implications
The sampling approach was convenience-based, which limits the generalisability of the results.
Practical implications
The paper provides guidelines for utilising managerial control approaches in a service setting.
Originality/value
The paper looks at managerial control approaches in a service setting, a topic that has received little prior research attention.
Todd E. Clark and Michael W. McCracken
Abstract
This article surveys recent developments in the evaluation of point and density forecasts in the context of forecasts made by vector autoregressions. Specific emphasis is placed on highlighting those parts of the existing literature that are applicable to direct multistep forecasts and those parts that are applicable to iterated multistep forecasts. This literature includes advancements in the evaluation of forecasts in population (based on true, unknown model coefficients) and the evaluation of forecasts in the finite sample (based on estimated model coefficients). The article then examines in Monte Carlo experiments the finite-sample properties of some tests of equal forecast accuracy, focusing on the comparison of VAR forecasts to AR forecasts. These experiments show the tests to behave as should be expected given the theory. For example, using critical values obtained by bootstrap methods, tests of equal accuracy in population have empirical size about equal to nominal size.
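A minimal example of the kind of test surveyed: a Diebold-Mariano-type statistic for equal mean squared error, with critical values from a moving-block bootstrap of the recentred loss differential. The forecast series below are simulated stand-ins, not actual VAR or AR forecasts:

```python
# Sketch of an equal-accuracy test: Diebold-Mariano-type statistic for
# equal MSE with critical values from a moving-block bootstrap of the
# recentred loss differential. The forecast series are simulated
# stand-ins, not actual VAR or AR forecasts.
import numpy as np

rng = np.random.default_rng(6)
T = 300
actual = rng.normal(size=T)
f1 = actual + rng.normal(0, 1.0, T)        # forecasts from "model 1"
f2 = actual + rng.normal(0, 1.5, T)        # noisier "model 2"

d = (actual - f1) ** 2 - (actual - f2) ** 2      # loss differential
dm = d.mean() / (d.std(ddof=1) / np.sqrt(T))     # DM-type statistic

B, block = 999, 10
d0 = d - d.mean()                # recentre: impose the null of equal MSE
stats = np.empty(B)
for b in range(B):
    starts = rng.integers(0, T - block, T // block)
    db = np.concatenate([d0[s:s + block] for s in starts])
    stats[b] = db.mean() / (db.std(ddof=1) / np.sqrt(db.size))
crit = np.quantile(np.abs(stats), 0.95)    # bootstrap critical value

reject = abs(dm) > crit                    # equal accuracy rejected?
```

Recentring the differential before resampling imposes the null of equal accuracy, so the bootstrap distribution delivers critical values with approximately correct size, in line with the simulation evidence the article reports.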
Tae-Hwy Lee and Yang Yang
Abstract
Bagging (bootstrap aggregating) is a smoothing method to improve predictive ability under the presence of parameter estimation uncertainty and model uncertainty. In Lee and Yang (2006), we examined how (equal-weighted and BMA-weighted) bagging works for one-step-ahead binary prediction with an asymmetric cost function for time series, where we considered simple cases with particular choices of a linlin tick loss function and an algorithm to estimate a linear quantile regression model. In the present chapter, we examine how bagging predictors work with different aggregating (averaging) schemes, for multi-step forecast horizons, with a general class of tick loss functions, with different estimation algorithms, for nonlinear quantile regression models, and for different data frequencies. Bagging quantile predictors are constructed via (weighted) averaging over predictors trained on bootstrapped training samples, and bagging binary predictors are conducted via (majority) voting on predictors trained on the bootstrapped training samples. We find that median bagging and trimmed-mean bagging can alleviate the problem of extreme predictors from bootstrap samples and have better performance than equally weighted bagging predictors; that bagging works better at longer forecast horizons; that bagging works well with highly nonlinear quantile regression models (e.g., artificial neural network), and with general tick loss functions. We also find that the performance of bagging may be affected by using different quantile estimation algorithms (in small samples, even if the estimation is consistent) and by using different frequencies of time series data.
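The aggregation schemes compared in the chapter can be sketched with a toy quantile predictor, here just the empirical 90th percentile of a bootstrapped training sample. The data and the predictor are illustrative stand-ins for the chapter's quantile regression models:

```python
# Aggregation-scheme sketch: equal-weight (mean), median and trimmed-mean
# bagging of a toy quantile predictor, namely the empirical 90th
# percentile of a bootstrapped training sample. Data are simulated.
import numpy as np

rng = np.random.default_rng(7)
train = rng.standard_t(df=3, size=100)     # heavy-tailed training data

B = 500
preds = np.array([np.quantile(rng.choice(train, train.size), 0.9)
                  for _ in range(B)])

mean_bag = preds.mean()                    # equally weighted bagging
median_bag = np.median(preds)              # median bagging
k = int(0.1 * B)                           # trim 10% from each tail
trimmed_bag = np.sort(preds)[k:B - k].mean()
```

With heavy-tailed data the bootstrap occasionally produces extreme quantile estimates; the median and trimmed mean damp those draws, which is the behaviour the chapter reports in favour of median and trimmed-mean bagging.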